Merge 5.10.65 into android12-5.10-lts
Changes in 5.10.65
    locking/mutex: Fix HANDOFF condition
    regmap: fix the offset of register error log
    regulator: tps65910: Silence deferred probe error
    crypto: mxs-dcp - Check for DMA mapping errors
    sched/deadline: Fix reset_on_fork reporting of DL tasks
    power: supply: axp288_fuel_gauge: Report register-address on readb / writeb errors
    crypto: omap-sham - clear dma flags only after omap_sham_update_dma_stop()
    sched/deadline: Fix missing clock update in migrate_task_rq_dl()
    rcu/tree: Handle VM stoppage in stall detection
    EDAC/mce_amd: Do not load edac_mce_amd module on guests
    posix-cpu-timers: Force next expiration recalc after itimer reset
    hrtimer: Avoid double reprogramming in __hrtimer_start_range_ns()
    hrtimer: Ensure timerfd notification for HIGHRES=n
    udf: Check LVID earlier
    udf: Fix iocharset=utf8 mount option
    isofs: joliet: Fix iocharset=utf8 mount option
    bcache: add proper error unwinding in bcache_device_init
    blk-throtl: optimize IOPS throttle for large IO scenarios
    nvme-tcp: don't update queue count when failing to set io queues
    nvme-rdma: don't update queue count when failing to set io queues
    nvmet: pass back cntlid on successful completion
    power: supply: smb347-charger: Add missing pin control activation
    power: supply: max17042_battery: fix typo in MAx17042_TOFF
    s390/cio: add dev_busid sysfs entry for each subchannel
    s390/zcrypt: fix wrong offset index for APKA master key valid state
    libata: fix ata_host_start()
    crypto: omap - Fix inconsistent locking of device lists
    crypto: qat - do not ignore errors from enable_vf2pf_comms()
    crypto: qat - handle both source of interrupt in VF ISR
    crypto: qat - fix reuse of completion variable
    crypto: qat - fix naming for init/shutdown VF to PF notifications
    crypto: qat - do not export adf_iov_putmsg()
    fcntl: fix potential deadlock for &fasync_struct.fa_lock
    udf_get_extendedattr() had no boundary checks.
    s390/kasan: fix large PMD pages address alignment check
    s390/pci: fix misleading rc in clp_set_pci_fn()
    s390/debug: keep debug data on resize
    s390/debug: fix debug area life cycle
    s390/ap: fix state machine hang after failure to enable irq
    power: supply: cw2015: use dev_err_probe to allow deferred probe
    m68k: emu: Fix invalid free in nfeth_cleanup()
    sched/numa: Fix is_core_idle()
    sched: Fix UCLAMP_FLAG_IDLE setting
    rcu: Fix to include first blocked task in stall warning
    rcu: Add lockdep_assert_irqs_disabled() to rcu_sched_clock_irq() and callees
    rcu: Fix stall-warning deadlock due to non-release of rcu_node ->lock
    m68k: Fix invalid RMW_INSNS on CPUs that lack CAS
    block: return ELEVATOR_DISCARD_MERGE if possible
    spi: spi-fsl-dspi: Fix issue with uninitialized dma_slave_config
    spi: spi-pic32: Fix issue with uninitialized dma_slave_config
    genirq/timings: Fix error return code in irq_timings_test_irqs()
    irqchip/loongson-pch-pic: Improve edge triggered interrupt support
    lib/mpi: use kcalloc in mpi_resize
    clocksource/drivers/sh_cmt: Fix wrong setting if don't request IRQ for clock source channel
    block: nbd: add sanity check for first_minor
    spi: coldfire-qspi: Use clk_disable_unprepare in the remove function
    irqchip/gic-v3: Fix priority comparison when non-secure priorities are used
    crypto: qat - use proper type for vf_mask
    certs: Trigger creation of RSA module signing key if it's not an RSA key
    tpm: ibmvtpm: Avoid error message when process gets signal while waiting
    x86/mce: Defer processing of early errors
    spi: davinci: invoke chipselect callback
    blk-crypto: fix check for too-large dun_bytes
    regulator: vctrl: Use locked regulator_get_voltage in probe path
    regulator: vctrl: Avoid lockdep warning in enable/disable ops
    spi: sprd: Fix the wrong WDG_LOAD_VAL
    spi: spi-zynq-qspi: use wait_for_completion_timeout to make zynq_qspi_exec_mem_op not interruptible
    EDAC/i10nm: Fix NVDIMM detection
    drm/panfrost: Fix missing clk_disable_unprepare() on error in panfrost_clk_init()
    drm/gma500: Fix end of loop tests for list_for_each_entry
    ASoC: mediatek: mt8183: Fix Unbalanced pm_runtime_enable in mt8183_afe_pcm_dev_probe
    media: TDA1997x: enable EDID support
    leds: is31fl32xx: Fix missing error code in is31fl32xx_parse_dt()
    soc: rockchip: ROCKCHIP_GRF should not default to y, unconditionally
    media: cxd2880-spi: Fix an error handling path
    drm/of: free the right object
    bpf: Fix a typo of reuseport map in bpf.h.
    bpf: Fix potential memleak and UAF in the verifier.
    drm/of: free the iterator object on failure
    gve: fix the wrong AdminQ buffer overflow check
    libbpf: Fix the possible memory leak on error
    ARM: dts: aspeed-g6: Fix HVI3C function-group in pinctrl dtsi
    arm64: dts: renesas: r8a77995: draak: Remove bogus adv7511w properties
    i40e: improve locking of mac_filter_hash
    soc: qcom: rpmhpd: Use corner in power_off
    libbpf: Fix removal of inner map in bpf_object__create_map
    gfs2: Fix memory leak of object lsi on error return path
    firmware: fix theoretical UAF race with firmware cache and resume
    driver core: Fix error return code in really_probe()
    ionic: cleanly release devlink instance
    media: dvb-usb: fix uninit-value in dvb_usb_adapter_dvb_init
    media: dvb-usb: fix uninit-value in vp702x_read_mac_addr
    media: dvb-usb: Fix error handling in dvb_usb_i2c_init
    media: go7007: fix memory leak in go7007_usb_probe
    media: go7007: remove redundant initialization
    media: rockchip/rga: use pm_runtime_resume_and_get()
    media: rockchip/rga: fix error handling in probe
    media: coda: fix frame_mem_ctrl for YUV420 and YVU420 formats
    media: atomisp: fix the uninitialized use and rename "retvalue"
    Bluetooth: sco: prevent information leak in sco_conn_defer_accept()
    6lowpan: iphc: Fix an off-by-one check of array index
    drm/amdgpu/acp: Make PM domain really work
    tcp: seq_file: Avoid skipping sk during tcp_seek_last_pos
    ARM: dts: meson8: Use a higher default GPU clock frequency
    ARM: dts: meson8b: odroidc1: Fix the pwm regulator supply properties
    ARM: dts: meson8b: mxq: Fix the pwm regulator supply properties
    ARM: dts: meson8b: ec100: Fix the pwm regulator supply properties
    net/mlx5e: Prohibit inner indir TIRs in IPoIB
    net/mlx5e: Block LRO if firmware asks for tunneled LRO
    cgroup/cpuset: Fix a partition bug with hotplug
    drm: mxsfb: Enable recovery on underflow
    drm: mxsfb: Increase number of outstanding requests on V4 and newer HW
    drm: mxsfb: Clear FIFO_CLEAR bit
    net: cipso: fix warnings in netlbl_cipsov4_add_std
    Bluetooth: mgmt: Fix wrong opcode in the response for add_adv cmd
    arm64: dts: renesas: rzg2: Convert EtherAVB to explicit delay handling
    arm64: dts: renesas: hihope-rzg2-ex: Add EtherAVB internal rx delay
    devlink: Break parameter notification sequence to be before/after unload/load driver
    net/mlx5: Fix missing return value in mlx5_devlink_eswitch_inline_mode_set()
    i2c: highlander: add IRQ check
    leds: lt3593: Put fwnode in any case during ->probe()
    leds: trigger: audio: Add an activate callback to ensure the initial brightness is set
    media: em28xx-input: fix refcount bug in em28xx_usb_disconnect
    media: venus: venc: Fix potential null pointer dereference on pointer fmt
    PCI: PM: Avoid forcing PCI_D0 for wakeup reasons inconsistently
    PCI: PM: Enable PME if it can be signaled from D3cold
    bpf, samples: Add missing mprog-disable to xdp_redirect_cpu's optstring
    soc: qcom: smsm: Fix missed interrupts if state changes while masked
    debugfs: Return error during {full/open}_proxy_open() on rmmod
    Bluetooth: increase BTNAMSIZ to 21 chars to fix potential buffer overflow
    PM: EM: Increase energy calculation precision
    selftests/bpf: Fix bpf-iter-tcp4 test to print correctly the dest IP
    drm/msm/mdp4: refactor HW revision detection into read_mdp_hw_revision
    drm/msm/mdp4: move HW revision detection to earlier phase
    drm/msm/dpu: make dpu_hw_ctl_clear_all_blendstages clear necessary LMs
    arm64: dts: exynos: correct GIC CPU interfaces address range on Exynos7
    counter: 104-quad-8: Return error when invalid mode during ceiling_write
    cgroup/cpuset: Miscellaneous code cleanup
    cgroup/cpuset: Fix violation of cpuset locking rule
    ASoC: Intel: Fix platform ID matching
    Bluetooth: fix repeated calls to sco_sock_kill
    drm/msm/dsi: Fix some reference counted resource leaks
    net/mlx5: Register to devlink ingress VLAN filter trap
    net/mlx5: Fix unpublish devlink parameters
    ASoC: rt5682: Implement remove callback
    ASoC: rt5682: Properly turn off regulators if wrong device ID
    usb: dwc3: meson-g12a: add IRQ check
    usb: dwc3: qcom: add IRQ check
    usb: gadget: udc: at91: add IRQ check
    usb: gadget: udc: s3c2410: add IRQ check
    usb: phy: fsl-usb: add IRQ check
    usb: phy: twl6030: add IRQ checks
    usb: gadget: udc: renesas_usb3: Fix soc_device_match() abuse
    selftests/bpf: Fix test_core_autosize on big-endian machines
    devlink: Clear whole devlink_flash_notify struct
    samples: pktgen: add missing IPv6 option to pktgen scripts
    Bluetooth: Move shutdown callback before flushing tx and rx queue
    PM: cpu: Make notifier chain use a raw_spinlock_t
    usb: host: ohci-tmio: add IRQ check
    usb: phy: tahvo: add IRQ check
    libbpf: Re-build libbpf.so when libbpf.map changes
    mac80211: Fix insufficient headroom issue for AMSDU
    locking/lockdep: Mark local_lock_t
    locking/local_lock: Add missing owner initialization
    lockd: Fix invalid lockowner cast after vfs_test_lock
    nfsd4: Fix forced-expiry locking
    arm64: dts: marvell: armada-37xx: Extend PCIe MEM space
    clk: staging: correct reference to config IOMEM to config HAS_IOMEM
    i2c: synquacer: fix deferred probing
    firmware: raspberrypi: Keep count of all consumers
    firmware: raspberrypi: Fix a leak in 'rpi_firmware_get()'
    usb: gadget: mv_u3d: request_irq() after initializing UDC
    mm/swap: consider max pages in iomap_swapfile_add_extent
    lkdtm: replace SCSI_DISPATCH_CMD with SCSI_QUEUE_RQ
    Bluetooth: add timeout sanity check to hci_inquiry
    i2c: iop3xx: fix deferred probing
    i2c: s3c2410: fix IRQ check
    i2c: fix platform_get_irq.cocci warnings
    i2c: hix5hd2: fix IRQ check
    gfs2: init system threads before freeze lock
    rsi: fix error code in rsi_load_9116_firmware()
    rsi: fix an error code in rsi_probe()
    ASoC: Intel: kbl_da7219_max98927: Fix format selection for max98373
    ASoC: Intel: Skylake: Leave data as is when invoking TLV IPCs
    ASoC: Intel: Skylake: Fix module resource and format selection
    mmc: sdhci: Fix issue with uninitialized dma_slave_config
    mmc: dw_mmc: Fix issue with uninitialized dma_slave_config
    mmc: moxart: Fix issue with uninitialized dma_slave_config
    bpf: Fix possible out of bound write in narrow load handling
    CIFS: Fix a potencially linear read overflow
    i2c: mt65xx: fix IRQ check
    i2c: xlp9xx: fix main IRQ check
    usb: ehci-orion: Handle errors of clk_prepare_enable() in probe
    usb: bdc: Fix an error handling path in 'bdc_probe()' when no suitable DMA config is available
    usb: bdc: Fix a resource leak in the error handling path of 'bdc_probe()'
    tty: serial: fsl_lpuart: fix the wrong mapbase value
    ASoC: wcd9335: Fix a double irq free in the remove function
    ASoC: wcd9335: Fix a memory leak in the error handling path of the probe function
    ASoC: wcd9335: Disable irq on slave ports in the remove function
    iwlwifi: follow the new inclusive terminology
    iwlwifi: skip first element in the WTAS ACPI table
    ice: Only lock to update netdev dev_addr
    ath6kl: wmi: fix an error code in ath6kl_wmi_sync_point()
    atlantic: Fix driver resume flow.
    bcma: Fix memory leak for internally-handled cores
    brcmfmac: pcie: fix oops on failure to resume and reprobe
    ipv6: make exception cache less predictible
    ipv4: make exception cache less predictible
    net: sched: Fix qdisc_rate_table refcount leak when get tcf_block failed
    net: qualcomm: fix QCA7000 checksum handling
    octeontx2-af: Fix loop in free and unmap counter
    octeontx2-af: Fix static code analyzer reported issues
    octeontx2-af: Set proper errorcode for IPv4 checksum errors
    ipv4: fix endianness issue in inet_rtm_getroute_build_skb()
    ASoC: rt5682: Remove unused variable in rt5682_i2c_remove()
    iwlwifi Add support for ax201 in Samsung Galaxy Book Flex2 Alpha
    f2fs: guarantee to write dirty data when enabling checkpoint back
    time: Handle negative seconds correctly in timespec64_to_ns()
    io_uring: IORING_OP_WRITE needs hash_reg_file set
    bio: fix page leak bio_add_hw_page failure
    tty: Fix data race between tiocsti() and flush_to_ldisc()
    perf/x86/amd/ibs: Extend PERF_PMU_CAP_NO_EXCLUDE to IBS Op
    x86/resctrl: Fix a maybe-uninitialized build warning treated as error
    Revert "KVM: x86: mmu: Add guest physical address check in translate_gpa()"
    KVM: s390: index kvm->arch.idle_mask by vcpu_idx
    KVM: x86: Update vCPU's hv_clock before back to guest when tsc_offset is adjusted
    KVM: VMX: avoid running vmx_handle_exit_irqoff in case of emulation
    KVM: nVMX: Unconditionally clear nested.pi_pending on nested VM-Enter
    ARM: dts: at91: add pinctrl-{names, 0} for all gpios
    fuse: truncate pagecache on atomic_o_trunc
    fuse: flush extending writes
    IMA: remove -Wmissing-prototypes warning
    IMA: remove the dependency on CRYPTO_MD5
    fbmem: don't allow too huge resolutions
    backlight: pwm_bl: Improve bootloader/kernel device handover
    clk: kirkwood: Fix a clocking boot regression
    Linux 5.10.65

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Ie0b9306ba6ee4193de3200df7cdacaeba152b83e
@@ -29,7 +29,7 @@ recur_count
 cpoint_name
 	Where in the kernel to trigger the action. It can be
 	one of INT_HARDWARE_ENTRY, INT_HW_IRQ_EN, INT_TASKLET_ENTRY,
-	FS_DEVRW, MEM_SWAPOUT, TIMERADD, SCSI_DISPATCH_CMD,
+	FS_DEVRW, MEM_SWAPOUT, TIMERADD, SCSI_QUEUE_RQ,
	IDE_CORE_CP, or DIRECT

 cpoint_type
Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 10
-SUBLEVEL = 64
+SUBLEVEL = 65
 EXTRAVERSION =
 NAME = Dare mighty things

@@ -208,12 +208,12 @@
	};

	pinctrl_hvi3c3_default: hvi3c3_default {
-		function = "HVI3C3";
+		function = "I3C3";
		groups = "HVI3C3";
	};

	pinctrl_hvi3c4_default: hvi3c4_default {
-		function = "HVI3C4";
+		function = "I3C4";
		groups = "HVI3C4";
	};

@@ -92,6 +92,8 @@

	leds {
		compatible = "gpio-leds";
+		pinctrl-names = "default";
+		pinctrl-0 = <&pinctrl_gpio_leds>;
		status = "okay"; /* Conflict with pwm0. */

		red {
@@ -537,6 +539,10 @@
			 AT91_PIOA 19 AT91_PERIPH_A (AT91_PINCTRL_PULL_UP | AT91_PINCTRL_DRIVE_STRENGTH_HI) /* PA19 DAT2 periph A with pullup */
			 AT91_PIOA 20 AT91_PERIPH_A (AT91_PINCTRL_PULL_UP | AT91_PINCTRL_DRIVE_STRENGTH_HI)>; /* PA20 DAT3 periph A with pullup */
		};
+
+		pinctrl_sdmmc0_cd: sdmmc0_cd {
+			atmel,pins =
+				<AT91_PIOA 23 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
+		};
	};

	sdmmc1 {
@@ -569,6 +575,14 @@
			 AT91_PIOD 16 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
		};
	};
+
+	leds {
+		pinctrl_gpio_leds: gpio_leds {
+			atmel,pins = <AT91_PIOB 11 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
+				      AT91_PIOB 12 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
+				      AT91_PIOB 13 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
+		};
+	};
}; /* pinctrl */

&pwm0 {
@@ -580,7 +594,7 @@
&sdmmc0 {
	bus-width = <4>;
	pinctrl-names = "default";
-	pinctrl-0 = <&pinctrl_sdmmc0_default>;
+	pinctrl-0 = <&pinctrl_sdmmc0_default &pinctrl_sdmmc0_cd>;
	status = "okay";
	cd-gpios = <&pioA 23 GPIO_ACTIVE_LOW>;
	disable-wp;
@@ -57,6 +57,8 @@
			};

			spi0: spi@f0004000 {
+				pinctrl-names = "default";
+				pinctrl-0 = <&pinctrl_spi0_cs>;
				cs-gpios = <&pioD 13 0>, <0>, <0>, <&pioD 16 0>;
				status = "okay";
			};
@@ -169,6 +171,8 @@
			};

			spi1: spi@f8008000 {
+				pinctrl-names = "default";
+				pinctrl-0 = <&pinctrl_spi1_cs>;
				cs-gpios = <&pioC 25 0>;
				status = "okay";
			};
@@ -248,6 +252,26 @@
				<AT91_PIOE 3 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
				 AT91_PIOE 4 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
			};
+
+			pinctrl_gpio_leds: gpio_leds_default {
+				atmel,pins =
+					<AT91_PIOE 23 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
+					 AT91_PIOE 24 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
+			};
+
+			pinctrl_spi0_cs: spi0_cs_default {
+				atmel,pins =
+					<AT91_PIOD 13 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
+					 AT91_PIOD 16 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
+			};
+
+			pinctrl_spi1_cs: spi1_cs_default {
+				atmel,pins = <AT91_PIOC 25 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
+			};
+
+			pinctrl_vcc_mmc0_reg_gpio: vcc_mmc0_reg_gpio_default {
+				atmel,pins = <AT91_PIOE 2 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
+			};
		};
	};
};
@@ -339,6 +363,8 @@

	vcc_mmc0_reg: fixedregulator_mmc0 {
		compatible = "regulator-fixed";
+		pinctrl-names = "default";
+		pinctrl-0 = <&pinctrl_vcc_mmc0_reg_gpio>;
		gpio = <&pioE 2 GPIO_ACTIVE_LOW>;
		regulator-name = "mmc0-card-supply";
		regulator-min-microvolt = <3300000>;
@@ -362,6 +388,9 @@

	leds {
		compatible = "gpio-leds";
+		pinctrl-names = "default";
+		pinctrl-0 = <&pinctrl_gpio_leds>;
+		status = "okay";

		d2 {
			label = "d2";
@@ -90,6 +90,8 @@
			};

			spi1: spi@fc018000 {
+				pinctrl-names = "default";
+				pinctrl-0 = <&pinctrl_spi0_cs>;
				cs-gpios = <&pioB 21 0>;
				status = "okay";
			};
@@ -147,6 +149,19 @@
				atmel,pins =
					<AT91_PIOE 1 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_UP_DEGLITCH>;
			};
+			pinctrl_spi0_cs: spi0_cs_default {
+				atmel,pins =
+					<AT91_PIOB 21 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
+			};
+			pinctrl_gpio_leds: gpio_leds_default {
+				atmel,pins =
+					<AT91_PIOD 30 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
+					 AT91_PIOE 15 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
+			};
+			pinctrl_vcc_mmc1_reg: vcc_mmc1_reg {
+				atmel,pins =
+					<AT91_PIOE 4 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
+			};
		};
	};
};
@@ -252,6 +267,8 @@

	leds {
		compatible = "gpio-leds";
+		pinctrl-names = "default";
+		pinctrl-0 = <&pinctrl_gpio_leds>;
		status = "okay";

		d8 {
@@ -278,6 +295,8 @@

	vcc_mmc1_reg: fixedregulator_mmc1 {
		compatible = "regulator-fixed";
+		pinctrl-names = "default";
+		pinctrl-0 = <&pinctrl_vcc_mmc1_reg>;
		gpio = <&pioE 4 GPIO_ACTIVE_LOW>;
		regulator-name = "VDD MCI1";
		regulator-min-microvolt = <3300000>;
@@ -251,8 +251,13 @@
				     "pp2", "ppmmu2", "pp4", "ppmmu4",
				     "pp5", "ppmmu5", "pp6", "ppmmu6";
			resets = <&reset RESET_MALI>;
+
			clocks = <&clkc CLKID_CLK81>, <&clkc CLKID_MALI>;
			clock-names = "bus", "core";
+
+			assigned-clocks = <&clkc CLKID_MALI>;
+			assigned-clock-rates = <318750000>;
+
			operating-points-v2 = <&gpu_opp_table>;
		};
	};
@@ -153,7 +153,7 @@
		regulator-min-microvolt = <860000>;
		regulator-max-microvolt = <1140000>;

-		vin-supply = <&vcc_5v>;
+		pwm-supply = <&vcc_5v>;

		pwms = <&pwm_cd 0 1148 0>;
		pwm-dutycycle-range = <100 0>;
@@ -237,7 +237,7 @@
		regulator-min-microvolt = <860000>;
		regulator-max-microvolt = <1140000>;

-		vin-supply = <&vcc_5v>;
+		pwm-supply = <&vcc_5v>;

		pwms = <&pwm_cd 1 1148 0>;
		pwm-dutycycle-range = <100 0>;
@@ -39,6 +39,8 @@
		regulator-min-microvolt = <860000>;
		regulator-max-microvolt = <1140000>;

+		pwm-supply = <&vcc_5v>;
+
		pwms = <&pwm_cd 0 1148 0>;
		pwm-dutycycle-range = <100 0>;

@@ -84,7 +86,7 @@
		regulator-min-microvolt = <860000>;
		regulator-max-microvolt = <1140000>;

-		vin-supply = <&vcc_5v>;
+		pwm-supply = <&vcc_5v>;

		pwms = <&pwm_cd 1 1148 0>;
		pwm-dutycycle-range = <100 0>;
@@ -136,7 +136,7 @@
		regulator-min-microvolt = <860000>;
		regulator-max-microvolt = <1140000>;

-		vin-supply = <&p5v0>;
+		pwm-supply = <&p5v0>;

		pwms = <&pwm_cd 0 12218 0>;
		pwm-dutycycle-range = <91 0>;
@@ -168,7 +168,7 @@
		regulator-min-microvolt = <860000>;
		regulator-max-microvolt = <1140000>;

-		vin-supply = <&p5v0>;
+		pwm-supply = <&p5v0>;

		pwms = <&pwm_cd 1 12218 0>;
		pwm-dutycycle-range = <91 0>;
@@ -102,7 +102,7 @@
			#address-cells = <0>;
			interrupt-controller;
			reg = <0x11001000 0x1000>,
-			      <0x11002000 0x1000>,
+			      <0x11002000 0x2000>,
			      <0x11004000 0x2000>,
			      <0x11006000 0x2000>;
		};
@@ -134,6 +134,23 @@
	pinctrl-0 = <&pcie_reset_pins &pcie_clkreq_pins>;
	status = "okay";
	reset-gpios = <&gpiosb 3 GPIO_ACTIVE_LOW>;
+	/*
+	 * U-Boot port for Turris Mox has a bug which always expects that "ranges" DT property
+	 * contains exactly 2 ranges with 3 (child) address cells, 2 (parent) address cells and
+	 * 2 size cells and also expects that the second range starts at 16 MB offset. If these
+	 * conditions are not met then U-Boot crashes during loading kernel DTB file. PCIe address
+	 * space is 128 MB long, so the best split between MEM and IO is to use fixed 16 MB window
+	 * for IO and the rest 112 MB (64+32+16) for MEM, despite that maximal IO size is just 64 kB.
+	 * This bug is not present in U-Boot ports for other Armada 3700 devices and is fixed in
+	 * U-Boot version 2021.07. See relevant U-Boot commits (the last one contains fix):
+	 * https://source.denx.de/u-boot/u-boot/-/commit/cb2ddb291ee6fcbddd6d8f4ff49089dfe580f5d7
+	 * https://source.denx.de/u-boot/u-boot/-/commit/c64ac3b3185aeb3846297ad7391fc6df8ecd73bf
+	 * https://source.denx.de/u-boot/u-boot/-/commit/4a82fca8e330157081fc132a591ebd99ba02ee33
+	 */
+	#address-cells = <3>;
+	#size-cells = <2>;
+	ranges = <0x81000000 0 0xe8000000 0 0xe8000000 0 0x01000000   /* Port 0 IO */
+		  0x82000000 0 0xe9000000 0 0xe9000000 0 0x07000000>; /* Port 0 MEM */
	/* enabled by U-Boot if PCIe module is present */
	status = "disabled";

@@ -487,8 +487,15 @@
			#interrupt-cells = <1>;
			msi-parent = <&pcie0>;
			msi-controller;
-			ranges = <0x82000000 0 0xe8000000 0 0xe8000000 0 0x1000000 /* Port 0 MEM */
-				  0x81000000 0 0xe9000000 0 0xe9000000 0 0x10000>; /* Port 0 IO*/
+			/*
+			 * The 128 MiB address range [0xe8000000-0xf0000000] is
+			 * dedicated for PCIe and can be assigned to 8 windows
+			 * with size a power of two. Use one 64 KiB window for
+			 * IO at the end and the remaining seven windows
+			 * (totaling 127 MiB) for MEM.
+			 */
+			ranges = <0x82000000 0 0xe8000000 0 0xe8000000 0 0x07f00000   /* Port 0 MEM */
+				  0x81000000 0 0xefff0000 0 0xefff0000 0 0x00010000>; /* Port 0 IO */
			interrupt-map-mask = <0 0 0 7>;
			interrupt-map = <0 0 0 1 &pcie_intc 0>,
					<0 0 0 2 &pcie_intc 1>,
@@ -55,7 +55,8 @@
	pinctrl-0 = <&avb_pins>;
	pinctrl-names = "default";
	phy-handle = <&phy0>;
-	phy-mode = "rgmii-id";
+	rx-internal-delay-ps = <1800>;
+	tx-internal-delay-ps = <2000>;
	status = "okay";

	phy0: ethernet-phy@0 {
@@ -19,7 +19,8 @@
	pinctrl-0 = <&avb_pins>;
	pinctrl-names = "default";
	phy-handle = <&phy0>;
-	phy-mode = "rgmii-txid";
+	tx-internal-delay-ps = <2000>;
+	rx-internal-delay-ps = <1800>;
	status = "okay";

	phy0: ethernet-phy@0 {
@@ -1131,6 +1131,8 @@
			power-domains = <&sysc R8A774A1_PD_ALWAYS_ON>;
			resets = <&cpg 812>;
			phy-mode = "rgmii";
+			rx-internal-delay-ps = <0>;
+			tx-internal-delay-ps = <0>;
			iommus = <&ipmmu_ds0 16>;
			#address-cells = <1>;
			#size-cells = <0>;
@@ -1004,6 +1004,8 @@
			power-domains = <&sysc R8A774B1_PD_ALWAYS_ON>;
			resets = <&cpg 812>;
			phy-mode = "rgmii";
+			rx-internal-delay-ps = <0>;
+			tx-internal-delay-ps = <0>;
			iommus = <&ipmmu_ds0 16>;
			#address-cells = <1>;
			#size-cells = <0>;
@@ -960,6 +960,7 @@
			power-domains = <&sysc R8A774C0_PD_ALWAYS_ON>;
			resets = <&cpg 812>;
			phy-mode = "rgmii";
+			rx-internal-delay-ps = <0>;
			iommus = <&ipmmu_ds0 16>;
			#address-cells = <1>;
			#size-cells = <0>;
@@ -1233,6 +1233,8 @@
			power-domains = <&sysc R8A774E1_PD_ALWAYS_ON>;
			resets = <&cpg 812>;
			phy-mode = "rgmii";
+			rx-internal-delay-ps = <0>;
+			tx-internal-delay-ps = <0>;
			iommus = <&ipmmu_ds0 16>;
			#address-cells = <1>;
			#size-cells = <0>;
@@ -277,10 +277,6 @@
		interrupt-parent = <&gpio1>;
		interrupts = <28 IRQ_TYPE_LEVEL_LOW>;

-		/* Depends on LVDS */
-		max-clock = <135000000>;
-		min-vrefresh = <50>;
-
		adi,input-depth = <8>;
		adi,input-colorspace = "rgb";
		adi,input-clock = "1x";
@@ -25,6 +25,7 @@ config COLDFIRE
	bool "Coldfire CPU family support"
	select ARCH_HAVE_CUSTOM_GPIO_H
	select CPU_HAS_NO_BITFIELDS
+	select CPU_HAS_NO_CAS
	select CPU_HAS_NO_MULDIV64
	select GENERIC_CSUM
	select GPIOLIB

@@ -38,6 +39,7 @@ config M68000
	bool "MC68000"
	depends on !MMU
	select CPU_HAS_NO_BITFIELDS
+	select CPU_HAS_NO_CAS
	select CPU_HAS_NO_MULDIV64
	select CPU_HAS_NO_UNALIGNED
	select GENERIC_CSUM

@@ -53,6 +55,7 @@ config M68000
config MCPU32
	bool
	select CPU_HAS_NO_BITFIELDS
+	select CPU_HAS_NO_CAS
	select CPU_HAS_NO_UNALIGNED
	select CPU_NO_EFFICIENT_FFS
	help

@@ -357,7 +360,7 @@ config ADVANCED

config RMW_INSNS
	bool "Use read-modify-write instructions"
-	depends on ADVANCED
+	depends on ADVANCED && !CPU_HAS_NO_CAS
	help
	  This allows to use certain instructions that work with indivisible
	  read-modify-write bus cycles. While this is faster than the

@@ -411,6 +414,9 @@ config NODES_SHIFT
config CPU_HAS_NO_BITFIELDS
	bool

+config CPU_HAS_NO_CAS
+	bool
+
config CPU_HAS_NO_MULDIV64
	bool

@@ -254,8 +254,8 @@ static void __exit nfeth_cleanup(void)
 
 	for (i = 0; i < MAX_UNIT; i++) {
 		if (nfeth_dev[i]) {
-			unregister_netdev(nfeth_dev[0]);
-			free_netdev(nfeth_dev[0]);
+			unregister_netdev(nfeth_dev[i]);
+			free_netdev(nfeth_dev[i]);
 		}
 	}
 	free_irq(nfEtherIRQ, nfeth_interrupt);
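The nfeth fix above replaces the hard-coded index 0 with the loop variable: before it, every populated slot was torn down through slot 0, double-freeing the first device and leaking the rest. A minimal userspace sketch of the same pattern (hypothetical names, plain `malloc`/`free` standing in for the netdev helpers):

```c
#include <assert.h>
#include <stdlib.h>

#define MAX_UNIT 8

static void *unit[MAX_UNIT];

static int setup_units(int n)
{
	int i;

	for (i = 0; i < n && i < MAX_UNIT; i++)
		unit[i] = malloc(16);
	return i;
}

/* Fixed variant: release each populated slot exactly once.
 * The buggy version freed unit[0] on every iteration instead. */
static int cleanup_units(void)
{
	int i, freed = 0;

	for (i = 0; i < MAX_UNIT; i++) {
		if (unit[i]) {
			free(unit[i]);   /* was: free(unit[0]) */
			unit[i] = NULL;
			freed++;
		}
	}
	return freed;
}
```

With three populated slots, the buggy variant would call `free(unit[0])` three times; the fixed loop frees each slot once and is idempotent on a second pass.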
@@ -957,6 +957,7 @@ struct kvm_arch{
 	atomic64_t cmma_dirty_pages;
 	/* subset of available cpu features enabled by user space */
 	DECLARE_BITMAP(cpu_feat, KVM_S390_VM_CPU_FEAT_NR_BITS);
+	/* indexed by vcpu_idx */
 	DECLARE_BITMAP(idle_mask, KVM_MAX_VCPUS);
 	struct kvm_s390_gisa_interrupt gisa_int;
 	struct kvm_s390_pv pv;
@@ -24,6 +24,7 @@
 #include <linux/export.h>
 #include <linux/init.h>
 #include <linux/fs.h>
+#include <linux/minmax.h>
 #include <linux/debugfs.h>
 
 #include <asm/debug.h>

@@ -92,6 +93,8 @@ static int debug_hex_ascii_format_fn(debug_info_t *id, struct debug_view *view,
 				     char *out_buf, const char *in_buf);
 static int debug_sprintf_format_fn(debug_info_t *id, struct debug_view *view,
 				   char *out_buf, debug_sprintf_entry_t *curr_event);
+static void debug_areas_swap(debug_info_t *a, debug_info_t *b);
+static void debug_events_append(debug_info_t *dest, debug_info_t *src);
 
 /* globals */
 
@@ -311,24 +314,6 @@ static debug_info_t *debug_info_create(const char *name, int pages_per_area,
 		goto out;
 
 	rc->mode = mode & ~S_IFMT;
-
-	/* create root directory */
-	rc->debugfs_root_entry = debugfs_create_dir(rc->name,
-						    debug_debugfs_root_entry);
-
-	/* append new element to linked list */
-	if (!debug_area_first) {
-		/* first element in list */
-		debug_area_first = rc;
-		rc->prev = NULL;
-	} else {
-		/* append element to end of list */
-		debug_area_last->next = rc;
-		rc->prev = debug_area_last;
-	}
-	debug_area_last = rc;
-	rc->next = NULL;
-
 	refcount_set(&rc->ref_count, 1);
 out:
 	return rc;

@@ -388,27 +373,10 @@ static void debug_info_get(debug_info_t *db_info)
  */
 static void debug_info_put(debug_info_t *db_info)
 {
-	int i;
-
 	if (!db_info)
 		return;
-	if (refcount_dec_and_test(&db_info->ref_count)) {
-		for (i = 0; i < DEBUG_MAX_VIEWS; i++) {
-			if (!db_info->views[i])
-				continue;
-			debugfs_remove(db_info->debugfs_entries[i]);
-		}
-		debugfs_remove(db_info->debugfs_root_entry);
-		if (db_info == debug_area_first)
-			debug_area_first = db_info->next;
-		if (db_info == debug_area_last)
-			debug_area_last = db_info->prev;
-		if (db_info->prev)
-			db_info->prev->next = db_info->next;
-		if (db_info->next)
-			db_info->next->prev = db_info->prev;
+	if (refcount_dec_and_test(&db_info->ref_count))
 		debug_info_free(db_info);
-	}
 }
 
 /*

@@ -632,6 +600,31 @@ static int debug_close(struct inode *inode, struct file *file)
 	return 0; /* success */
 }
 
+/* Create debugfs entries and add to internal list. */
+static void _debug_register(debug_info_t *id)
+{
+	/* create root directory */
+	id->debugfs_root_entry = debugfs_create_dir(id->name,
+						    debug_debugfs_root_entry);
+
+	/* append new element to linked list */
+	if (!debug_area_first) {
+		/* first element in list */
+		debug_area_first = id;
+		id->prev = NULL;
+	} else {
+		/* append element to end of list */
+		debug_area_last->next = id;
+		id->prev = debug_area_last;
+	}
+	debug_area_last = id;
+	id->next = NULL;
+
+	debug_register_view(id, &debug_level_view);
+	debug_register_view(id, &debug_flush_view);
+	debug_register_view(id, &debug_pages_view);
+}
+
 /**
  * debug_register_mode() - creates and initializes debug area.
  *

@@ -661,19 +654,16 @@ debug_info_t *debug_register_mode(const char *name, int pages_per_area,
 	if ((uid != 0) || (gid != 0))
 		pr_warn("Root becomes the owner of all s390dbf files in sysfs\n");
 	BUG_ON(!initialized);
-	mutex_lock(&debug_mutex);
 
 	/* create new debug_info */
 	rc = debug_info_create(name, pages_per_area, nr_areas, buf_size, mode);
-	if (!rc)
-		goto out;
-	debug_register_view(rc, &debug_level_view);
-	debug_register_view(rc, &debug_flush_view);
-	debug_register_view(rc, &debug_pages_view);
-out:
-	if (!rc)
-		pr_err("Registering debug feature %s failed\n", name);
-	mutex_unlock(&debug_mutex);
+	if (rc) {
+		mutex_lock(&debug_mutex);
+		_debug_register(rc);
+		mutex_unlock(&debug_mutex);
+	} else {
+		pr_err("Registering debug feature %s failed\n", name);
+	}
 	return rc;
 }
 EXPORT_SYMBOL(debug_register_mode);

@@ -702,6 +692,27 @@ debug_info_t *debug_register(const char *name, int pages_per_area,
 }
 EXPORT_SYMBOL(debug_register);
 
+/* Remove debugfs entries and remove from internal list. */
+static void _debug_unregister(debug_info_t *id)
+{
+	int i;
+
+	for (i = 0; i < DEBUG_MAX_VIEWS; i++) {
+		if (!id->views[i])
+			continue;
+		debugfs_remove(id->debugfs_entries[i]);
+	}
+	debugfs_remove(id->debugfs_root_entry);
+	if (id == debug_area_first)
+		debug_area_first = id->next;
+	if (id == debug_area_last)
+		debug_area_last = id->prev;
+	if (id->prev)
+		id->prev->next = id->next;
+	if (id->next)
+		id->next->prev = id->prev;
+}
+
 /**
  * debug_unregister() - give back debug area.
  *

@@ -715,8 +726,10 @@ void debug_unregister(debug_info_t *id)
 	if (!id)
 		return;
 	mutex_lock(&debug_mutex);
-	debug_info_put(id);
+	_debug_unregister(id);
 	mutex_unlock(&debug_mutex);
+
+	debug_info_put(id);
 }
 EXPORT_SYMBOL(debug_unregister);
 
@@ -726,35 +739,28 @@ EXPORT_SYMBOL(debug_unregister);
  */
 static int debug_set_size(debug_info_t *id, int nr_areas, int pages_per_area)
 {
-	debug_entry_t ***new_areas;
+	debug_info_t *new_id;
 	unsigned long flags;
-	int rc = 0;
 
 	if (!id || (nr_areas <= 0) || (pages_per_area < 0))
 		return -EINVAL;
 
-	if (pages_per_area > 0) {
-		new_areas = debug_areas_alloc(pages_per_area, nr_areas);
-		if (!new_areas) {
-			pr_info("Allocating memory for %i pages failed\n",
-				pages_per_area);
-			rc = -ENOMEM;
-			goto out;
-		}
-	} else {
-		new_areas = NULL;
+	new_id = debug_info_alloc("", pages_per_area, nr_areas, id->buf_size,
+				  id->level, ALL_AREAS);
+	if (!new_id) {
+		pr_info("Allocating memory for %i pages failed\n",
+			pages_per_area);
+		return -ENOMEM;
 	}
 
 	spin_lock_irqsave(&id->lock, flags);
-	debug_areas_free(id);
-	id->areas = new_areas;
-	id->nr_areas = nr_areas;
-	id->pages_per_area = pages_per_area;
-	id->active_area = 0;
-	memset(id->active_entries, 0, sizeof(int)*id->nr_areas);
-	memset(id->active_pages, 0, sizeof(int)*id->nr_areas);
+	debug_events_append(new_id, id);
+	debug_areas_swap(new_id, id);
+	debug_info_free(new_id);
 	spin_unlock_irqrestore(&id->lock, flags);
 	pr_info("%s: set new size (%i pages)\n", id->name, pages_per_area);
-out:
-	return rc;
+
+	return 0;
 }
 
 /**

@@ -821,6 +827,42 @@ static inline debug_entry_t *get_active_entry(debug_info_t *id)
 				id->active_entries[id->active_area]);
 }
 
+/* Swap debug areas of a and b. */
+static void debug_areas_swap(debug_info_t *a, debug_info_t *b)
+{
+	swap(a->nr_areas, b->nr_areas);
+	swap(a->pages_per_area, b->pages_per_area);
+	swap(a->areas, b->areas);
+	swap(a->active_area, b->active_area);
+	swap(a->active_pages, b->active_pages);
+	swap(a->active_entries, b->active_entries);
+}
+
+/* Append all debug events in active area from source to destination log. */
+static void debug_events_append(debug_info_t *dest, debug_info_t *src)
+{
+	debug_entry_t *from, *to, *last;
+
+	if (!src->areas || !dest->areas)
+		return;
+
+	/* Loop over all entries in src, starting with oldest. */
+	from = get_active_entry(src);
+	last = from;
+	do {
+		if (from->clock != 0LL) {
+			to = get_active_entry(dest);
+			memset(to, 0, dest->entry_size);
+			memcpy(to, from, min(src->entry_size,
+					     dest->entry_size));
+			proceed_active_entry(dest);
+		}
+		proceed_active_entry(src);
+		from = get_active_entry(src);
+	} while (from != last);
+}
+
 /*
  * debug_finish_entry:
  * - set timestamp, caller address, cpu number etc.
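The debug_set_size() rework above builds a complete scratch debug_info, copies the existing events into it oldest-first, and swaps the area pointers under the lock, so recorded entries now survive a resize instead of being freed. A standalone sketch of that copy-then-swap idea on a plain ring buffer (hypothetical types, not the s390dbf API):

```c
#include <assert.h>
#include <stdlib.h>

struct ring {
	int *ev;    /* event slots; 0 means "empty" */
	int cap;    /* number of slots */
	int next;   /* next write position */
};

static struct ring *ring_alloc(int cap)
{
	struct ring *r = calloc(1, sizeof(*r));

	r->ev = calloc(cap, sizeof(int));
	r->cap = cap;
	return r;
}

static void ring_log(struct ring *r, int v)
{
	r->ev[r->next] = v;
	r->next = (r->next + 1) % r->cap;
}

/* Copy all recorded events, oldest first, into a scratch ring of the
 * new capacity, then swap storage so the caller keeps its handle. */
static void ring_resize(struct ring *r, int new_cap)
{
	struct ring *n = ring_alloc(new_cap);
	int i;

	for (i = 0; i < r->cap; i++) {
		int v = r->ev[(r->next + i) % r->cap];

		if (v)
			ring_log(n, v);
	}
	/* swap contents; n now owns the old storage and is freed */
	{ struct ring tmp = *r; *r = *n; *n = tmp; }
	free(n->ev);
	free(n);
}
```

As in the patch, the expensive allocation happens outside any lock; only the pointer swap would need to run under the log's spinlock.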
@@ -419,13 +419,13 @@ static unsigned long deliverable_irqs(struct kvm_vcpu *vcpu)
 static void __set_cpu_idle(struct kvm_vcpu *vcpu)
 {
 	kvm_s390_set_cpuflags(vcpu, CPUSTAT_WAIT);
-	set_bit(vcpu->vcpu_id, vcpu->kvm->arch.idle_mask);
+	set_bit(kvm_vcpu_get_idx(vcpu), vcpu->kvm->arch.idle_mask);
 }
 
 static void __unset_cpu_idle(struct kvm_vcpu *vcpu)
 {
 	kvm_s390_clear_cpuflags(vcpu, CPUSTAT_WAIT);
-	clear_bit(vcpu->vcpu_id, vcpu->kvm->arch.idle_mask);
+	clear_bit(kvm_vcpu_get_idx(vcpu), vcpu->kvm->arch.idle_mask);
 }
 
 static void __reset_intercept_indicators(struct kvm_vcpu *vcpu)

@@ -3050,18 +3050,18 @@ int kvm_s390_get_irq_state(struct kvm_vcpu *vcpu, __u8 __user *buf, int len)
 
 static void __airqs_kick_single_vcpu(struct kvm *kvm, u8 deliverable_mask)
 {
-	int vcpu_id, online_vcpus = atomic_read(&kvm->online_vcpus);
+	int vcpu_idx, online_vcpus = atomic_read(&kvm->online_vcpus);
 	struct kvm_s390_gisa_interrupt *gi = &kvm->arch.gisa_int;
 	struct kvm_vcpu *vcpu;
 
-	for_each_set_bit(vcpu_id, kvm->arch.idle_mask, online_vcpus) {
-		vcpu = kvm_get_vcpu(kvm, vcpu_id);
+	for_each_set_bit(vcpu_idx, kvm->arch.idle_mask, online_vcpus) {
+		vcpu = kvm_get_vcpu(kvm, vcpu_idx);
 		if (psw_ioint_disabled(vcpu))
 			continue;
 		deliverable_mask &= (u8)(vcpu->arch.sie_block->gcr[6] >> 24);
 		if (deliverable_mask) {
 			/* lately kicked but not yet running */
-			if (test_and_set_bit(vcpu_id, gi->kicked_mask))
+			if (test_and_set_bit(vcpu_idx, gi->kicked_mask))
 				return;
 			kvm_s390_vcpu_wakeup(vcpu);
 			return;

@@ -4015,7 +4015,7 @@ static int vcpu_pre_run(struct kvm_vcpu *vcpu)
 		kvm_s390_patch_guest_per_regs(vcpu);
 	}
 
-	clear_bit(vcpu->vcpu_id, vcpu->kvm->arch.gisa_int.kicked_mask);
+	clear_bit(kvm_vcpu_get_idx(vcpu), vcpu->kvm->arch.gisa_int.kicked_mask);
 
 	vcpu->arch.sie_block->icptcode = 0;
 	cpuflags = atomic_read(&vcpu->arch.sie_block->cpuflags);

@@ -79,7 +79,7 @@ static inline int is_vcpu_stopped(struct kvm_vcpu *vcpu)
 
 static inline int is_vcpu_idle(struct kvm_vcpu *vcpu)
 {
-	return test_bit(vcpu->vcpu_id, vcpu->kvm->arch.idle_mask);
+	return test_bit(kvm_vcpu_get_idx(vcpu), vcpu->kvm->arch.idle_mask);
 }
 
 static inline int kvm_is_ucontrol(struct kvm *kvm)
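The idle_mask/kicked_mask changes above matter because the bitmaps are sized KVM_MAX_VCPUS, while user space may assign sparse or large vcpu_ids; only the dense creation index (what kvm_vcpu_get_idx() returns) is guaranteed to stay below the bitmap size. A small sketch of the distinction (hypothetical standalone structures, one byte per bit for simplicity):

```c
#include <assert.h>

#define MAX_VCPUS 8

struct vcpu {
	int id;   /* chosen by user space; may be sparse and >= MAX_VCPUS */
	int idx;  /* dense creation index; always < MAX_VCPUS */
};

static unsigned char idle_mask[MAX_VCPUS];

/* Index the mask by the dense idx. Indexing by id (the old code's
 * behavior) would write out of bounds for id >= MAX_VCPUS. */
static int set_cpu_idle(const struct vcpu *v)
{
	if (v->idx < 0 || v->idx >= MAX_VCPUS)
		return -1;
	idle_mask[v->idx] = 1;  /* was effectively: idle_mask[v->id] */
	return 0;
}
```

The same reasoning applies to every consumer: set, clear, and test must all agree on the index space, which is why the patch converts __set_cpu_idle, __unset_cpu_idle, is_vcpu_idle, and the kick path together.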
@@ -108,6 +108,9 @@ static void __init kasan_early_vmemmap_populate(unsigned long address,
 		sgt_prot &= ~_SEGMENT_ENTRY_NOEXEC;
 	}
 
+	/*
+	 * The first 1MB of 1:1 mapping is mapped with 4KB pages
+	 */
 	while (address < end) {
 		pg_dir = pgd_offset_k(address);
 		if (pgd_none(*pg_dir)) {

@@ -165,17 +168,13 @@ static void __init kasan_early_vmemmap_populate(unsigned long address,
 
 		pm_dir = pmd_offset(pu_dir, address);
 		if (pmd_none(*pm_dir)) {
-			if (mode == POPULATE_ZERO_SHADOW &&
-			    IS_ALIGNED(address, PMD_SIZE) &&
+			if (IS_ALIGNED(address, PMD_SIZE) &&
 			    end - address >= PMD_SIZE) {
-				pmd_populate(&init_mm, pm_dir,
-					     kasan_early_shadow_pte);
+				if (mode == POPULATE_ZERO_SHADOW) {
+					pmd_populate(&init_mm, pm_dir, kasan_early_shadow_pte);
 					address = (address + PMD_SIZE) & PMD_MASK;
 					continue;
-			}
-			/* the first megabyte of 1:1 is mapped with 4k pages */
-			if (has_edat && address && end - address >= PMD_SIZE &&
-			    mode != POPULATE_ZERO_SHADOW) {
+				} else if (has_edat && address) {
 					void *page;
 
 					if (mode == POPULATE_ONE2ONE) {

@@ -188,7 +187,7 @@ static void __init kasan_early_vmemmap_populate(unsigned long address,
 					address = (address + PMD_SIZE) & PMD_MASK;
 					continue;
 				}
+			}
 			pt_dir = kasan_early_pte_alloc();
 			pmd_populate(&init_mm, pm_dir, pt_dir);
 		} else if (pmd_large(*pm_dir)) {
@@ -659,9 +659,10 @@ int zpci_enable_device(struct zpci_dev *zdev)
 {
 	int rc;
 
-	rc = clp_enable_fh(zdev, ZPCI_NR_DMA_SPACES);
-	if (rc)
+	if (clp_enable_fh(zdev, ZPCI_NR_DMA_SPACES)) {
+		rc = -EIO;
 		goto out;
+	}
 
 	rc = zpci_dma_init_device(zdev);
 	if (rc)

@@ -684,7 +685,7 @@ int zpci_disable_device(struct zpci_dev *zdev)
 	 * The zPCI function may already be disabled by the platform, this is
 	 * detected in clp_disable_fh() which becomes a no-op.
 	 */
-	return clp_disable_fh(zdev);
+	return clp_disable_fh(zdev) ? -EIO : 0;
 }
 EXPORT_SYMBOL_GPL(zpci_disable_device);

@@ -213,15 +213,19 @@ out:
 }
 
 static int clp_refresh_fh(u32 fid);
-/*
- * Enable/Disable a given PCI function and update its function handle if
- * necessary
+/**
+ * clp_set_pci_fn() - Execute a command on a PCI function
+ * @zdev: Function that will be affected
+ * @nr_dma_as: DMA address space number
+ * @command: The command code to execute
+ *
+ * Returns: 0 on success, < 0 for Linux errors (e.g. -ENOMEM), and
+ * > 0 for non-success platform responses
  */
 static int clp_set_pci_fn(struct zpci_dev *zdev, u8 nr_dma_as, u8 command)
 {
 	struct clp_req_rsp_set_pci *rrb;
 	int rc, retries = 100;
-	u32 fid = zdev->fid;
 
 	rrb = clp_alloc_block(GFP_KERNEL);
 	if (!rrb)

@@ -245,17 +249,16 @@ static int clp_set_pci_fn(struct zpci_dev *zdev, u8 nr_dma_as, u8 command)
 		}
 	} while (rrb->response.hdr.rsp == CLP_RC_SETPCIFN_BUSY);
 
-	if (rc || rrb->response.hdr.rsp != CLP_RC_OK) {
-		zpci_err("Set PCI FN:\n");
-		zpci_err_clp(rrb->response.hdr.rsp, rc);
-	}
-
 	if (!rc && rrb->response.hdr.rsp == CLP_RC_OK) {
 		zdev->fh = rrb->response.fh;
-	} else if (!rc && rrb->response.hdr.rsp == CLP_RC_SETPCIFN_ALRDY &&
-		   rrb->response.fh == 0) {
+	} else if (!rc && rrb->response.hdr.rsp == CLP_RC_SETPCIFN_ALRDY) {
 		/* Function is already in desired state - update handle */
-		rc = clp_refresh_fh(fid);
+		rc = clp_refresh_fh(zdev->fid);
+	} else {
+		zpci_err("Set PCI FN:\n");
+		zpci_err_clp(rrb->response.hdr.rsp, rc);
+		if (!rc)
+			rc = rrb->response.hdr.rsp;
 	}
 	clp_free_block(rrb);
 	return rc;

@@ -301,17 +304,13 @@ int clp_enable_fh(struct zpci_dev *zdev, u8 nr_dma_as)
 
 	rc = clp_set_pci_fn(zdev, nr_dma_as, CLP_SET_ENABLE_PCI_FN);
 	zpci_dbg(3, "ena fid:%x, fh:%x, rc:%d\n", zdev->fid, zdev->fh, rc);
-	if (rc)
-		goto out;
-
-	if (zpci_use_mio(zdev)) {
+	if (!rc && zpci_use_mio(zdev)) {
 		rc = clp_set_pci_fn(zdev, nr_dma_as, CLP_SET_ENABLE_MIO);
 		zpci_dbg(3, "ena mio fid:%x, fh:%x, rc:%d\n",
 				zdev->fid, zdev->fh, rc);
 		if (rc)
 			clp_disable_fh(zdev);
 	}
-out:
 	return rc;
 }
@@ -571,6 +571,7 @@ static struct perf_ibs perf_ibs_op = {
 		.start		= perf_ibs_start,
 		.stop		= perf_ibs_stop,
 		.read		= perf_ibs_read,
+		.capabilities	= PERF_PMU_CAP_NO_EXCLUDE,
 	},
 	.msr			= MSR_AMD64_IBSOPCTL,
 	.config_mask		= IBS_OP_CONFIG_MASK,
@@ -259,6 +259,7 @@ enum mcp_flags {
 	MCP_TIMESTAMP	= BIT(0),	/* log time stamp */
 	MCP_UC		= BIT(1),	/* log uncorrected errors */
 	MCP_DONTLOG	= BIT(2),	/* only clear, don't log */
+	MCP_QUEUE_LOG	= BIT(3),	/* only queue to genpool */
 };
 bool machine_check_poll(enum mcp_flags flags, mce_banks_t *b);

@@ -817,6 +817,9 @@ log_it:
 	if (mca_cfg.dont_log_ce && !mce_usable_address(&m))
 		goto clear_it;
 
+	if (flags & MCP_QUEUE_LOG)
+		mce_gen_pool_add(&m);
+	else
 		mce_log(&m);
 
 clear_it:

@@ -1628,10 +1631,12 @@ static void __mcheck_cpu_init_generic(void)
 		m_fl = MCP_DONTLOG;
 
 	/*
-	 * Log the machine checks left over from the previous reset.
+	 * Log the machine checks left over from the previous reset. Log them
+	 * only, do not start processing them. That will happen in mcheck_late_init()
+	 * when all consumers have been registered on the notifier chain.
 	 */
 	bitmap_fill(all_banks, MAX_NR_BANKS);
-	machine_check_poll(MCP_UC | m_fl, &all_banks);
+	machine_check_poll(MCP_UC | MCP_QUEUE_LOG | m_fl, &all_banks);
 
 	cr4_set_bits(X86_CR4_MCE);
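The MCP_QUEUE_LOG flag above makes early boot-time errors go into the genpool only; they are processed later, once the notifier chain has its consumers. The generic pattern, queue now and drain once subscribers exist, can be sketched as (hypothetical names, not the mce API):

```c
#include <assert.h>

#define QCAP 16

static int queue[QCAP];
static int qlen;
static int consumers_ready;
static int delivered;

static void deliver(int ev)
{
	(void)ev;
	delivered++;   /* stands in for running the notifier chain */
}

/* Early boot: consumers are not registered yet, so only queue. */
static void log_event(int ev)
{
	if (!consumers_ready) {
		if (qlen < QCAP)
			queue[qlen++] = ev;
		return;
	}
	deliver(ev);
}

/* Late init: register consumers, then drain the backlog in order. */
static void consumers_register(void)
{
	int i;

	consumers_ready = 1;
	for (i = 0; i < qlen; i++)
		deliver(queue[i]);
	qlen = 0;
}
```

Without the deferral, events recorded before registration would be delivered to an empty chain and effectively lost, which is the situation the patch avoids.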
@@ -241,6 +241,12 @@ static u64 __mon_event_count(u32 rmid, struct rmid_read *rr)
 	case QOS_L3_MBM_LOCAL_EVENT_ID:
 		m = &rr->d->mbm_local[rmid];
 		break;
+	default:
+		/*
+		 * Code would never reach here because an invalid
+		 * event id would fail the __rmid_read.
+		 */
+		return RMID_VAL_ERROR;
 	}
 
 	if (rr->first) {
@@ -267,12 +267,6 @@ static bool check_mmio_spte(struct kvm_vcpu *vcpu, u64 spte)
 static gpa_t translate_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access,
 			   struct x86_exception *exception)
 {
-	/* Check if guest physical address doesn't exceed guest maximum */
-	if (kvm_vcpu_is_illegal_gpa(vcpu, gpa)) {
-		exception->error_code |= PFERR_RSVD_MASK;
-		return UNMAPPED_GVA;
-	}
-
 	return gpa;
 }
 
@@ -2243,12 +2243,11 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
 			  ~PIN_BASED_VMX_PREEMPTION_TIMER);
 
 	/* Posted interrupts setting is only taken from vmcs12. */
-	if (nested_cpu_has_posted_intr(vmcs12)) {
-		vmx->nested.posted_intr_nv = vmcs12->posted_intr_nv;
 	vmx->nested.pi_pending = false;
-	} else {
+	if (nested_cpu_has_posted_intr(vmcs12))
+		vmx->nested.posted_intr_nv = vmcs12->posted_intr_nv;
+	else
 		exec_control &= ~PIN_BASED_POSTED_INTR;
-	}
 	pin_controls_set(vmx, exec_control);
 
 	/*
@@ -6396,6 +6396,9 @@ static void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
+	if (vmx->emulation_required)
+		return;
+
 	if (vmx->exit_reason.basic == EXIT_REASON_EXTERNAL_INTERRUPT)
 		handle_external_interrupt_irqoff(vcpu);
 	else if (vmx->exit_reason.basic == EXIT_REASON_EXCEPTION_NMI)
@@ -3116,6 +3116,10 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (!msr_info->host_initiated) {
 			s64 adj = data - vcpu->arch.ia32_tsc_adjust_msr;
 			adjust_tsc_offset_guest(vcpu, adj);
+			/* Before back to guest, tsc_timestamp must be adjusted
+			 * as well, otherwise guest's percpu pvclock time could jump.
+			 */
+			kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
 		}
 		vcpu->arch.ia32_tsc_adjust_msr = data;
 	}
@@ -2251,6 +2251,9 @@ static int bfq_request_merge(struct request_queue *q, struct request **req,
 	__rq = bfq_find_rq_fmerge(bfqd, bio, q);
 	if (__rq && elv_bio_merge_ok(__rq, bio)) {
 		*req = __rq;
+
+		if (blk_discard_mergable(__rq))
+			return ELEVATOR_DISCARD_MERGE;
 		return ELEVATOR_FRONT_MERGE;
 	}
 

 block/bio.c | 13
@@ -978,6 +978,14 @@ static int __bio_iov_bvec_add_pages(struct bio *bio, struct iov_iter *iter)
 	return 0;
 }
 
+static void bio_put_pages(struct page **pages, size_t size, size_t off)
+{
+	size_t i, nr = DIV_ROUND_UP(size + (off & ~PAGE_MASK), PAGE_SIZE);
+
+	for (i = 0; i < nr; i++)
+		put_page(pages[i]);
+}
+
 #define PAGE_PTRS_PER_BVEC	(sizeof(struct bio_vec) / sizeof(struct page *))
 
 /**

@@ -1022,8 +1030,10 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 		if (same_page)
 			put_page(page);
 	} else {
-		if (WARN_ON_ONCE(bio_full(bio, len)))
+		if (WARN_ON_ONCE(bio_full(bio, len))) {
+			bio_put_pages(pages + i, left, offset);
 			return -EINVAL;
+		}
 		__bio_add_page(bio, page, len, offset);
 	}
 	offset = 0;

@@ -1068,6 +1078,7 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter)
 		len = min_t(size_t, PAGE_SIZE - offset, left);
 		if (bio_add_hw_page(q, bio, page, len, offset,
 				max_append_sectors, &same_page) != len) {
+			bio_put_pages(pages + i, left, offset);
 			ret = -EINVAL;
 			break;
 		}
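bio_put_pages() above must release every page still pinned by the remaining window described by `left` and `offset`. Since `off & ~PAGE_MASK` is the offset within the first page, the count is DIV_ROUND_UP(size + in_page_offset, PAGE_SIZE). The arithmetic can be checked in isolation (userspace sketch; 4 KiB pages assumed):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Number of pages spanned by `size` bytes starting at in-page offset
 * (off & ~PAGE_MASK) -- the same count bio_put_pages releases. */
static size_t pages_spanned(size_t size, size_t off)
{
	return DIV_ROUND_UP(size + (off & ~PAGE_MASK), PAGE_SIZE);
}
```

The in-page offset matters: one extra byte past a page boundary pulls a whole additional page into the count, which is exactly why the helper cannot simply divide `size` by PAGE_SIZE.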
@@ -348,7 +348,7 @@ int blk_crypto_init_key(struct blk_crypto_key *blk_key,
 		return -EINVAL;
 	}
 
-	if (dun_bytes == 0 || dun_bytes > BLK_CRYPTO_MAX_IV_SIZE)
+	if (dun_bytes == 0 || dun_bytes > mode->ivsize)
 		return -EINVAL;
 
 	if (!is_power_of_2(data_unit_size))
|
|||||||
trace_block_split(q, split, (*bio)->bi_iter.bi_sector);
|
trace_block_split(q, split, (*bio)->bi_iter.bi_sector);
|
||||||
submit_bio_noacct(*bio);
|
submit_bio_noacct(*bio);
|
||||||
*bio = split;
|
*bio = split;
|
||||||
|
|
||||||
|
blk_throtl_charge_bio_split(*bio);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -700,22 +702,6 @@ static void blk_account_io_merge_request(struct request *req)
 	}
 }
 
-/*
- * Two cases of handling DISCARD merge:
- * If max_discard_segments > 1, the driver takes every bio
- * as a range and send them to controller together. The ranges
- * needn't to be contiguous.
- * Otherwise, the bios/requests will be handled as same as
- * others which should be contiguous.
- */
-static inline bool blk_discard_mergable(struct request *req)
-{
-	if (req_op(req) == REQ_OP_DISCARD &&
-	    queue_max_discard_segments(req->q) > 1)
-		return true;
-	return false;
-}
-
 static enum elv_merge blk_try_req_merge(struct request *req,
 					struct request *next)
 {
@@ -178,6 +178,9 @@ struct throtl_grp {
 	unsigned int bad_bio_cnt; /* bios exceeding latency threshold */
 	unsigned long bio_cnt_reset_time;
 
+	atomic_t io_split_cnt[2];
+	atomic_t last_io_split_cnt[2];
+
 	struct blkg_rwstat stat_bytes;
 	struct blkg_rwstat stat_ios;
 };
@@ -771,6 +774,8 @@ static inline void throtl_start_new_slice_with_credit(struct throtl_grp *tg,
 	tg->bytes_disp[rw] = 0;
 	tg->io_disp[rw] = 0;
 
+	atomic_set(&tg->io_split_cnt[rw], 0);
+
 	/*
 	 * Previous slice has expired. We must have trimmed it after last
 	 * bio dispatch. That means since start of last slice, we never used
@@ -793,6 +798,9 @@ static inline void throtl_start_new_slice(struct throtl_grp *tg, bool rw)
 	tg->io_disp[rw] = 0;
 	tg->slice_start[rw] = jiffies;
 	tg->slice_end[rw] = jiffies + tg->td->throtl_slice;
+
+	atomic_set(&tg->io_split_cnt[rw], 0);
+
 	throtl_log(&tg->service_queue,
 		   "[%c] new slice start=%lu end=%lu jiffies=%lu",
 		   rw == READ ? 'R' : 'W', tg->slice_start[rw],
@@ -1025,6 +1033,9 @@ static bool tg_may_dispatch(struct throtl_grp *tg, struct bio *bio,
 				jiffies + tg->td->throtl_slice);
 	}
 
+	if (iops_limit != UINT_MAX)
+		tg->io_disp[rw] += atomic_xchg(&tg->io_split_cnt[rw], 0);
+
 	if (tg_with_in_bps_limit(tg, bio, bps_limit, &bps_wait) &&
 	    tg_with_in_iops_limit(tg, bio, iops_limit, &iops_wait)) {
 		if (wait)
@@ -2046,12 +2057,14 @@ static void throtl_downgrade_check(struct throtl_grp *tg)
 	}
 
 	if (tg->iops[READ][LIMIT_LOW]) {
+		tg->last_io_disp[READ] += atomic_xchg(&tg->last_io_split_cnt[READ], 0);
 		iops = tg->last_io_disp[READ] * HZ / elapsed_time;
 		if (iops >= tg->iops[READ][LIMIT_LOW])
 			tg->last_low_overflow_time[READ] = now;
 	}
 
 	if (tg->iops[WRITE][LIMIT_LOW]) {
+		tg->last_io_disp[WRITE] += atomic_xchg(&tg->last_io_split_cnt[WRITE], 0);
 		iops = tg->last_io_disp[WRITE] * HZ / elapsed_time;
 		if (iops >= tg->iops[WRITE][LIMIT_LOW])
 			tg->last_low_overflow_time[WRITE] = now;
@@ -2170,6 +2183,25 @@ static inline void throtl_update_latency_buckets(struct throtl_data *td)
 }
 #endif
 
+void blk_throtl_charge_bio_split(struct bio *bio)
+{
+	struct blkcg_gq *blkg = bio->bi_blkg;
+	struct throtl_grp *parent = blkg_to_tg(blkg);
+	struct throtl_service_queue *parent_sq;
+	bool rw = bio_data_dir(bio);
+
+	do {
+		if (!parent->has_rules[rw])
+			break;
+
+		atomic_inc(&parent->io_split_cnt[rw]);
+		atomic_inc(&parent->last_io_split_cnt[rw]);
+
+		parent_sq = parent->service_queue.parent_sq;
+		parent = sq_to_tg(parent_sq);
+	} while (parent);
+}
+
 bool blk_throtl_bio(struct bio *bio)
 {
 	struct request_queue *q = bio->bi_disk->queue;
@@ -299,11 +299,13 @@ int create_task_io_context(struct task_struct *task, gfp_t gfp_mask, int node);
 extern int blk_throtl_init(struct request_queue *q);
 extern void blk_throtl_exit(struct request_queue *q);
 extern void blk_throtl_register_queue(struct request_queue *q);
+extern void blk_throtl_charge_bio_split(struct bio *bio);
 bool blk_throtl_bio(struct bio *bio);
 #else /* CONFIG_BLK_DEV_THROTTLING */
 static inline int blk_throtl_init(struct request_queue *q) { return 0; }
 static inline void blk_throtl_exit(struct request_queue *q) { }
 static inline void blk_throtl_register_queue(struct request_queue *q) { }
+static inline void blk_throtl_charge_bio_split(struct bio *bio) { }
 static inline bool blk_throtl_bio(struct bio *bio) { return false; }
 #endif /* CONFIG_BLK_DEV_THROTTLING */
 #ifdef CONFIG_BLK_DEV_THROTTLING_LOW
@@ -336,6 +336,9 @@ enum elv_merge elv_merge(struct request_queue *q, struct request **req,
 	__rq = elv_rqhash_find(q, bio->bi_iter.bi_sector);
 	if (__rq && elv_bio_merge_ok(__rq, bio)) {
 		*req = __rq;
+
+		if (blk_discard_mergable(__rq))
+			return ELEVATOR_DISCARD_MERGE;
 		return ELEVATOR_BACK_MERGE;
 	}
 
@@ -675,6 +675,8 @@ static int dd_request_merge(struct request_queue *q, struct request **rq,
 
 		if (elv_bio_merge_ok(__rq, bio)) {
 			*rq = __rq;
+			if (blk_discard_mergable(__rq))
+				return ELEVATOR_DISCARD_MERGE;
 			return ELEVATOR_FRONT_MERGE;
 		}
 	}
@@ -47,11 +47,19 @@ endif
 redirect_openssl	= 2>&1
 quiet_redirect_openssl	= 2>&1
 silent_redirect_openssl = 2>/dev/null
+openssl_available       = $(shell openssl help 2>/dev/null && echo yes)
 
 # We do it this way rather than having a boolean option for enabling an
 # external private key, because 'make randconfig' might enable such a
 # boolean option and we unfortunately can't make it depend on !RANDCONFIG.
 ifeq ($(CONFIG_MODULE_SIG_KEY),"certs/signing_key.pem")
+
+ifeq ($(openssl_available),yes)
+X509TEXT=$(shell openssl x509 -in "certs/signing_key.pem" -text 2>/dev/null)
+
+$(if $(findstring rsaEncryption,$(X509TEXT)),,$(shell rm -f "certs/signing_key.pem"))
+endif
+
 $(obj)/signing_key.pem: $(obj)/x509.genkey
 	@$(kecho) "###"
 	@$(kecho) "### Now generating an X.509 key pair to be used for signing modules."
@@ -5573,7 +5573,7 @@ int ata_host_start(struct ata_host *host)
 			have_stop = 1;
 	}
 
-	if (host->ops->host_stop)
+	if (host->ops && host->ops->host_stop)
 		have_stop = 1;
 
 	if (have_stop) {
@@ -543,7 +543,8 @@ re_probe:
 			goto probe_failed;
 	}
 
-	if (driver_sysfs_add(dev)) {
+	ret = driver_sysfs_add(dev);
+	if (ret) {
 		pr_err("%s: driver_sysfs_add(%s) failed\n",
 		       __func__, dev_name(dev));
 		goto probe_failed;
@@ -565,16 +566,19 @@ re_probe:
 			goto probe_failed;
 		}
 
-		if (device_add_groups(dev, drv->dev_groups)) {
+		ret = device_add_groups(dev, drv->dev_groups);
+		if (ret) {
 			dev_err(dev, "device_add_groups() failed\n");
 			goto dev_groups_failed;
 		}
 
-		if (dev_has_sync_state(dev) &&
-		    device_create_file(dev, &dev_attr_state_synced)) {
+		if (dev_has_sync_state(dev)) {
+			ret = device_create_file(dev, &dev_attr_state_synced);
+			if (ret) {
				dev_err(dev, "state_synced sysfs add failed\n");
				goto dev_sysfs_state_synced_failed;
			}
+		}
 
 		if (test_remove) {
 			test_remove = false;
@@ -164,7 +164,7 @@ static inline int fw_state_wait(struct fw_priv *fw_priv)
 	return __fw_state_wait_common(fw_priv, MAX_SCHEDULE_TIMEOUT);
 }
 
-static int fw_cache_piggyback_on_request(const char *name);
+static void fw_cache_piggyback_on_request(struct fw_priv *fw_priv);
 
 static struct fw_priv *__allocate_fw_priv(const char *fw_name,
 					  struct firmware_cache *fwc,
@@ -705,10 +705,8 @@ int assign_fw(struct firmware *fw, struct device *device)
 	 * on request firmware.
 	 */
 	if (!(fw_priv->opt_flags & FW_OPT_NOCACHE) &&
-	    fw_priv->fwc->state == FW_LOADER_START_CACHE) {
-		if (fw_cache_piggyback_on_request(fw_priv->fw_name))
-			kref_get(&fw_priv->ref);
-	}
+	    fw_priv->fwc->state == FW_LOADER_START_CACHE)
+		fw_cache_piggyback_on_request(fw_priv);
 
 	/* pass the pages buffer to driver at the last minute */
 	fw_set_page_data(fw_priv, fw);
@@ -1257,11 +1255,11 @@ static int __fw_entry_found(const char *name)
 	return 0;
 }
 
-static int fw_cache_piggyback_on_request(const char *name)
+static void fw_cache_piggyback_on_request(struct fw_priv *fw_priv)
 {
-	struct firmware_cache *fwc = &fw_cache;
+	const char *name = fw_priv->fw_name;
+	struct firmware_cache *fwc = fw_priv->fwc;
 	struct fw_cache_entry *fce;
-	int ret = 0;
 
 	spin_lock(&fwc->name_lock);
 	if (__fw_entry_found(name))
@@ -1269,13 +1267,12 @@ static int fw_cache_piggyback_on_request(const char *name)
 
 	fce = alloc_fw_cache_entry(name);
 	if (fce) {
-		ret = 1;
 		list_add(&fce->list, &fwc->fw_names);
+		kref_get(&fw_priv->ref);
 		pr_debug("%s: fw: %s\n", __func__, name);
 	}
 found:
 	spin_unlock(&fwc->name_lock);
-	return ret;
 }
 
 static void free_fw_cache_entry(struct fw_cache_entry *fce)
@@ -1506,9 +1503,8 @@ static inline void unregister_fw_pm_ops(void)
 	unregister_pm_notifier(&fw_cache.pm_notify);
 }
 #else
-static int fw_cache_piggyback_on_request(const char *name)
+static void fw_cache_piggyback_on_request(struct fw_priv *fw_priv)
 {
-	return 0;
 }
 static inline int register_fw_pm_ops(void)
 {
@@ -1652,7 +1652,7 @@ static int _regmap_raw_write_impl(struct regmap *map, unsigned int reg,
 			if (ret) {
 				dev_err(map->dev,
 					"Error in caching of register: %x ret: %d\n",
-					reg + i, ret);
+					reg + regmap_get_offset(map, i), ret);
 				return ret;
 			}
 		}
@@ -236,6 +236,7 @@ EXPORT_SYMBOL(bcma_core_irq);
 
 void bcma_prepare_core(struct bcma_bus *bus, struct bcma_device *core)
 {
+	device_initialize(&core->dev);
 	core->dev.release = bcma_release_core_dev;
 	core->dev.bus = &bcma_bus_type;
 	dev_set_name(&core->dev, "bcma%d:%d", bus->num, core->core_index);
@@ -277,11 +278,10 @@ static void bcma_register_core(struct bcma_bus *bus, struct bcma_device *core)
 {
 	int err;
 
-	err = device_register(&core->dev);
+	err = device_add(&core->dev);
 	if (err) {
 		bcma_err(bus, "Could not register dev for core 0x%03X\n",
 			 core->id.id);
-		put_device(&core->dev);
 		return;
 	}
 	core->dev_registered = true;
@@ -372,7 +372,7 @@ void bcma_unregister_cores(struct bcma_bus *bus)
 	/* Now noone uses internally-handled cores, we can free them */
 	list_for_each_entry_safe(core, tmp, &bus->cores, list) {
 		list_del(&core->list);
-		kfree(core);
+		put_device(&core->dev);
 	}
 }
 
@@ -1759,7 +1759,17 @@ static int nbd_dev_add(int index)
 	refcount_set(&nbd->refs, 1);
 	INIT_LIST_HEAD(&nbd->list);
 	disk->major = NBD_MAJOR;
+
+	/* Too big first_minor can cause duplicate creation of
+	 * sysfs files/links, since first_minor will be truncated to
+	 * byte in __device_add_disk().
+	 */
 	disk->first_minor = index << part_shift;
+	if (disk->first_minor > 0xff) {
+		err = -EINVAL;
+		goto out_free_idr;
+	}
+
 	disk->fops = &nbd_fops;
 	disk->private_data = nbd;
 	sprintf(disk->disk_name, "nbd%d", index);
@@ -106,17 +106,12 @@ static int tpm_ibmvtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
 {
 	struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev);
 	u16 len;
-	int sig;
 
 	if (!ibmvtpm->rtce_buf) {
 		dev_err(ibmvtpm->dev, "ibmvtpm device is not ready\n");
 		return 0;
 	}
 
-	sig = wait_event_interruptible(ibmvtpm->wq, !ibmvtpm->tpm_processing_cmd);
-	if (sig)
-		return -EINTR;
-
 	len = ibmvtpm->res_len;
 
 	if (count < len) {
@@ -237,7 +232,7 @@ static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
 	 * set the processing flag before the Hcall, since we may get the
 	 * result (interrupt) before even being able to check rc.
 	 */
-	ibmvtpm->tpm_processing_cmd = true;
+	ibmvtpm->tpm_processing_cmd = 1;
 
 again:
 	rc = ibmvtpm_send_crq(ibmvtpm->vdev,
@@ -255,7 +250,7 @@ again:
 			goto again;
 		}
 		dev_err(ibmvtpm->dev, "tpm_ibmvtpm_send failed rc=%d\n", rc);
-		ibmvtpm->tpm_processing_cmd = false;
+		ibmvtpm->tpm_processing_cmd = 0;
 	}
 
 	spin_unlock(&ibmvtpm->rtce_lock);
@@ -269,7 +264,9 @@ static void tpm_ibmvtpm_cancel(struct tpm_chip *chip)
 
 static u8 tpm_ibmvtpm_status(struct tpm_chip *chip)
 {
-	return 0;
+	struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev);
+
+	return ibmvtpm->tpm_processing_cmd;
 }
 
 /**
@@ -459,7 +456,7 @@ static const struct tpm_class_ops tpm_ibmvtpm = {
 	.send = tpm_ibmvtpm_send,
 	.cancel = tpm_ibmvtpm_cancel,
 	.status = tpm_ibmvtpm_status,
-	.req_complete_mask = 0,
+	.req_complete_mask = 1,
 	.req_complete_val = 0,
 	.req_canceled = tpm_ibmvtpm_req_canceled,
 };
@@ -552,7 +549,7 @@ static void ibmvtpm_crq_process(struct ibmvtpm_crq *crq,
 	case VTPM_TPM_COMMAND_RES:
 		/* len of the data in rtce buffer */
 		ibmvtpm->res_len = be16_to_cpu(crq->len);
-		ibmvtpm->tpm_processing_cmd = false;
+		ibmvtpm->tpm_processing_cmd = 0;
 		wake_up_interruptible(&ibmvtpm->wq);
 		return;
 	default:
@@ -690,8 +687,15 @@ static int tpm_ibmvtpm_probe(struct vio_dev *vio_dev,
 		goto init_irq_cleanup;
 	}
 
-	if (!strcmp(id->compat, "IBM,vtpm20")) {
+	if (!strcmp(id->compat, "IBM,vtpm20"))
 		chip->flags |= TPM_CHIP_FLAG_TPM2;
+
+	rc = tpm_get_timeouts(chip);
+	if (rc)
+		goto init_irq_cleanup;
+
+	if (chip->flags & TPM_CHIP_FLAG_TPM2) {
 		rc = tpm2_get_cc_attrs_tbl(chip);
 		if (rc)
 			goto init_irq_cleanup;
@@ -41,7 +41,7 @@ struct ibmvtpm_dev {
 	wait_queue_head_t wq;
 	u16 res_len;
 	u32 vtpm_version;
-	bool tpm_processing_cmd;
+	u8 tpm_processing_cmd;
 };
 
 #define CRQ_RES_BUF_SIZE PAGE_SIZE
@@ -265,6 +265,7 @@ static const char *powersave_parents[] = {
 static const struct clk_muxing_soc_desc kirkwood_mux_desc[] __initconst = {
 	{ "powersave", powersave_parents, ARRAY_SIZE(powersave_parents),
 		11, 1, 0 },
+	{ }
 };
 
 static struct clk *clk_muxing_get_src(
@@ -572,7 +572,8 @@ static int sh_cmt_start(struct sh_cmt_channel *ch, unsigned long flag)
 	ch->flags |= flag;
 
 	/* setup timeout if no clockevent */
-	if ((flag == FLAG_CLOCKSOURCE) && (!(ch->flags & FLAG_CLOCKEVENT)))
+	if (ch->cmt->num_channels == 1 &&
+	    flag == FLAG_CLOCKSOURCE && (!(ch->flags & FLAG_CLOCKEVENT)))
 		__sh_cmt_set_next(ch, ch->max_match_value);
 out:
 	raw_spin_unlock_irqrestore(&ch->lock, flags);
@@ -608,8 +609,10 @@ static struct sh_cmt_channel *cs_to_sh_cmt(struct clocksource *cs)
 static u64 sh_cmt_clocksource_read(struct clocksource *cs)
 {
 	struct sh_cmt_channel *ch = cs_to_sh_cmt(cs);
-	unsigned long flags;
 	u32 has_wrapped;
 
+	if (ch->cmt->num_channels == 1) {
+		unsigned long flags;
 	u64 value;
 	u32 raw;
 
@@ -622,6 +625,9 @@ static u64 sh_cmt_clocksource_read(struct clocksource *cs)
 	raw_spin_unlock_irqrestore(&ch->lock, flags);
 
 	return value + raw;
+	}
+
+	return sh_cmt_get_counter(ch, &has_wrapped);
 }
 
 static int sh_cmt_clocksource_enable(struct clocksource *cs)
@@ -684,7 +690,7 @@ static int sh_cmt_register_clocksource(struct sh_cmt_channel *ch,
 	cs->disable = sh_cmt_clocksource_disable;
 	cs->suspend = sh_cmt_clocksource_suspend;
 	cs->resume = sh_cmt_clocksource_resume;
-	cs->mask = CLOCKSOURCE_MASK(sizeof(u64) * 8);
+	cs->mask = CLOCKSOURCE_MASK(ch->cmt->info->width);
 	cs->flags = CLOCK_SOURCE_IS_CONTINUOUS;
 
 	dev_info(&ch->cmt->pdev->dev, "ch%u: used as clock source\n",
@@ -1224,12 +1224,13 @@ static ssize_t quad8_count_ceiling_write(struct counter_device *counter,
 	case 1:
 	case 3:
 		quad8_preset_register_set(priv, count->id, ceiling);
-		break;
+		mutex_unlock(&priv->lock);
+		return len;
 	}
 
 	mutex_unlock(&priv->lock);
 
-	return len;
+	return -EINVAL;
 }
 
 static ssize_t quad8_count_preset_enable_read(struct counter_device *counter,
@@ -169,15 +169,19 @@ static struct dcp *global_sdcp;
 
 static int mxs_dcp_start_dma(struct dcp_async_ctx *actx)
 {
+	int dma_err;
 	struct dcp *sdcp = global_sdcp;
 	const int chan = actx->chan;
 	uint32_t stat;
 	unsigned long ret;
 	struct dcp_dma_desc *desc = &sdcp->coh->desc[actx->chan];
 
 	dma_addr_t desc_phys = dma_map_single(sdcp->dev, desc, sizeof(*desc),
 					      DMA_TO_DEVICE);
 
+	dma_err = dma_mapping_error(sdcp->dev, desc_phys);
+	if (dma_err)
+		return dma_err;
+
 	reinit_completion(&sdcp->completion[chan]);
 
 	/* Clear status register. */
@@ -215,18 +219,29 @@ static int mxs_dcp_start_dma(struct dcp_async_ctx *actx)
 static int mxs_dcp_run_aes(struct dcp_async_ctx *actx,
 			   struct skcipher_request *req, int init)
 {
+	dma_addr_t key_phys, src_phys, dst_phys;
 	struct dcp *sdcp = global_sdcp;
 	struct dcp_dma_desc *desc = &sdcp->coh->desc[actx->chan];
 	struct dcp_aes_req_ctx *rctx = skcipher_request_ctx(req);
 	int ret;
 
-	dma_addr_t key_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_key,
-					     2 * AES_KEYSIZE_128,
-					     DMA_TO_DEVICE);
-	dma_addr_t src_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_in_buf,
+	key_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_key,
+				  2 * AES_KEYSIZE_128, DMA_TO_DEVICE);
+	ret = dma_mapping_error(sdcp->dev, key_phys);
+	if (ret)
+		return ret;
+
+	src_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_in_buf,
 				  DCP_BUF_SZ, DMA_TO_DEVICE);
-	dma_addr_t dst_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_out_buf,
+	ret = dma_mapping_error(sdcp->dev, src_phys);
+	if (ret)
+		goto err_src;
+
+	dst_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_out_buf,
 				  DCP_BUF_SZ, DMA_FROM_DEVICE);
+	ret = dma_mapping_error(sdcp->dev, dst_phys);
+	if (ret)
+		goto err_dst;
 
 	if (actx->fill % AES_BLOCK_SIZE) {
 		dev_err(sdcp->dev, "Invalid block size!\n");
@@ -264,10 +279,12 @@ static int mxs_dcp_run_aes(struct dcp_async_ctx *actx,
 	ret = mxs_dcp_start_dma(actx);
 
 aes_done_run:
+	dma_unmap_single(sdcp->dev, dst_phys, DCP_BUF_SZ, DMA_FROM_DEVICE);
+err_dst:
+	dma_unmap_single(sdcp->dev, src_phys, DCP_BUF_SZ, DMA_TO_DEVICE);
+err_src:
 	dma_unmap_single(sdcp->dev, key_phys, 2 * AES_KEYSIZE_128,
 			 DMA_TO_DEVICE);
-	dma_unmap_single(sdcp->dev, src_phys, DCP_BUF_SZ, DMA_TO_DEVICE);
-	dma_unmap_single(sdcp->dev, dst_phys, DCP_BUF_SZ, DMA_FROM_DEVICE);
 
 	return ret;
 }
@@ -556,6 +573,10 @@ static int mxs_dcp_run_sha(struct ahash_request *req)
 	dma_addr_t buf_phys = dma_map_single(sdcp->dev, sdcp->coh->sha_in_buf,
 					     DCP_BUF_SZ, DMA_TO_DEVICE);
 
+	ret = dma_mapping_error(sdcp->dev, buf_phys);
+	if (ret)
+		return ret;
+
 	/* Fill in the DMA descriptor. */
 	desc->control0 = MXS_DCP_CONTROL0_DECR_SEMAPHORE |
 			 MXS_DCP_CONTROL0_INTERRUPT |
@@ -588,6 +609,10 @@ static int mxs_dcp_run_sha(struct ahash_request *req)
 	if (rctx->fini) {
 		digest_phys = dma_map_single(sdcp->dev, sdcp->coh->sha_out_buf,
 					     DCP_SHA_PAY_SZ, DMA_FROM_DEVICE);
+		ret = dma_mapping_error(sdcp->dev, digest_phys);
+		if (ret)
+			goto done_run;
+
 		desc->control0 |= MXS_DCP_CONTROL0_HASH_TERM;
 		desc->payload = digest_phys;
 	}
@@ -1175,9 +1175,9 @@ static int omap_aes_probe(struct platform_device *pdev)
 	spin_lock_init(&dd->lock);
 
 	INIT_LIST_HEAD(&dd->list);
-	spin_lock(&list_lock);
+	spin_lock_bh(&list_lock);
 	list_add_tail(&dd->list, &dev_list);
-	spin_unlock(&list_lock);
+	spin_unlock_bh(&list_lock);
 
 	/* Initialize crypto engine */
 	dd->engine = crypto_engine_alloc_init(dev, 1);
@@ -1264,9 +1264,9 @@ static int omap_aes_remove(struct platform_device *pdev)
 	if (!dd)
 		return -ENODEV;
 
-	spin_lock(&list_lock);
+	spin_lock_bh(&list_lock);
 	list_del(&dd->list);
-	spin_unlock(&list_lock);
+	spin_unlock_bh(&list_lock);
 
 	for (i = dd->pdata->algs_info_size - 1; i >= 0; i--)
 		for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--) {
@@ -1035,9 +1035,9 @@ static int omap_des_probe(struct platform_device *pdev)
 
 
 	INIT_LIST_HEAD(&dd->list);
-	spin_lock(&list_lock);
+	spin_lock_bh(&list_lock);
 	list_add_tail(&dd->list, &dev_list);
-	spin_unlock(&list_lock);
+	spin_unlock_bh(&list_lock);
 
 	/* Initialize des crypto engine */
 	dd->engine = crypto_engine_alloc_init(dev, 1);
@@ -1096,9 +1096,9 @@ static int omap_des_remove(struct platform_device *pdev)
 	if (!dd)
 		return -ENODEV;
 
-	spin_lock(&list_lock);
+	spin_lock_bh(&list_lock);
 	list_del(&dd->list);
-	spin_unlock(&list_lock);
+	spin_unlock_bh(&list_lock);
 
 	for (i = dd->pdata->algs_info_size - 1; i >= 0; i--)
 		for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--)
|
@@ -1735,7 +1735,7 @@ static void omap_sham_done_task(unsigned long data)
 		if (test_and_clear_bit(FLAGS_OUTPUT_READY, &dd->flags))
 			goto finish;
 	} else if (test_bit(FLAGS_DMA_READY, &dd->flags)) {
-		if (test_and_clear_bit(FLAGS_DMA_ACTIVE, &dd->flags)) {
+		if (test_bit(FLAGS_DMA_ACTIVE, &dd->flags)) {
 			omap_sham_update_dma_stop(dd);
 			if (dd->err) {
 				err = dd->err;
@@ -2143,9 +2143,9 @@ static int omap_sham_probe(struct platform_device *pdev)
 		(rev & dd->pdata->major_mask) >> dd->pdata->major_shift,
 		(rev & dd->pdata->minor_mask) >> dd->pdata->minor_shift);
 
-	spin_lock(&sham.lock);
+	spin_lock_bh(&sham.lock);
 	list_add_tail(&dd->list, &sham.dev_list);
-	spin_unlock(&sham.lock);
+	spin_unlock_bh(&sham.lock);
 
 	dd->engine = crypto_engine_alloc_init(dev, 1);
 	if (!dd->engine) {
@@ -2193,9 +2193,9 @@ err_algs:
 err_engine_start:
 	crypto_engine_exit(dd->engine);
 err_engine:
-	spin_lock(&sham.lock);
+	spin_lock_bh(&sham.lock);
 	list_del(&dd->list);
-	spin_unlock(&sham.lock);
+	spin_unlock_bh(&sham.lock);
 err_pm:
 	pm_runtime_disable(dev);
 	if (!dd->polling_mode)
@@ -2214,9 +2214,9 @@ static int omap_sham_remove(struct platform_device *pdev)
 	dd = platform_get_drvdata(pdev);
 	if (!dd)
 		return -ENODEV;
-	spin_lock(&sham.lock);
+	spin_lock_bh(&sham.lock);
 	list_del(&dd->list);
-	spin_unlock(&sham.lock);
+	spin_unlock_bh(&sham.lock);
 	for (i = dd->pdata->algs_info_size - 1; i >= 0; i--)
 		for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--) {
 			crypto_unregister_ahash(
@@ -79,10 +79,10 @@ void adf_init_hw_data_c3xxxiov(struct adf_hw_device_data *hw_data)
 	hw_data->enable_error_correction = adf_vf_void_noop;
 	hw_data->init_admin_comms = adf_vf_int_noop;
 	hw_data->exit_admin_comms = adf_vf_void_noop;
-	hw_data->send_admin_init = adf_vf2pf_init;
+	hw_data->send_admin_init = adf_vf2pf_notify_init;
 	hw_data->init_arb = adf_vf_int_noop;
 	hw_data->exit_arb = adf_vf_void_noop;
-	hw_data->disable_iov = adf_vf2pf_shutdown;
+	hw_data->disable_iov = adf_vf2pf_notify_shutdown;
 	hw_data->get_accel_mask = get_accel_mask;
 	hw_data->get_ae_mask = get_ae_mask;
 	hw_data->get_num_accels = get_num_accels;
@@ -79,10 +79,10 @@ void adf_init_hw_data_c62xiov(struct adf_hw_device_data *hw_data)
 	hw_data->enable_error_correction = adf_vf_void_noop;
 	hw_data->init_admin_comms = adf_vf_int_noop;
 	hw_data->exit_admin_comms = adf_vf_void_noop;
-	hw_data->send_admin_init = adf_vf2pf_init;
+	hw_data->send_admin_init = adf_vf2pf_notify_init;
 	hw_data->init_arb = adf_vf_int_noop;
 	hw_data->exit_arb = adf_vf_void_noop;
-	hw_data->disable_iov = adf_vf2pf_shutdown;
+	hw_data->disable_iov = adf_vf2pf_notify_shutdown;
 	hw_data->get_accel_mask = get_accel_mask;
 	hw_data->get_ae_mask = get_ae_mask;
 	hw_data->get_num_accels = get_num_accels;
@@ -195,8 +195,8 @@ void adf_enable_vf2pf_interrupts(struct adf_accel_dev *accel_dev,
 void adf_enable_pf2vf_interrupts(struct adf_accel_dev *accel_dev);
 void adf_disable_pf2vf_interrupts(struct adf_accel_dev *accel_dev);
 
-int adf_vf2pf_init(struct adf_accel_dev *accel_dev);
-void adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev);
+int adf_vf2pf_notify_init(struct adf_accel_dev *accel_dev);
+void adf_vf2pf_notify_shutdown(struct adf_accel_dev *accel_dev);
 int adf_init_pf_wq(void);
 void adf_exit_pf_wq(void);
 int adf_init_vf_wq(void);
@@ -219,12 +219,12 @@ static inline void adf_disable_pf2vf_interrupts(struct adf_accel_dev *accel_dev)
 {
 }
 
-static inline int adf_vf2pf_init(struct adf_accel_dev *accel_dev)
+static inline int adf_vf2pf_notify_init(struct adf_accel_dev *accel_dev)
 {
 	return 0;
 }
 
-static inline void adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev)
+static inline void adf_vf2pf_notify_shutdown(struct adf_accel_dev *accel_dev)
 {
 }
 
@@ -61,6 +61,7 @@ int adf_dev_init(struct adf_accel_dev *accel_dev)
 	struct service_hndl *service;
 	struct list_head *list_itr;
 	struct adf_hw_device_data *hw_data = accel_dev->hw_device;
+	int ret;
 
 	if (!hw_data) {
 		dev_err(&GET_DEV(accel_dev),
@@ -127,9 +128,9 @@ int adf_dev_init(struct adf_accel_dev *accel_dev)
 	}
 
 	hw_data->enable_error_correction(accel_dev);
-	hw_data->enable_vf2pf_comms(accel_dev);
+	ret = hw_data->enable_vf2pf_comms(accel_dev);
 
-	return 0;
+	return ret;
 }
 EXPORT_SYMBOL_GPL(adf_dev_init);
 
@@ -15,6 +15,8 @@
 #include "adf_transport_access_macros.h"
 #include "adf_transport_internal.h"
 
+#define ADF_MAX_NUM_VFS	32
+
 static int adf_enable_msix(struct adf_accel_dev *accel_dev)
 {
 	struct adf_accel_pci *pci_dev_info = &accel_dev->accel_pci_dev;
@@ -67,7 +69,7 @@ static irqreturn_t adf_msix_isr_ae(int irq, void *dev_ptr)
 		struct adf_bar *pmisc =
 			&GET_BARS(accel_dev)[hw_data->get_misc_bar_id(hw_data)];
 		void __iomem *pmisc_bar_addr = pmisc->virt_addr;
-		u32 vf_mask;
+		unsigned long vf_mask;
 
 		/* Get the interrupt sources triggered by VFs */
 		vf_mask = ((ADF_CSR_RD(pmisc_bar_addr, ADF_ERRSOU5) &
@@ -88,8 +90,7 @@ static irqreturn_t adf_msix_isr_ae(int irq, void *dev_ptr)
 			 * unless the VF is malicious and is attempting to
 			 * flood the host OS with VF2PF interrupts.
 			 */
-			for_each_set_bit(i, (const unsigned long *)&vf_mask,
-					 (sizeof(vf_mask) * BITS_PER_BYTE)) {
+			for_each_set_bit(i, &vf_mask, ADF_MAX_NUM_VFS) {
 				vf_info = accel_dev->pf.vf_info + i;
 
 				if (!__ratelimit(&vf_info->vf2pf_ratelimit)) {
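The `vf_mask` type change above matters because `for_each_set_bit()` walks its argument as an array of `unsigned long`; reinterpreting a `u32` through a cast reads past the variable on 64-bit targets, and the old bound of `sizeof(vf_mask) * BITS_PER_BYTE` scanned bits that can never correspond to a VF. A minimal userspace sketch of the corrected pattern (names and the `count_pending()` helper are hypothetical, not part of the driver):

```c
#include <assert.h>

/* Userspace stand-in for for_each_set_bit(): find the next set bit at or
 * after 'from', strictly below 'nbits'. The mask is a real unsigned long,
 * so no pointer reinterpretation is needed. */
static int next_set_bit(unsigned long mask, unsigned int from,
			unsigned int nbits)
{
	for (unsigned int i = from; i < nbits; i++)
		if (mask & (1UL << i))
			return (int)i;
	return -1;
}

/* Count pending VF interrupt sources, bounded by the real VF count
 * (mirroring ADF_MAX_NUM_VFS) rather than by the width of the mask. */
static unsigned int count_pending(unsigned long vf_mask, unsigned int max_vfs)
{
	unsigned int n = 0;

	for (int i = next_set_bit(vf_mask, 0, max_vfs); i >= 0;
	     i = next_set_bit(vf_mask, (unsigned int)i + 1, max_vfs))
		n++;
	return n;
}
```

Bounding the walk by the number of VFs rather than the storage width is what keeps a malicious or buggy mask from indexing past `accel_dev->pf.vf_info`.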
@@ -186,7 +186,6 @@ int adf_iov_putmsg(struct adf_accel_dev *accel_dev, u32 msg, u8 vf_nr)
 
 	return ret;
 }
-EXPORT_SYMBOL_GPL(adf_iov_putmsg);
 
 void adf_vf2pf_req_hndl(struct adf_accel_vf_info *vf_info)
 {
@@ -316,6 +315,8 @@ static int adf_vf2pf_request_version(struct adf_accel_dev *accel_dev)
 	msg |= ADF_PFVF_COMPATIBILITY_VERSION << ADF_VF2PF_COMPAT_VER_REQ_SHIFT;
 	BUILD_BUG_ON(ADF_PFVF_COMPATIBILITY_VERSION > 255);
 
+	reinit_completion(&accel_dev->vf.iov_msg_completion);
+
 	/* Send request from VF to PF */
 	ret = adf_iov_putmsg(accel_dev, msg, 0);
 	if (ret) {
@@ -5,14 +5,14 @@
 #include "adf_pf2vf_msg.h"
 
 /**
- * adf_vf2pf_init() - send init msg to PF
+ * adf_vf2pf_notify_init() - send init msg to PF
  * @accel_dev: Pointer to acceleration VF device.
  *
  * Function sends an init messge from the VF to a PF
  *
  * Return: 0 on success, error code otherwise.
  */
-int adf_vf2pf_init(struct adf_accel_dev *accel_dev)
+int adf_vf2pf_notify_init(struct adf_accel_dev *accel_dev)
 {
 	u32 msg = (ADF_VF2PF_MSGORIGIN_SYSTEM |
 		(ADF_VF2PF_MSGTYPE_INIT << ADF_VF2PF_MSGTYPE_SHIFT));
@@ -25,17 +25,17 @@ int adf_vf2pf_init(struct adf_accel_dev *accel_dev)
 	set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
 	return 0;
 }
-EXPORT_SYMBOL_GPL(adf_vf2pf_init);
+EXPORT_SYMBOL_GPL(adf_vf2pf_notify_init);
 
 /**
- * adf_vf2pf_shutdown() - send shutdown msg to PF
+ * adf_vf2pf_notify_shutdown() - send shutdown msg to PF
  * @accel_dev: Pointer to acceleration VF device.
  *
  * Function sends a shutdown messge from the VF to a PF
  *
  * Return: void
  */
-void adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev)
+void adf_vf2pf_notify_shutdown(struct adf_accel_dev *accel_dev)
 {
 	u32 msg = (ADF_VF2PF_MSGORIGIN_SYSTEM |
 		(ADF_VF2PF_MSGTYPE_SHUTDOWN << ADF_VF2PF_MSGTYPE_SHIFT));
@@ -45,4 +45,4 @@ void adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev)
 		dev_err(&GET_DEV(accel_dev),
 			"Failed to send Shutdown event to PF\n");
 }
-EXPORT_SYMBOL_GPL(adf_vf2pf_shutdown);
+EXPORT_SYMBOL_GPL(adf_vf2pf_notify_shutdown);
@@ -159,6 +159,7 @@ static irqreturn_t adf_isr(int irq, void *privdata)
 	struct adf_bar *pmisc =
 		&GET_BARS(accel_dev)[hw_data->get_misc_bar_id(hw_data)];
 	void __iomem *pmisc_bar_addr = pmisc->virt_addr;
+	bool handled = false;
 	u32 v_int;
 
 	/* Read VF INT source CSR to determine the source of VF interrupt */
@@ -171,7 +172,7 @@ static irqreturn_t adf_isr(int irq, void *privdata)
 
 		/* Schedule tasklet to handle interrupt BH */
 		tasklet_hi_schedule(&accel_dev->vf.pf2vf_bh_tasklet);
-		return IRQ_HANDLED;
+		handled = true;
 	}
 
 	/* Check bundle interrupt */
@@ -183,10 +184,10 @@ static irqreturn_t adf_isr(int irq, void *privdata)
 		WRITE_CSR_INT_FLAG_AND_COL(bank->csr_addr, bank->bank_number,
 					   0);
 		tasklet_hi_schedule(&bank->resp_handler);
-		return IRQ_HANDLED;
+		handled = true;
 	}
 
-	return IRQ_NONE;
+	return handled ? IRQ_HANDLED : IRQ_NONE;
 }
 
 static int adf_request_msi_irq(struct adf_accel_dev *accel_dev)
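The VF ISR rework above replaces early `return IRQ_HANDLED` statements with a `handled` flag so that both interrupt sources sharing the line are checked on every invocation. A minimal userspace sketch of the pattern (all names here are hypothetical stand-ins, not the driver's types):

```c
#include <assert.h>
#include <stdbool.h>

enum irqreturn { IRQ_NONE, IRQ_HANDLED };

struct fake_dev {
	bool pf2vf_pending;	/* stands in for the PF2VF source bit */
	bool bundle_pending;	/* stands in for the bundle source bit */
	int pf2vf_serviced;
	int bundle_serviced;
};

/* Check every source, record whether any belonged to us, and only then
 * report HANDLED/NONE. Returning early from the first branch (as the old
 * code did) would leave the second source unserviced until the next IRQ. */
static enum irqreturn fake_isr(struct fake_dev *d)
{
	bool handled = false;

	if (d->pf2vf_pending) {		/* first interrupt source */
		d->pf2vf_pending = false;
		d->pf2vf_serviced++;
		handled = true;		/* note: no early return */
	}
	if (d->bundle_pending) {	/* second source, same line */
		d->bundle_pending = false;
		d->bundle_serviced++;
		handled = true;
	}
	return handled ? IRQ_HANDLED : IRQ_NONE;
}
```

Returning `IRQ_NONE` only when neither source fired also keeps the kernel's spurious-interrupt accounting accurate for shared lines.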
@@ -79,10 +79,10 @@ void adf_init_hw_data_dh895xcciov(struct adf_hw_device_data *hw_data)
 	hw_data->enable_error_correction = adf_vf_void_noop;
 	hw_data->init_admin_comms = adf_vf_int_noop;
 	hw_data->exit_admin_comms = adf_vf_void_noop;
-	hw_data->send_admin_init = adf_vf2pf_init;
+	hw_data->send_admin_init = adf_vf2pf_notify_init;
 	hw_data->init_arb = adf_vf_int_noop;
 	hw_data->exit_arb = adf_vf_void_noop;
-	hw_data->disable_iov = adf_vf2pf_shutdown;
+	hw_data->disable_iov = adf_vf2pf_notify_shutdown;
 	hw_data->get_accel_mask = get_accel_mask;
 	hw_data->get_ae_mask = get_ae_mask;
 	hw_data->get_num_accels = get_num_accels;
@@ -26,8 +26,8 @@
 	pci_read_config_dword((d)->uracu, 0xd8 + (i) * 4, &(reg))
 #define I10NM_GET_DIMMMTR(m, i, j)	\
 	readl((m)->mbase + 0x2080c + (i) * 0x4000 + (j) * 4)
-#define I10NM_GET_MCDDRTCFG(m, i, j)	\
-	readl((m)->mbase + 0x20970 + (i) * 0x4000 + (j) * 4)
+#define I10NM_GET_MCDDRTCFG(m, i)	\
+	readl((m)->mbase + 0x20970 + (i) * 0x4000)
 #define I10NM_GET_MCMTR(m, i)		\
 	readl((m)->mbase + 0x20ef8 + (i) * 0x4000)
 
@@ -170,10 +170,10 @@ static int i10nm_get_dimm_config(struct mem_ctl_info *mci)
 			continue;
 
 		ndimms = 0;
+		mcddrtcfg = I10NM_GET_MCDDRTCFG(imc, i);
 		for (j = 0; j < I10NM_NUM_DIMMS; j++) {
 			dimm = edac_get_dimm(mci, i, j, 0);
 			mtr = I10NM_GET_DIMMMTR(imc, i, j);
-			mcddrtcfg = I10NM_GET_MCDDRTCFG(imc, i, j);
 			edac_dbg(1, "dimmmtr 0x%x mcddrtcfg 0x%x (mc%d ch%d dimm%d)\n",
 				 mtr, mcddrtcfg, imc->mc, i, j);
 
@@ -1176,6 +1176,9 @@ static int __init mce_amd_init(void)
 	    c->x86_vendor != X86_VENDOR_HYGON)
 		return -ENODEV;
 
+	if (cpu_feature_enabled(X86_FEATURE_HYPERVISOR))
+		return -ENODEV;
+
 	if (boot_cpu_has(X86_FEATURE_SMCA)) {
 		xec_mask = 0x3f;
 		goto out;
@@ -7,6 +7,7 @@
  */
 
 #include <linux/dma-mapping.h>
+#include <linux/kref.h>
 #include <linux/mailbox_client.h>
 #include <linux/module.h>
 #include <linux/of_platform.h>
@@ -27,6 +28,8 @@ struct rpi_firmware {
 	struct mbox_chan *chan; /* The property channel. */
 	struct completion c;
 	u32 enabled;
+
+	struct kref consumers;
 };
 
 static DEFINE_MUTEX(transaction_lock);
@@ -225,12 +228,31 @@ static void rpi_register_clk_driver(struct device *dev)
 						-1, NULL, 0);
 }
 
+static void rpi_firmware_delete(struct kref *kref)
+{
+	struct rpi_firmware *fw = container_of(kref, struct rpi_firmware,
+					       consumers);
+
+	mbox_free_channel(fw->chan);
+	kfree(fw);
+}
+
+void rpi_firmware_put(struct rpi_firmware *fw)
+{
+	kref_put(&fw->consumers, rpi_firmware_delete);
+}
+EXPORT_SYMBOL_GPL(rpi_firmware_put);
+
 static int rpi_firmware_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
 	struct rpi_firmware *fw;
 
-	fw = devm_kzalloc(dev, sizeof(*fw), GFP_KERNEL);
+	/*
+	 * Memory will be freed by rpi_firmware_delete() once all users have
+	 * released their firmware handles. Don't use devm_kzalloc() here.
+	 */
+	fw = kzalloc(sizeof(*fw), GFP_KERNEL);
 	if (!fw)
 		return -ENOMEM;
 
@@ -247,6 +269,7 @@ static int rpi_firmware_probe(struct platform_device *pdev)
 	}
 
 	init_completion(&fw->c);
+	kref_init(&fw->consumers);
 
 	platform_set_drvdata(pdev, fw);
 
@@ -275,7 +298,8 @@ static int rpi_firmware_remove(struct platform_device *pdev)
 	rpi_hwmon = NULL;
 	platform_device_unregister(rpi_clk);
 	rpi_clk = NULL;
-	mbox_free_channel(fw->chan);
+
+	rpi_firmware_put(fw);
 
 	return 0;
 }
@@ -284,16 +308,32 @@ static int rpi_firmware_remove(struct platform_device *pdev)
  * rpi_firmware_get - Get pointer to rpi_firmware structure.
  * @firmware_node: Pointer to the firmware Device Tree node.
  *
+ * The reference to rpi_firmware has to be released with rpi_firmware_put().
+ *
  * Returns NULL is the firmware device is not ready.
  */
 struct rpi_firmware *rpi_firmware_get(struct device_node *firmware_node)
 {
 	struct platform_device *pdev = of_find_device_by_node(firmware_node);
+	struct rpi_firmware *fw;
 
 	if (!pdev)
 		return NULL;
 
-	return platform_get_drvdata(pdev);
+	fw = platform_get_drvdata(pdev);
+	if (!fw)
+		goto err_put_device;
+
+	if (!kref_get_unless_zero(&fw->consumers))
+		goto err_put_device;
+
+	put_device(&pdev->dev);
+
+	return fw;
+
+err_put_device:
+	put_device(&pdev->dev);
+	return NULL;
 }
 EXPORT_SYMBOL_GPL(rpi_firmware_get);
 
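The rpi_firmware hunks above convert the firmware handle to kref-based lifetime management: consumers take a reference with `kref_get_unless_zero()`, and the object (plus its mailbox channel) is freed only when the last reference is put. A userspace analog of that get-unless-zero pattern, with all names hypothetical; a single-threaded counter stands in for `struct kref`, and the struct itself is deliberately kept allocated so the post-conditions stay inspectable (the kernel's `kfree(fw)` frees the whole object):

```c
#include <assert.h>
#include <stdlib.h>

struct handle {
	int refcount;	/* kernel code would use struct kref */
	int *resource;	/* stands in for the mailbox channel */
};

static struct handle *handle_new(void)
{
	struct handle *h = calloc(1, sizeof(*h));

	h->refcount = 1;	/* analog of kref_init(): owner holds one ref */
	h->resource = calloc(1, sizeof(int));
	return h;
}

/* Analog of kref_get_unless_zero(): refuse a reference once the object is
 * already on its way out, so late consumers never see a half-dead handle. */
static int handle_get_unless_zero(struct handle *h)
{
	if (h->refcount == 0)
		return 0;
	h->refcount++;
	return 1;
}

/* Analog of kref_put(): the release callback runs on the final put,
 * whichever side (owner or consumer) happens to drop the last reference. */
static void handle_put(struct handle *h)
{
	if (--h->refcount == 0) {
		free(h->resource);	/* analog of mbox_free_channel() */
		h->resource = NULL;
	}
}
```

This is why the probe path also had to switch from `devm_kzalloc()` to plain `kzalloc()`: devm memory would be freed at driver unbind, while a consumer could still hold a reference past that point.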
@@ -160,17 +160,28 @@ static int acp_poweron(struct generic_pm_domain *genpd)
 	return 0;
 }
 
-static struct device *get_mfd_cell_dev(const char *device_name, int r)
+static int acp_genpd_add_device(struct device *dev, void *data)
 {
-	char auto_dev_name[25];
-	struct device *dev;
+	struct generic_pm_domain *gpd = data;
+	int ret;
 
-	snprintf(auto_dev_name, sizeof(auto_dev_name),
-		 "%s.%d.auto", device_name, r);
-	dev = bus_find_device_by_name(&platform_bus_type, NULL, auto_dev_name);
-	dev_info(dev, "device %s added to pm domain\n", auto_dev_name);
+	ret = pm_genpd_add_device(gpd, dev);
+	if (ret)
+		dev_err(dev, "Failed to add dev to genpd %d\n", ret);
 
-	return dev;
+	return ret;
+}
+
+static int acp_genpd_remove_device(struct device *dev, void *data)
+{
+	int ret;
+
+	ret = pm_genpd_remove_device(dev);
+	if (ret)
+		dev_err(dev, "Failed to remove dev from genpd %d\n", ret);
+
+	/* Continue to remove */
+	return 0;
 }
 
 /**
@@ -181,11 +192,10 @@ static struct device *get_mfd_cell_dev(const char *device_name, int r)
  */
 static int acp_hw_init(void *handle)
 {
-	int r, i;
+	int r;
 	uint64_t acp_base;
 	u32 val = 0;
 	u32 count = 0;
-	struct device *dev;
 	struct i2s_platform_data *i2s_pdata = NULL;
 
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
@@ -341,15 +351,10 @@ static int acp_hw_init(void *handle)
 	if (r)
 		goto failure;
 
-	for (i = 0; i < ACP_DEVS ; i++) {
-		dev = get_mfd_cell_dev(adev->acp.acp_cell[i].name, i);
-		r = pm_genpd_add_device(&adev->acp.acp_genpd->gpd, dev);
-		if (r) {
-			dev_err(dev, "Failed to add dev to genpd\n");
-			goto failure;
-		}
-	}
+	r = device_for_each_child(adev->acp.parent, &adev->acp.acp_genpd->gpd,
+				  acp_genpd_add_device);
+	if (r)
+		goto failure;
 
 	/* Assert Soft reset of ACP */
 	val = cgs_read_register(adev->acp.cgs_device, mmACP_SOFT_RESET);
@@ -410,10 +415,8 @@ failure:
  */
 static int acp_hw_fini(void *handle)
 {
-	int i, ret;
 	u32 val = 0;
 	u32 count = 0;
-	struct device *dev;
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
 	/* return early if no ACP */
@@ -458,13 +461,8 @@ static int acp_hw_fini(void *handle)
 		udelay(100);
 	}
 
-	for (i = 0; i < ACP_DEVS ; i++) {
-		dev = get_mfd_cell_dev(adev->acp.acp_cell[i].name, i);
-		ret = pm_genpd_remove_device(dev);
-		/* If removal fails, dont giveup and try rest */
-		if (ret)
-			dev_err(dev, "remove dev from genpd failed\n");
-	}
+	device_for_each_child(adev->acp.parent, NULL,
+			      acp_genpd_remove_device);
 
 	mfd_remove_devices(adev->acp.parent);
 	kfree(adev->acp.acp_res);
@@ -315,7 +315,7 @@ static int drm_of_lvds_get_remote_pixels_type(
 
 		remote_port = of_graph_get_remote_port(endpoint);
 		if (!remote_port) {
-			of_node_put(remote_port);
+			of_node_put(endpoint);
 			return -EPIPE;
 		}
 
@@ -331,9 +331,11 @@ static int drm_of_lvds_get_remote_pixels_type(
 		 * configurations by passing the endpoints explicitly to
 		 * drm_of_lvds_get_dual_link_pixel_order().
 		 */
-		if (!current_pt || pixels_type != current_pt)
+		if (!current_pt || pixels_type != current_pt) {
+			of_node_put(endpoint);
 			return -EINVAL;
+		}
 	}
 
 	return pixels_type;
 }
@@ -117,7 +117,7 @@ static void oaktrail_lvds_mode_set(struct drm_encoder *encoder,
 			continue;
 		}
 
-	if (!connector) {
+	if (list_entry_is_head(connector, &mode_config->connector_list, head)) {
 		DRM_ERROR("Couldn't find connector when setting mode");
 		gma_power_end(dev);
 		return;
@@ -340,10 +340,12 @@ static void dpu_hw_ctl_clear_all_blendstages(struct dpu_hw_ctl *ctx)
 	int i;
 
 	for (i = 0; i < ctx->mixer_count; i++) {
-		DPU_REG_WRITE(c, CTL_LAYER(LM_0 + i), 0);
-		DPU_REG_WRITE(c, CTL_LAYER_EXT(LM_0 + i), 0);
-		DPU_REG_WRITE(c, CTL_LAYER_EXT2(LM_0 + i), 0);
-		DPU_REG_WRITE(c, CTL_LAYER_EXT3(LM_0 + i), 0);
+		enum dpu_lm mixer_id = ctx->mixer_hw_caps[i].id;
+
+		DPU_REG_WRITE(c, CTL_LAYER(mixer_id), 0);
+		DPU_REG_WRITE(c, CTL_LAYER_EXT(mixer_id), 0);
+		DPU_REG_WRITE(c, CTL_LAYER_EXT2(mixer_id), 0);
+		DPU_REG_WRITE(c, CTL_LAYER_EXT3(mixer_id), 0);
 	}
 }
 
@@ -19,30 +19,12 @@ static int mdp4_hw_init(struct msm_kms *kms)
 {
 	struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms));
 	struct drm_device *dev = mdp4_kms->dev;
-	uint32_t version, major, minor, dmap_cfg, vg_cfg;
+	u32 dmap_cfg, vg_cfg;
 	unsigned long clk;
 	int ret = 0;
 
 	pm_runtime_get_sync(dev->dev);
 
-	mdp4_enable(mdp4_kms);
-	version = mdp4_read(mdp4_kms, REG_MDP4_VERSION);
-	mdp4_disable(mdp4_kms);
-
-	major = FIELD(version, MDP4_VERSION_MAJOR);
-	minor = FIELD(version, MDP4_VERSION_MINOR);
-
-	DBG("found MDP4 version v%d.%d", major, minor);
-
-	if (major != 4) {
-		DRM_DEV_ERROR(dev->dev, "unexpected MDP version: v%d.%d\n",
-			      major, minor);
-		ret = -ENXIO;
-		goto out;
-	}
-
-	mdp4_kms->rev = minor;
-
 	if (mdp4_kms->rev > 1) {
 		mdp4_write(mdp4_kms, REG_MDP4_CS_CONTROLLER0, 0x0707ffff);
 		mdp4_write(mdp4_kms, REG_MDP4_CS_CONTROLLER1, 0x03073f3f);
@@ -88,7 +70,6 @@ static int mdp4_hw_init(struct msm_kms *kms)
 	if (mdp4_kms->rev > 1)
 		mdp4_write(mdp4_kms, REG_MDP4_RESET_STATUS, 1);
 
-out:
 	pm_runtime_put_sync(dev->dev);
 
 	return ret;
@@ -409,6 +390,22 @@ fail:
 	return ret;
 }
 
+static void read_mdp_hw_revision(struct mdp4_kms *mdp4_kms,
+				 u32 *major, u32 *minor)
+{
+	struct drm_device *dev = mdp4_kms->dev;
+	u32 version;
+
+	mdp4_enable(mdp4_kms);
+	version = mdp4_read(mdp4_kms, REG_MDP4_VERSION);
+	mdp4_disable(mdp4_kms);
+
+	*major = FIELD(version, MDP4_VERSION_MAJOR);
+	*minor = FIELD(version, MDP4_VERSION_MINOR);
+
+	DRM_DEV_INFO(dev->dev, "MDP4 version v%d.%d", *major, *minor);
+}
+
 struct msm_kms *mdp4_kms_init(struct drm_device *dev)
 {
 	struct platform_device *pdev = to_platform_device(dev->dev);
@@ -417,6 +414,7 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev)
 	struct msm_kms *kms = NULL;
 	struct msm_gem_address_space *aspace;
 	int irq, ret;
+	u32 major, minor;
 
 	mdp4_kms = kzalloc(sizeof(*mdp4_kms), GFP_KERNEL);
 	if (!mdp4_kms) {
@@ -473,15 +471,6 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev)
 	if (IS_ERR(mdp4_kms->pclk))
 		mdp4_kms->pclk = NULL;
 
-	if (mdp4_kms->rev >= 2) {
-		mdp4_kms->lut_clk = devm_clk_get(&pdev->dev, "lut_clk");
-		if (IS_ERR(mdp4_kms->lut_clk)) {
-			DRM_DEV_ERROR(dev->dev, "failed to get lut_clk\n");
-			ret = PTR_ERR(mdp4_kms->lut_clk);
-			goto fail;
-		}
-	}
-
 	mdp4_kms->axi_clk = devm_clk_get(&pdev->dev, "bus_clk");
 	if (IS_ERR(mdp4_kms->axi_clk)) {
 		DRM_DEV_ERROR(dev->dev, "failed to get axi_clk\n");
@@ -490,8 +479,27 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev)
 	}
 
 	clk_set_rate(mdp4_kms->clk, config->max_clk);
-	if (mdp4_kms->lut_clk)
+
+	read_mdp_hw_revision(mdp4_kms, &major, &minor);
+
+	if (major != 4) {
+		DRM_DEV_ERROR(dev->dev, "unexpected MDP version: v%d.%d\n",
+			      major, minor);
+		ret = -ENXIO;
+		goto fail;
+	}
+
+	mdp4_kms->rev = minor;
+
+	if (mdp4_kms->rev >= 2) {
+		mdp4_kms->lut_clk = devm_clk_get(&pdev->dev, "lut_clk");
+		if (IS_ERR(mdp4_kms->lut_clk)) {
+			DRM_DEV_ERROR(dev->dev, "failed to get lut_clk\n");
+			ret = PTR_ERR(mdp4_kms->lut_clk);
+			goto fail;
+		}
 		clk_set_rate(mdp4_kms->lut_clk, config->max_clk);
+	}
 
 	pm_runtime_enable(dev->dev);
 	mdp4_kms->rpm_enabled = true;
@@ -26,8 +26,10 @@ static int dsi_get_phy(struct msm_dsi *msm_dsi)
 	}

 	phy_pdev = of_find_device_by_node(phy_node);
-	if (phy_pdev)
+	if (phy_pdev) {
 		msm_dsi->phy = platform_get_drvdata(phy_pdev);
+		msm_dsi->phy_dev = &phy_pdev->dev;
+	}

 	of_node_put(phy_node);

@@ -36,8 +38,6 @@ static int dsi_get_phy(struct msm_dsi *msm_dsi)
 		return -EPROBE_DEFER;
 	}

-	msm_dsi->phy_dev = get_device(&phy_pdev->dev);
-
 	return 0;
 }

@@ -51,6 +51,7 @@ static const struct mxsfb_devdata mxsfb_devdata[] = {
 		.hs_wdth_mask	= 0xff,
 		.hs_wdth_shift	= 24,
 		.has_overlay	= false,
+		.has_ctrl2	= false,
 	},
 	[MXSFB_V4] = {
 		.transfer_count	= LCDC_V4_TRANSFER_COUNT,
@@ -59,6 +60,7 @@ static const struct mxsfb_devdata mxsfb_devdata[] = {
 		.hs_wdth_mask	= 0x3fff,
 		.hs_wdth_shift	= 18,
 		.has_overlay	= false,
+		.has_ctrl2	= true,
 	},
 	[MXSFB_V6] = {
 		.transfer_count	= LCDC_V4_TRANSFER_COUNT,
@@ -67,6 +69,7 @@ static const struct mxsfb_devdata mxsfb_devdata[] = {
 		.hs_wdth_mask	= 0x3fff,
 		.hs_wdth_shift	= 18,
 		.has_overlay	= true,
+		.has_ctrl2	= true,
 	},
 };

@@ -22,6 +22,7 @@ struct mxsfb_devdata {
 	unsigned int	hs_wdth_mask;
 	unsigned int	hs_wdth_shift;
 	bool		has_overlay;
+	bool		has_ctrl2;
 };

 struct mxsfb_drm_private {
@@ -107,6 +107,14 @@ static void mxsfb_enable_controller(struct mxsfb_drm_private *mxsfb)
 	clk_prepare_enable(mxsfb->clk_disp_axi);
 	clk_prepare_enable(mxsfb->clk);

+	/* Increase number of outstanding requests on all supported IPs */
+	if (mxsfb->devdata->has_ctrl2) {
+		reg = readl(mxsfb->base + LCDC_V4_CTRL2);
+		reg &= ~CTRL2_SET_OUTSTANDING_REQS_MASK;
+		reg |= CTRL2_SET_OUTSTANDING_REQS_16;
+		writel(reg, mxsfb->base + LCDC_V4_CTRL2);
+	}
+
 	/* If it was disabled, re-enable the mode again */
 	writel(CTRL_DOTCLK_MODE, mxsfb->base + LCDC_CTRL + REG_SET);

@@ -115,6 +123,35 @@ static void mxsfb_enable_controller(struct mxsfb_drm_private *mxsfb)
 	reg |= VDCTRL4_SYNC_SIGNALS_ON;
 	writel(reg, mxsfb->base + LCDC_VDCTRL4);

+	/*
+	 * Enable recovery on underflow.
+	 *
+	 * There is some sort of corner case behavior of the controller,
+	 * which could rarely be triggered at least on i.MX6SX connected
+	 * to 800x480 DPI panel and i.MX8MM connected to DPI->DSI->LVDS
+	 * bridged 1920x1080 panel (and likely on other setups too), where
+	 * the image on the panel shifts to the right and wraps around.
+	 * This happens either when the controller is enabled on boot or
+	 * even later during run time. The condition does not correct
+	 * itself automatically, i.e. the display image remains shifted.
+	 *
+	 * It seems this problem is known and is due to sporadic underflows
+	 * of the LCDIF FIFO. While the LCDIF IP does have underflow/overflow
+	 * IRQs, neither of the IRQs trigger and neither IRQ status bit is
+	 * asserted when this condition occurs.
+	 *
+	 * All known revisions of the LCDIF IP have CTRL1 RECOVER_ON_UNDERFLOW
+	 * bit, which is described in the reference manual since i.MX23 as
+	 * "
+	 *   Set this bit to enable the LCDIF block to recover in the next
+	 *   field/frame if there was an underflow in the current field/frame.
+	 * "
+	 * Enable this bit to mitigate the sporadic underflows.
+	 */
+	reg = readl(mxsfb->base + LCDC_CTRL1);
+	reg |= CTRL1_RECOVER_ON_UNDERFLOW;
+	writel(reg, mxsfb->base + LCDC_CTRL1);
+
 	writel(CTRL_RUN, mxsfb->base + LCDC_CTRL + REG_SET);
 }

@@ -206,6 +243,9 @@ static void mxsfb_crtc_mode_set_nofb(struct mxsfb_drm_private *mxsfb)

 	/* Clear the FIFOs */
 	writel(CTRL1_FIFO_CLEAR, mxsfb->base + LCDC_CTRL1 + REG_SET);
+	readl(mxsfb->base + LCDC_CTRL1);
+	writel(CTRL1_FIFO_CLEAR, mxsfb->base + LCDC_CTRL1 + REG_CLR);
+	readl(mxsfb->base + LCDC_CTRL1);

 	if (mxsfb->devdata->has_overlay)
 		writel(0, mxsfb->base + LCDC_AS_CTRL);
@@ -15,6 +15,7 @@
 #define LCDC_CTRL			0x00
 #define LCDC_CTRL1			0x10
 #define LCDC_V3_TRANSFER_COUNT		0x20
+#define LCDC_V4_CTRL2			0x20
 #define LCDC_V4_TRANSFER_COUNT		0x30
 #define LCDC_V4_CUR_BUF			0x40
 #define LCDC_V4_NEXT_BUF		0x50
@@ -54,12 +55,20 @@
 #define CTRL_DF24			BIT(1)
 #define CTRL_RUN			BIT(0)

+#define CTRL1_RECOVER_ON_UNDERFLOW	BIT(24)
 #define CTRL1_FIFO_CLEAR		BIT(21)
 #define CTRL1_SET_BYTE_PACKAGING(x)	(((x) & 0xf) << 16)
 #define CTRL1_GET_BYTE_PACKAGING(x)	(((x) >> 16) & 0xf)
 #define CTRL1_CUR_FRAME_DONE_IRQ_EN	BIT(13)
 #define CTRL1_CUR_FRAME_DONE_IRQ	BIT(9)

+#define CTRL2_SET_OUTSTANDING_REQS_1	0
+#define CTRL2_SET_OUTSTANDING_REQS_2	(0x1 << 21)
+#define CTRL2_SET_OUTSTANDING_REQS_4	(0x2 << 21)
+#define CTRL2_SET_OUTSTANDING_REQS_8	(0x3 << 21)
+#define CTRL2_SET_OUTSTANDING_REQS_16	(0x4 << 21)
+#define CTRL2_SET_OUTSTANDING_REQS_MASK	(0x7 << 21)
+
 #define TRANSFER_COUNT_SET_VCOUNT(x)	(((x) & 0xffff) << 16)
 #define TRANSFER_COUNT_GET_VCOUNT(x)	(((x) >> 16) & 0xffff)
 #define TRANSFER_COUNT_SET_HCOUNT(x)	((x) & 0xffff)
@@ -60,7 +60,8 @@ static int panfrost_clk_init(struct panfrost_device *pfdev)
 	if (IS_ERR(pfdev->bus_clock)) {
 		dev_err(pfdev->dev, "get bus_clock failed %ld\n",
 			PTR_ERR(pfdev->bus_clock));
-		return PTR_ERR(pfdev->bus_clock);
+		err = PTR_ERR(pfdev->bus_clock);
+		goto disable_clock;
 	}

 	if (pfdev->bus_clock) {
@@ -379,7 +379,7 @@ static int highlander_i2c_probe(struct platform_device *pdev)
 	platform_set_drvdata(pdev, dev);

 	dev->irq = platform_get_irq(pdev, 0);
-	if (iic_force_poll)
+	if (dev->irq < 0 || iic_force_poll)
 		dev->irq = 0;

 	if (dev->irq) {
@@ -413,10 +413,8 @@ static int hix5hd2_i2c_probe(struct platform_device *pdev)
 		return PTR_ERR(priv->regs);

 	irq = platform_get_irq(pdev, 0);
-	if (irq <= 0) {
-		dev_err(&pdev->dev, "cannot find HS-I2C IRQ\n");
+	if (irq < 0)
 		return irq;
-	}

 	priv->clk = devm_clk_get(&pdev->dev, NULL);
 	if (IS_ERR(priv->clk)) {
@@ -467,16 +467,14 @@ iop3xx_i2c_probe(struct platform_device *pdev)

 	irq = platform_get_irq(pdev, 0);
 	if (irq < 0) {
-		ret = -ENXIO;
+		ret = irq;
 		goto unmap;
 	}
 	ret = request_irq(irq, iop3xx_i2c_irq_handler, 0,
 				pdev->name, adapter_data);

-	if (ret) {
-		ret = -EIO;
+	if (ret)
 		goto unmap;
-	}

 	memcpy(new_adapter->name, pdev->name, strlen(pdev->name));
 	new_adapter->owner = THIS_MODULE;
@@ -1207,7 +1207,7 @@ static int mtk_i2c_probe(struct platform_device *pdev)
 		return PTR_ERR(i2c->pdmabase);

 	irq = platform_get_irq(pdev, 0);
-	if (irq <= 0)
+	if (irq < 0)
 		return irq;

 	init_completion(&i2c->msg_complete);
@@ -1140,7 +1140,7 @@ static int s3c24xx_i2c_probe(struct platform_device *pdev)
 	 */
 	if (!(i2c->quirks & QUIRK_POLL)) {
 		i2c->irq = ret = platform_get_irq(pdev, 0);
-		if (ret <= 0) {
+		if (ret < 0) {
 			dev_err(&pdev->dev, "cannot find IRQ\n");
 			clk_unprepare(i2c->clk);
 			return ret;
@@ -578,7 +578,7 @@ static int synquacer_i2c_probe(struct platform_device *pdev)

 	i2c->irq = platform_get_irq(pdev, 0);
 	if (i2c->irq < 0)
-		return -ENODEV;
+		return i2c->irq;

 	ret = devm_request_irq(&pdev->dev, i2c->irq, synquacer_i2c_isr,
 			       0, dev_name(&pdev->dev), i2c);
@@ -517,7 +517,7 @@ static int xlp9xx_i2c_probe(struct platform_device *pdev)
 		return PTR_ERR(priv->base);

 	priv->irq = platform_get_irq(pdev, 0);
-	if (priv->irq <= 0)
+	if (priv->irq < 0)
 		return priv->irq;
 	/* SMBAlert irq */
 	priv->alert_data.irq = platform_get_irq(pdev, 1);
@@ -92,6 +92,27 @@ EXPORT_SYMBOL(gic_pmr_sync);
 DEFINE_STATIC_KEY_FALSE(gic_nonsecure_priorities);
 EXPORT_SYMBOL(gic_nonsecure_priorities);

+/*
+ * When the Non-secure world has access to group 0 interrupts (as a
+ * consequence of SCR_EL3.FIQ == 0), reading the ICC_RPR_EL1 register will
+ * return the Distributor's view of the interrupt priority.
+ *
+ * When GIC security is enabled (GICD_CTLR.DS == 0), the interrupt priority
+ * written by software is moved to the Non-secure range by the Distributor.
+ *
+ * If both are true (which is when gic_nonsecure_priorities gets enabled),
+ * we need to shift down the priority programmed by software to match it
+ * against the value returned by ICC_RPR_EL1.
+ */
+#define GICD_INT_RPR_PRI(priority)					\
+	({								\
+		u32 __priority = (priority);				\
+		if (static_branch_unlikely(&gic_nonsecure_priorities))	\
+			__priority = 0x80 | (__priority >> 1);		\
+									\
+		__priority;						\
+	})
+
 /* ppi_nmi_refs[n] == number of cpus having ppi[n + 16] set as NMI */
 static refcount_t *ppi_nmi_refs;

@@ -679,7 +700,7 @@ static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs
 		return;

 	if (gic_supports_nmi() &&
-	    unlikely(gic_read_rpr() == GICD_INT_NMI_PRI)) {
+	    unlikely(gic_read_rpr() == GICD_INT_RPR_PRI(GICD_INT_NMI_PRI))) {
 		gic_handle_nmi(irqnr, regs);
 		return;
 	}
@@ -92,18 +92,22 @@ static int pch_pic_set_type(struct irq_data *d, unsigned int type)
 	case IRQ_TYPE_EDGE_RISING:
 		pch_pic_bitset(priv, PCH_PIC_EDGE, d->hwirq);
 		pch_pic_bitclr(priv, PCH_PIC_POL, d->hwirq);
+		irq_set_handler_locked(d, handle_edge_irq);
 		break;
 	case IRQ_TYPE_EDGE_FALLING:
 		pch_pic_bitset(priv, PCH_PIC_EDGE, d->hwirq);
 		pch_pic_bitset(priv, PCH_PIC_POL, d->hwirq);
+		irq_set_handler_locked(d, handle_edge_irq);
 		break;
 	case IRQ_TYPE_LEVEL_HIGH:
 		pch_pic_bitclr(priv, PCH_PIC_EDGE, d->hwirq);
 		pch_pic_bitclr(priv, PCH_PIC_POL, d->hwirq);
+		irq_set_handler_locked(d, handle_level_irq);
 		break;
 	case IRQ_TYPE_LEVEL_LOW:
 		pch_pic_bitclr(priv, PCH_PIC_EDGE, d->hwirq);
 		pch_pic_bitset(priv, PCH_PIC_POL, d->hwirq);
+		irq_set_handler_locked(d, handle_level_irq);
 		break;
 	default:
 		ret = -EINVAL;
@@ -113,11 +117,24 @@ static int pch_pic_set_type(struct irq_data *d, unsigned int type)
 	return ret;
 }

+static void pch_pic_ack_irq(struct irq_data *d)
+{
+	unsigned int reg;
+	struct pch_pic *priv = irq_data_get_irq_chip_data(d);
+
+	reg = readl(priv->base + PCH_PIC_EDGE + PIC_REG_IDX(d->hwirq) * 4);
+	if (reg & BIT(PIC_REG_BIT(d->hwirq))) {
+		writel(BIT(PIC_REG_BIT(d->hwirq)),
+			priv->base + PCH_PIC_CLR + PIC_REG_IDX(d->hwirq) * 4);
+	}
+	irq_chip_ack_parent(d);
+}
+
 static struct irq_chip pch_pic_irq_chip = {
 	.name			= "PCH PIC",
 	.irq_mask		= pch_pic_mask_irq,
 	.irq_unmask		= pch_pic_unmask_irq,
-	.irq_ack		= irq_chip_ack_parent,
+	.irq_ack		= pch_pic_ack_irq,
 	.irq_set_affinity	= irq_chip_set_affinity_parent,
 	.irq_set_type		= pch_pic_set_type,
 };
@@ -385,6 +385,7 @@ static int is31fl32xx_parse_dt(struct device *dev,
 			dev_err(dev,
 				"Node %pOF 'reg' conflicts with another LED\n",
 				child);
+			ret = -EINVAL;
 			goto err;
 		}

@@ -99,10 +99,9 @@ static int lt3593_led_probe(struct platform_device *pdev)
 	init_data.default_label = ":";

 	ret = devm_led_classdev_register_ext(dev, &led_data->cdev, &init_data);
-	if (ret < 0) {
-		fwnode_handle_put(child);
+	fwnode_handle_put(child);
+	if (ret < 0)
 		return ret;
-	}

 	platform_set_drvdata(pdev, led_data);

@@ -6,10 +6,33 @@
 #include <linux/kernel.h>
 #include <linux/leds.h>
 #include <linux/module.h>
+#include "../leds.h"

-static struct led_trigger *ledtrig_audio[NUM_AUDIO_LEDS];
 static enum led_brightness audio_state[NUM_AUDIO_LEDS];

+static int ledtrig_audio_mute_activate(struct led_classdev *led_cdev)
+{
+	led_set_brightness_nosleep(led_cdev, audio_state[LED_AUDIO_MUTE]);
+	return 0;
+}
+
+static int ledtrig_audio_micmute_activate(struct led_classdev *led_cdev)
+{
+	led_set_brightness_nosleep(led_cdev, audio_state[LED_AUDIO_MICMUTE]);
+	return 0;
+}
+
+static struct led_trigger ledtrig_audio[NUM_AUDIO_LEDS] = {
+	[LED_AUDIO_MUTE] = {
+		.name     = "audio-mute",
+		.activate = ledtrig_audio_mute_activate,
+	},
+	[LED_AUDIO_MICMUTE] = {
+		.name     = "audio-micmute",
+		.activate = ledtrig_audio_micmute_activate,
+	},
+};
+
 enum led_brightness ledtrig_audio_get(enum led_audio type)
 {
 	return audio_state[type];
@@ -19,24 +42,22 @@ EXPORT_SYMBOL_GPL(ledtrig_audio_get);
 void ledtrig_audio_set(enum led_audio type, enum led_brightness state)
 {
 	audio_state[type] = state;
-	led_trigger_event(ledtrig_audio[type], state);
+	led_trigger_event(&ledtrig_audio[type], state);
 }
 EXPORT_SYMBOL_GPL(ledtrig_audio_set);

 static int __init ledtrig_audio_init(void)
 {
-	led_trigger_register_simple("audio-mute",
-				    &ledtrig_audio[LED_AUDIO_MUTE]);
-	led_trigger_register_simple("audio-micmute",
-				    &ledtrig_audio[LED_AUDIO_MICMUTE]);
+	led_trigger_register(&ledtrig_audio[LED_AUDIO_MUTE]);
+	led_trigger_register(&ledtrig_audio[LED_AUDIO_MICMUTE]);
 	return 0;
 }
 module_init(ledtrig_audio_init);

 static void __exit ledtrig_audio_exit(void)
 {
-	led_trigger_unregister_simple(ledtrig_audio[LED_AUDIO_MUTE]);
-	led_trigger_unregister_simple(ledtrig_audio[LED_AUDIO_MICMUTE]);
+	led_trigger_unregister(&ledtrig_audio[LED_AUDIO_MUTE]);
+	led_trigger_unregister(&ledtrig_audio[LED_AUDIO_MICMUTE]);
 }
 module_exit(ledtrig_audio_exit);

@@ -934,20 +934,20 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
 	n = BITS_TO_LONGS(d->nr_stripes) * sizeof(unsigned long);
 	d->full_dirty_stripes = kvzalloc(n, GFP_KERNEL);
 	if (!d->full_dirty_stripes)
-		return -ENOMEM;
+		goto out_free_stripe_sectors_dirty;

 	idx = ida_simple_get(&bcache_device_idx, 0,
 				BCACHE_DEVICE_IDX_MAX, GFP_KERNEL);
 	if (idx < 0)
-		return idx;
+		goto out_free_full_dirty_stripes;

 	if (bioset_init(&d->bio_split, 4, offsetof(struct bbio, bio),
 			BIOSET_NEED_BVECS|BIOSET_NEED_RESCUER))
-		goto err;
+		goto out_ida_remove;

 	d->disk = alloc_disk(BCACHE_MINORS);
 	if (!d->disk)
-		goto err;
+		goto out_bioset_exit;

 	set_capacity(d->disk, sectors);
 	snprintf(d->disk->disk_name, DISK_NAME_LEN, "bcache%i", idx);
@@ -993,8 +993,14 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,

 	return 0;

-err:
+out_bioset_exit:
+	bioset_exit(&d->bio_split);
+out_ida_remove:
 	ida_simple_remove(&bcache_device_idx, idx);
+out_free_full_dirty_stripes:
+	kvfree(d->full_dirty_stripes);
+out_free_stripe_sectors_dirty:
+	kvfree(d->stripe_sectors_dirty);
 	return -ENOMEM;

 }
@@ -2233,6 +2233,7 @@ static int tda1997x_core_init(struct v4l2_subdev *sd)
 	/* get initial HDMI status */
 	state->hdmi_status = io_read(sd, REG_HDMI_FLAGS);

+	io_write(sd, REG_EDID_ENABLE, EDID_ENABLE_A_EN | EDID_ENABLE_B_EN);
 	return 0;
 }

@@ -2031,17 +2031,25 @@ static int __coda_start_decoding(struct coda_ctx *ctx)
 	u32 src_fourcc, dst_fourcc;
 	int ret;

-	if (!ctx->initialized) {
-		ret = __coda_decoder_seq_init(ctx);
-		if (ret < 0)
-			return ret;
-	}
-
 	q_data_src = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT);
 	q_data_dst = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
 	src_fourcc = q_data_src->fourcc;
 	dst_fourcc = q_data_dst->fourcc;

+	if (!ctx->initialized) {
+		ret = __coda_decoder_seq_init(ctx);
+		if (ret < 0)
+			return ret;
+	} else {
+		ctx->frame_mem_ctrl &= ~(CODA_FRAME_CHROMA_INTERLEAVE | (0x3 << 9) |
+					 CODA9_FRAME_TILED2LINEAR);
+		if (dst_fourcc == V4L2_PIX_FMT_NV12 || dst_fourcc == V4L2_PIX_FMT_YUYV)
+			ctx->frame_mem_ctrl |= CODA_FRAME_CHROMA_INTERLEAVE;
+		if (ctx->tiled_map_type == GDI_TILED_FRAME_MB_RASTER_MAP)
+			ctx->frame_mem_ctrl |= (0x3 << 9) |
+				((ctx->use_vdoa) ? 0 : CODA9_FRAME_TILED2LINEAR);
+	}
+
 	coda_write(dev, ctx->parabuf.paddr, CODA_REG_BIT_PARA_BUF_ADDR);

 	ret = coda_alloc_framebuffers(ctx, q_data_dst, src_fourcc);