Commit Graph

130059 Commits

Author SHA1 Message Date
Arnd Bergmann
f98121f3ef arm64: dts: fix build errors from missing dependencies
Two branches were incorrectly sent without having the necessary
header file changes. Rather than back those out now, I'm replacing
the symbolic names for the clks and resets with the numeric
values to get 'make allmodconfig dtbs' working again.

After the header file changes are merged, we can revert this
patch.

Fixes: 6bc37fa ("arm64: dts: add Allwinner A64 SoC .dtsi")
Fixes: 50784e6 ("dts: arm64: db820c: add pmic pins specific dts file")
Acked-by: Andre Przywara <andre.przywara@arm.com>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2016-11-30 15:15:16 +01:00
Arnd Bergmann
c2d3925456 Merge tag 'davinci-for-v4.10/defconfig-3' of git://git.kernel.org/pub/scm/linux/kernel/git/nsekhar/linux-davinci into next/defconfig
Pull "DaVinci defconfig updates for v4.10 (part 3)" from Sekhar Nori:

Enables the newly introduced DDR controller and
master priority setting drivers in the kernel.

Also updates the defconfig to boot the latest systemd-based
filesystems on DA850.

Includes a patch enabling USB OHCI support in the davinci
defconfig.

* tag 'davinci-for-v4.10/defconfig-3' of git://git.kernel.org/pub/scm/linux/kernel/git/nsekhar/linux-davinci:
  ARM: davinci_all_defconfig: Enable OHCI as module
  ARM: davinci_all_defconfig: add missing options for systemd
  ARM: davinci_all_defconfig: enable the mstpri and ddrctl drivers
2016-11-30 14:55:14 +01:00
Arnd Bergmann
196249beab Merge tag 'davinci-for-v4.10/soc-3' of git://git.kernel.org/pub/scm/linux/kernel/git/nsekhar/linux-davinci into next/soc
Pull "DaVinci SoC updates for v4.10 (part 3)" from Sekhar Nori:

mach-davinci SoC support updates that adjust the
USB OHCI device name to match the one used by the drivers,
and update various board files to use the GPIO descriptor
API that the MMC subsystem uses for card-detect and
write-protect detection.

* tag 'davinci-for-v4.10/soc-3' of git://git.kernel.org/pub/scm/linux/kernel/git/nsekhar/linux-davinci:
  ARM: davinci: da830-evm: use gpio descriptor for mmc pins
  ARM: davinci: da850-evm: use gpio descriptor for mmc pins
  ARM: davinci: hawk: use gpio descriptor for mmc pins
  ARM: davinci: da8xx: Fix ohci device name
2016-11-30 14:48:30 +01:00
Arnd Bergmann
c0db16b815 Merge tag 'davinci-for-v4.10/soc-2' of git://git.kernel.org/pub/scm/linux/kernel/git/nsekhar/linux-davinci into next/soc
Pull "DaVinci SoC updates for v4.10 (part 2)" from Sekhar Nori:

Adds suspend-to-RAM support for DT boot mode
on DA850.

* tag 'davinci-for-v4.10/soc-2' of git://git.kernel.org/pub/scm/linux/kernel/git/nsekhar/linux-davinci:
  ARM: davinci: PM: support da8xx DT platforms
2016-11-30 14:47:37 +01:00
Arnd Bergmann
e264ae280c Merge tag 'davinci-for-v4.10/cleanup-2' of git://git.kernel.org/pub/scm/linux/kernel/git/nsekhar/linux-davinci into next/soc
Pull DaVinci cleanup for v4.10 from Sekhar Nori:

mach-davinci cleanup to make it easy to add PM support
for DT-boot.

* tag 'davinci-for-v4.10/cleanup-2' of git://git.kernel.org/pub/scm/linux/kernel/git/nsekhar/linux-davinci:
  ARM: davinci: PM: fix build when da850 not compiled in
  ARM: davinci: PM: cleanup: remove references to pdata
  ARM: davinci: PM: rework init, remove platform device
2016-11-30 14:45:31 +01:00
Geoff Levand
6dff5b6705 powerpc/ps3: Fix system hang with GCC 5 builds
GCC 5 generates different code for this bootwrapper null check that
causes the PS3 to hang very early in its bootup. This check is of
limited value, so just get rid of it.

Cc: stable@vger.kernel.org
Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-30 23:19:59 +11:00
Michael Ellerman
76ffb57850 powerpc/prom: Switch to using structs for ibm_architecture_vec
Now that we've defined structures to describe each of the client
architecture vectors, we can use those to construct the value we pass to
firmware.

This avoids the tricks we previously played with the W() macro, allows
us to properly endian annotate fields, and should help to avoid bugs
introduced by failing to have the correct number of zero pad bytes
between fields.

It also means we can avoid hard coding IBM_ARCH_VEC_NRCORES_OFFSET in
order to update the max_cpus value and instead just set it.
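
As a rough illustration of the approach (a sketch only; the field names and
layout below are hypothetical, not the exact ones used in prom_init.c), an
endian-annotated, packed structure lets individual fields such as max_cpus
be set directly:

  /* Hypothetical example of a packed, endian-annotated option vector. */
  #include <linux/types.h>

  struct example_option_vector {
          __be16  max_cpus;       /* endian-annotated, set directly */
          u8      option_flags;   /* single-byte fields need no annotation */
          u8      reserved;       /* explicit padding, no hidden zero bytes */
  } __packed;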

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-30 23:19:59 +11:00
Michael Ellerman
d03d1d65b5 powerpc/prom: Define structs for client architecture vectors
The "client architecture vectors" are a series of structures we pass to
firmware to define various things, such as what processors we support
and many other options.

Each structure is entirely different so we have to define a different
struct for each one, but that's OK.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-30 23:19:59 +11:00
Nicholas Piggin
53ce299615 powerpc/pseries: add definitions for new H_SIGNAL_SYS_RESET hcall
This has not made its way to a PAPR release yet, but we have an hcall
number assigned.

  H_SIGNAL_SYS_RESET = 0x380

  Syntax:
    hcall(uint64 H_SIGNAL_SYS_RESET, int64 target);

  Generate a system reset NMI on the threads indicated by target.

  Values for target:
    -1 = target all online threads including the caller
    -2 = target all online threads except for the caller
    All other negative values: reserved
    Positive values: The thread to be targeted, obtained from the value
    of the "ibm,ppc-interrupt-server#s" property of the CPU in the OF
    device tree.

  Semantics:
  - Invalid target: return H_Parameter.
  - Otherwise: Generate a system reset NMI on target thread(s),
    return H_Success.

This will be used by crash/debug code to get stuck CPUs into a known
state.
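
For illustration only, a caller could invoke the new hcall through the
standard pseries wrapper like this (sketch; the wrapper function name is
made up, but plpar_hcall_norets(), H_SUCCESS and the target encoding above
are as described):

  /* Sketch: send a system reset NMI to all online threads except the
   * caller (target = -2 per the semantics above). */
  #include <linux/errno.h>
  #include <asm/hvcall.h>

  static int example_signal_sys_reset_others(void)
  {
          long rc = plpar_hcall_norets(H_SIGNAL_SYS_RESET, -2L);

          return (rc == H_SUCCESS) ? 0 : -EIO;
  }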

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-30 23:19:59 +11:00
Thiago Jung Bauermann
500c7ab1a9 powerpc: Enable CONFIG_KEXEC_FILE in powerpc server defconfigs.
Enable CONFIG_KEXEC_FILE in powernv_defconfig, ppc64_defconfig and
pseries_defconfig.

It depends on CONFIG_CRYPTO_SHA256=y, so add that as well.

Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-30 23:15:36 +11:00
Thiago Jung Bauermann
80f60e509a powerpc/kexec: Enable kexec_file_load() syscall
Define the Kconfig symbol so that the kexec_file_load() code can be
built, and wire up the syscall so that it can be called.

Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-30 23:15:27 +11:00
Thiago Jung Bauermann
0d97631392 powerpc: Add purgatory for kexec_file_load() implementation.
This purgatory implementation is based on the versions from kexec-tools
and kexec-lite, with additional changes.

Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-30 23:15:26 +11:00
Thiago Jung Bauermann
a0458284f0 powerpc: Add support code for kexec_file_load()
This patch adds the support code needed for implementing
kexec_file_load() on powerpc.

This consists of functions to load the ELF kernel, either big or little
endian, and to set up the purgatory environment which switches from the
first kernel to the second kernel.

None of this code is built yet, as it depends on CONFIG_KEXEC_FILE which
we have not yet defined. Although we could define CONFIG_KEXEC_FILE in
this patch, we'd then have a window in history where the kconfig symbol
is present but the syscall is not, which would be awkward.

Signed-off-by: Josh Sklar <sklar@linux.vnet.ibm.com>
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-30 23:15:25 +11:00
Thiago Jung Bauermann
da6658859b powerpc: Change places using CONFIG_KEXEC to use CONFIG_KEXEC_CORE instead.
Commit 2965faa5e0 ("kexec: split kexec_load syscall from kexec core
code") introduced CONFIG_KEXEC_CORE so that CONFIG_KEXEC means whether
the kexec_load system call should be compiled-in and CONFIG_KEXEC_FILE
means whether the kexec_file_load system call should be compiled-in.
These options can be set independently from each other.

Since until now powerpc only supported kexec_load, CONFIG_KEXEC and
CONFIG_KEXEC_CORE were synonyms. That is not the case anymore, so we
need to make a distinction. Almost all places where CONFIG_KEXEC was
being used should be using CONFIG_KEXEC_CORE instead, since
kexec_file_load also needs that code compiled in.
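
As an illustration of the pattern (not an actual hunk from this patch; the
function is a made-up example), code shared by both syscalls moves under
CONFIG_KEXEC_CORE:

  /* Illustrative only: the guard changes from CONFIG_KEXEC to
   * CONFIG_KEXEC_CORE so kexec_file_load() also gets the code. */
  #include <linux/kexec.h>

  static void example_machine_crash(struct pt_regs *regs)
  {
  #ifdef CONFIG_KEXEC_CORE        /* was: #ifdef CONFIG_KEXEC */
          crash_kexec(regs);
  #endif
  }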

Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-30 23:15:11 +11:00
Thiago Jung Bauermann
ec2b9bfaac kexec_file: Change kexec_add_buffer to take kexec_buf as argument.
This is done to simplify the kexec_add_buffer argument list.
Adapt all callers to set up a kexec_buf to pass to kexec_add_buffer.

In addition, change the type of kexec_buf.buffer from char * to void *.
There is no particular reason for it to be a char *, and the change
allows us to get rid of 3 existing casts to char * in the code.
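
A minimal sketch of the new calling convention (the loader function and
field values here are placeholders, not code from this patch):

  /* Callers now fill a struct kexec_buf and pass it by pointer. */
  #include <linux/kexec.h>

  static int example_load_segment(struct kimage *image, void *buf,
                                  unsigned long size)
  {
          struct kexec_buf kbuf = {
                  .image     = image,
                  .buffer    = buf,       /* void * now, no char * casts */
                  .bufsz     = size,
                  .memsz     = size,
                  .buf_align = PAGE_SIZE,
                  .top_down  = false,
          };

          return kexec_add_buffer(&kbuf);         /* single argument */
  }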

Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
Acked-by: Dave Young <dyoung@redhat.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-30 23:14:59 +11:00
Ard Biesheuvel
81126d1a8b crypto: arm/aesbs - fix brokenness after skcipher conversion
The CBC encryption routine should use the encryption round keys, not
the decryption round keys.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-30 20:01:51 +08:00
Ard Biesheuvel
b3e1e0cbd9 crypto: arm64/aes-ce-ctr - fix skcipher conversion
Fix a missing statement that got lost in the skcipher conversion of
the CTR transform.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-30 20:01:44 +08:00
Ard Biesheuvel
7f329c1742 crypto: arm/aes-ce - fix broken monolithic build
When building the arm64 kernel with both CONFIG_CRYPTO_AES_ARM64_CE_BLK=y
and CONFIG_CRYPTO_AES_ARM64_NEON_BLK=y configured, the build breaks with
the following error:

arch/arm64/crypto/aes-neon-blk.o:(.bss+0x0): multiple definition of `aes_simd_algs'
arch/arm64/crypto/aes-ce-blk.o:(.bss+0x0): first defined here

Fix this by making aes_simd_algs 'static'.
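
The underlying pattern, sketched generically (the array size shown is
illustrative):

  /* Before the fix, both aes-ce-blk.o and aes-neon-blk.o defined this
   * array without 'static', so a monolithic link saw two definitions.
   * Making it 'static' keeps one private copy per translation unit. */
  #include <crypto/internal/simd.h>

  static struct simd_skcipher_alg *aes_simd_algs[4];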

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-30 20:01:41 +08:00
Herbert Xu
6fdf436fd8 crypto: arm/aes - Add missing SIMD select for aesbs
This patch adds one more missing SIMD select for AES_ARM_BS.  It
also changes selects on ALGAPI to BLKCIPHER.

Fixes: 211f41af53 ("crypto: aesbs - Convert to skcipher")
Reported-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-30 20:01:40 +08:00
Thomas Gleixner
b836554386 x86/tsc: Fix broken CONFIG_X86_TSC=n build
Add the missing return statement to the inline stub
tsc_store_and_check_tsc_adjust() and add the other stubs to make an
SMP=y, TSC=n build happy.

While at it, remove the unused variable from the UP variant of
tsc_store_and_check_tsc_adjust().
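
The shape of the problem, as a generic illustration (the exact stub
signature in asm/tsc.h may differ):

  /* A non-void inline stub for the TSC=n configuration must still
   * return a value, otherwise the build breaks. */
  static inline bool tsc_store_and_check_tsc_adjust(void)
  {
          return false;           /* the previously missing return */
  }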

Fixes: commit ba75fb646931 ("x86/tsc: Sync test only for the first cpu in a package")
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-30 09:44:52 +01:00
Ingo Molnar
0a21fc1214 sched/x86: Make CONFIG_SCHED_MC_PRIO=y easier to enable
Right now CONFIG_SCHED_MC_PRIO has X86_INTEL_PSTATE as a dependency,
which is not enabled by default and which hides the CONFIG_SCHED_MC_PRIO
hardware-enabling feature.

Select X86_INTEL_PSTATE instead, plus its dependency (CPU_FREQ), if the
user enables CONFIG_SCHED_MC_PRIO=y.

(Also align the CONFIG_SCHED_MC_PRIO Kconfig help text in standard style.)

Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: bp@suse.de
Cc: jolsa@redhat.com
Cc: linux-acpi@vger.kernel.org
Cc: linux-pm@vger.kernel.org
Cc: rjw@rjwysocki.net
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-11-30 08:36:10 +01:00
Tim Chen
de966cf4a4 sched/x86: Change CONFIG_SCHED_ITMT to CONFIG_SCHED_MC_PRIO
Rename CONFIG_SCHED_ITMT for Intel Turbo Boost Max Technology 3.0
to CONFIG_SCHED_MC_PRIO.  This makes the configuration extensible
in the future to other architectures that wish to similarly establish
CPU core priority support in the scheduler.

The description in Kconfig is updated to reflect this change with
added details for better clarity.  The configuration is explicitly
default-y, to enable the feature on CPUs that support it.

It has no effect on non-TBM3 CPUs.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bp@suse.de
Cc: jolsa@redhat.com
Cc: linux-acpi@vger.kernel.org
Cc: linux-pm@vger.kernel.org
Cc: rjw@rjwysocki.net
Link: http://lkml.kernel.org/r/2b2ee29d93e3f162922d72d0165a1405864fbb23.1480444902.git.tim.c.chen@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-11-30 08:27:08 +01:00
Balbir Singh
0ab5171b89 powerpc/mm: Fix no execute fault handling on pre-POWER5
Aneesh/Ben reported that the change to do_page_fault() we made in commit
1d18ad0268 ("powerpc/mm: Detect instruction fetch denied and report")
needs to handle the case where CPU_FTR_COHERENT_ICACHE is missing but we
have CPU_FTR_NOEXECUTE. In those cases the check added for
SRR1_ISI_N_OR_G might trigger a false positive.

This patch adds a check for CPU_FTR_COHERENT_ICACHE in addition to the
MSR value.
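
Sketched as a standalone predicate (paraphrased; not the exact hunk from
do_page_fault()):

  /* Only treat SRR1_ISI_N_OR_G as a genuine no-execute denial when the
   * CPU also keeps its icache coherent; otherwise the bit may simply
   * reflect the lazy icache flush mechanism on pre-POWER5 parts. */
  #include <asm/reg.h>
  #include <asm/cpu_has_feature.h>

  static bool example_fetch_denied(unsigned long error_code)
  {
          return cpu_has_feature(CPU_FTR_COHERENT_ICACHE) &&
                 (error_code & SRR1_ISI_N_OR_G);
  }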

Fixes: 1d18ad0268 ("powerpc/mm: Detect instruction fetch denied and report")
Reported-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-30 17:19:01 +11:00
Dave Airlie
35838b470a Merge tag 'drm-intel-next-2016-11-21' of git://anongit.freedesktop.org/git/drm-intel into drm-next
Final 4.10 updates:

- fine-tune fb flushing and tracking (Chris Wilson)
- refactor state check dumper code for more conciseness (Tvrtko)
- roll out dev_priv all over the place (Tvrtko)
- finally remove __i915__ magic macro (Tvrtko)
- more gvt bugfixes (Zhenyu&team)
- better opregion CADL handling (Jani)
- refactor/clean up wm programming (Maarten)
- gpu scheduler + priority boosting for flips as first user (Chris
  Wilson)
- make fbc use more atomic (Paulo)
- initial kvm-gvt framework, but not yet complete (Zhenyu&team)

* tag 'drm-intel-next-2016-11-21' of git://anongit.freedesktop.org/git/drm-intel: (127 commits)
  drm/i915: Update DRIVER_DATE to 20161121
  drm/i915: Skip final clflush if LLC is coherent
  drm/i915: Always flush the dirty CPU cache when pinning the scanout
  drm/i915: Don't touch NULL sg on i915_gem_object_get_pages_gtt() error
  drm/i915: Check that each request phase is completed before retiring
  drm/i915: i915_pages_create_for_stolen should return err ptr
  drm/i915: Enable support for nonblocking modeset
  drm/i915: Be more careful to drop the GT wakeref
  drm/i915: Move frontbuffer CS write tracking from ggtt vma to object
  drm/i915: Only dump dp_m2_n2 configuration when drrs is used
  drm/i915: don't leak global_timeline
  drm/i915: add i915_address_space_fini
  drm/i915: Add a few more sanity checks for stolen handling
  drm/i915: Waterproof verification of gen9 forcewake table ranges
  drm/i915: Introduce enableddisabled helper
  drm/i915: Only dump possible panel fitter config for the platform
  drm/i915: Only dump scaler config where supported
  drm/i915: Compact a few pipe config debug lines
  drm/i915: Don't log pipe config kernel pointer and duplicated pipe name
  drm/i915: Dump FDI config only where applicable
  ...
2016-11-30 14:21:35 +10:00
Eugeniy Paltsev
bd2c6636cc dmaengine: DW DMAC: add multi-block property to device tree
Several versions of the DW DMAC have hardware support for multi-block
transfers. That hardware support is disabled by default when the DMAC
is configured via DT, and software emulation of multi-block transfers
is used instead. Add a multi-block property so that hardware
multi-block transfers (where present) can be enabled via DT.

Switch from the per-device is_nollp variable to a multi_block array so
that multi-block transfers can be enabled/disabled separately per
channel.
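
A sketch of how such a per-channel property could be parsed (illustrative;
the helper function and the fallback default shown are assumptions, not the
actual dw_dmac platform code):

  /* Read a per-channel u32 "multi-block" array from DT; fall back to 0
   * (software emulation assumed as default) if the property is absent. */
  #include <linux/of.h>

  static void example_parse_multi_block(struct device_node *np,
                                        u32 *multi_block, int nr_channels)
  {
          int i;

          if (of_property_read_u32_array(np, "multi-block", multi_block,
                                         nr_channels))
                  for (i = 0; i < nr_channels; i++)
                          multi_block[i] = 0;
  }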

Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-11-30 08:57:50 +05:30
Brian Norris
0989b0909c Merge tag 'nand/for-4.10' of github.com:linux-nand/linux
From Boris Brezillon:

"""
This pull request contains the following notable changes:
- new tango NAND controller driver
- new ox820 NAND controller driver
- addition of a new full-ID entry in the nand_ids table
- rework of the s3c240 driver to support DT
- extension of the nand_sdr_timings to expose tCCS, tPROG and tR
- addition of a new flag to ask the core to wait for tCCS when sending
  a RNDIN/RNDOUT command
- addition of a new flag to ask the core to let the controller driver
  send the READ/PROGPAGE command

This pull request also contains minor fixes/cleanup/cosmetic changes:
- properly support 512 ECC step size in the sunxi driver
- improve the error messages in the pxa probe path
- fix module autoload in the omap2 driver
- cleanup of several nand drivers to return nand_scan{_tail}() error
  code instead of returning -EIO
- various cleanups in the denali driver
- cleanups in the ooblayout handling (MTD core)
- fix an error check in nandsim
"""
2016-11-29 18:28:30 -08:00
Thomas Gleixner
cc4db26899 x86/tsc: Try to adjust TSC if sync test fails
If the first CPU of a package comes online, it is necessary to test whether
the TSC is in sync with a CPU on some other package. When a deviation is
observed (time going backwards between the two CPUs) the TSC is marked
unstable, which is a problem on large machines as they have to fall back to
the HPET clocksource, which is insanely slow.

Some time ago it was attempted to compensate the TSC by adding the offset
to the TSC and writing it back, but this was never merged because it did
not turn out to be stable, especially not on older systems.

Modern systems have become more stable in that regard and the TSC_ADJUST
MSR allows us to compensate for the time deviation in a sane way. If it's
available, allow up to three synchronization runs, and if a time warp is
detected the starting CPU can compensate for it via the TSC_ADJUST MSR and
retry. If the third run still shows a deviation, or if random time warps
are detected, the test fails terminally.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Link: http://lkml.kernel.org/r/20161119134018.048237517@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-29 19:23:18 +01:00
Thomas Gleixner
76d3b85158 x86/tsc: Prepare warp test for TSC adjustment
To allow TSC compensation across nodes it's necessary to know in which
direction the TSC warp was observed. Return the maximum observed value on
the calling CPU so the caller can determine the direction later.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Link: http://lkml.kernel.org/r/20161119134017.970859287@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-29 19:23:18 +01:00
Thomas Gleixner
4c5e3c6375 x86/tsc: Move sync cleanup to a safe place
Cleaning up the stop marker on the control CPU is wrong when we want to add
retry support. Move the cleanup to the starting CPU.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Link: http://lkml.kernel.org/r/20161119134017.892095627@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-29 19:23:18 +01:00
Thomas Gleixner
a36f513681 x86/tsc: Sync test only for the first cpu in a package
If the TSC_ADJUST MSR is available all CPUs in a package are forced to the
same value. So TSCs cannot be out of sync when the first CPU in the package
was in sync.

That allows the sync test to be skipped for all CPUs except the first
CPU to start in a package.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Link: http://lkml.kernel.org/r/20161119134017.809901363@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-29 19:23:17 +01:00
Thomas Gleixner
1d0095feea x86/tsc: Verify TSC_ADJUST from idle
When entering idle, it's a good opportunity to verify that the TSC_ADJUST
MSR has not been tampered with (BIOS hiding SMM cycles). If tampering is
detected, emit a warning and restore it to the previous value.

This is especially important for machines which mark the TSC reliable
because there is no watchdog clocksource available (SoCs).

This is not sufficient for HPC (NOHZ_FULL) situations where a CPU never
goes idle, but adding a timer to do the check periodically is not an option
either. On a machine which has this issue, the check triggers right
during boot, so there is a decent chance that the sysadmin will notice.

Rate limit the check to once per second and warn only once per CPU.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Link: http://lkml.kernel.org/r/20161119134017.732180441@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-29 19:23:16 +01:00
Thomas Gleixner
8b223bc7ab x86/tsc: Store and check TSC ADJUST MSR
The TSC_ADJUST MSR shows whether the TSC has been modified. This is helpful
in two respects:

1) It allows detection of BIOS wreckage, where SMM code tries to 'hide' the
   cycles it spends by storing the TSC value at SMM entry and restoring it at
   SMM exit. On affected machines the TSCs slowly run out of sync up to the
   point where the clocksource watchdog (if available) detects it.

   The TSC_ADJUST MSR allows the TSC modification to be detected before that
   and eventually restored. This is also important for SoCs which have no
   watchdog clocksource, where TSC wreckage could otherwise not be detected
   and acted upon.

2) All threads in a package are required to have the same TSC_ADJUST
   value. Broken BIOSes break that and as a result the TSC synchronization
   check fails.

   The TSC_ADJUST MSR allows the deviation to be detected when a CPU comes
   online. If a deviation is detected, set the MSR to the value of an already
   online CPU in the same package. This also reduces the number of sync tests,
   because with that in place the test is only required for the first CPU
   in a package.

   In principle all CPUs in a system should have the same TSC_ADJUST value
   even across packages, but with physical CPU hotplug this assumption is
   not true because the TSC starts with power on, so physical hotplug has
   to do some trickery to bring the TSC into sync with already running
   packages, which requires using a TSC_ADJUST value different from that of
   CPUs which were powered on earlier.

   A final enhancement is the opportunity to compensate for unsynced TSCs
   across nodes at boot time and make the TSC usable that way. It won't
   help for TSCs which drift apart due to frequency skew between packages,
   but that gets detected by the clocksource watchdog later.

The first step toward this is to store the TSC_ADJUST value of a starting
CPU and compare it with the value of an already online CPU in the same
package. If they differ, emit a warning and adjust it to the reference
value. The !SMP version just stores the boot value for later verification.
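
A minimal sketch of that first step (illustrative; the real code lives in
tsc_sync.c and the function below is a made-up example):

  /* Read TSC_ADJUST on the starting CPU, compare it with the package
   * reference value, and write the reference back on a mismatch. */
  #include <linux/printk.h>
  #include <asm/msr.h>
  #include <asm/cpufeature.h>

  static void example_check_tsc_adjust(s64 package_ref)
  {
          s64 adj;

          if (!boot_cpu_has(X86_FEATURE_TSC_ADJUST))
                  return;

          rdmsrl(MSR_IA32_TSC_ADJUST, adj);
          if (adj != package_ref) {
                  pr_warn("TSC ADJUST differs: CPU %lld, package %lld\n",
                          adj, package_ref);
                  wrmsrl(MSR_IA32_TSC_ADJUST, package_ref);
          }
  }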

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Link: http://lkml.kernel.org/r/20161119134017.655323776@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-29 19:23:16 +01:00
Thomas Gleixner
bec8520dca x86/tsc: Detect random warps
If time warps can be observed then they should only ever be observed on one
CPU. If they are observed on both CPUs then the system is completely hosed.

Add a check for this condition and notify if it happens.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Link: http://lkml.kernel.org/r/20161119134017.574838461@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-29 19:23:15 +01:00
Thomas Gleixner
7b3d2f6e08 x86/tsc: Use X86_FEATURE_TSC_ADJUST in detect_art()
The ART detection uses rdmsrl_safe() to detect the availability of the
TSC_ADJUST MSR.

That's pointless because we have a feature bit for this. Use it.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Link: http://lkml.kernel.org/r/20161119134017.483561692@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-29 19:23:15 +01:00
Russell King
76fb051d42 ARM: mm: allow set_memory_*() to be used on the vmalloc region
We can allow modules to be loaded into the vmalloc region, where they
should also benefit from the same protections as those loaded into
the more efficient module region.  Allow these functions to operate
there as well.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2016-11-29 18:00:34 +00:00
Russell King
580218f967 ARM: mm: fix set_memory_*() bounds checks
The set_memory_*() bounds checks are buggy on several fronts:

1. They fail to round the region size up if the passed address is not
   page aligned.
2. The region check was incomplete, and didn't correspond with what
   was being asked of apply_to_page_range().

So, rework change_memory_common() to fix these problems, adding an
"in_region()" helper to determine whether the start & size fit within
the provided region start and stop addresses.
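
A sketch of such an in_region() helper (illustrative; argument names are not
necessarily those used in the patch):

  /* True when [start, start + size) lies entirely within the region;
   * the size comparison also guards against address wrap-around. */
  #include <linux/types.h>

  static bool in_region(unsigned long start, unsigned long size,
                        unsigned long region_start, unsigned long region_end)
  {
          return start >= region_start && start < region_end &&
                 size <= region_end - start;
  }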

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2016-11-29 18:00:34 +00:00
Yuriy Kolerov
6a8b2ca702 ARC: mm: PAE40: Fix crash at munmap
Commit 1c3c909303 broke PAE40. The macro pfn_pte(pfn, prot) creates a paddr
from the pfn, but the page shift was getting truncated to 32 bits since we
lost the proper cast to 64 bits (for PAE40).

Instead of reverting that commit, use a better helper which is 32/64-bit
safe, just like the ARM implementation.
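
The truncation, illustrated generically (not the ARC macro itself; the
function name is illustrative):

  /* On a 32-bit arch, a pfn mapping an address above 4GB needs the shift
   * done in 64-bit arithmetic, otherwise the high PAE40 bits are lost. */
  #include <linux/types.h>
  #include <asm/page.h>

  static u64 example_pfn_to_paddr(unsigned long pfn)
  {
          /* Wrong: shift evaluated in 32 bits, then widened -- high bits lost.
           *   return pfn << PAGE_SHIFT;
           * Right: widen first, then shift. */
          return (u64)pfn << PAGE_SHIFT;
  }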

Fixes: 1c3c909303 ("ARC: mm: fix build breakage with STRICT_MM_TYPECHECKS")
Cc: <stable@vger.kernel.org>   #4.4+
Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
[vgupta: massaged changelog]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-11-29 09:12:08 -08:00
Chen Yu
ba58d1020a timekeeping: Ignore the bogus sleep time if pm_trace is enabled
Power management suspend/resume tracing (ab)uses the RTC to store
suspend/resume information persistently. As a consequence the RTC value is
clobbered when timekeeping is resumed and tries to inject the sleep time.

Commit a4f8f6667f ("timekeeping: Cap array access in timekeeping_debug")
plugged an out-of-bounds array access in the timekeeping debug code which
was caused by the clobbered RTC value, but we still use the clobbered RTC
value for sleep time injection into kernel timekeeping, which will result
in random adjustments depending on the stored "hash" value.

To prevent this keep track of the RTC clobbering and ignore the invalid RTC
timestamp at resume. If the system resumed successfully clear the flag,
which marks the RTC as unusable, warn the user about the RTC clobber and
recommend to adjust the RTC with 'ntpdate' or 'rdate'.

[jstultz: Fixed up pr_warn formatting, and implemented suggestions from Ingo]
[ tglx: Rewrote changelog ]

Originally-from: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Acked-by: Pavel Machek <pavel@ucw.cz>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Xunlei Pang <xlpang@redhat.com>
Cc: Len Brown <lenb@kernel.org>
Link: http://lkml.kernel.org/r/1480372524-15181-3-git-send-email-john.stultz@linaro.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-29 18:02:58 +01:00
Catalin Marinas
00cc2e0745 Merge Will Deacon's for-next/perf branch into for-next/core
* will/for-next/perf:
  selftests: arm64: add test for unaligned/inexact watchpoint handling
  arm64: Allow hw watchpoint of length 3,5,6 and 7
  arm64: hw_breakpoint: Handle inexact watchpoint addresses
  arm64: Allow hw watchpoint at varied offset from base address
  hw_breakpoint: Allow watchpoint of length 3,5,6 and 7
2016-11-29 15:38:57 +00:00
Radim Krčmář
ffcb09f27f Merge branch 'kvm-ppc-next' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc
PPC KVM update for 4.10:

 * Support for KVM guests on POWER9 using the hashed page table MMU.
 * Updates and improvements to the halt-polling support on PPC, from
   Suraj Jitindar Singh.
 * An optimization to speed up emulated MMIO, from Yongji Xie.
 * Various other minor cleanups.
2016-11-29 14:26:55 +01:00
Radim Krčmář
bf65014d0b Merge tag 'kvm-s390-next-4.10-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux
KVM: s390: Changes for 4.10 (via kvm/next)

Two small optimizations that avoid register reloading in
vcpu_put/get and instead do it in the ioctl path. This reduces
the overhead for schedule-intensive workloads that do not
exit to QEMU (e.g. a KVM guest with eventfd/irqfd that
does a lot of context switching with vhost or iothreads).
2016-11-29 14:25:58 +01:00
Benjamin Herrenschmidt
dd7b2f035e powerpc/mm: Fix lazy icache flush on pre-POWER5
On 64-bit CPUs with no-execute support and non-snooping icache, such as
970 or POWER4, we have a software mechanism to ensure coherency of the
cache (using exec faults when needed).

This was broken due to a logic error when the code was rewritten
from assembly to C, previously the assembly code did:

  BEGIN_FTR_SECTION
         mr      r4,r30
         mr      r5,r7
         bl      hash_page_do_lazy_icache
  END_FTR_SECTION(CPU_FTR_NOEXECUTE|CPU_FTR_COHERENT_ICACHE, CPU_FTR_NOEXECUTE)

Which tests that:
   (cpu_features & (NOEXECUTE | COHERENT_ICACHE)) == NOEXECUTE

Which says that the current cpu does have NOEXECUTE, but does not have
COHERENT_ICACHE.
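
Written out in C for clarity (illustrative wrapper, not code from the patch):

  /* The lazy icache flush is only needed when the CPU can mark pages
   * no-execute but does not keep its icache coherent. */
  #include <asm/cpu_has_feature.h>

  static bool example_needs_lazy_icache_flush(void)
  {
          return cpu_has_feature(CPU_FTR_NOEXECUTE) &&
                 !cpu_has_feature(CPU_FTR_COHERENT_ICACHE);
  }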

Fixes: 91f1da9979 ("powerpc/mm: Convert 4k hash insert to C")
Fixes: 89ff725051 ("powerpc/mm: Convert __hash_page_64K to C")
Fixes: a43c0eb836 ("powerpc/mm: Convert 4k insert from asm to C")
Cc: stable@vger.kernel.org # v4.5+
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[mpe: Change log verbosification]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-29 23:59:40 +11:00
Jintack
1650ac49c2 arm64: head.S: Fix CNTHCTL_EL2 access on VHE system
Bit positions of CNTHCTL_EL2 are changing depending on HCR_EL2.E2H bit.
EL1PCEN and EL1PCTEN are the 1st and 0th bits when E2H is not set, but they
are the 11th and 10th bits respectively when E2H is set.  The current code
unintentionally sets the wrong bits in CNTHCTL_EL2 when E2H is set.

In fact, we don't need to set those two bits, which allow EL1 and EL0 to
access physical timer and counter respectively, if E2H and TGE are set
for the host kernel. They will be configured later as necessary. First,
we don't need to configure those bits for EL1, since the host kernel
runs in EL2.  It is a hypervisor's responsibility to configure them
before entering a VM, which runs in EL0 and EL1. Second, EL0 accesses
are configured in a later stage of the boot process.
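
For reference, the bit layout described above (illustrative definitions
only; the actual change is in the head.S assembly):

  /* The same two enable bits sit at different positions in CNTHCTL_EL2
   * depending on HCR_EL2.E2H. */
  #define CNTHCTL_EL1PCTEN_NVHE   (1UL << 0)      /* E2H == 0 */
  #define CNTHCTL_EL1PCEN_NVHE    (1UL << 1)
  #define CNTHCTL_EL1PCTEN_VHE    (1UL << 10)     /* E2H == 1 */
  #define CNTHCTL_EL1PCEN_VHE     (1UL << 11)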

Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-29 11:37:05 +00:00
Michael Ellerman
f0f7fe1ac3 powerpc/boot: Fix rebuild when changing kernel endian
Now that we don't set ARCH incorrectly when calling the boot Makefile,
we can use the generic cpp_lds_S rule for converting our zImage.lds.S
into zImage.lds.

The main advantage of using the generic rule is that it correctly uses
if_changed, which means we correctly regenerate the linker script when
switching endian. Fixing that means we are finally able to build one
endian and then rebuild the other endian without requiring a clean
between builds.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-29 21:42:34 +11:00
Michael Ellerman
42d0c932b0 powerpc/boot: All uses of if_changed should depend on FORCE
If we're using if_changed then we must depend on FORCE, so that
if_changed gets a chance to check if something changed.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-29 21:42:34 +11:00
Michael Ellerman
1196d7aaeb powerpc: Stop passing ARCH=ppc64 to boot Makefile
Back in 2005 when the ppc/ppc64 merge started, we used to build the
kernel code in arch/powerpc but use the boot code from arch/ppc or
arch/ppc64 depending on whether we were building for 32 or 64-bit.

Originally we called the boot Makefile passing ARCH=$(OLDARCH), where
OLDARCH was ppc or ppc64.

In commit 20f629549b ("powerpc: Make building the boot image work for
both 32-bit and 64-bit") (2005-10-11) we split the call for 32/64-bit
using an ifeq check, because the two Makefiles took different targets,
and explicitly passed ARCH=ppc64 for the 64-bit case and ARCH=ppc for
the 32-bit case.

Then in commit 94b212c29f ("powerpc: Move ppc64 boot wrapper code over
to arch/powerpc") (2005-11-16) we moved the boot code into arch/powerpc
and dropped the ppc case, but kept passing ARCH=ppc64 to
arch/powerpc/boot/Makefile.

Since then there have been several more boot targets added, all of which
have copied the ARCH=ppc64 setting, such that now we have four targets
using it.

Currently it seems that nothing actually uses the ARCH value, but that's
basically just luck, and in particular it prevents us from using the
generic cpp_lds_S rule. It's also clearly wrong, ARCH=ppc64 is dead,
buried and cremated.

Fix it by dropping the setting of ARCH completely, the correct value is
exported by the top level Makefile.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-29 21:42:34 +11:00
Zubair Lutfullah Kakakhel
8328255ff8 powerpc/virtex: Use generic xilinx irqchip driver
The Xilinx interrupt controller driver is now available in drivers/irqchip.
Switch to using that driver.

Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2016-11-29 09:14:50 +00:00
Zubair Lutfullah Kakakhel
2120a43527 irqchip/xilinx: Rename get_irq to xintc_get_irq
Now that the driver is generic and used by multiple archs,
the name get_irq is too generic.

Rename get_irq to xintc_get_irq to avoid any conflicts.

Acked-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2016-11-29 09:14:49 +00:00
Zubair Lutfullah Kakakhel
0547dc7885 microblaze/irqchip: Move intc driver to irqchip
The Xilinx AXI Interrupt Controller IP block is used by the MIPS
based xilfpga platform and a few PowerPC based platforms.

Move the interrupt controller code out of arch/microblaze so that
it can be used by everyone.

Tested-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2016-11-29 09:14:49 +00:00
Vladimir Murzin
bb29cecb3b ARM: virt: Select ARM_GIC_V3_ITS
This patch allows ARM guests to use the GICv3 ITS on an arm64 host.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2016-11-29 09:14:49 +00:00