Commit Graph

129978 Commits

Alexander Shishkin
ef9ef3befa perf/x86/intel/bts: Kill a silly warning
At the moment, intel_bts will WARN() out if there is more than one
event writing to the same ring buffer, via SET_OUTPUT, and will only
send data from one event to a buffer.

There is no reason to have this warning in, so kill it.

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: vince@deater.net
Link: http://lkml.kernel.org/r/20160906132353.19887-6-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-09-10 11:15:38 +02:00
Alexander Shishkin
4d4c474124 perf/x86/intel/bts: Fix BTS PMI detection
Since BTS doesn't have a dedicated PMI status bit, the driver needs to
take extra care to check for the condition that triggers it to avoid
spurious NMI warnings.

Regardless of the local BTS context state, the only way of knowing that
the NMI is ours is to compare the write pointer against the interrupt
threshold.
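
In code, the check described above boils down to something like the
following sketch (the struct and field names are assumptions, not the
actual intel_bts internals):

struct bts_buf {
	local_t		data_size;	/* bytes written so far */
	unsigned long	intr_threshold;	/* interrupt watermark */
};

static bool bts_interrupt_is_ours(struct bts_buf *buf)
{
	/* no dedicated PMI status bit exists, so this comparison is the signal */
	return local_read(&buf->data_size) >= buf->intr_threshold;
}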

Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: vince@deater.net
Link: http://lkml.kernel.org/r/20160906132353.19887-5-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-09-10 11:15:38 +02:00
Alexander Shishkin
a9a94401c2 perf/x86/intel/bts: Fix confused ordering of PMU callbacks
The intel_bts driver is using a CPU-local 'started' variable to order
callbacks and PMIs and make sure that AUX transactions don't get messed
up. However, the ordering rules in regard to this variable are a complete
mess, which recently resulted in perf_fuzzer-triggered warnings and
panics.

The general ordering rule that this patch enforces is that this
CPU-local variable be set only when the CPU-local AUX transaction is
active; consequently, this variable is to be checked before the
AUX-related bits can be touched.
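
A minimal sketch of that ordering (the per-CPU flag and function names
here are stand-ins, not the driver's actual code):

static DEFINE_PER_CPU(bool, bts_active);	/* stand-in for 'started' */

static void bts_transaction_start(void)
{
	/* set up the AUX transaction first ... */
	this_cpu_write(bts_active, true);	/* ... then publish it */
}

static void bts_transaction_stop(void)
{
	this_cpu_write(bts_active, false);	/* unpublish first ... */
	/* ... only then tear down the AUX-related bits */
}

static void bts_pmi(void)
{
	if (!this_cpu_read(bts_active))
		return;		/* check before touching any AUX state */
	/* handle the PMI against the active transaction */
}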

Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: vince@deater.net
Link: http://lkml.kernel.org/r/20160906132353.19887-4-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-09-10 11:15:37 +02:00
Daniel Axtens
bc42f1d9f5 powerpc/cell: Drop unused iic_get_irq_host()
Sparse checking revealed that it is no longer used. The last usage was
removed in commit 2e19458312 ("[POWERPC] Cell interrupt rework") in
2006.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-09-10 18:46:45 +10:00
Scott Telford
23c2b9321b xtensa: Added Cadence CSP kernel configuration for Xtensa
Added defconfig, device tree and Xtensa variant header files for the
Cadence Configurable System Platform "xt_lnx" processor configuration.

Signed-off-by: Scott Telford <stelford@cadence.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2016-09-09 18:39:09 -07:00
Max Filippov
73a3eed0a4 xtensa: fix default kernel load address
Make the default kernel load address 0xd0003000 for MMUv2 cores and
0x60003000 for noMMU cores. Don't initialize the MMU inside vmlinux for
predefined MMUv2 cores (it's a no-op anyway).

This fixes the following defconfig build error:
  arch/xtensa/kernel/built-in.o: In function `fast_alloca':
  (.text+0x99a): dangerous relocation: j: cannot encode: _WindowUnderflow12
  arch/xtensa/kernel/built-in.o: In function `fast_alloca':
  (.text+0x99d): dangerous relocation: j: cannot encode: _WindowUnderflow8
  arch/xtensa/kernel/built-in.o: In function `fast_alloca':
  (.text+0x9a0): dangerous relocation: j: cannot encode: _WindowUnderflow4
  arch/xtensa/kernel/built-in.o: In function `window_overflow_restore_a0_fixup':
  (.text+0x23a3): dangerous relocation: j: cannot encode: (.DoubleExceptionVector.text+0x104)
  arch/xtensa/kernel/built-in.o: In function `window_overflow_restore_a0_fixup':
  (.text+0x23c1): dangerous relocation: j: cannot encode: (.DoubleExceptionVector.text+0x104)
  arch/xtensa/kernel/built-in.o: In function `window_overflow_restore_a0_fixup':
  (.text+0x23dd): dangerous relocation: j: cannot encode: (.DoubleExceptionVector.text+0x104)

With this change all xtensa defconfigs build correctly.

Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2016-09-09 18:38:35 -07:00
Dan Williams
9049771f7d mm: fix cache mode of dax pmd mappings
track_pfn_insert() in vmf_insert_pfn_pmd() is marking dax mappings as
uncacheable, rendering them impractical for application usage.  DAX-pte
mappings are cached and the goal of establishing DAX-pmd mappings is to
attain more performance, not dramatically less (3 orders of magnitude).

track_pfn_insert() relies on a previous call to reserve_memtype() to
establish the expected page_cache_mode for the range.  While memremap()
arranges for reserve_memtype() to be called, devm_memremap_pages() does
not.  So, teach track_pfn_insert() and untrack_pfn() how to handle
tracking without a vma, and arrange for devm_memremap_pages() to
establish the write-back-cache reservation in the memtype tree.
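
A simplified sketch of the track_pfn_insert() side of that change (the
real x86 PAT code handles more cases):

void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot, pfn_t pfn)
{
	enum page_cache_mode pcm;

	if (!pat_enabled())
		return;

	/*
	 * Consult the memtype tree rather than requiring a vma-backed
	 * reservation; devm_memremap_pages() now inserts a write-back
	 * entry there, so DAX-pmd mappings come back cached.
	 */
	pcm = lookup_memtype(pfn_t_to_phys(pfn));
	*prot = __pgprot((pgprot_val(*prot) & ~_PAGE_CACHE_MASK) |
			 cachemode2protval(pcm));
}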

Cc: <stable@vger.kernel.org>
Cc: Matthew Wilcox <mawilcox@microsoft.com>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: Nilesh Choudhury <nilesh.choudhury@oracle.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reported-by: Toshi Kani <toshi.kani@hpe.com>
Reported-by: Kai Zhang <kai.ka.zhang@oracle.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2016-09-09 17:34:46 -07:00
Al Viro
2561d309df alpha: fix copy_from_user()
It should clear the destination even when access_ok() fails.
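
As a sketch in C (the actual alpha change is in the arch-specific
uaccess code):

static inline long copy_from_user(void *to, const void __user *from, long n)
{
	if (likely(access_ok(VERIFY_READ, from, n)))
		n = __copy_user(to, from, n);	/* returns bytes left uncopied */
	else
		memset(to, 0, n);		/* clear the destination anyway */
	return n;
}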

Cc: stable@vger.kernel.org
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-09-09 19:34:32 -04:00
Sebastian Andrzej Siewior
7d762e49c2 perf/x86/amd/uncore: Prevent use after free
The recent conversion of the cpu hotplug support in the uncore driver
introduced a regression due to the way the callbacks are invoked at
initialization time.

The old code called the prepare/starting/online function on each online cpu
as a block. The new code registers the hotplug callbacks in the core for
each state. The core invokes the callbacks at each registration on all
online cpus.

The code implicitly relied on the prepare/starting/online callbacks being
called as a combo on a particular cpu, which was not obvious and completely
undocumented.

The resulting subtle wreckage happens due to the way the uncore code
manages shared data structures for cpus which share an uncore resource in
hardware. The sharing is determined in the cpu starting callback, but the
prepare callback allocates per cpu data for the upcoming cpu because
potential sharing is unknown at this point. If the starting callback finds
an online cpu which shares the hardware resource, it takes a refcount on the
percpu data of that cpu and puts its own data structure into a
'free_at_online' pointer of that shared data structure. The online callback
frees that.

With the old model this worked because in a starting callback only one
unused structure (the one of the starting cpu) was available. The new code
allocates the data structures for all cpus when the prepare callback is
registered.

Now the starting function iterates through all online cpus and looks for a
data structure (skipping its own) which has a matching hardware id. The id
member of the data structure is initialized to 0, but the hardware id can
be 0 as well. The resulting wreckage is:

  CPU0 finds a matching id on CPU1, takes a refcount on CPU1's data and puts
  its own data structure into CPU1's data structure to be freed.

  CPU1 skips CPU0 because the data structure is its allegedly unused own.
  It finds a matching id on CPU2, takes a refcount on CPU2's data and puts
  its own data structure into CPU2's data structure to be freed.

  ....

Now the online callbacks are invoked.

  CPU0 has a pointer to CPU1's data and frees the original CPU0 data. So
  far so good.

  CPU1 has a pointer to CPU2's data and frees the original CPU1 data, which
  is still referenced by CPU0 ---> Booom

So there are two issues to be solved here:

1) The id field must be initialized at allocation time to a value which
   cannot be a valid hardware id, i.e. -1

   This prevents the above scenario, but now CPU1 and CPU2 both stick their
   own data structure into the free_at_online pointer of CPU0. So we leak
   CPU1's data structure.

2) Fix the memory leak described in #1

   Instead of having a single pointer, use a hlist to enqueue the
   superfluous data structures, which are then freed by the first cpu
   invoking the online callback.

Ideally we should know the sharing _before_ invoking the prepare callback,
but that's way beyond the scope of this bug fix.
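
A condensed sketch of both fixes (structure and field names are
illustrative, not the actual amd_uncore layout):

#include <linux/slab.h>
#include <linux/list.h>

struct uncore_pcpu {
	int			id;		/* hardware id, -1 until known */
	int			refcnt;
	struct hlist_head	free_list;	/* superseded structures to free */
	struct hlist_node	node;
};

static struct uncore_pcpu *uncore_prepare_alloc(unsigned int cpu)
{
	struct uncore_pcpu *u;

	u = kzalloc_node(sizeof(*u), GFP_KERNEL, cpu_to_node(cpu));
	if (u)
		u->id = -1;	/* fix #1: cannot collide with a valid hw id */
	return u;
}

static int uncore_online(unsigned int cpu, struct uncore_pcpu *u)
{
	struct uncore_pcpu *pos;
	struct hlist_node *tmp;

	/* fix #2: free every structure queued up during the starting callback */
	hlist_for_each_entry_safe(pos, tmp, &u->free_list, node) {
		hlist_del(&pos->node);
		kfree(pos);
	}
	return 0;
}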

[ tglx: Rewrote changelog ]

Fixes: 96b2bd3866 ("perf/x86/amd/uncore: Convert to hotplug state machine")
Reported-and-tested-by: Eric Sandeen <sandeen@sandeen.net>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Borislav Petkov <bp@suse.de>
Link: http://lkml.kernel.org/r/20160909160822.lowgmkdwms2dheyv@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-09-10 00:00:06 +02:00
Linus Walleij
17470b7da1 ARM: dts: add the CLCD LCD display to the NHK15
This adds the TPG110 TDO43MTEA2 24-bit RGB LCD panel and sets
up the Nomadik device tree to activate the CLCD and connect it
to this panel.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2016-09-09 23:16:07 +02:00
Linus Walleij
6fb2de9d9f ARM: dts: add PMU to the NHK15 device tree
The so-called Nomadik Power Management Unit is actually a set
of some power management registers and some miscellaneous
system control stuff like muxing of entire hardware units.
Add this as a system controller.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2016-09-09 23:15:55 +02:00
Linus Walleij
6f101946cb ARM: nomadik: select MFD_SYSCON
Since the Power Management Unit uses syscon to access a set of
necessary hardware muxing registers, let's select MFD_SYSCON for
the Nomadik subarchitecture.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2016-09-09 23:15:50 +02:00
Linus Walleij
6d996f361b ARM: dts: add STMPE PWM to the NHK15 device tree
This adds the STMPE PWM to the NHK15 device tree.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2016-09-09 23:15:30 +02:00
Linus Torvalds
e45eeb43e6 Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Catalin Marinas:

 - smp_mb__before_spinlock() changed to smp_mb() on arm64 since the
   generic definition as smp_wmb() is not sufficient

 - avoid a recursive loop with the graph tracer by using
   preempt_(enable|disable)_notrace in _percpu_(read|write)

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: use preempt_disable_notrace in _percpu_read/write
  arm64: spinlocks: implement smp_mb__before_spinlock() as smp_mb()
2016-09-09 10:54:29 -07:00
Will Deacon
05492f2fd8 arm64: lse: convert lse alternatives NOP padding to use __nops
The LSE atomics are implemented using alternative code sequences of
different lengths, and explicit NOP padding is used to ensure the
patching works correctly.

This patch converts the bulk of the LSE code over to using the __nops
macro, which makes it slightly clearer as to what is going on and also
consolidates all of the padding at the end of the various sequences.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 18:12:34 +01:00
Will Deacon
f99a250cb6 arm64: barriers: introduce nops and __nops macros for NOP sequences
NOP sequences tend to get used for padding out alternative sections
and uarch-specific pipeline flushes in errata workarounds.

This patch adds macros for generating these sequences, both as inline
asm blocks and as strings suitable for embedding in other asm blocks
directly.
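
The two macros are roughly of this shape (a sketch of the idea; see the
arm64 barrier/assembler headers for the actual definitions):

/* string form, for embedding inside other asm blocks */
#define __nops(n)	".rept	" #n "\nnop\n.endr\n"

/* standalone inline asm block */
#define nops(n)		asm volatile(__nops(n))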

Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 18:12:28 +01:00
Will Deacon
8a71f0c656 arm64: sysreg: replace open-coded mrs_s/msr_s with {read,write}_sysreg_s
Similar to our {read,write}_sysreg accessors for architected, named
system registers, this patch introduces {read,write}_sysreg_s variants
that can take arbitrary sys_reg output and therefore access IMPDEF
registers or registers that are unsupported by binutils.
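
Roughly, the new accessors wrap the existing mrs_s/msr_s assembler
macros like this (a sketch; constraints and casts may differ from the
real header):

#define read_sysreg_s(r) ({						\
	u64 __val;							\
	asm volatile("mrs_s %0, " __stringify(r) : "=r" (__val));	\
	__val;								\
})

#define write_sysreg_s(v, r) do {					\
	u64 __val = (u64)(v);						\
	asm volatile("msr_s " __stringify(r) ", %x0" : : "rZ" (__val));	\
} while (0)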

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 18:12:08 +01:00
Robert Jarzmik
9ba63e3cc8 ARM: pxa: pxa_cplds: fix interrupt handling
Since its initial commit, the driver has been buggy when handling
multiple interrupts. The translation from the former lubbock.c file was
not complete, and might stall all interrupt handling when multiple
interrupts occur.

This is especially true when a new interrupt arrives while we are inside
the interrupt handler and is left unhandled: the output line stays held,
and no new transition is created, while the GPIO block behind it expects
a transition to trigger another cplds_irq_handler() call.

For the record, the hardware is working as follows.

The interrupt mechanism relies on :
 - one status register
 - one mask register

Let's suppose the input irq lines are called :
 - i_sa1111
 - i_lan91x
 - i_mmc_cd
Let's suppose the status register for each irq line is called :
 - status_sa1111
 - status_lan91x
 - status_mmc_cd
Let's suppose the interrupt mask for each irq line is called :
 - irqen_sa1111
 - irqen_lan91x
 - irqen_mmc_cd
Let's suppose the output irq line, connected to GPIO0 is called :
 - o_gpio0

The behavior is as follows :
 - o_gpio0 = not((status_sa1111 & irqen_sa1111) |
		 (status_lan91x & irqen_lan91x) |
		 (status_mmc_cd & irqen_mmc_cd))
   => this is an N-to-1 NOR gate and multiple AND gates
 - irqen_* is exactly as programmed by a write to the FPGA
 - status_* behavior is governed by a bi-stable D flip-flop
   => on next FPGA clock :
     - if i_xxx is high, status_xxx becomes 1
     - if i_xxx is low, status_xxx remains as it is
     - if software sets status_xxx to 0, the D flip-flop is reset
       => status_xxx becomes 0
       => on next FPGA clock cycle, if i_xxx is high, status_xxx becomes
	  1 again
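
Given that behaviour, the handler has to keep re-reading the latched
status until no enabled source is pending; otherwise a source that fired
while the handler was running never produces a new edge on o_gpio0. A
sketch of that loop (register offsets and symbol names are assumptions):

static irqreturn_t cplds_irq_handler(int in_irq, void *d)
{
	struct cplds *fpga = d;
	unsigned long pending;
	unsigned int bit;

	do {
		/* only enabled *and* latched sources keep o_gpio0 asserted */
		pending = readl(fpga->base + FPGA_IRQ_SET_CLR) &
			  readl(fpga->base + FPGA_IRQ_MASK_EN);
		for_each_set_bit(bit, &pending, CPLDS_NB_IRQ)
			generic_handle_irq(irq_find_mapping(fpga->irqdomain, bit));
	} while (pending);	/* re-check: nothing else will retrigger us */

	return IRQ_HANDLED;
}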

Fixes: fc9e38c0f4 ("ARM: pxa: lubbock: use new pxa_cplds driver")
Reported-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
2016-09-09 18:09:53 +02:00
Robert Jarzmik
32f17997c1 ARM: pxa: remove irq init from dt machines
The init_irq and handle_irq can be declared through standard irqchip
declaration and are not necessary in machine descriptions.

This is another step towards the generic kernel for the pxa
architecture.

Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Acked-by: Arnd Bergmann <arnd@arndb.de>
2016-09-09 18:08:01 +02:00
Markus Elfring
8571110575 ARM: pxa: Use kmalloc_array() in pxa_pm_init()
* A multiplication for the size determination of a memory allocation
  indicated that an array data structure should be processed.
  Thus use the corresponding function "kmalloc_array".

  This issue was detected by using the Coccinelle software.

* Replace the specification of a data type by a pointer dereference
  to make the corresponding size determination a bit safer according to
  the Linux coding style convention.
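
The resulting pattern, sketched (variable names follow the likely call
site in pxa_pm_init()):

	unsigned long *sleep_save;

	/* was: kmalloc(save_count * sizeof(unsigned long), GFP_KERNEL) */
	sleep_save = kmalloc_array(pxa_cpu_pm_fns->save_count,
				   sizeof(*sleep_save), GFP_KERNEL);
	if (!sleep_save)
		return -ENOMEM;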

Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
2016-09-09 18:08:00 +02:00
Petr Cvek
e572f6491a ARM: pxa: magician: Remove duplicated I2C pins declaration
Magician has GPIO117_I2C_SCL and GPIO118_I2C_SDA pins declared twice.

Signed-off-by: Petr Cvek <petr.cvek@tul.cz>
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
2016-09-09 18:08:00 +02:00
Robert Jarzmik
ca26475bf0 ARM: pxa: fix GPIO double shifts
The commit 9bf448c66d ("ARM: pxa: use generic gpio operation instead of
gpio register") from Oct 17, 2011, leads to the following static checker
warning:
  arch/arm/mach-pxa/spitz_pm.c:172 spitz_charger_wakeup()
  warn: double left shift '!gpio_get_value(SPITZ_GPIO_KEY_INT)
        << (1 << ((SPITZ_GPIO_KEY_INT) & 31))'

As Dan reported, the value is shifted three times :
 - once by gpio_get_value(), which returns either 0 or BIT(gpio)
 - once by the shift operation '<<'
 - a last time by GPIO_bit(gpio) which is BIT(gpio)

Therefore the calculation leads to a chained OR of :
 - (1 << gpio) << (1 << gpio) = (2^gpio)^gpio = 2 ^ (gpio * gpio)

It is by sheer luck that the former statement works, only because each
gpio used is strictly smaller than 6, and therefore 2^(gpio^2) never
overflows a 32-bit value, and because it is used as a boolean value to
check a gpio activation.

As the xxx_charger_wakeup() functions are used as a true/false detection
mechanism, take that opportunity to change their prototypes from an
integer return value to a boolean one.
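
For one of the affected sources, the shape of the fix is roughly this
(a sketch based on the warning quoted above):

static bool spitz_charger_wakeup(void)
{
	/*
	 * old (buggy): the gpio state was shifted yet again by GPIO_bit(), i.e.
	 *   !gpio_get_value(SPITZ_GPIO_KEY_INT) << GPIO_bit(SPITZ_GPIO_KEY_INT)
	 * new: just return the boolean state of the wakeup source.
	 */
	return !gpio_get_value(SPITZ_GPIO_KEY_INT);
}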

Fixes: 9bf448c66d ("ARM: pxa: use generic gpio operation instead of gpio register")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Joe Perches <joe@perches.com>
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
2016-09-09 18:07:59 +02:00
Arnd Bergmann
56fe27beeb Merge tag 'sti-dt-fixes-for-v4.8-rcs' of git://git.kernel.org/pub/scm/linux/kernel/git/pchotard/sti into fixes
Pull "Handle STiH410 interconnect clock required for EHCI/OHCI and SDHCI" from Patrice Chotard:

With the introduction of critical-clock support in v4.8, our developers'
default configuration is to run with 'clk_ignore_unused' removed.  This
patch-set ensures they can achieve successful boot when a) booting from
an SD Card and when b) booting using USB->Eth adaptors for NFS booting.

* tag 'sti-dt-fixes-for-v4.8-rcs' of git://git.kernel.org/pub/scm/linux/kernel/git/pchotard/sti:
  ARM: dts: STiH407-family: Provide interconnect clock for consumption in ST SDHCI
  ARM: dts: STiH410: Handle interconnect clock required by EHCI/OHCI (USB)
2016-09-09 17:58:40 +02:00
Arnd Bergmann
d31449a59a Merge tag 'renesas-fixes-for-v4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas into fixes
Merge "Renesas ARM Based SoC Fixes for v4.8" from Simon Horman:

* Correct R-Car Gen2 regulator quirk

* tag 'renesas-fixes-for-v4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas:
  ARM: shmobile: fix regulator quirk for Gen2
2016-09-09 17:56:40 +02:00
Ian Campbell
76aa759168 ARM64: dts: bcm: Use a symlink to R-Pi dtsi files from arch=arm
The ../../../arm... style cross-references added by commit 9d56c22a78
("ARM: bcm2835: Add devicetree for the Raspberry Pi 3.") do not work in the
context of the split device-tree repository[0] (where the directory
structure differs). As with commit 8ee57b8182 ("ARM64: dts: vexpress: Use
a symlink to vexpress-v2m-rs1.dtsi from arch=arm") use symlinks instead.

[0] https://git.kernel.org/cgit/linux/kernel/git/devicetree/devicetree-rebasing.git/

Signed-off-by: Ian Campbell <ijc@hellion.org.uk>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Frank Rowand <frowand.list@gmail.com>
Cc: Eric Anholt <eric@anholt.net>
Cc: Stephen Warren <swarren@wwwdotorg.org>
Cc: Lee Jones <lee@kernel.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: devicetree@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-rpi-kernel@lists.infradead.org
Cc: arm@kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2016-09-09 17:47:16 +02:00
Ian Campbell
6b7b554d34 ARM: dts: Remove use of skeleton.dtsi from bcm283x.dtsi
This file is included from DTS files under arch/arm64 too (via
broadcom/bcm2837-rpi-3-b.dts and broadcom/bcm2837.dtsi). There is a desire
not to have skeleton.dtsi for ARM64. See commit 3ebee5a2e1 ("arm64: dts:
kill skeleton.dtsi") for rationale for its removal.

As well as the addition of #*-cells, this also requires adding the
device_type to the rpi memory node explicitly.

Note that this change results in the removal of an empty /aliases node from
bcm2835-rpi-a.dtb and bcm2835-rpi-a-plus.dtb. I have no hardware to check
if this is a problem or not.

It also results in some reordering of the nodes in the DTBs (the /aliases
and /memory nodes come later). This isn't supposed to matter but, again,
I've no hardware to check if it is true in this particular case.

Signed-off-by: Ian Campbell <ijc@hellion.org.uk>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Frank Rowand <frowand.list@gmail.com>
Cc: Eric Anholt <eric@anholt.net>
Cc: Stephen Warren <swarren@wwwdotorg.org>
Cc: Lee Jones <lee@kernel.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: devicetree@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-rpi-kernel@lists.infradead.org
Cc: arm@kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2016-09-09 17:46:18 +02:00
Paolo Bonzini
27bd44e06c Merge tag 'kvm-arm-fixes-for-v4.8-round2' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into kvm-master
KVM/ARM Fixes for v4.8, round 2

Fixes an idmap issue on 32-bit KVM on ARM, and fixes a memory unmapping
issue that we've had forever.
2016-09-09 17:45:56 +02:00
Linus Torvalds
2771fc8ed6 Merge tag 'powerpc-4.8-5' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Pull powerpc fixes from Michael Ellerman:
 "Fixes marked for stable:
   - Don't alias user region to other regions below PAGE_OFFSET from
     Paul Mackerras
   - Fix again csum_partial_copy_generic() on 32-bit from Christophe
     Leroy
   - Fix corrupted PE allocation bitmap on releasing PE from Gavin Shan

  Fixes for code merged this cycle:
   - Fix crash on releasing compound PE from Gavin Shan
   - Fix processor numbers in OPAL ICP from Benjamin Herrenschmidt
   - Fix little endian build with CONFIG_KEXEC=n from Thiago Jung
     Bauermann"

* tag 'powerpc-4.8-5' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  powerpc/mm: Don't alias user region to other regions below PAGE_OFFSET
  powerpc/32: Fix again csum_partial_copy_generic()
  powerpc/powernv: Fix corrupted PE allocation bitmap on releasing PE
  powerpc/powernv: Fix crash on releasing compound PE
  powerpc/xics/opal: Fix processor numbers in OPAL ICP
  powerpc/pseries: Fix little endian build with CONFIG_KEXEC=n
2016-09-09 08:43:42 -07:00
Linus Torvalds
53d5f1dcd1 Merge branch 'fixes' of git://git.armlinux.org.uk/~rmk/linux-arm
Pull ARM fixes from Russell King:
 "A few ARM fixes:

   - Robin Murphy noticed that the non-secure privileged entry was
     relying on undefined behaviour, which needed to be fixed.

   - Vladimir Murzin noticed that prov-v7 fails to build for MMUless
     configurations because a required header file wasn't included.

   - A bunch of fixes for StrongARM regressions found while testing
     4.8-rc on such platforms"

* 'fixes' of git://git.armlinux.org.uk/~rmk/linux-arm:
  ARM: sa1100: clear reset status prior to reboot
  ARM: 8600/1: Enforce some NS-SVC initialisation
  ARM: 8599/1: mm: pull asm/memory.h explicitly
  ARM: sa1100: register clocks early
  ARM: sa1100: fix 3.6864MHz clock
2016-09-09 08:32:10 -07:00
Lukas Wunner
0a637ee612 x86/efi: Allow invocation of arbitrary boot services
We currently allow invocation of 8 boot services with efi_call_early().
Not included are LocateHandleBuffer and LocateProtocol in particular.
For graphics output or to retrieve PCI ROMs and Apple device properties,
we're thus forced to use the LocateHandle + AllocatePool + LocateHandle
combo, which is cumbersome and needs more code.

The ARM folks allow invocation of the full set of boot services but are
restricted to our 8 boot services in functions shared across arches.

Thus, rather than adding just LocateHandleBuffer and LocateProtocol to
struct efi_config, let's rework efi_call_early() to allow invocation of
arbitrary boot services by selecting the 64 bit vs 32 bit code path in
the macro itself.

When compiling for 32 bit or for 64 bit without mixed mode, the unused
code path is optimized away and the binary code is the same as before.
But on 64 bit with mixed mode enabled, this commit adds one compare
instruction to each invocation of a boot service and, depending on the
code path selected, two jump instructions. (Most of the time gcc
arranges the jumps in the 32 bit code path.) The result is a minuscule
performance penalty and the binary code becomes slightly larger and more
difficult to read when disassembled. This isn't a hot path, so these
drawbacks are arguably outweighed by the attainable simplification of
the C code. We have some overhead anyway for thunking or conversion
between calling conventions.

The 8 boot services can consequently be removed from struct efi_config.

No functional change intended (for now).
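
In C, the reworked macro is roughly of this shape (a sketch; the actual
eboot.h implementation may differ in casts and qualifiers):

#define efi_call_early(f, ...)						\
	__efi_early()->call(efi_is_64bit() ?				\
		((efi_boot_services_64_t *)(unsigned long)		\
			__efi_early()->boot_services)->f :		\
		((efi_boot_services_32_t *)(unsigned long)		\
			__efi_early()->boot_services)->f,		\
		__VA_ARGS__)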

Example -- invocation of free_pool before (64 bit code path):
0x2d4      movq  %ds:efi_early, %rdx          ; efi_early
0x2db      movq  %ss:arg_0-0x20(%rsp), %rsi
0x2e0      xorl  %eax, %eax
0x2e2      movq  %ds:0x28(%rdx), %rdi         ; efi_early->free_pool
0x2e6      callq *%ds:0x58(%rdx)              ; efi_early->call()

Example -- invocation of free_pool after (64 / 32 bit mixed code path):
0x0dc      movq  %ds:efi_early, %rax          ; efi_early
0x0e3      cmpb  $0, %ds:0x28(%rax)           ; !efi_early->is64 ?
0x0e7      movq  %ds:0x20(%rax), %rdx         ; efi_early->call()
0x0eb      movq  %ds:0x10(%rax), %rax         ; efi_early->boot_services
0x0ef      je    $0x150
0x0f1      movq  %ds:0x48(%rax), %rdi         ; free_pool (64 bit)
0x0f5      xorl  %eax, %eax
0x0f7      callq *%rdx
...
0x150      movl  %ds:0x30(%rax), %edi         ; free_pool (32 bit)
0x153      jmp   $0x0f5

Size of eboot.o text section:
CONFIG_X86_32:                         6464 before, 6318 after
CONFIG_X86_64 && !CONFIG_EFI_MIXED:    7670 before, 7573 after
CONFIG_X86_64 &&  CONFIG_EFI_MIXED:    7670 before, 8319 after

Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09 16:08:57 +01:00
Lukas Wunner
2757161638 x86/efi: Optimize away setup_gop32/64 if unused
Commit 2c23b73c2d ("x86/efi: Prepare GOP handling code for reuse
as generic code") introduced an efi_is_64bit() macro to x86 which
previously only existed for arm arches.  The macro is used to
choose between the 64 bit or 32 bit code path in gop.c at runtime.

However the code path that's going to be taken is known at compile
time when compiling for x86_32 or for x86_64 with mixed mode disabled.
Amend the macro to eliminate the unused code path in those cases.

Size of gop.o text section:
CONFIG_X86_32:                         1758 before, 1299 after
CONFIG_X86_64 && !CONFIG_EFI_MIXED:    2201 before, 1406 after
CONFIG_X86_64 &&  CONFIG_EFI_MIXED:    2201 before and after

Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09 16:08:56 +01:00
Markus Elfring
20ebc15e6c x86/efi: Use kmalloc_array() in efi_call_phys_prolog()
* A multiplication for the size determination of a memory allocation
  indicated that an array data structure should be processed.
  Thus reuse the corresponding function "kmalloc_array".

  This issue was detected by using the Coccinelle software.

* Replace the specification of a data type by a pointer dereference
  to make the corresponding size determination a bit safer according to
  the Linux coding style convention.

Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09 16:08:55 +01:00
Ricardo Neri
3dad6f7f69 x86/efi: Defer efi_esrt_init until after memblock_x86_fill
Commit 7b02d53e7852 ("efi: Allow drivers to reserve boot services forever")
introduced a new efi_mem_reserve to reserve the boot services memory
regions forever. This reservation involves allocating a new EFI memory
range descriptor. However, allocation can only succeed if there is memory
available for the allocation. Otherwise, an error such as the following may
occur:

esrt: Reserving ESRT space from 0x000000003dd6a000 to 0x000000003dd6a010.
Kernel panic - not syncing: ERROR: Failed to allocate 0x9f0 bytes below \
 0x0.
CPU: 0 PID: 0 Comm: swapper Not tainted 4.7.0-rc5+ #503
 0000000000000000 ffffffff81e03ce0 ffffffff8131dae8 ffffffff81bb6c50
 ffffffff81e03d70 ffffffff81e03d60 ffffffff8111f4df 0000000000000018
 ffffffff81e03d70 ffffffff81e03d08 00000000000009f0 00000000000009f0
Call Trace:
 [<ffffffff8131dae8>] dump_stack+0x4d/0x65
 [<ffffffff8111f4df>] panic+0xc5/0x206
 [<ffffffff81f7c6d3>] memblock_alloc_base+0x29/0x2e
 [<ffffffff81f7c6e3>] memblock_alloc+0xb/0xd
 [<ffffffff81f6c86d>] efi_arch_mem_reserve+0xbc/0x134
 [<ffffffff81fa3280>] efi_mem_reserve+0x2c/0x31
 [<ffffffff81fa3280>] ? efi_mem_reserve+0x2c/0x31
 [<ffffffff81fa40d3>] efi_esrt_init+0x19e/0x1b4
 [<ffffffff81f6d2dd>] efi_init+0x398/0x44a
 [<ffffffff81f5c782>] setup_arch+0x415/0xc30
 [<ffffffff81f55af1>] start_kernel+0x5b/0x3ef
 [<ffffffff81f55434>] x86_64_start_reservations+0x2f/0x31
 [<ffffffff81f55520>] x86_64_start_kernel+0xea/0xed
---[ end Kernel panic - not syncing: ERROR: Failed to allocate 0x9f0
     bytes below 0x0.

An inspection of the memblock configuration reveals that there is no memory
available for the allocation:

MEMBLOCK configuration:
 memory size = 0x0 reserved size = 0x4f339c0
 memory.cnt  = 0x1
 memory[0x0]    [0x00000000000000-0xffffffffffffffff], 0x0 bytes on node 0\
                 flags: 0x0
 reserved.cnt  = 0x4
 reserved[0x0]  [0x0000000008c000-0x0000000008c9bf], 0x9c0 bytes flags: 0x0
 reserved[0x1]  [0x0000000009f000-0x000000000fffff], 0x61000 bytes\
                 flags: 0x0
 reserved[0x2]  [0x00000002800000-0x0000000394bfff], 0x114c000 bytes\
                 flags: 0x0
 reserved[0x3]  [0x000000304e4000-0x00000034269fff], 0x3d86000 bytes\
                 flags: 0x0

This situation can be avoided if we call efi_esrt_init after memblock has
memory regions for the allocation.

Also, the EFI ESRT driver makes use of early_memremap() mappings.
Therefore, we do not want to defer efi_esrt_init for too long. We must
call this function while calls to early_memremap are still valid.

A good place to meet the two aforementioned conditions is right after
memblock_x86_fill, grouped with other EFI-related functions.
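
The resulting call placement in setup_arch() looks roughly like this (a
sketch of the ordering, not the exact diff):

	memblock_x86_fill();

	if (efi_enabled(EFI_MEMMAP)) {
		efi_fake_memmap();
		efi_find_mirror();
		/* memblock is populated and early_memremap() is still usable */
		efi_esrt_init();
	}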

Reported-by: Scott Lawson <scott.lawson@intel.com>
Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Peter Jones <pjones@redhat.com>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09 16:08:52 +01:00
Lukas Wunner
15cf7cae08 x86/efi: Remove unused find_bits() function
Left behind by commit fc37206427 ("efi/libstub: Move Graphics Output
Protocol handling to generic code").

Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09 16:08:50 +01:00
Alex Thorlton
0513fe1d28 x86/efi: Map in physical addresses in efi_map_region_fixed
This is a simple change to add in the physical mappings as well as the
virtual mappings in efi_map_region_fixed.  The motivation here is to
get access to EFI runtime code that is only available via the 1:1
mappings on a kexec'd kernel.

The added call is essentially the kexec analog of the first __map_region
that Boris put in efi_map_region in commit d2f7cbe7b2 ("x86/efi:
Runtime services virtual mapping").
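
The change amounts to roughly the following (a sketch mirroring the
description above):

void __init efi_map_region_fixed(efi_memory_desc_t *md)
{
	__map_region(md, md->phys_addr);	/* new: 1:1 map for the kexec'd kernel */
	__map_region(md, md->virt_addr);	/* existing: virtual mapping */
}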

Signed-off-by: Alex Thorlton <athorlton@sgi.com>
Cc: Russ Anderson <rja@sgi.com>
Cc: Dimitri Sivanich <sivanich@sgi.com>
Cc: Mike Travis <travis@sgi.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09 16:08:47 +01:00
Colin Ian King
ac0e94b63e x86/efi: Initialize status to ensure garbage is not returned on small size
Although very unlikely, if size is too small or zero, then we end up with
status not being set and returning garbage. Instead, initialize status to
EFI_INVALID_PARAMETER to indicate that size is invalid in the calls to
setup_uga32 and setup_uga64.

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09 16:08:44 +01:00
Matt Fleming
4bc9f92e64 x86/efi-bgrt: Use efi_mem_reserve() to avoid copying image data
efi_mem_reserve() allows us to permanently mark EFI boot services
regions as reserved, which means we no longer need to copy the image
data out and into a separate buffer.

Leaving the data in the original boot services region has the added
benefit that BGRT images can now be passed across kexec reboot.

Reviewed-by: Josh Triplett <josh@joshtriplett.org>
Tested-by: Dave Young <dyoung@redhat.com> [kexec/kdump]
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [arm]
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Peter Jones <pjones@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Josh Boyer <jwboyer@fedoraproject.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Môshe van der Sterre <me@moshe.nl>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09 16:08:38 +01:00
Matt Fleming
31ce8cc681 efi/runtime-map: Use efi.memmap directly instead of a copy
Now that efi.memmap is available all of the time there's no need to
allocate and build a separate copy of the EFI memory map.

Furthermore, efi.memmap contains boot services regions but only those
regions that have been reserved via efi_mem_reserve(). Using
efi.memmap allows us to pass boot services across kexec reboot so that
the ESRT and BGRT drivers will now work.

Tested-by: Dave Young <dyoung@redhat.com> [kexec/kdump]
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [arm]
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Peter Jones <pjones@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09 16:08:36 +01:00
Matt Fleming
816e76129e efi: Allow drivers to reserve boot services forever
Today, it is not possible for drivers to reserve EFI boot services for
access after efi_free_boot_services() has been called on x86. For
ARM/arm64 it can be done simply by calling memblock_reserve().

Having this ability for all three architectures is desirable for a
couple of reasons,

  1) It saves drivers copying data out of those regions
  2) kexec reboot can now make use of things like ESRT

Instead of using the standard memblock_reserve() which is insufficient
to reserve the region on x86 (see efi_reserve_boot_services()), a new
API is introduced in this patch: efi_mem_reserve().

efi.memmap now always represents which EFI memory regions are
available. On x86 the EFI boot services regions that have not been
reserved via efi_mem_reserve() will be removed from efi.memmap during
efi_free_boot_services().

This has implications for kexec, since it is not possible for a newly
kexec'd kernel to access the same boot services regions that the
initial boot kernel had access to unless they are reserved by every
kexec kernel in the chain.
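
From a driver's point of view the usage is then a single call, roughly
(the variable names are illustrative):

	/* keep a boot services region alive past efi_free_boot_services() */
	efi_mem_reserve(table_phys_addr, table_size);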

Tested-by: Dave Young <dyoung@redhat.com> [kexec/kdump]
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [arm]
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Peter Jones <pjones@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09 16:08:34 +01:00
Matt Fleming
dca0f971ea efi: Add efi_memmap_init_late() for permanent EFI memmap
Drivers need a way to access the EFI memory map at runtime. ARM and
arm64 currently provide this by remapping the EFI memory map into the
vmalloc space before setting up the EFI virtual mappings.

x86 does not provide this functionality which has resulted in the code
in efi_mem_desc_lookup() where it will manually map individual EFI
memmap entries if the memmap has already been torn down on x86,

  /*
   * If a driver calls this after efi_free_boot_services,
   * ->map will be NULL, and the target may also not be mapped.
   * So just always get our own virtual map on the CPU.
   *
   */
  md = early_memremap(p, sizeof (*md));

There isn't a good reason for not providing a permanent EFI memory map
for runtime queries, especially since the EFI regions are not mapped
into the standard kernel page tables.

Tested-by: Dave Young <dyoung@redhat.com> [kexec/kdump]
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [arm]
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Peter Jones <pjones@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09 16:07:43 +01:00
Matt Fleming
9479c7cebf efi: Refactor efi_memmap_init_early() into arch-neutral code
Every EFI architecture apart from ia64 needs to setup the EFI memory
map at efi.memmap, and the code for doing that is essentially the same
across all implementations. Therefore, it makes sense to factor this
out into the common code under drivers/firmware/efi/.

The only slight variation is the data structure out of which we pull
the initial memory map information, such as physical address, memory
descriptor size and version, etc. We can address this by passing a
generic data structure (struct efi_memory_map_data) as the argument to
efi_memmap_init_early() which contains the minimum info required for
initialising the memory map.
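
That generic structure is roughly the following (a sketch; the exact
field list in the final header may differ):

struct efi_memory_map_data {
	phys_addr_t	phys_map;	/* physical address of the firmware memmap */
	unsigned long	size;		/* total size of the memmap, in bytes */
	unsigned long	desc_version;	/* memory descriptor version */
	unsigned long	desc_size;	/* size of a single descriptor */
};

int __init efi_memmap_init_early(struct efi_memory_map_data *data);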

In the process, this patch also fixes a few undesirable implementation
differences:

 - ARM and arm64 were failing to clear the EFI_MEMMAP bit when
   unmapping the early EFI memory map. EFI_MEMMAP indicates whether
   the EFI memory map is mapped (not the regions contained within) and
   can be traversed.  It's more correct to set the bit as soon as we
   memremap() the passed in EFI memmap.

 - Rename efi_unmap_memmap() to efi_memmap_unmap() to adhere to the
   regular naming scheme.

This patch also uses a read-write mapping for the memory map instead
of the read-only mapping currently used on ARM and arm64. x86 needs
the ability to update the memory map in-place when assigning virtual
addresses to regions (efi_map_region()) and tagging regions when
reserving boot services (efi_reserve_boot_services()).

There's no way for the generic fake_mem code to know which mapping to
use without introducing some arch-specific constant/hook, so just use
read-write since read-only is of dubious value for the EFI memory map.

Tested-by: Dave Young <dyoung@redhat.com> [kexec/kdump]
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [arm]
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Peter Jones <pjones@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09 16:06:38 +01:00
Matt Fleming
ab72a27da4 x86/efi: Consolidate region mapping logic
EFI regions are currently mapped in two separate places. The bulk of
the work is done in efi_map_regions() but when CONFIG_EFI_MIXED is
enabled the additional regions that are required when operating in
mixed mode are mapped in efi_setup_page_tables().

Pull everything into efi_map_regions() and refactor the test for
which regions should be mapped into a should_map_region() function.
Generously sprinkle comments to clarify the different cases.

Acked-by: Borislav Petkov <bp@suse.de>
Tested-by: Dave Young <dyoung@redhat.com> [kexec/kdump]
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [arm]
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09 16:06:37 +01:00
Matt Fleming
4971531af3 x86/efi: Test for EFI_MEMMAP functionality when iterating EFI memmap
Both efi_find_mirror() and efi_fake_memmap() really want to know
whether the EFI memory map is available, not just whether the machine
was booted using EFI. efi_fake_memmap() even has a check for
EFI_MEMMAP at the start of the function.

Since we've already got other code that has this dependency, merge
everything under one if() conditional, and remove the now superfluous
check from efi_fake_memmap().

Tested-by: Dave Young <dyoung@redhat.com> [kexec/kdump]
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [arm]
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Taku Izumi <izumi.taku@jp.fujitsu.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Xishi Qiu <qiuxishi@huawei.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
2016-09-09 16:06:34 +01:00
Robin Murphy
0e27a7fce6 arm64: Remove shadowed asm-generic headers
We've grown our own versions of bug.h, ftrace.h, pci.h and topology.h,
so generating the generic ones as well is unnecessary and a potential
source of build hiccups. At the very least, having them present has
confused my source-indexing tool, and that simply will not do.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 15:05:08 +01:00
Suzuki K Poulose
116c81f427 arm64: Work around systems with mismatched cache line sizes
Systems with differing CPU i-cache/d-cache line sizes can cause
problems with the cache management by software when the execution
is migrated from one to another. Usually, the application reads
the cache size on a CPU and then uses that length to perform cache
operations. However, if it gets migrated to another CPU with a smaller
cache line size, things could go completely wrong. To prevent such
cases, always use the smallest cache line size among the CPUs. The
kernel CPU feature infrastructure already keeps track of the safe
value for all CPUID registers including CTR. This patch works around
the problem by :

For kernel, dynamically patch the kernel to read the cache size
from the system wide copy of CTR_EL0.

For applications, trap read accesses to CTR_EL0 (by clearing the SCTLR.UCT)
and emulate the mrs instruction to return the system wide safe value
of CTR_EL0.

For faster access (i.e., avoiding a lookup of the system wide value of
CTR_EL0 via read_system_reg), we keep track of the pointer to the table
entry for CTR_EL0 in the CPU feature infrastructure.
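
The userspace side of the workaround amounts to roughly this trap
handler (a sketch; the helper names are borrowed from the surrounding
arm64 code and may differ):

static int ctr_read_handler(struct pt_regs *regs, u32 insn)
{
	int rt = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RT, insn);

	/* return the system-wide safe value, not this CPU's raw CTR_EL0 */
	regs->regs[rt] = read_system_reg(SYS_CTR_EL0);
	regs->pc += 4;		/* skip the emulated mrs instruction */
	return 0;
}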

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 15:03:29 +01:00
Suzuki K Poulose
9dbd5bb25c arm64: Refactor sysinstr exception handling
Right now we trap some of the user space data cache operations
based on a few Errata (ARM 819472, 826319, 827319 and 824069).
We need to trap userspace access to CTR_EL0 if we detect mismatched
cache line sizes. Since both these traps share the EC, refactor
the handler a little bit to make it more reader-friendly.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 15:03:29 +01:00
Suzuki K Poulose
072f0a6338 arm64: Introduce raw_{d,i}cache_line_size
On systems with mismatched i/d cache min line sizes, we need to use
the smallest size possible across all CPUs. This will be done by fetching
the system wide safe value from CPU feature infrastructure.
However, some special users (e.g. kexec, hibernate) would need the line
size on the CPU (rather than the system wide value), when either the
system wide feature may not be accessible or the caller executes with a
guarantee of no migration.
Provide another helper which will fetch the cache line size on the current CPU.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: James Morse <james.morse@arm.com>
Reviewed-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 15:03:29 +01:00
Suzuki K Poulose
c831b2ae25 arm64: alternative: Add support for patching adrp instructions
adrp uses a PC-relative address offset to the 4K page of a symbol. If
it appears in alternative code that gets patched in, we should adjust
the offset to reflect the address it will be run from. This patch adds
support for fixing up the offset for adrp instructions.
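
A sketch of that fixup, using the offset helpers added in the following
patch (simplified; the real code also validates that the instruction is
an adrp):

static u32 fixup_adrp(u32 insn, unsigned long src_pc, unsigned long dst_pc)
{
	s64 offset = aarch64_insn_adrp_get_offset(insn);
	u64 target_page = (src_pc & ~(SZ_4K - 1)) + offset;

	/* re-encode the immediate relative to the page of the new PC */
	return aarch64_insn_adrp_set_offset(insn,
					    target_page - (dst_pc & ~(SZ_4K - 1)));
}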

Cc: Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 15:03:28 +01:00
Suzuki K Poulose
46084bc253 arm64: insn: Add helpers for adrp offsets
Adds helpers for decoding/encoding the PC relative addresses for adrp.
This will be used for handling dynamic patching of 'adrp' instructions
in alternative code patching.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 15:03:28 +01:00
Suzuki K Poulose
baa763b565 arm64: alternative: Disallow patching instructions using literals
The alternative code patching doesn't check if the replaced instruction
uses a pc relative literal. This could cause silent corruption in the
instruction stream as the instruction will be executed from a different
address than what it was compiled for. Catch all such cases.

Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Suggested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 15:03:28 +01:00