This adds support for using CAPP DMA mode, which is required for XSL
based cards such as the Mellanox CX4 to function.
This is currently an RFC as it depends on the corresponding support being
merged into skiboot first, which was submitted here:
http://patchwork.ozlabs.org/patch/625582/
In the event that the skiboot on the system does not have the above
support, it will indicate as much in the kernel log and abort the init
process.
Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Sparse complains that it doesn't know what REG_BYTE is:
arch/powerpc/kernel/align.c:313:29: error: undefined identifier 'REG_BYTE'
REG_BYTE is defined differently based on whether we're compiling for
LE, BE32 or BE64. Sparse apparently doesn't provide __BIG_ENDIAN__ or
__LITTLE_ENDIAN__, which means we get no definition.
Rather than check for __BIG_ENDIAN__ and then separately for
__LITTLE_ENDIAN__, just switch the #ifdef to check for __BIG_ENDIAN__
and then #else we define the little endian version. Technically that's
dicey because PDP_ENDIAN is also a possibility, but we already do it in
a lot of places so one more hardly matters.
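The restructured check ends up shaped like this (a sketch only, with the
macro bodies elided; see arch/powerpc/kernel/align.c for the real
definitions):

#ifdef __BIG_ENDIAN__
/* BE32/BE64 definitions (these differ again based on CONFIG_PPC64) */
#define REG_BYTE(rp, i)	/* ... big endian body ... */
#else
/* little endian -- also the branch sparse now takes, since it defines
 * neither __BIG_ENDIAN__ nor __LITTLE_ENDIAN__ */
#define REG_BYTE(rp, i)	/* ... little endian body ... */
#endif

With the #else in place, every compiler (and sparse) gets some
definition, at the cost of silently treating a hypothetical PDP-endian
target as little endian.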
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Sometimes headers that provide prototypes for functions are
accidentally omitted from the files that define the functions.
Fix a couple of places where that occurs.
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Sparse picked up a number of functions that are implemented in C and
then only referred to in asm code.
This introduces asm-prototypes.h, which provides a place for
prototypes of these functions.
This silences some sparse warnings.
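A minimal sketch of such a header (the guard name follows the patch; the
two prototypes are illustrative examples of asm-only entry points, not
the file's actual contents):

#ifndef _ASM_POWERPC_ASM_PROTOTYPES_H
#define _ASM_POWERPC_ASM_PROTOTYPES_H

#include <linux/types.h>

struct pt_regs;

/* C functions that are referenced only from asm, so no other header
 * declares them; declaring them here silences sparse's "symbol was
 * not declared" warnings. */
void machine_init(u64 dt_ptr);
long machine_check_early(struct pt_regs *regs);

#endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */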
Signed-off-by: Daniel Axtens <dja@axtens.net>
[mpe: Add include guards, clean up copyright & GPL text]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This is just a smattering of things picked up by sparse that should
be made static.
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The array crash_shutdown_handles is an array of size CRASH_HANDLER_MAX+1
containing up to CRASH_HANDLER_MAX shutdown_handlers. It is assumed to
be NULL terminated, which it is under normal circumstances. Array
accesses in the functions crash_shutdown_unregister() and
default_machine_crash_shutdown() rely on this NULL termination property
when traversing this list and don't protect against out-of-bounds accesses.
If the NULL terminator were somehow overwritten these functions could
potentially access out of the bounds of the array.
Shrink the array to size CRASH_HANDLER_MAX and implement explicit array
bounds checking when accessing the elements of the
crash_shutdown_handles[] array in crash_shutdown_unregister() and
default_machine_crash_shutdown().
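A sketch of the resulting unregister path (simplified: locking elided,
and the CRASH_HANDLER_MAX value is illustrative; the bounded loop
condition is the point):

#include <stddef.h>

#define CRASH_HANDLER_MAX 3

typedef void (*crash_shutdown_t)(void);

/* sized exactly CRASH_HANDLER_MAX now: no extra NULL terminator slot */
static crash_shutdown_t crash_shutdown_handles[CRASH_HANDLER_MAX];

int crash_shutdown_unregister(crash_shutdown_t handler)
{
	int i;

	/* stop at the array bound even if no NULL entry exists */
	for (i = 0; i < CRASH_HANDLER_MAX && crash_shutdown_handles[i]; i++)
		if (crash_shutdown_handles[i] == handler)
			break;

	if (i == CRASH_HANDLER_MAX || crash_shutdown_handles[i] != handler)
		return 1;	/* not registered */

	/* shuffle the remaining handlers down, preserving NULL
	 * termination within the bound */
	for (; i < CRASH_HANDLER_MAX - 1; i++)
		crash_shutdown_handles[i] = crash_shutdown_handles[i + 1];
	crash_shutdown_handles[CRASH_HANDLER_MAX - 1] = NULL;
	return 0;
}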
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The mm zone mechanism was traditionally used by arch specific code to
partition memory into allocation zones. However there are several zones
that are managed by the mm subsystem rather than the architecture. Most
architectures set the max PFN of these special zones to zero, however on
powerpc we set them to ~0ul. This, in conjunction with a bug in
free_area_init_nodes(), results in all of system memory being placed in
ZONE_DEVICE when enabled. Device memory cannot be used for regular kernel
memory allocations so this will cause a kernel panic at boot. Given the
planned addition of more mm managed zones (ZONE_CMA) we should aim to be
consistent with every other architecture and set the max PFN for these
zones to zero.
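A sketch of the intended pattern in the arch's paging_init() (the zone
assignments are illustrative; the memset is what keeps the mm-managed
zones at zero):

	unsigned long max_zone_pfns[MAX_NR_ZONES];

	/* zero everything: zones the arch doesn't manage (e.g.
	 * ZONE_DEVICE) must report a max PFN of 0, not ~0ul */
	memset(max_zone_pfns, 0, sizeof(max_zone_pfns));

	max_zone_pfns[ZONE_DMA] = max_low_pfn;		/* illustrative */
#ifdef CONFIG_HIGHMEM
	max_zone_pfns[ZONE_HIGHMEM] = max_pfn;		/* illustrative */
#endif
	free_area_init_nodes(max_zone_pfns);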
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Reviewed-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
THREAD_DSCR:
Added in efcac6589a "powerpc: Per process DSCR + some fixes (try#4)"
Last usage removed in 152d523e63 "powerpc: Create context switch helpers save_sprs() and restore_sprs()"
THREAD_DSCR_INHERIT:
Added in 714332858b "powerpc: Restore correct DSCR in context switch"
Last usage removed in 152d523e63 "powerpc: Create context switch helpers save_sprs() and restore_sprs()"
THREAD_TAR:
Added in 2468dcf641 "powerpc: Add support for context switching the TAR register"
Last usage removed in 152d523e63 "powerpc: Create context switch helpers save_sprs() and restore_sprs()"
THREAD_BESCR, THREAD_EBBHR and THREAD_EBBRR:
Added in 9353374b8e "powerpc: Context switch the new EBB SPRs"
Last usage removed in 152d523e63 "powerpc: Create context switch helpers save_sprs() and restore_sprs()"
THREAD_SIAR, THREAD_SDAR, THREAD_SIER, THREAD_MMCR0, and THREAD_MMCR2:
Added in 59affcd3e4 "powerpc: Context switch more PMU related SPRs"
Last usage removed in b11ae95100 "powerpc: Partial revert of "Context switch more PMU related SPRs""
PACA_LOCK_TOKEN:
Added in 9e368f2915 "KVM: PPC: book3s_hv: Add support for PPC970-family processors"
Last usage removed in c17b98cf60 "KVM: PPC: Book3S HV: Remove code for PPC970 processors"
HCALL_STAT_SIZE, HCALL_STAT_CALLS, HCALL_STAT_TB and HCALL_STAT_PURR:
Added in 57852a853b "[POWERPC] powerpc: Instrument Hypervisor Calls"
Last usage removed in c8cd093a6e "powerpc: tracing: Add hypervisor call tracepoints"
VCPU_EPLC:
Added in d30f6e4800 "KVM: PPC: booke: category E.HV (GS-mode) support"
Never used.
CPU_DOWN_FLUSH:
Added in e7affb1dba "powerpc/cache: add cache flush operation for various e500"
Never used.
CFG_STAMP_XSEC:
Added in 14cf11af6c "powerpc: Merge enough to start building in arch/powerpc."
Last usage removed in 0e469db8f7 "powerpc: Rework VDSO gettimeofday to prevent time going backwards"
KVM_LPCR:
Added in aa04b4cc5b "KVM: PPC: Allocate RMAs (Real Mode Areas) at boot for use by guests"
Last usage removed in a0144e2a6b "KVM: PPC: Book3S HV: Store LPCR value for each virtual core"
GPR15, GPR16, GPR17, GPR18, GPR19, GPR20, GPR21, GPR22, GPR23, GPR24,
GPR25, GPR26, GPR27, GPR28, GPR29, GPR30 and GPR31:
Added in 14cf11af6c "powerpc: Merge enough to start building in arch/powerpc."
Never used.
VCPU_SHADOW_FSCR:
Added in 616dff8602 "KVM: PPC: Book3S PR: Handle Facility interrupt and FSCR"
Never used.
VCPU_SHADOW_SRR1:
Added in a2d56020d1 "KVM: PPC: Book3S PR: Keep volatile reg values in vcpu rather than shadow_vcpu"
Never used.
KVM_SPLIT_SIZE:
Added in b4deba5c41 "KVM: PPC: Book3S HV: Implement dynamic micro-threading on POWER8"
Never used.
VCPU_VCPUID:
Added in de56a948b9 "KVM: PPC: Add support for Book3S processors in hypervisor mode"
Last usage removed in 1b400ba0cd "KVM: PPC: Book3S HV: Improve handling of local vs. global TLB invalidations"
_MQ:
Added in 14cf11af6c "powerpc: Merge enough to start building in arch/powerpc."
Never used.
AUDITCONTEXT:
Added in 14cf11af6c "powerpc: Merge enough to start building in arch/powerpc."
Last usage removed in 401d1f029b "[PATCH] syscall entry/exit revamp"
CLONE_VM:
Added in 14cf11af6c "powerpc: Merge enough to start building in arch/powerpc."
Currently unused.
CLONE_UNTRACED:
Added in 14cf11af6c "powerpc: Merge enough to start building in arch/powerpc."
Currently unused.
Signed-off-by: Rashmica Gupta <rashmicy@gmail.com>
[mpe: Munge change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Currently, if arch code wants to supply seccomp_data directly to
seccomp (which is generally much faster than having seccomp do it
using the syscall_get_xyz() API), it has to use the two-phase
seccomp hooks. Add it to the easy hooks, too.
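A sketch of what an arch's syscall-entry path can then do (powerpc
register layout used purely for illustration; secure_computing() taking
a seccomp_data pointer is what this patch adds, and passing NULL keeps
the old behaviour of seccomp fetching the values itself):

static long do_seccomp(struct pt_regs *regs)
{
	struct seccomp_data sd;

	sd.nr = regs->gpr[0];			/* syscall number */
	sd.arch = AUDIT_ARCH_PPC64;		/* illustrative */
	sd.instruction_pointer = regs->nip;
	sd.args[0] = regs->gpr[3];
	sd.args[1] = regs->gpr[4];
	sd.args[2] = regs->gpr[5];
	sd.args[3] = regs->gpr[6];
	sd.args[4] = regs->gpr[7];
	sd.args[5] = regs->gpr[8];

	/* supplying sd directly avoids seccomp re-fetching every value
	 * through the slower syscall_get_*() accessors */
	return secure_computing(&sd);
}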
Cc: linux-arch@vger.kernel.org
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
memory_hotplug_max() uses hot_add_drconf_memory_max() to get the maximum
addressable memory by referring to the ibm,dynamic-memory property. There
are three problems with the current approach:
1. hot_add_drconf_memory_max() assumes that ibm,dynamic-memory includes
all the LMBs of the guest, but that is not true for PowerKVM which
populates only DR LMBs (LMBs that can be hotplugged/removed) in that
property.
2. hot_add_drconf_memory_max() multiplies lmb-size with lmb-count to arrive
at the max possible address. Since ibm,dynamic-memory doesn't include
RMA LMBs, the address thus obtained will be less than the actual max
address. For example, if max possible memory size is 32G, with lmb-size
of 256MB there can be 127 LMBs in ibm,dynamic-memory (1 LMB for RMA
which won't be present here). hot_add_drconf_memory_max() would then
return the max addressable memory as 127 * 256MB = 31.75GB, the max
address should have been 32G which is what ibm,lrdr-capacity shows.
3. In PowerKVM, there can be a gap between the end of boot time RAM and
beginning of hotplug RAM area. So just multiplying lmb-count with
lmb-size will not provide the correct max possible address for PowerKVM.
This patch fixes (1) by using the ibm,lrdr-capacity property to return the
max addressable memory whenever the property is present. It then fixes (2)
and (3) by fetching the address of the last LMB in the ibm,dynamic-memory
property.
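A sketch of the fixed logic (the helpers at the end are hypothetical,
error handling is elided, and the layout of ibm,lrdr-capacity is assumed
to carry the maximum address in its first two cells):

#include <linux/of.h>

static u64 hot_add_drconf_memory_max(void)
{
	struct device_node *dn;
	const __be32 *lrdr;
	u64 max_addr = 0;

	/* fix (1): trust ibm,lrdr-capacity when firmware provides it */
	dn = of_find_node_by_path("/rtas");
	if (dn) {
		lrdr = of_get_property(dn, "ibm,lrdr-capacity", NULL);
		if (lrdr)
			max_addr = of_read_number(lrdr, 2);
		of_node_put(dn);
	}
	if (max_addr)
		return max_addr;

	/* fixes (2) and (3): rather than lmb-count * lmb-size, take the
	 * end address of the last LMB in ibm,dynamic-memory, which
	 * accounts for the missing RMA LMB and any gap below the
	 * hotplug region */
	return last_drconf_lmb_addr() + drconf_lmb_size();	/* hypothetical */
}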
Fixes: cd34206e94 ("powerpc: Add memory_hotplug_max()")
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
There is an ordering issue with spin_unlock_wait() on powerpc, because
the spin_lock primitive is an ACQUIRE and an ACQUIRE is only ordering
the load part of the operation with memory operations following it.
Therefore the following event sequence can happen:
CPU 1                     CPU 2                                           CPU 3
========================  ==============================================  ===================
                                                                          spin_unlock(&lock);
                          spin_lock(&lock):
                            r1 = *lock; // r1 == 0;
o = object;                 o = READ_ONCE(object); // reordered here
object = NULL;
smp_mb();
spin_unlock_wait(&lock);
                            *lock = 1;
smp_mb();
o->dead = true;           < o = READ_ONCE(object); > // reordered upwards
                            if (o) // true
                              BUG_ON(o->dead); // true!!
To fix this, we add a "nop" ll/sc loop in arch_spin_unlock_wait() on
ppc. The "nop" ll/sc loop reads the lock value and writes it back
atomically; in this way it will synchronize the view of the lock on CPU1
with that on CPU2. Therefore in the scenario above, either CPU2 will fail
to get the lock at first or CPU1 will see the lock acquired by CPU2;
both cases will eliminate this bug. This is a similar idea to what Will
Deacon did for ARM64 in:
d86b8da04d ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers")
Furthermore, if the "nop" ll/sc observes that the lock is locked, we
don't need to do the "nop" ll/sc trick again; we can just do a normal
load+check loop waiting for the lock to be released, because in that
case spin_unlock_wait() is called while someone is holding the lock, and
the store part of the "nop" ll/sc happens before the lock release of the
current lock holder:
"nop" ll/sc -> spin_unlock()
and the lock release happens before the next lock acquisition:
spin_unlock() -> spin_lock() <next holder>
which means the "nop" ll/sc happens before the next lock acquisition:
"nop" ll/sc -> spin_unlock() -> spin_lock() <next holder>
With a smp_mb() preceding spin_unlock_wait(), the store of object is
guaranteed to be observed by the next lock holder:
STORE -> smp_mb() -> "nop" ll/sc
-> spin_unlock() -> spin_lock() <next holder>
This patch therefore fixes the issue and also cleans up
arch_spin_unlock_wait() a little bit by removing superfluous memory
barriers in loops and consolidating the PPC32 and PPC64 implementations
into one.
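A sketch of the resulting helper (simplified: the real patch keeps the
HMT/SHARED_PROCESSOR yielding in the wait loop and uses PPC_LWARX with
EH=0):

static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
{
	unsigned int tmp;

	smp_mb();

	/* "nop" ll/sc: atomically load the lock value and store it
	 * straight back unchanged, ordering our observation of the
	 * lock against the current holder's ll/sc */
	__asm__ __volatile__(
"1:	lwarx	%0, 0, %2\n"
"	stwcx.	%0, 0, %2\n"
"	bne-	1b\n"
	: "=&r" (tmp), "+m" (*lock)
	: "r" (lock)
	: "cr0", "xer");

	/* if it was locked, our store above happens before the holder's
	 * unlock, so a plain load+check loop is now sufficient */
	while (tmp && lock->slock)
		cpu_relax();

	smp_mb();
}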
Suggested-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
[mpe: Inline the "nop" ll/sc loop and set EH=0, munge change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Since the pstore code has moved away from nvram.c, remove unused
pstore headers pstore.h and kmsg_dump.h.
Signed-off-by: Geliang Tang <geliangtang@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
We're approaching 20 locations where we need to check for ELF ABI v2.
That's fine, except the logic is a bit awkward, because we have to check
that _CALL_ELF is defined and then what its value is.
So check it once in asm/types.h and define PPC64_ELF_ABI_v2 when ELF ABI
v2 is detected.
We also have a few places where what we're really trying to check is
that we are using the 64-bit v1 ABI, ie. function descriptors. So also
add a #define for that, which simplifies several checks.
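The detection in asm/types.h ends up along these lines (sketch):

#ifdef __powerpc64__
#if defined(_CALL_ELF) && _CALL_ELF == 2
#define PPC64_ELF_ABI_v2
#else
#define PPC64_ELF_ABI_v1
#endif
#endif /* __powerpc64__ */

Call sites then reduce to a plain #ifdef PPC64_ELF_ABI_v2, or #ifdef
PPC64_ELF_ABI_v1 where function descriptors are what is really being
tested for.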
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
MPIC was only used by Power3 which is now unsupported, so remove MPIC
code. XICS is now the only supported interrupt controller for
pSeries so do some cleanups too.
Signed-off-by: Rashmica Gupta <rashmicy@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
MPIC was only used by Power3 which is now unsupported, so remove MPIC
code.
Signed-off-by: Rashmica Gupta <rashmicy@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
MPIC was only used by Power3 which is now unsupported, so remove MPIC
code. XICS is now the only supported interrupt controller for
pSeries so do some cleanups too.
Signed-off-by: Rashmica Gupta <rashmicy@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
MPIC was only used by Power3 which is now unsupported, so remove MPIC
code. XICS is now the only supported interrupt controller for
pSeries so do some cleanups too.
Signed-off-by: Rashmica Gupta <rashmicy@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
MPIC was only used by Power3 which is now unsupported, so drop support
for MPIC. XICS is now the only supported interrupt controller for
pSeries so make the XICS functions generic.
Signed-off-by: Rashmica Gupta <rashmicy@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
sub_reloc_offset() has not been used since commit
917f0af9e5 ("powerpc: Remove arch/ppc and include/asm-ppc") which
removed include/asm-ppc/prom.h.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Align the hot loops in our assembly implementation of strncpy(),
strncmp() and memchr().
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
A number of our assembly implementations of string functions do not
align their hot loops. I was going to align them manually, but I
realised that they are almost instruction for instruction
identical to what gcc produces, with the advantage that gcc does
align them.
In light of that, let's just remove the assembly versions.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
In setup_sigcontext(), we set current->thread.vrsave then use it
straight after. Since current is hidden from the compiler via inline
assembly, it cannot optimise this and we end up with a load hit store.
Fix this by using a temporary.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
In both __giveup_fpu() and __giveup_altivec() we make two modifications
to tsk->thread.regs->msr. gcc decides to do a read/modify/write of
each change, so we end up with a load hit store:
ld r9,264(r10)
rldicl r9,r9,50,1
rotldi r9,r9,14
std r9,264(r10)
...
ld r9,264(r10)
rldicl r9,r9,40,1
rotldi r9,r9,24
std r9,264(r10)
Fix this by using a temporary.
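A sketch of the pattern, using __giveup_fpu() as the example (in the
real code the VSX clear is also guarded by CONFIG_VSX):

static void __giveup_fpu(struct task_struct *tsk)
{
	unsigned long msr;

	save_fpu(tsk);

	/* one load, both modifications in a register, one store:
	 * no load hit store on tsk->thread.regs->msr */
	msr = tsk->thread.regs->msr;
	msr &= ~MSR_FP;
	if (cpu_has_feature(CPU_FTR_VSX))
		msr &= ~MSR_VSX;
	tsk->thread.regs->msr = msr;
}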
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
With commit e58e87adc8 "powerpc/mm: Update _PAGE_KERNEL_RO" we now
use all three PPP bits. The top bit is now used to have a PPP value
of 0b110, which will be mapped to kernel read only. When updating the
hpte entry, use the right mask such that we update the 63rd bit (the
top 'P' bit) too.
Prior to e58e87adc8 we didn't support KERNEL_RO at all (it was ==
KERNEL_RW), so this isn't a regression as such.
Fixes: e58e87adc8 ("powerpc/mm: Update _PAGE_KERNEL_RO")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Pull powerpc fixes from Michael Ellerman:
- ptrace: Fix out of bounds array access warning from Khem Raj
- pseries: Fix PCI config address for DDW from Gavin Shan
- pseries: Fix IBM_ARCH_VEC_NRCORES_OFFSET since POWER8NVL was added
from Michael Ellerman
- of: fix autoloading due to broken modalias with no 'compatible' from
Wolfram Sang
- radix: Fix always false comparison against MMU_NO_CONTEXT from Aneesh
Kumar K.V
- hash: Compute the segment size correctly for ISA 3.0 from Aneesh
Kumar K.V
- nohash: Fix build break with 64K pages from Michael Ellerman
* tag 'powerpc-4.7-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
powerpc/nohash: Fix build break with 64K pages
powerpc/mm/hash: Compute the segment size correctly for ISA 3.0
powerpc/mm/radix: Fix always false comparison against MMU_NO_CONTEXT
of: fix autoloading due to broken modalias with no 'compatible'
powerpc/pseries: Fix IBM_ARCH_VEC_NRCORES_OFFSET since POWER8NVL was added
powerpc/pseries: Fix PCI config address for DDW
powerpc/ptrace: Fix out of bounds array access warning
Even though a tlb_flush() does a flush with "invalidate all cache",
we can end up doing an RCU page table free before calling tlb_flush().
That means we can have page walk cache entries even after we free the
page table pages. This can result in a wrong page table walk.
Avoid this by doing a PWC flush on every page table free. We can't batch
the PWC flush, because the RCU callback function in which we free the
page table pages doesn't have the mmu_gather information. Thus we have
to do a PWC flush on every page table page freed.
Note: I also removed the dummy tlb_flush_pgtable call functions for
hash 32.
Fixes: 1a472c9dba ("powerpc/mm/radix: Add tlbflush routines")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Radix invalidate control (RIC) is used to control which cache to flush
using tlb instructions. When doing a PID flush, we currently flush
everything including page walk cache. For address range flush, we flush
only the TLB. In the next patch, we add support for flushing only the
page walk cache.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Commit 74701d5947 "powerpc/mm: Rename function to indicate we are
allocating fragments" renamed page_table_free() to pte_fragment_free().
One occurrence was mistyped as pte_fragment_fre().
This only breaks the nohash 64K page build, which is not the default or
enabled in any defconfig.
Fixes: 74701d5947 ("powerpc/mm: Rename function to indicate we are allocating fragments")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This replaces:
- "select ARCH_REQUIRE_GPIOLIB" with "select GPIOLIB" as this can
now be selected directly.
- "select ARCH_WANT_OPTIONAL_GPIOLIB" with no dependency: GPIOLIB
is now selectable by everyone, so we need not declare our
intent to select it.
When ordering the symbols the following rationale was used:
if the selects were in alphabetical order, I moved select GPIOLIB
to be in alphabetical order, but if the selects were not
maintained in alphabetical order, I just replaced
"select ARCH_REQUIRE_GPIOLIB" with "select GPIOLIB".
Cc: Michael Büsch <m@bues.ch>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
PowerISA 3.0 encodes the segment size in the second half of the hash page
table entry. Update hpte_decode() accordingly.
Fixes: 50de596de8 ("powerpc/mm/hash: Add support for Power9 Hash")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
In some of the radix TLB flush routines, we use a local to store the
mm->context.id, AKA the PID.
Currently we use an int, but the PID is unsigned long, so large values
of PID will be truncated. In particular MMU_NO_CONTEXT is -1, which
means all our comparisons against that value can never be true.
This means we'll issue TLB flushes when we shouldn't on radix enabled
machines.
Fix it by using an unsigned long for the local. Discovered by Coverity.
Fixes: 1a472c9dba ("powerpc/mm/radix: Add tlbflush routines")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Reviewed-by: Balbir Singh <bsingharora@gmail.com>
[mpe: Write change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Pull vfs fixes from Al Viro:
"Fixes for crap of assorted ages: EOPENSTALE one is 4.2+, autofs one is
4.6, d_walk - 3.2+.
The atomic_open() and coredump ones are regressions from this window"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
coredump: fix dumping through pipes
fix a regression in atomic_open()
fix d_walk()/non-delayed __d_free() race
autofs braino fix for do_last()
fix EOPENSTALE bug in do_last()
The offset in the core file used to be tracked with ->written field of
the coredump_params structure. The field was retired in favour of
file->f_pos.
However, ->f_pos is not maintained for pipes, which leads to breakage.
Restore explicit tracking of the offset in coredump_params. Introduce
->pos field for this purpose since ->written was already reused.
Fixes: a008393951 ("get rid of coredump_params->written").
Reported-by: Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
Signed-off-by: Mateusz Guzik <mguzik@redhat.com>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
The recent commit 7cc851039d ("powerpc/pseries: Add POWER8NVL support
to ibm,client-architecture-support call") added a new PVR mask & value
to the start of the ibm_architecture_vec[] array.
However it missed the fact that further down in the array, we hard code
the offset of one of the fields, and then at boot use that value to
patch the value in the array. This means every update to the array must
also update the #define, ugh.
This means that on pseries machines we will misreport to firmware the
number of cores we support, by a factor of threads_per_core.
Fix it for now by updating the #define.
Fixes: 7cc851039d ("powerpc/pseries: Add POWER8NVL support to ibm,client-architecture-support call")
Cc: stable@vger.kernel.org # v4.0+
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Pull clk fixes from Stephen Boyd:
"This finally removes the CLK_IS_ROOT flag by picking up the last few
stragglers that didn't get merged by anyone this time around.
Better to do it now than wait for another one to pop up. There's also
a minor maintainers update and a Kconfig fix"
* tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux:
clk: nxp: Select MFD_SYSCON for creg driver
MAINTAINERS: Add file patterns for clock device tree bindings
clk: Remove CLK_IS_ROOT flag
clk: microchip: Remove CLK_IS_ROOT
powerpc/512x: clk: Remove CLK_IS_ROOT
vexpress/spc: Remove CLK_IS_ROOT
In commit 8445a87f70 "powerpc/iommu: Remove the dependency on EEH
struct in DDW mechanism", the PE address was replaced with the PCI
config address in order to remove dependency on EEH. According to PAPR
spec, firmware (pHyp or QEMU) should accept "xxBBSSxx" format PCI config
address, not "xxxxBBSS" provided by the patch. Note that "BB" is PCI bus
number and "SS" is the combination of slot and function number.
This fixes the PCI address passed to DDW RTAS calls.
Fixes: 8445a87f70 ("powerpc/iommu: Remove the dependency on EEH struct in DDW mechanism")
Cc: stable@vger.kernel.org # v3.4+
Reported-by: Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Tested-by: Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
gcc-6 correctly warns about an out-of-bounds access:
arch/powerpc/kernel/ptrace.c:407:24: warning: index 32 denotes an offset greater than size of 'u64[32][1] {aka long long unsigned int[32][1]}' [-Warray-bounds]
offsetof(struct thread_fp_state, fpr[32][0]));
^
Check the end of the array instead of the beginning of the next element to fix this.
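Roughly, the before and after of the assertion's right-hand side (both
compute the same byte offset):

/* before -- warns: fpr[32][0] is an element of a row past the end */
offsetof(struct thread_fp_state, fpr[32][0])

/* after -- quiet: fpr[32] is the one-past-the-end position of the
 * array itself, which is a valid address to form */
offsetof(struct thread_fp_state, fpr[32])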
Signed-off-by: Khem Raj <raj.khem@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Segher Boessenkool <segher@kernel.crashing.org>
Tested-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
PowerPC is the last architecture using the GEN_RTC driver on some
machines, but we can migrate them all to using the RTC_DRV_GENERIC
driver instead now.
This moves the CONFIG_GEN_RTC option from drivers/char into
arch/powerpc/platforms/Kconfig and makes it simply select the
replacement driver instead, purely to avoid breaking existing
defconfig and .config files that users may have.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
The rtc-generic driver provides an architecture specific
wrapper on top of the generic rtc_class_ops abstraction,
and powerpc has another abstraction on top, which is a bit
silly.
This changes the powerpc rtc-generic device to provide its
rtc_class_ops directly, to reduce the number of layers
by one.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
The design of the cpufreq governor API is not very straightforward,
as struct cpufreq_governor provides only one callback to be invoked
from different code paths for different purposes. The purpose it is
invoked for is determined by its second "event" argument, causing it
to act as a "callback multiplexer" of sorts.
Unfortunately, that leads to extra complexity in governors, some of
which implement the ->governor() callback as a switch statement
that simply checks the event argument and invokes a separate function
to handle that specific event.
That extra complexity can be eliminated by replacing the all-purpose
->governor() callback with a family of callbacks to carry out specific
governor operations: initialization and exit, start and stop, and policy
limits updates. That also turns out to reduce the code size, so do it.
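The resulting shape of the structure (an abridged sketch; unrelated
fields omitted):

struct cpufreq_governor {
	char	name[CPUFREQ_NAME_LEN];
	int	(*init)(struct cpufreq_policy *policy);
	void	(*exit)(struct cpufreq_policy *policy);
	int	(*start)(struct cpufreq_policy *policy);
	void	(*stop)(struct cpufreq_policy *policy);
	void	(*limits)(struct cpufreq_policy *policy);
	/* ... show_setspeed(), store_setspeed(), etc. unchanged ... */
};

Each governor then fills in only the operations it needs, and the core
calls them directly instead of multiplexing through one callback.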
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Like zlib compression in pstore, this patch adds lzo and lz4
compression support so that users can have more options and better
compression ratios.
The original code treats the compressed data together with the
uncompressed ECC correction notice by using zlib decompression. The
ECC correction notice is lost in the decompression process, and this
treatment also breaks lzo and lz4. So I treat them separately, using
pstore_decompress() for the compressed data and memcpy() for the
uncompressed ECC correction notice.
Signed-off-by: Geliang Tang <geliangtang@163.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
This flag is a no-op now (see commit 47b0eeb3dc "clk: Deprecate
CLK_IS_ROOT", 2016-02-02) so remove it.
Cc: Gerhard Sittig <gsi@denx.de>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>