The current init code initialises the bootmem allocator with all of the
low memory that it assumes is available, but does not check for reserved
memory blocks, which can lead to corruption of data that may be stored
there.
Move bootmem's allocation map to a location that does not cross any
reserved regions.
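A minimal sketch, in the style of the bootmem setup in
arch/mips/kernel/setup.c, of how the allocation bitmap's location can be
pushed past reserved boot_mem_map entries; the helper name and the
restart logic are illustrative, not the patch's exact code.

static unsigned long __init place_bootmem_map(unsigned long mapstart,
					      unsigned long map_pages)
{
	int i;

	for (i = 0; i < boot_mem_map.nr_map; i++) {
		unsigned long start, end;

		if (boot_mem_map.map[i].type != BOOT_MEM_RESERVED)
			continue;

		start = PFN_DOWN(boot_mem_map.map[i].addr);
		end = PFN_UP(boot_mem_map.map[i].addr +
			     boot_mem_map.map[i].size);

		/* If the candidate bitmap overlaps this reserved region,
		 * move it past the region and re-scan from the start. */
		if (mapstart < end && mapstart + map_pages > start) {
			mapstart = end;
			i = -1;
		}
	}

	return mapstart;
}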
Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14609/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Memories managed through boot_mem_map are generally expected to define
non-overlapping areas. However, if part of a larger memory block is marked
as reserved, it would still be added to the bootmem allocator as an
available block and could end up being overwritten by the allocator.
Prevent this by explicitly marking the memory as reserved if it exists
in the range used by the bootmem allocator.
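A minimal sketch, in the style of bootmem_init(), of the reservation
pass; the helper name and the clamping to bootmem's managed range are
illustrative.

static void __init reserve_bootmem_reserved_regions(void)
{
	int i;

	for (i = 0; i < boot_mem_map.nr_map; i++) {
		phys_addr_t start, end;

		if (boot_mem_map.map[i].type != BOOT_MEM_RESERVED)
			continue;

		start = boot_mem_map.map[i].addr;
		end = start + boot_mem_map.map[i].size;

		/* Ignore entries entirely outside bootmem's range. */
		if (end <= PFN_PHYS(ARCH_PFN_OFFSET) ||
		    start >= PFN_PHYS(max_low_pfn))
			continue;

		/* Clamp to the range bootmem manages before reserving. */
		if (start < PFN_PHYS(ARCH_PFN_OFFSET))
			start = PFN_PHYS(ARCH_PFN_OFFSET);
		if (end > PFN_PHYS(max_low_pfn))
			end = PFN_PHYS(max_low_pfn);

		reserve_bootmem(start, end - start, BOOTMEM_DEFAULT);
	}
}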
Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14608/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
MIPS does not currently define ELF_CORE_COPY_REGS macros and as a result
the generic implementation is used. The generic version attempts to
directly map (struct pt_regs) into (elf_gregset_t), which isn't correct
for MIPS platforms and also triggers a BUG() at runtime in
include/linux/elfcore.h:16 (BUG_ON(sizeof(*elfregs) != sizeof(*regs))).
[ralf@linux-mips.org: Add semicolons to the macro definitions as I do not
apply https://patchwork.linux-mips.org/patch/14588/ for now.]
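A minimal sketch of an arch definition in asm/elf.h, assuming the
existing MIPS elf_dump_regs() helper (which converts a struct pt_regs
into the ELF register layout); the exact macro bodies in the patch may
differ. Note the trailing semicolon mentioned in the note above.

#define ELF_CORE_COPY_REGS(dest, regs)					\
	elf_dump_regs((elf_greg_t *)&(dest), regs);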
Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14586/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
SMP_DUMP was added as a new IPI signal when kexec support was added for
Cavium Octeon CPUs (commit 7aa1c8f47e ("MIPS: kdump: Add support")).
However, the new signal doesn't appear to ever have had a proper handler
added (the octeon_message_functions[] array has an empty handler for it),
and generic IPI handlers now trigger a BUG() on an unhandled signal.
As the signal is unused, remove it completely and replace its only
invocation with a call to smp_call_function().
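A minimal sketch of the replacement, with illustrative function names:
instead of broadcasting the dedicated SMP_DUMP IPI, the single caller
runs the dump handler on every other CPU through the generic cross-call
API.

static void kexec_dump_other_cpu(void *unused)
{
	/* Capture this CPU's state for kdump ... */
}

static void kexec_dump_all_cpus(void)
{
	/* Run the handler on all other CPUs and wait for completion. */
	smp_call_function(kexec_dump_other_cpu, NULL, 1);
}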
[ralf@linux-mips.org: Renumber SMP_ASK_C0COUNT to avoid numbering gaps.]
Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14630/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
There's no reason for the pre-r6 instruction emulation code to be
limited to uniprocessor kernels. We already emulate atomic memory access
instructions in a way that works for SMP systems, and nothing else
should be affected. Remove the artificial limitation, allowing pre-r6
instruction emulation to be used with SMP kernels.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14410/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Commit 7c151d3d5d ("MIPS: Make use of the ERETNC instruction on MIPS
R6") began clearing LLBit during context switches, but for unclear
reasons did so on all systems where it is writable, & did so from a macro
with "software_ll_bit" in its name, which is intended to operate on the
ll_bit variable used by ll/sc emulation for old CPUs.
We do now need to clear LLBit on MIPSr6 systems where we'll use eretnc
to return to userland, but we don't need to do so on MIPSr5 systems with
a writable LLBit.
Move the clear to its own appropriately named macro, do it only for
MIPSr6 systems, & add a comment explaining why.
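A minimal sketch with an illustrative macro name: clearing the hardware
LLBit becomes its own MIPSr6-only step, separate from the macro that
handles the software ll_bit used by ll/sc emulation. On MIPSr6 we return
to userland with eretnc, which does not clear LLBit, so clear it here so
that one task's LL cannot satisfy another task's SC after a context
switch.

#define __clear_hw_ll_bit() do {					\
	if (cpu_has_mips_r6)						\
		write_c0_lladdr(0);					\
} while (0)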
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14409/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The r2_emul_return field in struct thread_info was used in order to take
an alternate codepath when returning to userland, which (besides not
implementing certain features) effectively used the eretnc instruction
in place of eret. The difference is that eretnc doesn't clear LLBit, and
therefore doesn't cause a linked load & store sequence to fail due to
emulation like eret would.
The reason eret would usually be used to clear LLBit is so that after
context switching we ensure that a load performed by one task doesn't
influence another task. However, commit 7c151d3d5d ("MIPS: Make use of
the ERETNC instruction on MIPS R6"), which introduced the r2_emul_return
field and the conditional use of eretnc, also for some reason began
explicitly clearing LLBit during context switches, despite retaining
the use of eret for everything but returns from the pre-r6 instruction
emulation code.
As LLBit is cleared upon context switches anyway, simplify this by using
eretnc unconditionally for MIPSr6 kernels. This allows us to remove the
4 byte r2_emul_return boolean from struct thread_info, simplify the
return to user code in entry.S and avoid the overhead of tracking &
checking state which we don't need.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14408/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
On systems with CM3, we must ensure that the L1 & L2 ECC enables are set
to the same value. The hardware presumes this, & cache corruption can
occur when it is not the case. Support enabling & disabling L2 ECC
checking on CM3 systems, where this is controlled via a GCR, and ensure
that it matches the state of L1 ECC checking. Remove I6400 from the
switch statement that it will no longer hit, which was in any case
incorrect since the L2 ECC enable bit isn't in the CP0 ErrCtl register.
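A minimal sketch, assuming hypothetical read_gcr_err_control() /
write_gcr_err_control() accessors and an L2 ECC enable field named
GCR_ERR_CONTROL_L2_ECC_EN; the real GCR register and field names may
differ. The point is simply that on CM3 the L2 ECC enable is made to
follow whatever the L1 ECC (CP0 ErrCtl) setting is.

static void set_l2_ecc_to_match_l1(bool l1_ecc_enabled)
{
	unsigned long err_ctl;

	if (mips_cm_revision() < CM_REV_CM3)
		return;		/* no CM3, nothing to keep in sync here */

	err_ctl = read_gcr_err_control();
	if (l1_ecc_enabled)
		err_ctl |= GCR_ERR_CONTROL_L2_ECC_EN;
	else
		err_ctl &= ~GCR_ERR_CONTROL_L2_ECC_EN;
	write_gcr_err_control(err_ctl);
}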
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14413/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Code in arch/mips/netlogic/common/irq.c which handles the XLP PIC fails
to build in XLR configurations due to cpu_is_xlp9xx not being defined,
leading to the following build failure:
arch/mips/netlogic/common/irq.c: In function ‘xlp_of_pic_init’:
arch/mips/netlogic/common/irq.c:298:2: error: implicit declaration
of function ‘cpu_is_xlp9xx’ [-Werror=implicit-function-declaration]
if (cpu_is_xlp9xx()) {
^
Although the code was conditional upon CONFIG_OF, which is indirectly
selected by CONFIG_NLM_XLP_BOARD but not by CONFIG_NLM_XLR_BOARD, the
failing combination of an XLR configuration with CONFIG_OF can still be
configured manually or by randconfig.
Fix the build failure by making the affected XLP PIC code conditional
upon CONFIG_CPU_XLP which is used to guard the inclusion of
asm/netlogic/xlp-hal/xlp.h that provides the required cpu_is_xlp9xx
function.
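A minimal sketch of the guard, with the function body abridged: the OF
PIC setup that relies on cpu_is_xlp9xx() only compiles when
xlp-hal/xlp.h is available, i.e. under CONFIG_CPU_XLP.

#ifdef CONFIG_CPU_XLP
static int __init xlp_of_pic_init(struct device_node *node,
				  struct device_node *parent)
{
	if (cpu_is_xlp9xx()) {
		/* XLP9xx-specific PIC node handling ... */
	} else {
		/* XLP8xx/XLP2xx PIC node handling ... */
	}

	return 0;
}
#endif /* CONFIG_CPU_XLP */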
[ralf@linux-mips.org: Fixed up as per Jayachandran's suggestion.]
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Jayachandran C <jchandra@broadcom.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14524/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
When relocatable support for MIPS was merged, there was no support for
an architecture to add a postlink step for vmlinux. This meant that only
invoking a target within the boot directory, such as uImage, caused the
relocations to be inserted into vmlinux. Building just the vmlinux
target would result in a relocatable kernel with no relocation
information present.
Commit fbe6e37dab ("kbuild: add arch specific post-link Makefile")
rectified this situation, so MIPS can now define a postlink step to add
relocation information into vmlinux, and remove the additional steps
tacked onto boot targets.
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Tested-by: Steven J. Hill <steven.hill@cavium.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14554/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
get_frame_info() calculates the offset of the return address within a
stack frame simply by dividing the bottom 16 bits of the instruction,
treated as a signed integer, by the size of a long. Whilst this works
for MIPS32 & MIPS64 ISAs where the sw or sd instructions are used, it's
incorrect for microMIPS where encodings differ. The result is that we
typically completely fail to unwind the stack on microMIPS.
Fix this by adjusting is_ra_save_ins() to calculate the return address
offset, taking the various different encodings into account in the same
place that we consider whether an instruction is storing the ra/$31
register.
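A minimal sketch of the reworked helper for the standard MIPS32/MIPS64
case: it now reports the return address offset through a new out
parameter instead of the caller decoding the immediate, and the microMIPS
encodings (handled analogously in the same function) are omitted here.

static inline int is_ra_save_ins(union mips_instruction *ip, int *poff)
{
	/* sw / sd $ra, offset($sp) */
	if ((ip->i_format.opcode == sw_op || ip->i_format.opcode == sd_op) &&
	    ip->i_format.rs == 29 && ip->i_format.rt == 31) {
		*poff = ip->i_format.simmediate / (int)sizeof(long);
		return 1;
	}

	return 0;
}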
With this we are now able to unwind the stack for kernels targeting the
microMIPS ISA, for example we can produce:
Call Trace:
[<80109e1f>] show_stack+0x63/0x7c
[<8011ea17>] __warn+0x9b/0xac
[<8011ea45>] warn_slowpath_fmt+0x1d/0x20
[<8013fe53>] register_console+0x43/0x314
[<8067c58d>] of_setup_earlycon+0x1dd/0x1ec
[<8067f63f>] early_init_dt_scan_chosen_stdout+0xe7/0xf8
[<8066c115>] do_early_param+0x75/0xac
[<801302f9>] parse_args+0x1dd/0x308
[<8066c459>] parse_early_options+0x25/0x28
[<8066c48b>] parse_early_param+0x2f/0x38
[<8066e8cf>] setup_arch+0x113/0x488
[<8066c4f3>] start_kernel+0x57/0x328
---[ end trace 0000000000000000 ]---
Whereas previously we only produced:
Call Trace:
[<80109e1f>] show_stack+0x63/0x7c
---[ end trace 0000000000000000 ]---
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: 34c2f668d0 ("MIPS: microMIPS: Add unaligned access support.")
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # v3.10+
Patchwork: https://patchwork.linux-mips.org/patch/14532/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
get_frame_info() is meant to iterate over up to the first 128
instructions within a function, but for microMIPS kernels it will not
reach that many instructions unless the function is at least 512 bytes
long, since
we calculate the maximum number of instructions to check by dividing the
function length by the 4 byte size of a union mips_instruction. In
microMIPS kernels this won't do since instructions are variable length.
Fix this by instead checking whether the pointer to the current
instruction has reached the end of the function, and use max_insns as a
simple constant to check the number of iterations against.
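A minimal sketch of the revised loop bound, with the decode body
abridged; the 16-bit/32-bit length test shown is illustrative of how the
pointer advances on microMIPS.

	const int max_insns = 128;
	bool is_mmips = IS_ENABLED(CONFIG_CPU_MICROMIPS);
	union mips_instruction *ip, *ip_end;
	int i;

	ip = (void *)msk_isa16_mode((unsigned long)info->func);
	ip_end = (void *)ip + info->func_size;

	for (i = 0; i < max_insns && ip < ip_end; i++) {
		/* decode *ip: stack moves, ra saves, jumps, ... */

		if (is_mmips && mm_insn_16bit(ip->halfword[0]))
			ip = (void *)ip + 2;	/* 16-bit microMIPS encoding */
		else
			ip = (void *)ip + 4;	/* 32-bit encoding */
	}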
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: 34c2f668d0 ("MIPS: microMIPS: Add unaligned access support.")
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # v3.10+
Patchwork: https://patchwork.linux-mips.org/patch/14530/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
During stack unwinding we call a number of functions to determine what
type of instruction we're looking at. The union mips_instruction pointer
provided to them may be pointing at a 2 byte, but not 4 byte, aligned
address & we thus cannot directly access the 4 byte wide members of the
union mips_instruction. To avoid this is_ra_save_ins() copies the
required half-words of the microMIPS instruction to a correctly aligned
union mips_instruction on the stack, which it can then access safely.
The is_jump_ins() & is_sp_move_ins() functions do not correctly perform
this temporary copy, and instead attempt to directly dereference 4 byte
fields which may be misaligned and lead to an address exception.
Fix this by copying the instruction halfwords to a temporary union
mips_instruction in get_frame_info() such that we can provide a 4 byte
aligned union mips_instruction to the is_*_ins() functions and they do
not need to deal with misalignment themselves.
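A minimal sketch of the copy in get_frame_info(); the exact placement of
the halfwords within the union (and whether the second halfword is read
at all for 16-bit encodings) may differ in the patch itself.

	bool is_mmips = IS_ENABLED(CONFIG_CPU_MICROMIPS);
	union mips_instruction insn, *ip;

	/* inside the instruction scan loop: */
	if (is_mmips) {
		/* ip may be only 2-byte aligned: read halfword by halfword. */
		insn.halfword[0] = ip->halfword[0];
		insn.halfword[1] = ip->halfword[1];
	} else {
		insn.word = ip->word;
	}

	if (is_jump_ins(&insn))
		break;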
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: 34c2f668d0 ("MIPS: microMIPS: Add unaligned access support.")
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # v3.10+
Patchwork: https://patchwork.linux-mips.org/patch/14529/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
get_frame_info() can be called in microMIPS kernels with the ISA bit
already clear. For example this happens when unwind_stack_by_address()
is called because we begin with a PC that has the ISA bit set & subtract
the (odd) offset from the preceding symbol (which does not have the ISA
bit set). Since get_frame_info() unconditionally subtracts 1 from the PC
in microMIPS kernels, it misaligns the address at which it then attempts
to access code, leading to an address error exception.
Fix this by using msk_isa16_mode() to clear the ISA bit, which allows
get_frame_info() to function regardless of whether it is provided with a
PC that has the ISA bit set or not.
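A minimal sketch of the change at the top of get_frame_info(): the ISA
bit is masked off rather than assumed set and subtracted.

	/* was (microMIPS): ip = (void *)((unsigned long)info->func - 1); */
	ip = (void *)msk_isa16_mode((unsigned long)info->func);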
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: 34c2f668d0 ("MIPS: microMIPS: Add unaligned access support.")
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # v3.10+
Patchwork: https://patchwork.linux-mips.org/patch/14528/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
When clearing the .bss section in kernel_entry we do so using LONG_S
instructions, and branch whilst the current write address doesn't equal
the end of the .bss section minus the size of a long integer. The .bss
section always begins at a long-aligned address and we always increment
the write pointer by the size of a long integer - we therefore rely upon
the .bss section ending at a long-aligned address. If this is not the
case then the long-aligned write address can never be equal to the
non-long-aligned end address & we will continue to increment past the
end of the .bss section, attempting to zero the rest of memory.
Despite this requirement that .bss end at a long-aligned address, we pass
0 as the end alignment requirement to the BSS_SECTION macro and thus
don't guarantee any particular alignment, allowing us to hit the error
condition described above.
Fix this by instead passing 8 bytes as the end alignment argument to
the BSS_SECTION macro, ensuring that the end of the .bss section is
always at least long-aligned.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14526/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This hook provides the platform the chance to perform any required
setup before the boot processor switches to the relocated kernel.
The relocated kernel has been copied and fixed up ready for execution
at this point. Secondary CPUs may wish to switch to it early. There
is also the opportunity for the platform to abort jumping to the
relocated kernel if there is anything wrong with the chosen offset.
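A minimal sketch of a default no-op implementation, assuming the hook
receives the relocation offset and that a non-zero return aborts the
switch to the relocated kernel; the exact prototype may differ.

int __weak plat_post_relocation(long offset)
{
	/* Platforms override this, e.g. to point secondary CPUs at the
	 * relocated kernel or to reject an unsuitable offset. */
	return 0;
}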
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Signed-off-by: Steven J. Hill <Steven.Hill@cavium.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14651/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This patch enables KASLR for Octeon systems. The SMP startup code is
arranged so that the secondaries monitor the volatile variable
'octeon_processor_relocated_kernel_entry' for any non-zero value.
The plat_post_relocation() hook is used to set that value to the
kernel entry point of the relocated kernel. The secondary CPUs will
then jump to the new kernel, perform their initialization again
and begin waiting for the boot CPU to start them via the relocated
loop 'octeon_spin_wait_boot'. Inspired by Steven's code from Cavium.
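A minimal sketch of the scheme described above; apart from the names
octeon_processor_relocated_kernel_entry and plat_post_relocation, which
the description mentions, the symbols and structure here are illustrative
(kernel_entry is assumed to be the usual MIPS entry symbol).

/* Written by the boot CPU once the relocated kernel is ready. */
static volatile unsigned long octeon_processor_relocated_kernel_entry;

int plat_post_relocation(long offset)
{
	extern void kernel_entry(void);

	/* Point the secondaries at the relocated kernel's entry point. */
	octeon_processor_relocated_kernel_entry =
		(unsigned long)&kernel_entry + offset;

	return 0;
}

/* Polled by each secondary early in its startup path. */
static void octeon_wait_for_relocated_kernel(void)
{
	void (*entry)(void);

	while (!octeon_processor_relocated_kernel_entry)
		cpu_relax();

	entry = (void (*)(void))octeon_processor_relocated_kernel_entry;
	entry();	/* re-run startup in the relocated kernel */
}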
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Signed-off-by: Steven J. Hill <steven.hill@cavium.com>
Acked-by: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14669/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Add in the function needed for Octeon platforms to support KASLR.
Signed-off-by: Steven J. Hill <Steven.Hill@cavium.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>