Add initialisation for the Memory Accessibility Attribute Registers
(MAARs). Generic code cannot know the platform-specific requirements
with regard to
speculative accesses, so it simply calls a platform_maar_init function
which platforms with MAARs are expected to implement by calling the
provided write_maar_pair function & returning the number of MAAR pairs
used. A weak default implementation will simply use no MAAR pairs. Any
present but unused MAAR pairs are then marked invalid, effectively
disabling them.
The end result of this patch is that MAARs are all marked invalid until
platforms implement the platform_maar_init function.
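For illustration, a platform implementation might look like the
following sketch. The parameter list of write_maar_pair, the address
range and the MIPS_MAAR_S attribute are illustrative, not taken from
any real platform:

/* Hypothetical platform: allow speculative accesses to the first
 * 256MB of physical memory using a single MAAR pair. */
unsigned platform_maar_init(unsigned num_pairs)
{
    if (num_pairs < 1)
        return 0;

    write_maar_pair(0, 0x00000000, 0x0fffffff, MIPS_MAAR_S);
    return 1;    /* one pair used; the rest will be invalidated */
}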
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7331/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Add accessor macros for the Memory Accessibility Attribute Registers
(MAARs), the bits contained within the MAARs & the Config5.MRP bit
indicating their presence. The only current use of the MAARs is to
enable speculative accesses to regions of memory. Besides the potential
performance benefits of speculative accesses, they are a requirement
for the P5600 core to handle non-128b-aligned MSA vector loads & stores
rather than generating an address error.
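For illustration, presence can be probed & a pair invalidated roughly
as follows. This is a sketch assuming conventional c0 accessor naming;
MAAR pair 0 consists of MAARs 0 & 1:

static void invalidate_maar_pair0(void)
{
    unsigned i;

    if (!(read_c0_config5() & MIPS_CONF5_MRP))
        return;    /* no MAARs present */

    for (i = 0; i < 2; i++) {
        write_c0_maari(i);    /* select MAAR i */
        back_to_back_c0_hazard();
        write_c0_maar(0);     /* V bit clear: MAAR invalid */
    }
}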
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7329/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
In light of commit 16f77de82f (Revert "MIPS: Save/restore MSA
context around signals"), the MSA support in the kernel is incomplete.
Until the replacement for the former sigcontext changes is agreed upon
and in tree, mark MSA experimental & disable it by default.
MSA is only implemented by one CPU supported by the kernel, the P5600.
The P5600 is a 32-bit core, and thus MSA can only be used when the
experimental CONFIG_MIPS_O32_FP64_SUPPORT option is enabled. Therefore
MSA is only being used in experimental settings anyway and this change
doesn't actually make any difference beyond clarifying the state of
MSA support.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7311/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The TIF_MSA_CTX_LIVE flag (indicating that a task has MSA context which
needs to be preserved) was being cleared in start_thread, but the
TIF_USEDMSA flag (indicating that a task has used MSA in this timeslice)
was not. In copy_thread neither flag was cleared, but both need to be.
Without clearing these flags the kernel will proceed to attempt to save
MSA context when the task is context switched out, and if the task had
not used MSA in the meantime then it will fail because MSA or the FPU
is disabled. The end result is typically:
do_cpu invoked from kernel context![#1]:
CPU: 0 PID: 99 Comm: sh Not tainted 3.16.0-rc4-00025-g6dc9476-dirty #88
task: 8f23dc60 ti: 8f1d8000 task.ti: 8f1d8000
...
Call Trace:
[<8010edbc>] resume+0x5c/0x280
[<80481e0c>] __schedule+0x370/0x800
[<80104838>] work_resched+0x8/0x2c
Fix by consistently clearing both flags in both functions.
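The shape of the fix, as a sketch (clear_used_msa() is a hypothetical
wrapper for the purpose of this illustration, not an actual kernel
helper):

/* Task no longer has MSA context to preserve or restore. */
static inline void clear_used_msa(struct task_struct *t)
{
    clear_tsk_thread_flag(t, TIF_USEDMSA);
    clear_tsk_thread_flag(t, TIF_MSA_CTX_LIVE);
}

with start_thread() applying this to current and copy_thread() to the
newly created task.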
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7309/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The MSA specification upon first read appears to suggest that it is safe
to perform vector loads & stores with arbitrary alignment. However it
leaves provision for "address-dependent exceptions"... Align the vector
context to a 16 byte boundary to ensure that the kernel cannot cause any
such exceptions.
Note that the fpu field of struct thread_struct was already at a 16 byte
boundary within the struct, the introduction of FPU_ALIGN simply makes
the requirement explicit. The only part of this impacting the generated
kernel binary is ARCH_MIN_TASKALIGN.
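The change is essentially the following sketch (surrounding fields
elided; FPU_ALIGN per this patch, __aligned() being the generic kernel
attribute macro):

#define FPU_ALIGN    __aligned(16)

struct thread_struct {
    /* ... other fields ... */
    struct mips_fpu_struct fpu FPU_ALIGN;  /* 16-byte aligned FP/vector context */
    /* ... */
};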
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7308/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The kernel relies upon MSA being disabled when a task begins running,
so that it can initialise or restore context in response to the
resulting MSA disabled exception. Previously the state of MSA following
boot was left as it was before the kernel ran, where MSA could
potentially have been enabled. Explicitly disable it during boot to
prevent any problems.
As a nice side effect the code reads a little better too.
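Concretely, the disable boils down to clearing Config5.MSAEn, along
the lines of:

/* Sketch, using the standard Config5 accessors & MSAEn bit. */
if (cpu_has_msa)
    clear_c0_config5(MIPS_CONF5_MSAEN);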
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7306/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Commit d96cc3d1ec "MIPS: Add microMIPS MSA support." attempted to use
the value of a macro within an inline asm statement but instead emitted
a comment leading to the cfcmsa & ctcmsa instructions being omitted. Fix
that by passing CFC_MSA_INSN & CTC_MSA_INSN as arguments to the asm
statements.
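The fixed accessors then take roughly the following shape (simplified;
the control-register field shift and encoding details are illustrative,
not the exact kernel code):

static inline unsigned int read_msa_csr(void)
{
    unsigned int reg;

    __asm__ __volatile__(
    "    .set    push\n"
    "    .set    noat\n"
    "    .insn\n"
    "    .word   %1 | (1 << 11)\n"    /* cfcmsa $1, MSACSR */
    "    move    %0, $1\n"
    "    .set    pop\n"
    : "=r"(reg)
    : "i"(CFC_MSA_INSN));

    return reg;
}

The key point is the "i" operand: the assembler now sees the numeric
value of CFC_MSA_INSN rather than an unexpanded macro name.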
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7305/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
If a task does not execute scalar FP instructions prior to using MSA
then the flags indicating that the task has live MSA context were not
being set. The upper 64b of each vector register would then be lost
upon the task's first context switch after using MSA.
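The fix, sketched: in the MSA disabled exception path, mark the context
live even when the task never executed scalar FP first:

/* Both flags must be set once MSA context is initialised. */
set_thread_flag(TIF_USEDMSA);
set_thread_flag(TIF_MSA_CTX_LIVE);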
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7500/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
When a task first makes use of MSA we need to ensure that the upper
64b of the vector registers are set to some value such that no
information can be leaked to it from the previous task to use MSA
context on the CPU. The architecture formerly specified that these
bits would be cleared to 0 when a scalar FP instruction wrote to the
aliased FP registers, which would have implicitly handled this as the
kernel restored scalar FP context. However more recent versions of the
specification now state that the value of the bits in such cases is
unpredictable. Initialise them explicitly to be sure, and set all the
bits to 1 rather than 0 for consistency with the least significant
64b.
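A sketch of the initialisation (set_fpr64 here indexes the 64b halves
of a vector register; treat the exact helper as illustrative):

static void init_msa_upper_sketch(struct task_struct *t)
{
    unsigned int i;

    for (i = 0; i < NUM_FPU_REGS; i++)
        set_fpr64(&t->thread.fpu.fpr[i], 1, ~0ull);  /* upper 64b all ones */
}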
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7497/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The kernel depends upon MSA never being enabled when the FPU is not, a
condition which is currently violated in a few places (whilst saving
sigcontext, following mips_cpu_save). Catch all the problem cases by
disabling MSA in lose_fpu, after saving context if necessary.
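Sketch of lose_fpu() with the fix applied (simplified; helper names as
used elsewhere in this series):

static inline void lose_fpu(int save)
{
    preempt_disable();
    if (is_msa_enabled()) {
        if (save)
            save_msa(current);
        disable_msa();               /* the fix: never leave MSA on */
        clear_thread_flag(TIF_USEDMSA);
    } else if (is_fpu_owner()) {
        if (save)
            _save_fp(current);
        __disable_fpu();
    }
    clear_thread_flag(TIF_USEDFPU);
    preempt_enable();
}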
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7302/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Switching the vector context implicitly saves & restores the state of
the aliased scalar FP data registers, however the scalar FP control
& status register is distinct from the MSA control & status register.
In order to allow scalar FP to function correctly in programs using
MSA, the scalar CSR needs to be saved & restored along with the MSA
vector context.
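The pairing then looks roughly like this (a sketch assuming the CP1
accessors; a matching write accessor is assumed for the restore side):

/* save: vector context, then the scalar FCSR */
save_msa(t);
t->thread.fpu.fcr31 = read_32bit_cp1_register(CP1_STATUS);

/* restore: scalar FCSR, then the vector context */
write_32bit_cp1_register(CP1_STATUS, t->thread.fpu.fcr31);
restore_msa(t);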
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7301/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The GICBIS macro could update the GIC registers incorrectly, depending
on the data value passed in:
* Bits were only OR'd into the register data, so register fields could
not be cleared.
* Bits were OR'd into the register data without masking the data to the
correct field width, corrupting adjacent bits.
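A corrected read-modify-write, sketched with the existing
GICREAD/GICWRITE accessors:

#define GICBIS(reg, mask, bits)            \
do {                                       \
    unsigned int data;                     \
    GICREAD(reg, data);                    \
    data &= ~(mask);            /* clear the field */          \
    data |= ((bits) & (mask));  /* OR in only in-range bits */ \
    GICWRITE(reg, data);                   \
} while (0)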
Signed-off-by: Jeffrey Deans <jeffrey.deans@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7378/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Move most of the functionality of gic_get_int() into a new function
gic_get_int_mask() which takes a bitmask of interrupts in which the
caller is interested, and returns the subset which are pending for the
current CPU.
This allows CP0 IRQ dispatch routines to check only the GIC interrupts
which are routed to a particular CPU interrupt input.
gic_get_int() is reimplemented using gic_get_int_mask() and is retained
for use by any platforms for which gic_get_int() is sufficient.
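For illustration, the reimplementation is essentially:

/* Sketch: ask for every interrupt, return the first pending one. */
unsigned int gic_get_int(void)
{
    DECLARE_BITMAP(interrupts, GIC_NUM_INTRS);

    bitmap_fill(interrupts, GIC_NUM_INTRS);
    gic_get_int_mask(interrupts, interrupts);

    return find_first_bit(interrupts, GIC_NUM_INTRS);
}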
Signed-off-by: Jeffrey Deans <jeffrey.deans@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7376/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
irq-gic.c:gic_get_int() masks out interrupts from the pending set which
aren’t in the pcpu_mask. Only interrupts marked with GIC_FLAG_IPI were
set in pcpu_mask, meaning that peripheral interrupts also had to be
marked as IPIs. Remove the use of GIC_FLAG_IPI and allow the flags
member of struct gic_intr_map to be zero.
Signed-off-by: Jeffrey Deans <jeffrey.deans@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7374/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Detect whether the core supports unique exception codes for the
Read-Inhibit and Execute-Inhibit exceptions and set the option
accordingly. RI/XI exception support is detected by setting bit 27
(IEC) of the PageGrain C0 register and reading the register back to
verify the bit remains set.
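A sketch of the probe (PG_IEC standing for bit 27 of PageGrain; the
option-bit name below is illustrative):

write_c0_pagegrain(read_c0_pagegrain() | PG_IEC);
if (read_c0_pagegrain() & PG_IEC)
    c->options |= MIPS_CPU_RIXIEX;  /* unique RI/XI exception codes */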
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7340/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The Hardware Page Table Walker (HTW) aims to speed up TLB refill
exceptions by handling them at the hardware level instead of invoking
a software TLB refill handler. However, a TLB refill exception can
still be thrown in certain cases, such as synchronous exceptions, or
address translation or memory errors during the HTW operation. As a
result, the HTW must not be considered a complete replacement for the
software TLB refill handler, but rather a fast path for it.
For HTW to work, the PWBase register must contain the task's page
global directory address so the HTW will kick in on TLB refill
exceptions.
Because the HTW is a separate engine embedded deep in the CPU
pipeline, we need to restart the HTW every time a PTE changes, to
avoid the HTW fetching a stale entry from the page tables. It's also
necessary to restart the HTW on context switches to prevent it from
fetching a page from the previous process. Finally, since the HTW uses
the EntryHi register to write translations into the TLB, it's
necessary to stop the HTW whenever EntryHi changes (e.g. for TLB probe
operations) and re-enable it afterwards.
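Sketch of the control points described above (helper names as
introduced by this series; treat the details as illustrative):

/* On context switch: point the walker at the new task's PGD. */
static inline void htw_set_pwbase(unsigned long pgd)
{
    if (cpu_has_htw) {
        write_c0_pwbase(pgd);
        back_to_back_c0_hazard();
    }
}

and around anything that clobbers EntryHi:

htw_stop();
tlb_probe();    /* or any other EntryHi-clobbering operation */
htw_start();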
== Performance ==
The following trivial test was used to measure the performance of the
HTW. Using the same root filesystem, the following command was used
to measure the number of tlb refill handler executions with and
without (using 'nohtw' kernel parameter) HTW support. The kernel was
modified to use a scratch register as a counter for the TLB refill
exceptions.
find /usr -type f -exec ls -lh {} \;
HTW Enabled:
TLB refill exceptions: 12306
HTW Disabled:
TLB refill exceptions: 17805
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: https://patchwork.linux-mips.org/patch/7336/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Long integers, which are 4 bytes on MIPS32, can't hold new CPU options
anymore, so the type of the 'options' member is changed to unsigned
long long, which allows 32 more CPU options to be defined for MIPS32.
Also, re-arrange the 'options' struct member to avoid a potential
4-byte alignment gap in the middle of the struct.
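With a 64-bit options word, option bits beyond 31 can then be defined
in the usual way, e.g. (the option name & bit here are purely
hypothetical):

#define MIPS_CPU_EXAMPLE  0x100000000ull  /* bit 32: hypothetical option */
#define cpu_has_example   (cpu_data[0].options & MIPS_CPU_EXAMPLE)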
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7324/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Add cases in perf_event_mipsxx.c for CPU_P5600. All the event numbers
listed for proAptiv also apply to P5600, so we use mipsxxcore_event_map2
and mipsxxcore_cache_map2 too, but the P5600 has 8-bit event numbers so
bit 8 (256) of the user ABI config is used for the parity bit (to
specify odd/even counter events).
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7242/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
In mipsxx_pmu_map_raw_event(), set event_id to base_id after the cpu
type conditional code to allow that code to override the base_id to use
more bits from the config and a higher bit for parity.
This will allow cores with up to 512 events between all even/odd
counters (an 8-bit event id) such as P5600 to use bit 8 for parity.
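Simplified sketch of the reordered logic (field & constant names
follow perf_event_mipsxx.c; the P5600 widths are per the previous
patch):

static struct mips_perf_event raw_event;

static const struct mips_perf_event *map_raw_event_sketch(u64 config)
{
    unsigned int raw_id = config & 0xff;
    unsigned int base_id = raw_id & 0x7f;

    switch (current_cpu_type()) {
    case CPU_P5600:
        /* 8-bit event numbers: bit 8 of config is the parity bit */
        raw_id = config & 0x1ff;
        base_id = raw_id & 0xff;
        raw_event.cntr_mask = raw_id > 0xff ? CNTR_ODD : CNTR_EVEN;
        break;
    default:
        raw_event.cntr_mask = raw_id > 0x7f ? CNTR_ODD : CNTR_EVEN;
        break;
    }

    raw_event.event_id = base_id;    /* now set after the switch */
    return &raw_event;
}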
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7243/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The struct user definition in this file is not used anywhere (the ELF
core dumper does not use that format). Therefore, remove the header and
instead enable the asm-generic user.h, which is an empty header to
satisfy the few generic headers that still try to include user.h.
Signed-off-by: Alex Smith <alex@alex-smith.me.uk>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7459/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
In uapi/asm/ptrace.h, a user version of pt_regs is defined wrapped in
ifndef __KERNEL__. This structure definition does not match anything
used by any kernel API, in particular it does not match the format used
by PTRACE_{GET,SET}REGS.
Therefore, replace the structure definition with one matching what is
used by PTRACE_{GET,SET}REGS. The format used by these is the same for
both 32-bit and 64-bit.
Also, change the implementation of PTRACE_{GET,SET}REGS to use this new
structure definition. The structure is renamed to user_pt_regs when
__KERNEL__ is defined to avoid conflicts with the kernel's own pt_regs.
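The new structure, per the uapi header (__u64 throughout, hence the
identical 32-bit and 64-bit layout):

struct user_pt_regs {
    __u64 regs[32];
    __u64 lo;
    __u64 hi;
    __u64 cp0_epc;
    __u64 cp0_badvaddr;
    __u64 cp0_status;
    __u64 cp0_cause;
};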
Signed-off-by: Alex Smith <alex@alex-smith.me.uk>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7457/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
On 32-bit/O32, pt_regs has a padding area at the beginning into which the
syscall arguments passed via the user stack are copied. 4 arguments
totalling 16 bytes are copied at an offset of 16 bytes into this area;
however, the area is only 24 bytes long. This means the last 2
arguments overwrite
pt_regs->regs[{0,1}].
If a syscall function returns an error, handle_sys stores the original
syscall number in pt_regs->regs[0] for syscall restart. signal.c
checks whether regs[0] is non-zero; if it is, it will check whether
the syscall
return value is one of the ERESTART* codes to see if it must be
restarted.
Should a syscall be made that results in a non-zero value being copied
off the user stack into regs[0], and then returns a positive (non-error)
value that matches one of the ERESTART* error codes, this can be mistaken
for requiring a syscall restart.
While the possibility for this to occur has always existed, it is made
much more likely to occur by commit 46e12c07b3 ("MIPS: O32 / 32-bit:
Always copy 4 stack arguments."), since now every syscall will copy 4
arguments and overwrite regs[0], rather than just those with 7 or 8
arguments.
Since that commit, booting Debian under a 32-bit MIPS kernel almost
always results in a hang early in boot, due to a wait4 syscall returning
a PID that matches one of the ERESTART* codes, which then causes an
incorrect restart of the syscall.
The problem is fixed by increasing the size of the padding area so that
arguments copied off the stack will not overwrite pt_regs->regs[{0,1}].
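The shape of the fix, as a sketch (pad0 widened; surrounding fields
elided):

struct pt_regs {
#ifdef CONFIG_32BIT
    /* Argument save area: was 6 words (24 bytes); 8 words give the
     * 16 bytes of arguments copied at offset 16 room to fit. */
    unsigned long pad0[8];
#endif
    unsigned long regs[32];
    /* ... CP0 & special registers follow ... */
};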
Signed-off-by: Alex Smith <alex.smith@imgtec.com>
Cc: <stable@vger.kernel.org> # v3.13+
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7454/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Commit 1d40cfcd34
Author: Ralf Baechle <ralf@linux-mips.org>
Date: Fri Jul 15 15:23:23 2005 +0000
Avoid SMP cacheflushes. This is a minor optimization of startup but
will also avoid smp_call_function from doing stupid things when called
from a CPU that is not yet marked online.
missed an appropriate cache flush of the TLB refill handler because at
that time it was at the fixed location CAC_BASE. Years later, the
refill handler at the EBASE vector is no longer at that location: it
can be allocated elsewhere in memory and needs an I-cache sync like
the other TLB exception vectors.
Besides that, a new function, local_flush_icache_range(), is
introduced to avoid SMP cache flushes.
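The flush then amounts to something like the following (the length is
illustrative; ebase is the existing exception vector base):

/* Sync the local I-cache with the freshly copied refill handler. */
local_flush_icache_range(ebase, ebase + 0x400);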
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: paul.gortmaker@windriver.com
Cc: jchandra@broadcom.com
Cc: linux-kernel@vger.kernel.org
Cc: david.daney@cavium.com
Patchwork: https://patchwork.linux-mips.org/patch/7312/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>