A MIPS-specific csum_and_copy_from_user function is necessary because
the generic one from include/net/checksum.h will not work for EVA:
it links against symbols from lib/checksum.c, which are not EVA aware.
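For reference, such a wrapper could take roughly the following shape; the
__csum_partial_copy_from_user helper name and the exact signature are
assumptions of this sketch, not a description of the final code:

  static inline __wsum
  csum_and_copy_from_user(const void __user *src, void *dst, int len,
                          __wsum sum, int *err_ptr)
  {
          might_fault();
          if (access_ok(VERIFY_READ, src, len))
                  /* EVA-aware copy-and-checksum helper (assumed name) */
                  return __csum_partial_copy_from_user(src, dst, len,
                                                       sum, err_ptr);
          if (len)
                  *err_ptr = -EFAULT;
          return sum;
  }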
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
In EVA mode, different instructions need to be used to read from and
write to kernel and userland address space. In non-EVA mode, there is
no functional difference. The current address limit is checked to
decide which type of operation will be performed.
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
The 'copy_user' symbol can be used to copy either from or to
userland, so we now use two different symbols for these two
operations. This makes no difference in the existing code,
but when the core is operating in EVA mode, different instructions
need to be used to read from and write to the userland address space.
The old function has also been renamed to 'copy_kernel' to denote
that it is suitable for copying data to and from kernel space.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
The str*_user functions are used to securely access NULL terminated
strings from userland. Therefore, it's necessary to use the appropriate
EVA function. However, if the string is in kernel space, the normal
instructions are used to access it. The __str*_kernel_asm and
__str*_user_asm symbols are the same for non-EVA mode so there is no
functional change for the non-EVA kernels.
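A conceptual sketch of the selection is shown below; the real code
reaches the __str*_kernel_asm/__str*_user_asm symbols through inline
assembly, and the direct calls here only illustrate the branch:

  static inline long strncpy_from_user_sketch(char *to,
                                              const char __user *from,
                                              long len)
  {
          /* Address limit covers kernel space: use normal instructions. */
          if (segment_eq(get_fs(), get_ds()))
                  return __strncpy_from_kernel_asm(to, from, len);

          /* Userland string: use the EVA-aware variant. */
          might_fault();
          return __strncpy_from_user_asm(to, from, len);
  }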
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Use the EVA specific functions from memcpy.S to perform
userspace operations. When get_fs() == get_ds() the usual load/store
instructions are used because the destination address is located in
the kernel address space region. Otherwise, the EVA-specific load/store
instructions are used, which go through the TLB to perform the
virtual-to-physical translation for the userspace address.
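The path selection can be sketched as follows; direct calls stand in for
the inline-assembly dispatch to the memcpy.S entry points, and the
symbol names are only illustrative:

  static inline unsigned long
  copy_to_user_sketch(void __user *to, const void *from, unsigned long n)
  {
          /* get_fs() == get_ds(): destination is in kernel space,
           * so the usual load/store sequences are sufficient. */
          if (segment_eq(get_fs(), get_ds()))
                  return __copy_kernel(to, from, n);

          /* Userspace destination: EVA load/store instructions go
           * through the TLB using the user mapping. */
          might_fault();
          return __copy_user(to, from, n);
  }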
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
The {get,put}_user_asm functions can be used to access data in either
the kernel or the user address space, so rename them to avoid
confusion.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Use the EVA instruction wrappers from asm.h to perform
read/write operations on userland memory.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
ulb, ulh, ulw are macros which emulate unaligned access for MIPS.
However, no such macros exist for EVA mode, so the only way to do
EVA unaligned accesses is in the ADE exception handler. As a result,
these macros are disabled for EVA.
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Similar to the __get_user_* functions, move common code to
__put_user_*_common so it can be shared among similar users.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
In preparation for EVA support, an instruction argument is needed
for the __get_user_asm{,_ll32} functions to allow instruction overrides in
EVA mode. Even though EVA is only available on 32-bit MIPS, both the
32-bit and 64-bit codepaths are changed for consistency.
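A stripped-down sketch of the parameterisation (error handling and the
exception table are omitted, and the _sketch names make clear this is
only an illustration):

  #define __get_user_asm_sketch(val, insn, addr)                \
          __asm__ __volatile__(                                 \
          "       " insn "        %0, %1\n"                     \
          : "=r" (val)                                          \
          : "m" (*(addr)))

  /* Non-EVA callers pass the usual instruction ... */
  #define __get_user_word(val, addr)      __get_user_asm_sketch(val, "lw", addr)
  /* ... while EVA callers substitute the EVA variant. */
  #define __get_user_word_eva(val, addr)  __get_user_asm_sketch(val, "lwe", addr)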
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Use the LLE/SCE instructions, which perform the address translation
using the user mapping, for userspace accesses when EVA is enabled.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
EVA uses specific instructions for accessing user memory.
Instead of polluting the kernel with numerous #ifdef CONFIG_EVA
blocks, we add wrappers for all the instructions that need special
handling when EVA is enabled.
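The wrappers boil down to something like the following (the macro names
and exact form are assumptions; the real header covers the remaining
load/store variants as well):

  #ifdef CONFIG_EVA
  # define user_lw(reg, addr)     lwe     reg, addr
  # define user_sw(reg, addr)     swe     reg, addr
  #else
  # define user_lw(reg, addr)     lw      reg, addr
  # define user_sw(reg, addr)     sw      reg, addr
  #endif

Assembly sources that are run through the C preprocessor can then use
user_lw/user_sw unconditionally.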
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
EVA can use the PREFE instruction to perform the virtual address
translation using the user mapping of the address rather than the
kernel mapping.
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
This patch extends sigcontext in order to hold the most significant 64
bits of each vector register in addition to the MSA control & status
register. The least significant 64 bits are already saved as the scalar
FP context. This makes things a little awkward since the least & most
significant 64 bits of each vector register are not contiguous in
memory. Thus the copy_u & insert instructions are used to transfer the
values of the most significant 64 bits via GP registers.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6533/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This patch adds support for context switching the MSA vector registers.
These 128 bit vector registers are aliased with the FP registers - an
FP register accesses the least significant bits of the vector register
with which it is aliased (ie. the register with the same index). Due to
both this & the requirement that the scalar FPU must be 64-bit (FR=1) if
enabled at the same time as MSA, the kernel will enable MSA & scalar FP
together for tasks which use MSA. If we restore the MSA vector
context then we might as well enable the scalar FPU since the reason it
was left disabled was to allow for lazy FP context restoring - but we
just restored the FP context as it's a subset of the vector context. If
we restore the FP context and have previously used MSA then we have to
restore the whole vector context anyway (see comment in
enable_restore_fp_context for details) so similarly we might as well
enable MSA.
Thus if a task does not use MSA then it will continue to behave as
without this patch - the scalar FP context will be saved & restored as
usual. But if a task executes an MSA instruction then it will save &
restore the vector context forever more.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6431/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This patch adds support for probing the MSAP bit within the Config3
register in order to detect the presence of the MSA ASE. Presence of the
ASE will be indicated in /proc/cpuinfo. The value of the MSA
implementation register will be displayed at boot to aid debugging and
verification of a correct setup, as is done for the FPU.
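The probe amounts to checking a single Config3 bit; a sketch of the
idea, with the constant names taken as assumptions, is:

  static inline void probe_msa_sketch(struct cpuinfo_mips *c)
  {
          unsigned int config3 = read_c0_config3();

          if (config3 & MIPS_CONF3_MSA)           /* Config3.MSAP */
                  c->ases |= MIPS_ASE_MSA;        /* reported in /proc/cpuinfo */
  }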
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6430/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This patch introduces definitions for the MSA control registers and
functions which allow access to both the control & vector registers. If
the toolchain being used to build the kernel includes support for MSA
then this patch will make use of that support & use MSA instructions
directly. However, toolchain support for MSA is very new & far from a
point where it can be reasonably expected that everyone building the
kernel uses a toolchain with support. Thus fallbacks using .word
assembler directives are also provided for now as a temporary measure.
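For illustration, reading the MSA control & status register could look
roughly like this when the assembler understands MSA; the
TOOLCHAIN_SUPPORTS_MSA symbol and the fallback structure are
assumptions of this sketch:

  static inline unsigned int read_msa_csr_sketch(void)
  {
          unsigned int reg;
  #ifdef TOOLCHAIN_SUPPORTS_MSA
          __asm__ __volatile__(
          "       .set    push\n"
          "       .set    msa\n"
          "       cfcmsa  %0, $1\n"       /* $1 is MSACSR */
          "       .set    pop\n"
          : "=r" (reg));
  #else
          /* A hand-assembled .word encoding of cfcmsa would go here. */
          reg = 0;
  #endif
          return reg;
  }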
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6429/
Patchwork: https://patchwork.linux-mips.org/patch/6607/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
When saving or restoring scalar FP context we want to access the least
significant 64 bits of each FP register. When the FP registers are 64
bits wide, that is trivially the start of the register's value in memory.
However when the FP registers are wider this equivalence will no longer
be true for big endian systems. Define a new set of offset macros for
the least significant 64 bits of each saved FP register within thread
context, and make use of them when saving and restoring scalar FP
context.
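The offset rule can be illustrated as follows; the macro and constant
names are invented for the example, and only the big-endian versus
little-endian distinction is the point:

  #define FPR_WIDTH_BYTES 16      /* e.g. 128-bit registers once MSA is in use */

  #ifdef __MIPSEB__
  /* Big endian: the least significant 64 bits are stored last. */
  # define FPR_LS64_OFFSET(idx) ((idx) * FPR_WIDTH_BYTES + (FPR_WIDTH_BYTES - 8))
  #else
  /* Little endian: the least significant 64 bits are stored first. */
  # define FPR_LS64_OFFSET(idx) ((idx) * FPR_WIDTH_BYTES)
  #endif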
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6428/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This patch replaces the fpureg_t typedef with a "union fpureg" enabling
easier access to 32 & 64 bit values. This allows the access macros used
in cp1emu.c to be simplified somewhat. It will also make it easier to
expand the width of the FP registers as will be done in a future
patch in order to support the 128 bit registers introduced with MSA.
No behavioural change is intended by this patch.
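The union takes roughly the following shape (FPU_REG_WIDTH remains 64
until the later MSA widening):

  union fpureg {
          __u32   val32[FPU_REG_WIDTH / 32];
          __u64   val64[FPU_REG_WIDTH / 64];
  };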
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: Qais Yousef <qais.yousef@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6532/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
When userland uses syscall() to perform an indirect system call,
the actual system call that needs to be checked by the filter
is passed in the first argument. The kernel code needs to handle this
case by looking at the original syscall number in v0 and, if it's
NR_syscall, examining the first argument to
identify the real system call that will be executed.
Similarly, we need to 'virtually' shift the syscall() arguments
so the syscall_get_arguments() function can fetch the correct
arguments for the indirect system call.
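A sketch of the number resolution, using the O32 register conventions
described above:

  static inline long mips_syscall_nr_sketch(struct pt_regs *regs)
  {
          long nr = regs->regs[2];        /* v0: original syscall number */

          if (nr == __NR_syscall)         /* indirect syscall(2) */
                  nr = regs->regs[4];     /* real number is in a0 */
          return nr;
  }

The remaining arguments are then fetched starting from a1 rather than
a0, which is the 'virtual' shift mentioned above.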
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6404/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This effectively renames __syscall_get_arch to syscall_get_arch
and implements a compatible interface for the seccomp API.
The seccomp code (kernel/seccomp.c) expects a syscall_get_arch
function to be defined for every architecture, so we drop
the leading underscores from the existing function.
This also makes use of the 'task' argument to determine the type of
the process instead of assuming the process has the same
characteristics as the kernel it's running on.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6398/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The vpe_id field of struct cpuinfo_mips is only present when one of
CONFIG_MIPS_MT_{SMP,SMTC} is enabled. That means that any code which
accesses it but may be compiled without MT is currently forced to use
an #ifdef. Instead this patch provides an accessor macro, #ifdef'd
appropriately, to prevent further #ifdefs elsewhere.
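The accessor amounts to something like the following (the macro name is
an assumption of this sketch):

  #if defined(CONFIG_MIPS_MT_SMP) || defined(CONFIG_MIPS_MT_SMTC)
  # define cpu_vpe_id(cpuinfo)    ((cpuinfo)->vpe_id)
  #else
  # define cpu_vpe_id(cpuinfo)    0
  #endif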
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6646/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
For non-mipsr2 processors, the local_irq_disable sequence contains an
mfc0/mtc0 pair with instructions in between. With preemption enabled,
this sequence may be preempted, so that a stale value of CP0_STATUS is
written back when the mtc0 instruction executes. This commit avoids
this scenario by incrementing the preempt count before the mfc0 and
decrementing it after the mtc0.
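A C-level illustration of the fixed sequence (the real code lives in
inline assembly; the helpers below are the standard cop0 accessors):

  static inline void non_r2_local_irq_disable_sketch(void)
  {
          unsigned long status;

          preempt_disable();
          status = read_c0_status();              /* mfc0 */
          write_c0_status(status & ~ST0_IE);      /* mtc0: clear IE */
          irq_disable_hazard();
          preempt_enable();
  }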
[ralf@linux-mips.org: This patch sorts out the parts that were missed
by e97c5b6098 [MIPS: Make irqflags.h functions preempt-safe for non-mipsr2
cpus.] I also re-enabled the inclusion of <asm/asm-offsets.h> at the top
of <asm/asmmacro.h>.]
Signed-off-by: Jim Quinlan <jim2101024@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: cernekee@gmail.com
Patchwork: https://patchwork.linux-mips.org/patch/6164/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Due to a name collision in the ftrace safe_load and safe_store macros,
these macros cannot take expressions as operands.
For example, the compiler will complain about a macro call like the following:
safe_store_code(new_code2, ip + 4, faulted);
arch/mips/include/asm/ftrace.h:61:6: note: in definition of macro 'safe_store'
: [dst] "r" (dst), [src] "r" (src)\
^
arch/mips/kernel/ftrace.c:118:2: note: in expansion of macro 'safe_store_code'
safe_store_code(new_code2, ip + 4, faulted);
^
arch/mips/kernel/ftrace.c:118:32: error: undefined named operand 'ip + 4'
safe_store_code(new_code2, ip + 4, faulted);
^
arch/mips/include/asm/ftrace.h:61:6: note: in definition of macro 'safe_store'
: [dst] "r" (dst), [src] "r" (src)\
^
arch/mips/kernel/ftrace.c:118:2: note: in expansion of macro 'safe_store_code'
safe_store_code(new_code2, ip + 4, faulted);
^
This build error is triggered by a4671094 [MIPS: ftrace: Fix icache flush
range error]. Tweak variable naming in those macros to allow flexible
operands.
Signed-off-by: Viller Hsiao <villerhsiao@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: rostedt@goodmis.org
Cc: fweisbec@gmail.com
Cc: mingo@redhat.com
Cc: Qais.Yousef@imgtec.com
Patchwork: https://patchwork.linux-mips.org/patch/6622/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The ability to read hardware registers from userland with the RDHWR
instruction should depend upon the corresponding bit of the HWREna
register being set, otherwise a reserved instruction exception should be
generated.
However KVM's current emulation ignores the guest's HWREna and always
emulates RDHWR instructions even if the guest OS has disallowed them.
Therefore rework the RDHWR emulation code to check for privilege or the
corresponding bit in the guest HWREna register. Also remove the #if 0 case
for the UserLocal register. I presume it was there for debug purposes
but it seems unnecessary now that the guest can control whether it
causes a guest exception.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sanjay Lal <sanjayl@kymasys.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The syscall_get_arguments function expects the arguments to be copied
to the '*args' argument, but a local variable was used instead to hold
the system call arguments. As a result, they were
never passed to the filter and any filter testing the system call
arguments would fail. This is fixed by passing the '*args' variable
as the destination memory for the system call arguments.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6402/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The kernel currently only probes for a MIPS Coherence Manager in the
Malta interrupt code in order to detect & enable the GIC. However CM is
not Malta-specific, so this should really be more generic. This patch
introduces some non-Malta-specific code which probes for a CM and
performs some basic initialisation.
A new header, with temporarily duplicated register definitions, is
introduced in order to:
1) Allow the new definitions to be correct with regards to the
CM documentation, as many of those in gcmpregs.h aren't.
2) Allow switching away from the REG() macro used via a few layers of
nested macros in order to access registers in gcmpregs.h. This
patch instead introduces accessor functions akin to the
{read,write}_c0_* functions used for cop0 registers (see the sketch
after this list).
3) Allow users of the CM to be migrated one by one.
4) Switch from the name 'GCMP' to 'CM' since the Coherence Manager is
what this code is actually dealing with.
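As an example of the accessor style mentioned in point 2, a read
accessor might look roughly like this; the base pointer name and the
register offset are illustrative:

  extern void __iomem *mips_cm_base;      /* mapped GCR block, set by the probe */

  static inline u32 read_gcr_rev_sketch(void)
  {
          return __raw_readl(mips_cm_base + 0x30);        /* GCR revision */
  }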
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6360/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The GIC IPI functions aren't necessarily specific to the "CMP
framework" SMP implementation, and will be used elsewhere in a
subsequent commit. This patch adds cleaned up GIC IPI functions to a
separate file which is compiled when a new CONFIG_MIPS_GIC_IPI Kconfig
symbol is selected, and selects that symbol for CONFIG_MIPS_CMP.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6359/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>