Provide a function to trivially return the version of the CM present in
the system, or 0 if no CM is present. mips_cm_revision() will later be
used to determine the CM register width, so it must not use
the regular CM accessors to read the revision register since that will
lead to build failures due to recursive inlines.
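A minimal sketch of what such a helper can look like (the GCR_REV_OFS offset
name below is illustrative, not the exact macro used in the tree):

/*
 * Sketch only: return the CM revision, or 0 when no CM is present.
 * The revision register is read with a plain readl() instead of the
 * generated CM accessors, since those accessors will themselves rely
 * on mips_cm_revision() to choose the register width.
 */
static inline int mips_cm_revision(void)
{
	if (!mips_cm_base)		/* no Coherence Manager probed */
		return 0;

	return readl(mips_cm_base + GCR_REV_OFS);
}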
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10655/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Matthew Fortune <Matthew.Fortune@imgtec.com> reports:
The genex.S file appears to mix the case of a macro between its definition and
use. A cut-down example of this is below. The macro __build_clear_none has
lower case 'build' but ends up being instantiated with upper case BUILD. Can
this be fixed on master? It has been picked up by the LLVM integrated assembler,
which is currently case-sensitive. We are likely to fix the assembler as well,
but the code is currently inconsistent in the kernel.
.macro __build_clear_none
.endm
.macro __BUILD_HANDLER exception handler clear verbose ext
.align 5
.globl handle_\exception; .align 2; .type handle_\exception, @function;
.ent handle_\exception, 0; handle_\exception: .frame $29, 184, $29
.set noat
.globl handle_\exception\ext; .type handle_\exception\ext, @function;
handle_\exception\ext:
__BUILD_clear_\clear
.endm
.macro BUILD_HANDLER exception handler clear verbose
__BUILD_HANDLER \exception \handler \clear \verbose _int
.endm
BUILD_HANDLER ftlb ftlb none silent
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Reported-by: Matthew Fortune <Matthew.Fortune@imgtec.com>
Commit 4c21b8fd8f ("MIPS: seccomp: Handle indirect system calls (o32)")
fixed indirect system calls on O32 but it also introduced a bug for MIPS64
where it erroneously modified the v0 (syscall) register with the assumption
that the syscall offset hasn't been taken into consideration. This breaks
seccomp on MIPS64 n64 and n32 ABIs. We fix this by replacing the addition
with a move instruction.
Fixes: 4c21b8fd8f ("MIPS: seccomp: Handle indirect system calls (o32)")
Cc: <stable@vger.kernel.org> # 3.15+
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10951/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Pull MIPS fixes from Ralf Baechle:
"Another round of MIPS fixes for 4.2. No area does particularly stand
out but we have a two unpleasant ones:
- Kernel ptes are marked with a global bit which allows the kernel to
share kernel TLB entries between all processes. For this to work
both entries of an adjacent even/odd pte pair need to have the
global bit set. There has been a subtle race in setting the other
entry's global bit since ~2000, but it takes particularly
pathological workloads that essentially do mostly vmalloc/vfree to
trigger this.
This pull request fixes the 64-bit case but leaves the case of 32
bit CPUs with 64 bit ptes unsolved for now. The unfixed cases
affect hardware that is not available in the field yet.
- Instruction emulation requires loading instructions from user space
but the current fast but simplistic approach will fail on pages
that are PROT_EXEC but !PROT_READ. For this reason we temporarily
do not permit this combination and will map such pages with PROT_EXEC |
PROT_READ.
The remainder of this pull request is more or less across the field
and the short log explains them well"
* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus:
MIPS: Make set_pte() SMP safe.
MIPS: Replace add and sub instructions in relocate_kernel.S with addiu
MIPS: Flush RPS on kernel entry with EVA
Revert "MIPS: BCM63xx: Provide a plat_post_dma_flush hook"
MIPS: BMIPS: Delete unused Kconfig symbol
MIPS: Export get_c0_perfcount_int()
MIPS: show_stack: Fix stack trace with EVA
MIPS: do_mcheck: Fix kernel code dump with EVA
MIPS: SMP: Don't increment irq_count multiple times for call function IPIs
MIPS: Partially disable RIXI support.
MIPS: Handle page faults of executable but unreadable pages correctly.
MIPS: Malta: Don't reinitialise RTC
MIPS: unaligned: Fix build error on big endian R6 kernels
MIPS: Fix sched_getaffinity with MT FPAFF enabled
MIPS: Fix build with CONFIG_OF=y for non OF-enabled targets
CPUFREQ: Loongson2: Fix broken build due to incorrect include.
This function can leak kernel stack data when the user siginfo_t has a
positive si_code value. The top 16 bits of si_code describe which fields
in the siginfo_t union are active, but they are treated inconsistently
between copy_siginfo_from_user32, copy_siginfo_to_user32 and
copy_siginfo_to_user.
copy_siginfo_from_user32 is called from rt_sigqueueinfo and
rt_tgsigqueueinfo, in which the user has full control over the top 16 bits
of si_code.
This fixes the following information leaks:
x86: 8 bytes leaked when sending a signal from a 32-bit process to
itself. This leak grows to 16 bytes if the process uses x32.
(si_code = __SI_CHLD)
x86: 100 bytes leaked when sending a signal from a 32-bit process to
a 64-bit process. (si_code = -1)
sparc: 4 bytes leaked when sending a signal from a 32-bit process to a
64-bit process. (si_code = any)
parisc and s390 have similar bugs, but they are not vulnerable because
rt_[tg]sigqueueinfo have checks that prevent sending a positive si_code
to a different process. These bugs are also fixed for consistency.
Signed-off-by: Amanieu d'Antras <amanieu@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
On MIPS the GLOBAL bit of the PTE must have the same value in any
aligned pair of PTEs. These pairs of PTEs are referred to as
"buddies". In a SMP system is is possible for two CPUs to be calling
set_pte() on adjacent PTEs at the same time. There is a race between
setting the PTE and a different CPU setting the GLOBAL bit in its
buddy PTE.
This race can be observed when multiple CPUs are executing
vmap()/vfree() at the same time.
Make setting the buddy PTE's GLOBAL bit an atomic operation to close
the race condition.
The case of CONFIG_64BIT_PHYS_ADDR && CONFIG_CPU_MIPS32 is *not*
handled.
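The idea, as a simplified sketch (the in-tree fix uses an architecture-specific
atomic sequence; ptep_buddy() and _PAGE_GLOBAL are the existing helpers it
builds on):

static inline void set_pte(pte_t *ptep, pte_t pteval)
{
	*ptep = pteval;

	if (pte_val(pteval) & _PAGE_GLOBAL) {
		pte_t *buddy = ptep_buddy(ptep);
		unsigned long old, new;

		/*
		 * Set _PAGE_GLOBAL in the buddy atomically so a concurrent
		 * set_pte() of the buddy on another CPU cannot lose the bit.
		 */
		do {
			old = pte_val(*buddy);
			new = old | _PAGE_GLOBAL;
		} while (cmpxchg(&pte_val(*buddy), old, new) != old);
	}
}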
Signed-off-by: David Daney <david.daney@cavium.com>
Cc: <stable@vger.kernel.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10835/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The AR71XX/AR9XXX SoCs have a simple reset controller with one bit per
reset line.
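The assert/deassert path for such a controller boils down to toggling a single
bit under a lock; a sketch (struct and function names below are illustrative,
not copied from the driver):

struct ath79_reset {
	struct reset_controller_dev rcdev;
	void __iomem *base;
	spinlock_t lock;
};

static int ath79_reset_update(struct reset_controller_dev *rcdev,
			      unsigned long id, bool assert)
{
	struct ath79_reset *data =
		container_of(rcdev, struct ath79_reset, rcdev);
	unsigned long flags;
	u32 val;

	spin_lock_irqsave(&data->lock, flags);
	val = readl(data->base);
	if (assert)
		val |= BIT(id);		/* hold the line in reset */
	else
		val &= ~BIT(id);	/* release it */
	writel(val, data->base);
	spin_unlock_irqrestore(&data->lock, flags);

	return 0;
}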
Signed-off-by: Alban Bedel <albeu@free.fr>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
MIPS was using finish_arch_switch() as a hook to restore and initialize
CPU context for all threads, even newly created kernel and user threads.
This is however entirely solvable within switch_to() so get rid of
finish_arch_switch() which is in the way of scheduler cleanups.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
There are various problems and short-comings with the current
static_key interface:
- static_key_{true,false}() read like a branch depending on the key
value, instead of the actual likely/unlikely branch depending on
init value.
- static_key_{true,false}() are, as stated above, tied to the
static_key init values STATIC_KEY_INIT_{TRUE,FALSE}.
- we're limited to the 2 (out of 4) possible options that compile to
a default NOP because that's what our arch_static_branch() assembly
emits.
So provide a new static_key interface:
DEFINE_STATIC_KEY_TRUE(name);
DEFINE_STATIC_KEY_FALSE(name);
Which define a key of different types with an initial true/false
value.
Then allow:
static_branch_likely()
static_branch_unlikely()
to take a key of either type and emit the right instruction for the
case.
This means adding a second arch_static_branch_jump() assembly helper
which emits a JMP per default.
In order to determine the right instruction for the right state,
encode the branch type in the LSB of jump_entry::key.
This is the final step in removing the naming confusion that has led to
a stream of avoidable bugs such as:
a833581e37 ("x86, perf: Fix static_key bug in load_mm_cr4()")
... but it also allows new static key combinations that will give us
performance enhancements in the subsequent patches.
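As a usage sketch of the new interface (the key and function names here are
made up for illustration):

DEFINE_STATIC_KEY_FALSE(use_fast_path);

void do_work(void)
{
	/* Compiles to a NOP by default; patched to a JMP once enabled. */
	if (static_branch_unlikely(&use_fast_path))
		fast_path();
	else
		slow_path();
}

/* Elsewhere, e.g. driven by a sysctl or module parameter: */
static_branch_enable(&use_fast_path);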
Tested-by: Rabin Vincent <rabin@rab.in> # arm
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Michael Ellerman <mpe@ellerman.id.au> # ppc
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> # s390
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Since we've already stepped away from ENABLE meaning a JMP and DISABLE
meaning a NOP with the branch_default bits, and are going to make it even
worse, rename it to make it all clearer.
This way we don't mix multiple levels of logic attributes, but have a
plain 'physical' name for what the current instruction patching status
of a jump label is.
This is a first step in removing the naming confusion that has led to
a stream of avoidable bugs such as:
a833581e37 ("x86, perf: Fix static_key bug in load_mm_cr4()")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
[ Beefed up the changelog. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The show_stack() function deals exclusively with kernel contexts, but if
it gets called in user context with EVA enabled, show_stacktrace() will
attempt to access the stack using EVA accesses, which will either read
other user mapped data, or more likely cause an exception which will be
handled by __get_user().
This is easily reproduced using SysRq t to show all task states, which
results in the following stack dump output:
Stack : (Bad stack address)
Fix by setting the current user access mode to kernel around the call to
show_stacktrace(). This causes __get_user() to use normal loads to read
the kernel stack.
Now we get the correct output, like this:
Stack : 00000000 80168960 00000000 004a0000 00000000 00000000 8060016c 1f3abd0c
1f172cd8 8056f09c 7ff1e450 8014fc3c 00000001 806dd0b0 0000001d 00000002
1f17c6a0 1f17c804 1f17c6a0 8066f6e0 00000000 0000000a 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 0110e800 1f3abd6c 1f17c6a0
...
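The relevant part of show_stack() ends up looking roughly like this (a sketch
of the approach, not the exact diff; task and regs are show_stack()'s argument
and local pt_regs):

	mm_segment_t old_fs = get_fs();

	/*
	 * show_stacktrace() only deals with kernel stacks, so force
	 * __get_user() to use kernel (non-EVA) loads while dumping.
	 */
	set_fs(KERNEL_DS);
	show_stacktrace(task, &regs);
	set_fs(old_fs);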
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.15+
Patchwork: https://patchwork.linux-mips.org/patch/10778/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
If a machine check exception is raised in kernel mode, user context,
with EVA enabled, then the do_mcheck handler will attempt to read the
code around the EPC using EVA load instructions, i.e. as if the reads
were from user mode. This will either read random user data if the
process has anything mapped at the same address, or it will cause an
exception which is handled by __get_user, resulting in this output:
Code: (Bad address in epc)
Fix by setting the current user access mode to kernel if the saved
register context indicates the exception was taken in kernel mode. This
causes __get_user to use normal loads to read the kernel code.
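Roughly, as a sketch of the approach rather than the exact diff:

	mm_segment_t old_fs = get_fs();

	/*
	 * If the machine check was taken in kernel mode, make __get_user()
	 * read the code around EPC with kernel loads rather than EVA user
	 * loads.
	 */
	if (!user_mode(regs))
		set_fs(KERNEL_DS);

	show_registers(regs);	/* produces the "Code: ..." dump */

	set_fs(old_fs);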
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.15+
Patchwork: https://patchwork.linux-mips.org/patch/10777/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The majority of SMP platforms handle their IPIs through do_IRQ()
which calls irq_{enter,exit}(). When a call function IPI is received,
smp_call_function_interrupt() is called which also calls
irq_{enter,exit}(), meaning irq_count is raised twice.
When tick broadcasting is used (which is implemented via a call
function IPI), this incorrectly causes all CPU idle time on the core
receiving broadcast ticks to be accounted as time spent servicing
IRQs, as account_process_tick() will account as such if irq_count is
greater than 1. This results in 100% CPU usage being reported on a
core which receives its ticks via broadcast.
This patch removes the SMP smp_call_function_interrupt() wrapper which
calls irq_{enter,exit}(). Platforms which handle their IPIs through
do_IRQ() now call generic_smp_call_function_interrupt() directly to
avoid incrementing irq_count a second time. Platforms which don't
(loongson, sgi-ip27, sibyte) call generic_smp_call_function_interrupt()
wrapped in irq_{enter,exit}().
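For a platform whose IPIs arrive through do_IRQ(), the call function IPI
handler then reduces to something like this (sketch):

static irqreturn_t ipi_call_interrupt(int irq, void *dev_id)
{
	/*
	 * do_IRQ() has already done irq_enter(), so call the generic helper
	 * directly instead of a wrapper that would bump irq_count again.
	 */
	generic_smp_call_function_interrupt();

	return IRQ_HANDLED;
}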
Signed-off-by: Alex Smith <alex.smith@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10770/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Execution of break instructions, trap instructions, emulation of unaligned
loads or floating point instructions - anything that tries to read the
instruction's opcode from userspace - needs read access to a page.
RIXI (Read Inhibit / Execute Inhibit) support however allows the creation of
pages that are executable but not readable. On such a mapping the attempted
load of the opcode by the kernel is going to cause an endless loop of
page faults.
The quick workaround for this is to disable the combination that the kernel
currently isn't able to handle, that is, executable but unreadable mappings.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
On Malta, since commit a87ea88d8f ("MIPS: Malta: initialise the RTC at
boot"), the RTC is reinitialised and forced into binary coded decimal
(BCD) mode during init, even if the bootloader has already initialised
it, and may even have already put it into binary mode (as YAMON does).
This corrupts the current time and can result in the RTC seconds being an
invalid BCD (e.g. 0x1a..0x1f) for up to 6 seconds, as well as confusing
YAMON for a while after reset, enough for it to report timeouts when
attempting to load from TFTP (it actually uses the RTC in that code).
Therefore only initialise the RTC to the extent that is necessary so
that Linux avoids interfering with the bootloader setup, while also
allowing it to estimate the CPU frequency without hanging, without a
bootloader necessarily having done anything with the RTC (for example
when the kernel is loaded via EJTAG).
The divider control is configured for a 32kHz reference clock if
necessary, and the SET bit of the RTC_CONTROL register is cleared if
necessary without changing any other bits (this bit will be set when
coming out of reset if the battery has been disconnected).
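In terms of the standard mc146818 RTC register definitions, the minimal
initialisation described above looks roughly like this (sketch):

	unsigned char freqsel, ctrl;

	/* Select the 32.768kHz reference divider only if it isn't already set. */
	freqsel = CMOS_READ(RTC_FREQ_SELECT);
	if ((freqsel & RTC_DIV_CTL) != RTC_REF_CLCK_32KHZ)
		CMOS_WRITE((freqsel & ~RTC_DIV_CTL) | RTC_REF_CLCK_32KHZ,
			   RTC_FREQ_SELECT);

	/* Clear SET (clock halted) without disturbing the other control bits. */
	ctrl = CMOS_READ(RTC_CONTROL);
	if (ctrl & RTC_SET)
		CMOS_WRITE(ctrl & ~RTC_SET, RTC_CONTROL);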
Fixes: a87ea88d8f ("MIPS: Malta: initialise the RTC at boot")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.14+
Patchwork: https://patchwork.linux-mips.org/patch/10739/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Commit eeb5389503 ("MIPS: unaligned: Prevent EVA instructions on kernel
unaligned accesses") renamed the Load* and Store* defines in unaligned.c
to _Load* and _Store* as part of its fix. One define was missed out which
causes big endian R6 kernels to fail to build.
arch/mips/kernel/unaligned.c:880:35:
error: implicit declaration of function '_StoreDW'
#define StoreDW(addr, value, res) _StoreDW(addr, value, res)
^
Signed-off-by: James Cowgill <James.Cowgill@imgtec.com>
Fixes: eeb5389503 ("MIPS: unaligned: Prevent EVA instructions on kernel unaligned accesses")
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: <stable@vger.kernel.org> # 4.0+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10575/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Commit 01306aeadd ("MIPS: prepare for user enabling of CONFIG_OF")
changed the guards in asm/prom.h from CONFIG_OF to CONFIG_USE_OF, but
missed the actual function declarations in kernel/prom.c, which have
additional dependencies.
Fixes the following build error:
CC arch/mips/kernel/prom.o
arch/mips/kernel/prom.c: In function '__dt_setup_arch':
arch/mips/kernel/prom.c:54:2: error: implicit declaration of function 'early_init_dt_scan' [-Werror=implicit-function-declaration]
if (!early_init_dt_scan(bph))
^
Fixes: 01306aeadd ("MIPS: prepare for user enabling of CONFIG_OF")
Signed-off-by: Jonas Gorski <jogo@openwrt.org>
Acked-by: Rob Herring <robh@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: devicetree@vger.kernel.org
Cc: Grant Likely <grant.likely@linaro.org>
Patchwork: https://patchwork.linux-mips.org/patch/10741/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* cleanup-clk-h-includes: (62 commits)
clk: Remove clk.h from clk-provider.h
clk: h8300: Remove clk.h and clkdev.h includes
clk: at91: Include clk.h and slab.h
clk: ti: Switch clk-provider.h include to clk.h
clk: pistachio: Include clk.h
clk: ingenic: Include clk.h
clk: si570: Include clk.h
clk: moxart: Include clk.h
clk: cdce925: Include clk.h
clk: Include clk.h in clk.c
clk: zynq: Include clk.h
clk: ti: Include clk.h
clk: sunxi: Include clk.h and remove unused clkdev.h includes
clk: st: Include clk.h
clk: qcom: Include clk.h
clk: highbank: Include clk.h
clk: bcm: Include clk.h
clk: versatile: Remove clk.h and clkdev.h includes
clk: ux500: Remove clk.h and clkdev.h includes
clk: tegra: Properly include clk.h
...
Now that minor LSMs can cleanly stack with major LSMs, remove the unneeded
config for Yama to be made to explicitly stack. Just selecting the main
Yama CONFIG will allow it to work, regardless of the major LSM. Since
distros using Yama are already forcing it to stack, this is effectively
a no-op change.
Additionally add MAINTAINERS entry.
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: James Morris <james.l.morris@oracle.com>
Clock rates are stored in an unsigned long field, but ->determine_rate()
(which returns a rounded rate from a requested one) returns a long
value (errors are reported using negative error codes), which can lead
to a long overflow if the clock rate exceeds 2GHz.
Change ->determine_rate() prototype to return 0 or an error code, and pass
a pointer to a clk_rate_request structure containing the expected target
rate and the rate constraints imposed by clk users.
The clk_rate_request structure might be extended in the future to contain
other kinds of constraints like the rounding policy, the maximum clock
inaccuracy or other things that are not yet supported by the CCF
(power consumption constraints ?).
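The resulting op signature and request structure look roughly like this
(sketch based on the description above; see include/linux/clk-provider.h for
the authoritative definitions):

struct clk_rate_request {
	unsigned long rate;		/* requested target rate */
	unsigned long min_rate;		/* constraints imposed by clk users */
	unsigned long max_rate;
	unsigned long best_parent_rate;
	struct clk_hw *best_parent_hw;
};

/* in struct clk_ops: returns 0 or a negative error code, and reports the
 * chosen rate back through req->rate */
int (*determine_rate)(struct clk_hw *hw, struct clk_rate_request *req);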
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
CC: Jonathan Corbet <corbet@lwn.net>
CC: Tony Lindgren <tony@atomide.com>
CC: Ralf Baechle <ralf@linux-mips.org>
CC: "Emilio López" <emilio@elopez.com.ar>
CC: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Tero Kristo <t-kristo@ti.com>
CC: Peter De Schrijver <pdeschrijver@nvidia.com>
CC: Prashant Gaikwad <pgaikwad@nvidia.com>
CC: Stephen Warren <swarren@wwwdotorg.org>
CC: Thierry Reding <thierry.reding@gmail.com>
CC: Alexandre Courbot <gnurou@gmail.com>
CC: linux-doc@vger.kernel.org
CC: linux-kernel@vger.kernel.org
CC: linux-arm-kernel@lists.infradead.org
CC: linux-omap@vger.kernel.org
CC: linux-mips@linux-mips.org
CC: linux-tegra@vger.kernel.org
[sboyd@codeaurora.org: Fix parent dereference problem in
__clk_determine_rate()]
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Tested-by: Romain Perier <romain.perier@gmail.com>
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
[sboyd@codeaurora.org: Folded in fix from Heiko for fixed-rate
clocks without parents or a rate determining op]
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Implement atomic logic ops -- atomic_{or,xor,and}.
These will replace the atomic_{set,clear}_mask functions that are
available on some archs.
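Usage-wise the conversion looks like this (sketch):

	atomic_t flags = ATOMIC_INIT(0);

	/* before: arch-specific helpers, only available on some architectures */
	atomic_set_mask(0x0f, &flags);
	atomic_clear_mask(0x03, &flags);

	/* after: the generic atomic logic ops */
	atomic_or(0x0f, &flags);
	atomic_and(~0x03, &flags);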
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Implement atomic logic ops -- atomic_{or,xor,and}.
These will replace the atomic_{set,clear}_mask functions that are
available on some archs.
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Pull MIPS fixes from Ralf Baechle:
"Another round of MIPS fixes for 4.2.
Things are looking quite decent at this stage but the recent work on
the FPU support took its toll:
- fix an incorrect overly restrictive ifdef
- select O32 64-bit FP support for O32 binary compatibility
- remove workarounds for Sibyte SB1250 Pass1 parts. These parts are rare
enough that fixing the workarounds is not worth the effort.
- patch up an outdated and now incorrect comment"
* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus:
MIPS: fpu.h: Allow 64-bit FPU on a 64-bit MIPS R6 CPU
MIPS: SB1: Remove support for Pass 1 parts.
MIPS: Require O32 FP64 support for MIPS64 with O32 compat
MIPS: asm-offset.c: Patch up various comments refering to the old filename.
Commit 2ae416b142 ("mm: new mm hook framework") introduced an empty
header file (mm-arch-hooks.h) for every architecture, even those which
don't need to define mm hooks.
As suggested by Geert Uytterhoeven, this could be cleaned through the use
of a generic header file included via each per architecture
asm/include/Kbuild file.
The PowerPC architecture is not impacted here since this architecture has
to define the arch_remap MM hook.
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Suggested-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pass 1 parts had a number of significant errata and were only available
in small numbers and under NDA. Full support also required the use of a
special toolchain that kept branches properly aligned. These workarounds
were never upstreamed and the only toolchain known to have them is
Montavista's GCC 3.0-based toolchain, which is completely obsolete if not
useless these days.
So now that automated testing has tripped over the use of the
-msb1-pass1-workarounds option, rather than fixing it, remove support for
pass 1 parts.
Probably nobody will notice. I seem to own the last known pass 1 board
and I haven't noticed another one in the wild in the past decade, at
least.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>