Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus

Pull MIPS updates from Ralf Baechle:
 "This is the main pull request for MIPS for 4.7.  Here's the summary of
  the changes:

   - ATH79: Support for DTB passing using the UHI boot protocol
   - ATH79: Remove support for builtin DTB.
   - ATH79: Add zboot debug serial support.
   - ATH79: Add initial support for Dragino MS14 (Dragino 2), Onion Omega
            and DPT-Module.
   - ATH79: Update devicetree clock support for AR9132 and AR9331.
   - ATH79: Cleanup the DT code.
   - ATH79: Support newer SoCs in ath79_ddr_ctrl_init.
   - ATH79: Fix regression in PCI window initialization.
   - BCM47xx: Move SPROM driver to drivers/firmware/
   - BCM63xx: Enable partition parser in defconfig.
   - BMIPS: BMIPS5000 has I cache filling from D cache
   - BMIPS: Add cpu-feature-overrides.h
   - BMIPS: Add Whirlwind support
   - BMIPS: Adjust mips-hpt-frequency for BCM7435
   - BMIPS: Remove maxcpus from BCM97435SVMB DTS
   - BMIPS: Add missing 7038 L1 register cells to BCM7435
   - BMIPS: Various tweaks to initialization code.
   - BMIPS: Enable partition parser in defconfig.
   - BMIPS: Cache tweaks.
   - BMIPS: Add UART, I2C and SATA devices to DT.
   - BMIPS: Add BCM6358 and BCM63268 support
   - BMIPS: Add device tree example for BCM6358.
   - BMIPS: Improve BCM6328 and BCM6368 device trees
   - Lantiq: Add support for device tree file from boot loader
   - Lantiq: Allow build with no built-in DT.
   - Loongson 3: Reserve 32MB for RS780E integrated GPU.
   - Loongson 3: Fix build error after ld-version.sh modification
   - Loongson 3: Move chipset ACPI code from drivers to arch.
   - Loongson 3: Speedup irq processing.
   - Loongson 3: Add basic Loongson 3A support.
   - Loongson 3: Set cache flush handlers to nop.
   - Loongson 3: Invalidate special TLBs when needed.
   - Loongson 3: Fast TLB refill handler.
   - MT7620: Fallback strategy for invalid syscfg0.
   - Netlogic: Fix CP0_EBASE redefinition warnings
   - Octeon: Initialization fixes
   - Octeon: Add DTS files for the D-Link DSR-1000N and EdgeRouter Lite
   - Octeon: Enable Octeon drivers in cavium_octeon_defconfig.
   - Octeon: Correctly handle endian-swapped initramfs images.
   - Octeon: Support CN73xx, CN75xx and CN78xx.
   - Octeon: Remove dead code from cvmx-sysinfo.
   - Octeon: Extend number of supported CPUs past 32.
   - Octeon: Remove some code limiting NR_IRQS to 255.
   - Octeon: Simplify octeon_irq_ciu_gpio_set_type.
   - Octeon: Mark some functions __init in smp.c
   - Octeon: Add Octeon III CN7xxx interface detection
   - PIC32: Add serial driver and bindings for it.
   - PIC32: Add PIC32 deadman timer driver and bindings.
   - PIC32: Add PIC32 clock timer driver and bindings.
   - Pistachio: Determine SoC revision during boot
   - Sibyte: Fix Kconfig dependencies of SIBYTE_BUS_WATCHER.
   - Sibyte: Strip redundant comments from bcm1480_regs.h.
   - Panic immediately if panic_on_oops is set.
   - module: fix incorrect IS_ERR_VALUE macro usage.
   - module: Make consistent use of pr_*
   - Remove no longer needed work_on_cpu() call.
   - Remove CONFIG_IPV6_PRIVACY from defconfigs.
   - Fix registers of non-crashing CPUs in dumps.
   - Handle MIPSisms in new vmcore_elf32_check_arch.
   - Select CONFIG_HANDLE_DOMAIN_IRQ and make it work.
   - Allow RIXI to be used on non-R2 or R6 cores.
   - Reserve nosave data for hibernation
   - Fix siginfo.h to use strict POSIX types.
   - Don't unwind user mode with EVA.
   - Fix watchpoint restoration
   - Ptrace watchpoints for R6.
   - Sync icache when it fills from dcache
   - I6400 I-cache fills from dcache.
   - Various MSA fixes.
   - Cleanup MIPS_CPU_* definitions.
   - Signal: Move generic copy_siginfo to signal.h
   - Signal: Fix uapi include in exported asm/siginfo.h
   - Timer fixes for sake of KVM.
   - XPA TLB refill fixes.
   - Treat perf counters as a CPU feature (MIPS_CPU_PERF)
   - Update John Crispin's email address
   - Add PIC32 watchdog and bindings.
   - Handle R10000 LL/SC bug in set_pte()
   - cpufreq: Various fixes for Loongson1.
   - R6: Fix R2 emulation.
   - mathemu: Cosmetic fix to ADDIUPC emulation, plenty of other small fixes
   - ELF: ABI and FP fixes.
   - Allow for relocatable kernel and use that to support KASLR.
   - Fix CPC_BASE_ADDR mask
   - Plenty of smp-cps, CM, R6 and M6250 fixes.
   - Make reset_control_ops const.
   - Fix kernel command line handling of leading whitespace.
   - Cleanups to cache handling.
   - Add brcm,bcm6345-l1-intc device tree bindings.
   - Use generic clkdev.h header
   - Remove CLK_IS_ROOT usage.
   - Misc small cleanups.
   - CM: Fix compilation error when !MIPS_CM
   - oprofile: Fix a preemption issue
   - Detect DSP ASE v3 support"

* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (275 commits)
  MIPS: pic32mzda: fix getting timer clock rate.
  MIPS: ath79: fix regression in PCI window initialization
  MIPS: ath79: make ath79_ddr_ctrl_init() compatible for newer SoCs
  MIPS: Fix VZ probe gas errors with binutils <2.24
  MIPS: perf: Fix I6400 event numbers
  MIPS: DEC: Export `ioasic_ssr_lock' to modules
  MIPS: MSA: Fix a link error on `_init_msa_upper' with older GCC
  MIPS: CM: Fix compilation error when !MIPS_CM
  MIPS: Fix genvdso error on rebuild
  USB: ohci-jz4740: Remove obsolete driver
  MIPS: JZ4740: Probe OHCI platform device via DT
  MIPS: JZ4740: Qi LB60: Remove support for AVT2 variant
  MIPS: pistachio: Determine SoC revision during boot
  MIPS: BMIPS: Adjust mips-hpt-frequency for BCM7435
  mips: mt7620: fallback to SDRAM when syscfg0 does not have a valid value for the memory type
  MIPS: Prevent "restoration" of MSA context in non-MSA kernels
  MIPS: cevt-r4k: Dynamically calculate min_delta_ns
  MIPS: malta-time: Take seconds into account
  MIPS: malta-time: Start GIC count before syncing to RTC
  MIPS: Force CPUs to lose FP context during mode switches
  ...
This commit is contained in:
Linus Torvalds
2016-05-19 10:02:26 -07:00
Commit 07b75260eb
358 changed files with 14041 additions and 3435 deletions

View file

@@ -44,7 +44,7 @@ obj-$(CONFIG_CPU_CAVIUM_OCTEON) += r4k_fpu.o octeon_switch.o
obj-$(CONFIG_SMP) += smp.o
obj-$(CONFIG_SMP_UP) += smp-up.o
obj-$(CONFIG_CPU_BMIPS) += smp-bmips.o bmips_vec.o
obj-$(CONFIG_CPU_BMIPS) += smp-bmips.o bmips_vec.o bmips_5xxx_init.o
obj-$(CONFIG_MIPS_MT) += mips-mt.o
obj-$(CONFIG_MIPS_MT_FPAFF) += mips-mt-fpaff.o
@@ -83,6 +83,8 @@ obj-$(CONFIG_I8253) += i8253.o
obj-$(CONFIG_GPIO_TXX9) += gpio_txx9.o
obj-$(CONFIG_RELOCATABLE) += relocate.o
obj-$(CONFIG_KEXEC) += machine_kexec.o relocate_kernel.o crash.o
obj-$(CONFIG_CRASH_DUMP) += crash_dump.o
obj-$(CONFIG_EARLY_PRINTK) += early_printk.o

View file

@@ -14,6 +14,7 @@
#include <linux/mm.h>
#include <linux/kbuild.h>
#include <linux/suspend.h>
#include <asm/cpu-info.h>
#include <asm/pm.h>
#include <asm/ptrace.h>
#include <asm/processor.h>
@@ -338,6 +339,15 @@ void output_pm_defines(void)
}
#endif
void output_cpuinfo_defines(void)
{
COMMENT(" MIPS cpuinfo offsets. ");
DEFINE(CPUINFO_SIZE, sizeof(struct cpuinfo_mips));
#ifdef CONFIG_MIPS_ASID_BITS_VARIABLE
OFFSET(CPUINFO_ASID_MASK, cpuinfo_mips, asid_mask);
#endif
}
void output_kvm_defines(void)
{
COMMENT(" KVM/MIPS Specfic offsets. ");

View file

@@ -30,21 +30,7 @@ typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG];
/*
* This is used to ensure we don't load something for the wrong architecture.
*/
#define elf_check_arch(hdr) \
({ \
int __res = 1; \
struct elfhdr *__h = (hdr); \
\
if (!mips_elf_check_machine(__h)) \
__res = 0; \
if (__h->e_ident[EI_CLASS] != ELFCLASS32) \
__res = 0; \
if (((__h->e_flags & EF_MIPS_ABI2) == 0) || \
((__h->e_flags & EF_MIPS_ABI) != 0)) \
__res = 0; \
\
__res; \
})
#define elf_check_arch elfn32_check_arch
#define TASK32_SIZE 0x7fff8000UL
#undef ELF_ET_DYN_BASE

View file

@@ -27,40 +27,10 @@ typedef elf_greg_t elf_gregset_t[ELF_NGREG];
typedef double elf_fpreg_t;
typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG];
/*
* In order to be sure that we don't attempt to execute an O32 binary which
* requires 64 bit FP (FR=1) on a system which does not support it we refuse
* to execute any binary which has bits specified by the following macro set
* in its ELF header flags.
*/
#ifdef CONFIG_MIPS_O32_FP64_SUPPORT
# define __MIPS_O32_FP64_MUST_BE_ZERO 0
#else
# define __MIPS_O32_FP64_MUST_BE_ZERO EF_MIPS_FP64
#endif
/*
* This is used to ensure we don't load something for the wrong architecture.
*/
#define elf_check_arch(hdr) \
({ \
int __res = 1; \
struct elfhdr *__h = (hdr); \
\
if (!mips_elf_check_machine(__h)) \
__res = 0; \
if (__h->e_ident[EI_CLASS] != ELFCLASS32) \
__res = 0; \
if ((__h->e_flags & EF_MIPS_ABI2) != 0) \
__res = 0; \
if (((__h->e_flags & EF_MIPS_ABI) != 0) && \
((__h->e_flags & EF_MIPS_ABI) != EF_MIPS_ABI_O32)) \
__res = 0; \
if (__h->e_flags & __MIPS_O32_FP64_MUST_BE_ZERO) \
__res = 0; \
\
__res; \
})
#define elf_check_arch elfo32_check_arch
#ifdef CONFIG_KVM_GUEST
#define TASK32_SIZE 0x3fff8000UL

View file

@@ -0,0 +1,753 @@
/*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
* Copyright (C) 2011-2012 by Broadcom Corporation
*
* Init for bmips 5000.
* Used to init second core in dual core 5000's.
*/
#include <linux/init.h>
#include <asm/asm.h>
#include <asm/asmmacro.h>
#include <asm/cacheops.h>
#include <asm/regdef.h>
#include <asm/mipsregs.h>
#include <asm/stackframe.h>
#include <asm/addrspace.h>
#include <asm/hazards.h>
#include <asm/bmips.h>
#ifdef CONFIG_CPU_BMIPS5000
#define cacheop(kva, size, linesize, op) \
.set noreorder ; \
addu t1, kva, size ; \
subu t2, linesize, 1 ; \
not t2 ; \
and t0, kva, t2 ; \
addiu t1, t1, -1 ; \
and t1, t2 ; \
9: cache op, 0(t0) ; \
bne t0, t1, 9b ; \
addu t0, linesize ; \
.set reorder ;
#define IS_SHIFT 22
#define IL_SHIFT 19
#define IA_SHIFT 16
#define DS_SHIFT 13
#define DL_SHIFT 10
#define DA_SHIFT 7
#define IS_MASK 7
#define IL_MASK 7
#define IA_MASK 7
#define DS_MASK 7
#define DL_MASK 7
#define DA_MASK 7
#define ICE_MASK 0x80000000
#define DCE_MASK 0x40000000
#define CP0_BRCM_CONFIG0 $22, 0
#define CP0_BRCM_MODE $22, 1
#define CP0_CONFIG_K0_MASK 7
#define CP0_ICACHE_TAG_LO $28
#define CP0_ICACHE_DATA_LO $28, 1
#define CP0_DCACHE_TAG_LO $28, 2
#define CP0_D_SEC_CACHE_DATA_LO $28, 3
#define CP0_ICACHE_TAG_HI $29
#define CP0_ICACHE_DATA_HI $29, 1
#define CP0_DCACHE_TAG_HI $29, 2
#define CP0_BRCM_MODE_Luc_MASK (1 << 11)
#define CP0_BRCM_CONFIG0_CWF_MASK (1 << 20)
#define CP0_BRCM_CONFIG0_TSE_MASK (1 << 19)
#define CP0_BRCM_MODE_SET_MASK (1 << 7)
#define CP0_BRCM_MODE_ClkRATIO_MASK (7 << 4)
#define CP0_BRCM_MODE_BrPRED_MASK (3 << 24)
#define CP0_BRCM_MODE_BrPRED_SHIFT 24
#define CP0_BRCM_MODE_BrHIST_MASK (0x1f << 20)
#define CP0_BRCM_MODE_BrHIST_SHIFT 20
/* ZSC L2 Cache Register Access Register Definitions */
#define BRCM_ZSC_ALL_REGS_SELECT 0x7 << 24
#define BRCM_ZSC_CONFIG_REG 0 << 3
#define BRCM_ZSC_REQ_BUFFER_REG 2 << 3
#define BRCM_ZSC_RBUS_ADDR_MAPPING_REG0 4 << 3
#define BRCM_ZSC_RBUS_ADDR_MAPPING_REG1 6 << 3
#define BRCM_ZSC_RBUS_ADDR_MAPPING_REG2 8 << 3
#define BRCM_ZSC_SCB0_ADDR_MAPPING_REG0 0xa << 3
#define BRCM_ZSC_SCB0_ADDR_MAPPING_REG1 0xc << 3
#define BRCM_ZSC_SCB1_ADDR_MAPPING_REG0 0xe << 3
#define BRCM_ZSC_SCB1_ADDR_MAPPING_REG1 0x10 << 3
#define BRCM_ZSC_CONFIG_LMB1En 1 << (15)
#define BRCM_ZSC_CONFIG_LMB0En 1 << (14)
/* branch prediction values */
#define BRCM_BrPRED_ALL_TAKEN (0x0)
#define BRCM_BrPRED_ALL_NOT_TAKEN (0x1)
#define BRCM_BrPRED_BHT_ENABLE (0x2)
#define BRCM_BrPRED_PREDICT_BACKWARD (0x3)
.align 2
/*
* Function: size_i_cache
* Arguments: None
* Returns: v0 = i cache size, v1 = I cache line size
* Description: compute the I-cache size and I-cache line size
* Trashes: v0, v1, a0, t0
*
* pseudo code:
*
*/
LEAF(size_i_cache)
.set noreorder
mfc0 a0, CP0_CONFIG, 1
move t0, a0
/*
* Determine sets per way: IS
*
* This field contains the number of sets (i.e., indices) per way of
* the instruction cache:
* i) 0x0: 64, ii) 0x1: 128, iii) 0x2: 256, iv) 0x3: 512, v) 0x4: 1k
* vi) 0x5 - 0x7: Reserved.
*/
srl a0, a0, IS_SHIFT
and a0, a0, IS_MASK
/* sets per way = (64<<IS) */
li v0, 0x40
sllv v0, v0, a0
/*
* Determine line size
*
* This field contains the line size of the instruction cache:
* i) 0x0: No I-cache present, ii) 0x3: 16 bytes, iii) 0x4: 32 bytes,
* iv) 0x5: 64 bytes, v) the rest: Reserved.
*/
move a0, t0
srl a0, a0, IL_SHIFT
and a0, a0, IL_MASK
beqz a0, no_i_cache
nop
/* line size = 2 ^ (IL+1) */
addi a0, a0, 1
li v1, 1
sll v1, v1, a0
/* v0 now has sets per way; multiply it by the line size
* to get the set size
*/
sll v0, v0, a0
/*
* Determine set associativity
*
* This field contains the set associativity of the instruction cache.
* i) 0x0: Direct mapped, ii) 0x1: 2-way, iii) 0x2: 3-way, iv) 0x3:
* 4-way, v) 0x4 - 0x7: Reserved.
*/
move a0, t0
srl a0, a0, IA_SHIFT
and a0, a0, IA_MASK
addi a0, a0, 0x1
/* v0 has the set size, multiply it by
* set associativity, to get the cache size
*/
multu v0, a0 /*multu is interlocked, so no need to insert nops */
mflo v0
b 1f
nop
no_i_cache:
move v0, zero
move v1, zero
1:
jr ra
nop
.set reorder
END(size_i_cache)
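For reference, the geometry computation that size_i_cache performs in assembly corresponds roughly to the following C sketch (icache_size() and its config1 parameter are illustrative only, not part of this patch):

/*
 * Illustrative C equivalent of size_i_cache: decode the IS/IL/IA
 * fields of CP0 Config1 (select 1) into a total size and line size.
 */
static unsigned int icache_size(unsigned int config1, unsigned int *linesize)
{
	unsigned int sets = 64u << ((config1 >> 22) & 7);	/* IS: sets per way = 64 << IS */
	unsigned int il   = (config1 >> 19) & 7;		/* IL: line size code */
	unsigned int ways = ((config1 >> 16) & 7) + 1;		/* IA: field encodes ways - 1 */

	if (!il) {				/* IL == 0 means no I-cache present */
		*linesize = 0;
		return 0;
	}
	*linesize = 2u << il;			/* line size = 2 ^ (IL + 1) */
	return sets * *linesize * ways;		/* size = sets * line size * ways */
}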
/*
* Function: size_d_cache
* Arguments: None
* Returns: v0 = d cache size, v1 = d cache line size
* Description: compute the D-cache size and D-cache line size.
* Trashes: v0, v1, a0, t0
*
*/
LEAF(size_d_cache)
.set noreorder
mfc0 a0, CP0_CONFIG, 1
move t0, a0
/*
* Determine sets per way: DS
*
* This field contains the number of sets (i.e., indices) per way of
* the data cache:
* i) 0x0: 64, ii) 0x1: 128, iii) 0x2: 256, iv) 0x3: 512, v) 0x4: 1k
* vi) 0x5 - 0x7: Reserved.
*/
srl a0, a0, DS_SHIFT
and a0, a0, DS_MASK
/* sets per way = (64<<IS) */
li v0, 0x40
sllv v0, v0, a0
/*
* Determine line size
*
* This field contains the line size of the data cache:
* i) 0x0: No D-cache present, ii) 0x3: 16 bytes, iii) 0x4: 32 bytes,
* iv) 0x5: 64 bytes, v) the rest: Reserved.
*/
move a0, t0
srl a0, a0, DL_SHIFT
and a0, a0, DL_MASK
beqz a0, no_d_cache
nop
/* line size = 2 ^ (DL+1) */
addi a0, a0, 1
li v1, 1
sll v1, v1, a0
/* v0 now has sets per way; multiply it by the line size
* to get the set size
*/
sll v0, v0, a0
/* determine set associativity
*
* This field contains the set associativity of the data cache.
* i) 0x0: Direct mapped, ii) 0x1: 2-way, iii) 0x2: 3-way, iv) 0x3:
* 4-way, v) 0x4 - 0x7: Reserved.
*/
move a0, t0
srl a0, a0, DA_SHIFT
and a0, a0, DA_MASK
addi a0, a0, 0x1
/* v0 has the set size, multiply it by
* set associativity, to get the cache size
*/
multu v0, a0 /*multu is interlocked, so no need to insert nops */
mflo v0
b 1f
nop
no_d_cache:
move v0, zero
move v1, zero
1:
jr ra
nop
.set reorder
END(size_d_cache)
/*
* Function: enable_ID
* Arguments: None
* Returns: None
* Description: Enable I and D caches, initialize I and D-caches, also set
* hardware delay for d-cache (TP0).
* Trashes: t0
*
*/
.global enable_ID
.ent enable_ID
.set noreorder
enable_ID:
mfc0 t0, CP0_BRCM_CONFIG0
or t0, t0, (ICE_MASK | DCE_MASK)
mtc0 t0, CP0_BRCM_CONFIG0
jr ra
nop
.end enable_ID
.set reorder
/*
* Function: l1_init
* Arguments: None
* Returns: None
* Description: Enable I and D caches, and initialize I and D-caches
* Trashes: a0, v0, v1, t0, t1, t2, t8
*
*/
.globl l1_init
.ent l1_init
.set noreorder
l1_init:
/* save return address */
move t8, ra
/* initialize I and D cache Data and Tag registers. */
mtc0 zero, CP0_ICACHE_TAG_LO
mtc0 zero, CP0_ICACHE_TAG_HI
mtc0 zero, CP0_ICACHE_DATA_LO
mtc0 zero, CP0_ICACHE_DATA_HI
mtc0 zero, CP0_DCACHE_TAG_LO
mtc0 zero, CP0_DCACHE_TAG_HI
/* Enable Caches before Clearing. If the caches are disabled
* then the cache operations to clear the cache will be ignored
*/
jal enable_ID
nop
jal size_i_cache /* v0 = i-cache size, v1 = i-cache line size */
nop
/* run uncached in kseg 1 */
la k0, 1f
lui k1, 0x2000
or k0, k1, k0
jr k0
nop
1:
/*
* set K0 cache mode
*/
mfc0 t0, CP0_CONFIG
and t0, t0, ~CP0_CONFIG_K0_MASK
or t0, t0, 3 /* Write Back mode */
mtc0 t0, CP0_CONFIG
/*
* Initialize instruction cache.
*/
li a0, KSEG0
cacheop(a0, v0, v1, Index_Store_Tag_I)
/*
* Now we can run from I-$, kseg 0
*/
la k0, 1f
lui k1, 0x2000
or k0, k1, k0
xor k0, k1, k0
jr k0
nop
1:
/*
* Initialize data cache.
*/
jal size_d_cache /* v0 = d-cache size, v1 = d-cache line size */
nop
li a0, KSEG0
cacheop(a0, v0, v1, Index_Store_Tag_D)
jr t8
nop
.end l1_init
.set reorder
/*
* Function: set_other_config
* Arguments: none
* Returns: None
* Description: initialize the remaining configuration to defaults.
* Trashes: t0, t1
*
* pseudo code:
*
*/
LEAF(set_other_config)
.set noreorder
/* enable Bus error for I-fetch */
mfc0 t0, CP0_CACHEERR, 0
li t1, 0x4
or t0, t1
mtc0 t0, CP0_CACHEERR, 0
/* enable Bus error for Load */
mfc0 t0, CP0_CACHEERR, 1
li t1, 0x4
or t0, t1
mtc0 t0, CP0_CACHEERR, 1
/* enable Bus Error for Store */
mfc0 t0, CP0_CACHEERR, 2
li t1, 0x4
or t0, t1
mtc0 t0, CP0_CACHEERR, 2
jr ra
nop
.set reorder
END(set_other_config)
/*
* Function: set_branch_pred
* Arguments: none
* Returns: None
* Description:
* Trashes: t0, t1
*
* pseudo code:
*
*/
LEAF(set_branch_pred)
.set noreorder
mfc0 t0, CP0_BRCM_MODE
li t1, ~(CP0_BRCM_MODE_BrPRED_MASK | CP0_BRCM_MODE_BrHIST_MASK )
and t0, t0, t1
/* enable Branch prediction */
li t1, BRCM_BrPRED_BHT_ENABLE
sll t1, CP0_BRCM_MODE_BrPRED_SHIFT
or t0, t0, t1
/* set history count to 8 */
li t1, 8
sll t1, CP0_BRCM_MODE_BrHIST_SHIFT
or t0, t0, t1
mtc0 t0, CP0_BRCM_MODE
jr ra
nop
.set reorder
END(set_branch_pred)
/*
* Function: set_luc
* Arguments: set link uncached.
* Returns: None
* Description:
* Trashes: t0, t1
*
*/
LEAF(set_luc)
.set noreorder
mfc0 t0, CP0_BRCM_MODE
li t1, ~(CP0_BRCM_MODE_Luc_MASK)
and t0, t0, t1
/* set Luc */
ori t0, t0, CP0_BRCM_MODE_Luc_MASK
mtc0 t0, CP0_BRCM_MODE
jr ra
nop
.set reorder
END(set_luc)
/*
* Function: set_cwf_tse
* Arguments: set CWF and TSE bits
* Returns: None
* Description:
* Trashes: t0, t1
*
*/
LEAF(set_cwf_tse)
.set noreorder
mfc0 t0, CP0_BRCM_CONFIG0
li t1, (CP0_BRCM_CONFIG0_CWF_MASK | CP0_BRCM_CONFIG0_TSE_MASK)
or t0, t0, t1
mtc0 t0, CP0_BRCM_CONFIG0
jr ra
nop
.set reorder
END(set_cwf_tse)
/*
* Function: set_clock_ratio
* Arguments: set clock ratio specified by a0
* Returns: None
* Description:
* Trashes: v0, v1, a0, a1
*
* pseudo code:
*
*/
LEAF(set_clock_ratio)
.set noreorder
mfc0 t0, CP0_BRCM_MODE
li t1, ~(CP0_BRCM_MODE_SET_MASK | CP0_BRCM_MODE_ClkRATIO_MASK)
and t0, t0, t1
li t1, CP0_BRCM_MODE_SET_MASK
or t0, t0, t1
or t0, t0, a0
mtc0 t0, CP0_BRCM_MODE
jr ra
nop
.set reorder
END(set_clock_ratio)
/*
* Function: set_zephyr
* Arguments: None
* Returns: None
* Description: Set any zephyr bits
* Trashes: t0 & t1
*
*/
LEAF(set_zephyr)
.set noreorder
/* enable read/write of CP0 #22 sel. 8 */
li t0, 0x5a455048
.word 0x4088b00f /* mtc0 t0, $22, 15 */
.word 0x4008b008 /* mfc0 t0, $22, 8 */
li t1, 0x09008000 /* turn off pref, jtb */
or t0, t0, t1
.word 0x4088b008 /* mtc0 t0, $22, 8 */
sync
/* disable read/write of CP0 #22 sel 8 */
li t0, 0x0
.word 0x4088b00f /* mtc0 t0, $22, 15 */
jr ra
nop
.set reorder
END(set_zephyr)
/*
* Function: set_llmb
* Arguments: a0=0 disable llmb, a0=1 enables llmb
* Returns: None
* Description:
* Trashes: t0, t1, t2
*
* pseudo code:
*
*/
LEAF(set_llmb)
.set noreorder
li t2, 0x90000000 | BRCM_ZSC_ALL_REGS_SELECT | BRCM_ZSC_CONFIG_REG
sync
cache 0x7, 0x0(t2)
sync
mfc0 t0, CP0_D_SEC_CACHE_DATA_LO
li t1, ~(BRCM_ZSC_CONFIG_LMB1En | BRCM_ZSC_CONFIG_LMB0En)
and t0, t0, t1
beqz a0, svlmb
nop
enable_lmb:
li t1, (BRCM_ZSC_CONFIG_LMB1En | BRCM_ZSC_CONFIG_LMB0En)
or t0, t0, t1
svlmb:
mtc0 t0, CP0_D_SEC_CACHE_DATA_LO
sync
cache 0xb, 0x0(t2)
sync
jr ra
nop
.set reorder
END(set_llmb)
/*
* Function: core_init
* Arguments: none
* Returns: None
* Description: initialize core related configuration
* Trashes: v0,v1,a0,a1,t8
*
* pseudo code:
*
*/
.globl core_init
.ent core_init
.set noreorder
core_init:
move t8, ra
/* set Zephyr bits. */
bal set_zephyr
nop
#if ENABLE_FPU==1
/* initialize the Floating point unit (both TPs) */
bal init_fpu
nop
#endif
/* set low latency memory bus */
li a0, 1
bal set_llmb
nop
/* set branch prediction (TP0 only) */
bal set_branch_pred
nop
/* set link uncached */
bal set_luc
nop
/* set CWF and TSE */
bal set_cwf_tse
nop
/*
*set clock ratio by setting 1 to 'set'
* and 0 to ClkRatio, (TP0 only)
*/
li a0, 0
bal set_clock_ratio
nop
/* set other configuration to defaults */
bal set_other_config
nop
move ra, t8
jr ra
nop
.set reorder
.end core_init
/*
* Function: clear_jump_target_buffer
* Arguments: None
* Returns: None
* Description:
* Trashes: t0, t1, t2
*
*/
#define RESET_CALL_RETURN_STACK_THIS_THREAD (0x06<<16)
#define RESET_JUMP_TARGET_BUFFER_THIS_THREAD (0x04<<16)
#define JTB_CS_CNTL_MASK (0xFF<<16)
.globl clear_jump_target_buffer
.ent clear_jump_target_buffer
.set noreorder
clear_jump_target_buffer:
mfc0 t0, $22, 2
nop
nop
li t1, ~JTB_CS_CNTL_MASK
and t0, t0, t1
li t2, RESET_CALL_RETURN_STACK_THIS_THREAD
or t0, t0, t2
mtc0 t0, $22, 2
nop
nop
and t0, t0, t1
li t2, RESET_JUMP_TARGET_BUFFER_THIS_THREAD
or t0, t0, t2
mtc0 t0, $22, 2
nop
nop
jr ra
nop
.end clear_jump_target_buffer
.set reorder
/*
* Function: bmips_cache_init
* Arguments: None
* Returns: None
* Description: Enable I and D caches, and initialize I and D-caches
* Trashes: v0, v1, t0, t1, t2, t5, t7, t8
*
*/
.globl bmips_5xxx_init
.ent bmips_5xxx_init
.set noreorder
bmips_5xxx_init:
/* save return address and A0 */
move t7, ra
move t5, a0
jal l1_init
nop
jal core_init
nop
jal clear_jump_target_buffer
nop
mtc0 zero, CP0_CAUSE
move a0, t5
jr t7
nop
.end bmips_5xxx_init
.set reorder
#endif

View file

@@ -88,12 +88,13 @@ NESTED(bmips_reset_nmi_vec, PT_SIZE, sp)
li k1, (1 << 19)
mfc0 k0, CP0_STATUS
and k0, k1
beqz k0, bmips_smp_entry
beqz k0, soft_reset
#if defined(CONFIG_CPU_BMIPS5000)
mfc0 k0, CP0_PRID
li k1, PRID_IMP_BMIPS5000
andi k0, 0xff00
/* mask with PRID_IMP_BMIPS5000 to cover both variants */
andi k0, PRID_IMP_BMIPS5000
bne k0, k1, 1f
/* if we're not on core 0, this must be the SMP boot signal */
@@ -125,13 +126,48 @@ NESTED(bmips_reset_nmi_vec, PT_SIZE, sp)
.set arch=r4000
eret
#ifdef CONFIG_SMP
soft_reset:
#if defined(CONFIG_CPU_BMIPS5000)
mfc0 k0, CP0_PRID
andi k0, 0xff00
li k1, PRID_IMP_BMIPS5200
bne k0, k1, bmips_smp_entry
/* if running on TP 1, jump to bmips_smp_entry */
mfc0 k0, $22
li k1, (1 << 24)
and k1, k0
bnez k1, bmips_smp_entry
nop
/*
* running on TP0, can not be core 0 (the boot core).
* Check for soft reset. Indicates a warm boot
*/
mfc0 k0, $12
li k1, (1 << 20)
and k0, k1
beqz k0, bmips_smp_entry
/*
* Warm boot.
* Cache init is only done on TP0
*/
la k0, bmips_5xxx_init
jalr k0
nop
b bmips_smp_entry
nop
#endif
/***********************************************************************
* CPU1 reset vector (used for the initial boot only)
* This is still part of bmips_reset_nmi_vec().
***********************************************************************/
#ifdef CONFIG_SMP
bmips_smp_entry:
/* set up CP0 STATUS; enable FPU */
@@ -166,10 +202,12 @@ bmips_smp_entry:
2:
#endif /* CONFIG_CPU_BMIPS4350 || CONFIG_CPU_BMIPS4380 */
#if defined(CONFIG_CPU_BMIPS5000)
/* set exception vector base */
/* mask with PRID_IMP_BMIPS5000 to cover both variants */
li k1, PRID_IMP_BMIPS5000
andi k0, PRID_IMP_BMIPS5000
bne k0, k1, 3f
/* set exception vector base */
la k0, ebase
lw k0, 0(k0)
mtc0 k0, $15, 1
@@ -263,6 +301,8 @@ LEAF(bmips_enable_xks01)
#endif /* CONFIG_CPU_BMIPS4380 */
#if defined(CONFIG_CPU_BMIPS5000)
li t1, PRID_IMP_BMIPS5000
/* mask with PRID_IMP_BMIPS5000 to cover both variants */
andi t2, PRID_IMP_BMIPS5000
bne t2, t1, 2f
mfc0 t0, $22, 5

View file

@@ -688,21 +688,9 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
}
lose_fpu(1); /* Save FPU state for the emulator. */
reg = insn.i_format.rt;
bit = 0;
switch (insn.i_format.rs) {
case bc1eqz_op:
/* Test bit 0 */
if (get_fpr32(&current->thread.fpu.fpr[reg], 0)
& 0x1)
bit = 1;
break;
case bc1nez_op:
/* Test bit 0 */
if (!(get_fpr32(&current->thread.fpu.fpr[reg], 0)
& 0x1))
bit = 1;
break;
}
bit = get_fpr32(&current->thread.fpu.fpr[reg], 0) & 0x1;
if (insn.i_format.rs == bc1eqz_op)
bit = !bit;
own_fpu(1);
if (bit)
epc = epc + 4 +

View file

@@ -28,6 +28,83 @@ static int mips_next_event(unsigned long delta,
return res;
}
/**
* calculate_min_delta() - Calculate a good minimum delta for mips_next_event().
*
* Running under virtualisation can introduce overhead into mips_next_event() in
* the form of hypervisor emulation of CP0_Count/CP0_Compare registers,
* potentially with an unnatural frequency, which makes a fixed min_delta_ns
* value inappropriate as it may be too small.
*
* It can also introduce occasional latency from the guest being descheduled.
*
* This function calculates a good minimum delta based roughly on the 75th
* percentile of the time taken to do the mips_next_event() sequence, in order
* to handle potentially higher overhead while also eliminating outliers due to
* unpredictable hypervisor latency (which can be handled by retries).
*
* Return: An appropriate minimum delta for the clock event device.
*/
static unsigned int calculate_min_delta(void)
{
unsigned int cnt, i, j, k, l;
unsigned int buf1[4], buf2[3];
unsigned int min_delta;
/*
* Calculate the median of 5 75th percentiles of 5 samples of how long
* it takes to set CP0_Compare = CP0_Count + delta.
*/
for (i = 0; i < 5; ++i) {
for (j = 0; j < 5; ++j) {
/*
* This is like the code in mips_next_event(), and
* directly measures the borderline "safe" delta.
*/
cnt = read_c0_count();
write_c0_compare(cnt);
cnt = read_c0_count() - cnt;
/* Sorted insert into buf1 */
for (k = 0; k < j; ++k) {
if (cnt < buf1[k]) {
l = min_t(unsigned int,
j, ARRAY_SIZE(buf1) - 1);
for (; l > k; --l)
buf1[l] = buf1[l - 1];
break;
}
}
if (k < ARRAY_SIZE(buf1))
buf1[k] = cnt;
}
/* Sorted insert of 75th percentile into buf2 */
for (k = 0; k < i; ++k) {
if (buf1[ARRAY_SIZE(buf1) - 1] < buf2[k]) {
l = min_t(unsigned int,
i, ARRAY_SIZE(buf2) - 1);
for (; l > k; --l)
buf2[l] = buf2[l - 1];
break;
}
}
if (k < ARRAY_SIZE(buf2))
buf2[k] = buf1[ARRAY_SIZE(buf1) - 1];
}
/* Use 2 * median of 75th percentiles */
min_delta = buf2[ARRAY_SIZE(buf2) - 1] * 2;
/* Don't go too low */
if (min_delta < 0x300)
min_delta = 0x300;
pr_debug("%s: median 75th percentile=%#x, min_delta=%#x\n",
__func__, buf2[ARRAY_SIZE(buf2) - 1], min_delta);
return min_delta;
}
DEFINE_PER_CPU(struct clock_event_device, mips_clockevent_device);
int cp0_timer_irq_installed;
@@ -177,7 +254,7 @@ int r4k_clockevent_init(void)
{
unsigned int cpu = smp_processor_id();
struct clock_event_device *cd;
unsigned int irq;
unsigned int irq, min_delta;
if (!cpu_has_counter || !mips_hpt_frequency)
return -ENXIO;
@@ -203,7 +280,8 @@ int r4k_clockevent_init(void)
/* Calculate the min / max delta */
cd->max_delta_ns = clockevent_delta2ns(0x7fffffff, cd);
cd->min_delta_ns = clockevent_delta2ns(0x300, cd);
min_delta = calculate_min_delta();
cd->min_delta_ns = clockevent_delta2ns(min_delta, cd);
cd->rating = 300;
cd->irq = irq;

View file

@@ -18,9 +18,12 @@
#include <asm/mipsmtregs.h>
#include <asm/pm.h>
#define GCR_CPC_BASE_OFS 0x0088
#define GCR_CL_COHERENCE_OFS 0x2008
#define GCR_CL_ID_OFS 0x2028
#define CPC_CL_VC_RUN_OFS 0x2028
.extern mips_cm_base
.set noreorder
@@ -60,6 +63,37 @@
nop
.endm
/*
* Set dest to non-zero if the core supports MIPSr6 multithreading
* (ie. VPs), else zero. If MIPSr6 multithreading is not supported then
* branch to nomt.
*/
.macro has_vp dest, nomt
mfc0 \dest, CP0_CONFIG, 1
bgez \dest, \nomt
mfc0 \dest, CP0_CONFIG, 2
bgez \dest, \nomt
mfc0 \dest, CP0_CONFIG, 3
bgez \dest, \nomt
mfc0 \dest, CP0_CONFIG, 4
bgez \dest, \nomt
mfc0 \dest, CP0_CONFIG, 5
andi \dest, \dest, MIPS_CONF5_VP
beqz \dest, \nomt
nop
.endm
/* Calculate an uncached address for the CM GCRs */
.macro cmgcrb dest
.set push
.set noat
MFC0 $1, CP0_CMGCRBASE
PTR_SLL $1, $1, 4
PTR_LI \dest, UNCAC_BASE
PTR_ADDU \dest, \dest, $1
.set pop
.endm
.section .text.cps-vec
.balign 0x1000
@@ -90,120 +124,64 @@ not_nmi:
li t0, ST0_CU1 | ST0_CU0 | ST0_BEV | STATUS_BITDEPS
mtc0 t0, CP0_STATUS
/*
* Clear the bits used to index the caches. Note that the architecture
* dictates that writing to any of TagLo or TagHi selects 0 or 2 should
* be valid for all MIPS32 CPUs, even those for which said writes are
* unnecessary.
*/
mtc0 zero, CP0_TAGLO, 0
mtc0 zero, CP0_TAGHI, 0
mtc0 zero, CP0_TAGLO, 2
mtc0 zero, CP0_TAGHI, 2
ehb
/* Primary cache configuration is indicated by Config1 */
mfc0 v0, CP0_CONFIG, 1
/* Detect I-cache line size */
_EXT t0, v0, MIPS_CONF1_IL_SHF, MIPS_CONF1_IL_SZ
beqz t0, icache_done
li t1, 2
sllv t0, t1, t0
/* Detect I-cache size */
_EXT t1, v0, MIPS_CONF1_IS_SHF, MIPS_CONF1_IS_SZ
xori t2, t1, 0x7
beqz t2, 1f
li t3, 32
addiu t1, t1, 1
sllv t1, t3, t1
1: /* At this point t1 == I-cache sets per way */
_EXT t2, v0, MIPS_CONF1_IA_SHF, MIPS_CONF1_IA_SZ
addiu t2, t2, 1
mul t1, t1, t0
mul t1, t1, t2
li a0, CKSEG0
PTR_ADD a1, a0, t1
1: cache Index_Store_Tag_I, 0(a0)
PTR_ADD a0, a0, t0
bne a0, a1, 1b
/* Skip cache & coherence setup if we're already coherent */
cmgcrb v1
lw s7, GCR_CL_COHERENCE_OFS(v1)
bnez s7, 1f
nop
icache_done:
/* Detect D-cache line size */
_EXT t0, v0, MIPS_CONF1_DL_SHF, MIPS_CONF1_DL_SZ
beqz t0, dcache_done
li t1, 2
sllv t0, t1, t0
/* Detect D-cache size */
_EXT t1, v0, MIPS_CONF1_DS_SHF, MIPS_CONF1_DS_SZ
xori t2, t1, 0x7
beqz t2, 1f
li t3, 32
addiu t1, t1, 1
sllv t1, t3, t1
1: /* At this point t1 == D-cache sets per way */
_EXT t2, v0, MIPS_CONF1_DA_SHF, MIPS_CONF1_DA_SZ
addiu t2, t2, 1
mul t1, t1, t0
mul t1, t1, t2
li a0, CKSEG0
PTR_ADDU a1, a0, t1
PTR_SUBU a1, a1, t0
1: cache Index_Store_Tag_D, 0(a0)
bne a0, a1, 1b
PTR_ADD a0, a0, t0
dcache_done:
/* Set Kseg0 CCA to that in s0 */
mfc0 t0, CP0_CONFIG
ori t0, 0x7
xori t0, 0x7
or t0, t0, s0
mtc0 t0, CP0_CONFIG
ehb
/* Calculate an uncached address for the CM GCRs */
MFC0 v1, CP0_CMGCRBASE
PTR_SLL v1, v1, 4
PTR_LI t0, UNCAC_BASE
PTR_ADDU v1, v1, t0
/* Initialize the L1 caches */
jal mips_cps_cache_init
nop
/* Enter the coherent domain */
li t0, 0xff
sw t0, GCR_CL_COHERENCE_OFS(v1)
ehb
/* Set Kseg0 CCA to that in s0 */
1: mfc0 t0, CP0_CONFIG
ori t0, 0x7
xori t0, 0x7
or t0, t0, s0
mtc0 t0, CP0_CONFIG
ehb
/* Jump to kseg0 */
PTR_LA t0, 1f
jr t0
nop
/*
* We're up, cached & coherent. Perform any further required core-level
* initialisation.
* We're up, cached & coherent. Perform any EVA initialization necessary
* before we access memory.
*/
1: jal mips_cps_core_init
1: eva_init
/* Retrieve boot configuration pointers */
jal mips_cps_get_bootcfg
nop
/* Do any EVA initialization if necessary */
eva_init
/* Skip core-level init if we started up coherent */
bnez s7, 1f
nop
/* Perform any further required core-level initialisation */
jal mips_cps_core_init
nop
/*
* Boot any other VPEs within this core that should be online, and
* deactivate this VPE if it should be offline.
*/
move a1, t9
jal mips_cps_boot_vpes
nop
move a0, v0
/* Off we go! */
PTR_L t1, VPEBOOTCFG_PC(v0)
PTR_L gp, VPEBOOTCFG_GP(v0)
PTR_L sp, VPEBOOTCFG_SP(v0)
1: PTR_L t1, VPEBOOTCFG_PC(v1)
PTR_L gp, VPEBOOTCFG_GP(v1)
PTR_L sp, VPEBOOTCFG_SP(v1)
jr t1
nop
END(mips_cps_core_entry)
@@ -245,7 +223,6 @@ LEAF(excep_intex)
.org 0x480
LEAF(excep_ejtag)
DUMP_EXCEP("EJTAG")
PTR_LA k0, ejtag_debug_handler
jr k0
nop
@@ -323,22 +300,35 @@ LEAF(mips_cps_core_init)
nop
END(mips_cps_core_init)
LEAF(mips_cps_boot_vpes)
/* Retrieve CM base address */
PTR_LA t0, mips_cm_base
PTR_L t0, 0(t0)
/**
* mips_cps_get_bootcfg() - retrieve boot configuration pointers
*
* Returns: pointer to struct core_boot_config in v0, pointer to
* struct vpe_boot_config in v1, VPE ID in t9
*/
LEAF(mips_cps_get_bootcfg)
/* Calculate a pointer to this cores struct core_boot_config */
cmgcrb t0
lw t0, GCR_CL_ID_OFS(t0)
li t1, COREBOOTCFG_SIZE
mul t0, t0, t1
PTR_LA t1, mips_cps_core_bootcfg
PTR_L t1, 0(t1)
PTR_ADDU t0, t0, t1
PTR_ADDU v0, t0, t1
/* Calculate this VPEs ID. If the core doesn't support MT use 0 */
li t9, 0
#ifdef CONFIG_MIPS_MT_SMP
#if defined(CONFIG_CPU_MIPSR6)
has_vp ta2, 1f
/*
* Assume non-contiguous numbering. Perhaps some day we'll need
* to handle contiguous VP numbering, but no such systems yet
* exist.
*/
mfc0 t9, $3, 1
andi t9, t9, 0xff
#elif defined(CONFIG_MIPS_MT_SMP)
has_mt ta2, 1f
/* Find the number of VPEs present in the core */
@@ -362,22 +352,43 @@ LEAF(mips_cps_boot_vpes)
1: /* Calculate a pointer to this VPEs struct vpe_boot_config */
li t1, VPEBOOTCFG_SIZE
mul v0, t9, t1
PTR_L ta3, COREBOOTCFG_VPECONFIG(t0)
PTR_ADDU v0, v0, ta3
mul v1, t9, t1
PTR_L ta3, COREBOOTCFG_VPECONFIG(v0)
PTR_ADDU v1, v1, ta3
#ifdef CONFIG_MIPS_MT_SMP
/* If the core doesn't support MT then return */
bnez ta2, 1f
nop
jr ra
nop
END(mips_cps_get_bootcfg)
LEAF(mips_cps_boot_vpes)
PTR_L ta2, COREBOOTCFG_VPEMASK(a0)
PTR_L ta3, COREBOOTCFG_VPECONFIG(a0)
#if defined(CONFIG_CPU_MIPSR6)
has_vp t0, 5f
/* Find base address of CPC */
cmgcrb t3
PTR_L t1, GCR_CPC_BASE_OFS(t3)
PTR_LI t2, ~0x7fff
and t1, t1, t2
PTR_LI t2, UNCAC_BASE
PTR_ADD t1, t1, t2
/* Set VC_RUN to the VPE mask */
PTR_S ta2, CPC_CL_VC_RUN_OFS(t1)
ehb
#elif defined(CONFIG_MIPS_MT)
.set push
.set mt
1: /* Enter VPE configuration state */
/* If the core doesn't support MT then return */
has_mt t0, 5f
/* Enter VPE configuration state */
dvpe
PTR_LA t1, 1f
jr.hb t1
@@ -388,7 +399,6 @@ LEAF(mips_cps_boot_vpes)
ehb
/* Loop through each VPE */
PTR_L ta2, COREBOOTCFG_VPEMASK(t0)
move t8, ta2
li ta1, 0
@@ -465,7 +475,7 @@ LEAF(mips_cps_boot_vpes)
/* Check whether this VPE is meant to be running */
li t0, 1
sll t0, t0, t9
sll t0, t0, a1
and t0, t0, t8
bnez t0, 2f
nop
@@ -482,10 +492,84 @@ LEAF(mips_cps_boot_vpes)
#endif /* CONFIG_MIPS_MT_SMP */
/* Return */
jr ra
5: jr ra
nop
END(mips_cps_boot_vpes)
LEAF(mips_cps_cache_init)
/*
* Clear the bits used to index the caches. Note that the architecture
* dictates that writing to any of TagLo or TagHi selects 0 or 2 should
* be valid for all MIPS32 CPUs, even those for which said writes are
* unnecessary.
*/
mtc0 zero, CP0_TAGLO, 0
mtc0 zero, CP0_TAGHI, 0
mtc0 zero, CP0_TAGLO, 2
mtc0 zero, CP0_TAGHI, 2
ehb
/* Primary cache configuration is indicated by Config1 */
mfc0 v0, CP0_CONFIG, 1
/* Detect I-cache line size */
_EXT t0, v0, MIPS_CONF1_IL_SHF, MIPS_CONF1_IL_SZ
beqz t0, icache_done
li t1, 2
sllv t0, t1, t0
/* Detect I-cache size */
_EXT t1, v0, MIPS_CONF1_IS_SHF, MIPS_CONF1_IS_SZ
xori t2, t1, 0x7
beqz t2, 1f
li t3, 32
addiu t1, t1, 1
sllv t1, t3, t1
1: /* At this point t1 == I-cache sets per way */
_EXT t2, v0, MIPS_CONF1_IA_SHF, MIPS_CONF1_IA_SZ
addiu t2, t2, 1
mul t1, t1, t0
mul t1, t1, t2
li a0, CKSEG0
PTR_ADD a1, a0, t1
1: cache Index_Store_Tag_I, 0(a0)
PTR_ADD a0, a0, t0
bne a0, a1, 1b
nop
icache_done:
/* Detect D-cache line size */
_EXT t0, v0, MIPS_CONF1_DL_SHF, MIPS_CONF1_DL_SZ
beqz t0, dcache_done
li t1, 2
sllv t0, t1, t0
/* Detect D-cache size */
_EXT t1, v0, MIPS_CONF1_DS_SHF, MIPS_CONF1_DS_SZ
xori t2, t1, 0x7
beqz t2, 1f
li t3, 32
addiu t1, t1, 1
sllv t1, t3, t1
1: /* At this point t1 == D-cache sets per way */
_EXT t2, v0, MIPS_CONF1_DA_SHF, MIPS_CONF1_DA_SZ
addiu t2, t2, 1
mul t1, t1, t0
mul t1, t1, t2
li a0, CKSEG0
PTR_ADDU a1, a0, t1
PTR_SUBU a1, a1, t0
1: cache Index_Store_Tag_D, 0(a0)
bne a0, a1, 1b
PTR_ADD a0, a0, t0
dcache_done:
jr ra
nop
END(mips_cps_cache_init)
#if defined(CONFIG_MIPS_CPS_PM) && defined(CONFIG_CPU_PM)
/* Calculate a pointer to this CPUs struct mips_static_suspend_state */

View file

@@ -539,6 +539,7 @@ static int set_ftlb_enable(struct cpuinfo_mips *c, int enable)
switch (c->cputype) {
case CPU_PROAPTIV:
case CPU_P5600:
case CPU_P6600:
/* proAptiv & related cores use Config6 to enable the FTLB */
config = read_c0_config6();
/* Clear the old probability value */
@@ -561,6 +562,19 @@ static int set_ftlb_enable(struct cpuinfo_mips *c, int enable)
write_c0_config7(config | (calculate_ftlb_probability(c)
<< MIPS_CONF7_FTLBP_SHIFT));
break;
case CPU_LOONGSON3:
/* Flush ITLB, DTLB, VTLB and FTLB */
write_c0_diag(LOONGSON_DIAG_ITLB | LOONGSON_DIAG_DTLB |
LOONGSON_DIAG_VTLB | LOONGSON_DIAG_FTLB);
/* Loongson-3 cores use Config6 to enable the FTLB */
config = read_c0_config6();
if (enable)
/* Enable FTLB */
write_c0_config6(config & ~MIPS_CONF6_FTLBDIS);
else
/* Disable FTLB */
write_c0_config6(config | MIPS_CONF6_FTLBDIS);
break;
default:
return 1;
}
@@ -634,6 +648,8 @@ static inline unsigned int decode_config1(struct cpuinfo_mips *c)
if (config1 & MIPS_CONF1_MD)
c->ases |= MIPS_ASE_MDMX;
if (config1 & MIPS_CONF1_PC)
c->options |= MIPS_CPU_PERF;
if (config1 & MIPS_CONF1_WR)
c->options |= MIPS_CPU_WATCH;
if (config1 & MIPS_CONF1_CA)
@@ -673,18 +689,25 @@ static inline unsigned int decode_config3(struct cpuinfo_mips *c)
if (config3 & MIPS_CONF3_SM) {
c->ases |= MIPS_ASE_SMARTMIPS;
c->options |= MIPS_CPU_RIXI;
c->options |= MIPS_CPU_RIXI | MIPS_CPU_CTXTC;
}
if (config3 & MIPS_CONF3_RXI)
c->options |= MIPS_CPU_RIXI;
if (config3 & MIPS_CONF3_CTXTC)
c->options |= MIPS_CPU_CTXTC;
if (config3 & MIPS_CONF3_DSP)
c->ases |= MIPS_ASE_DSP;
if (config3 & MIPS_CONF3_DSP2P)
if (config3 & MIPS_CONF3_DSP2P) {
c->ases |= MIPS_ASE_DSP2P;
if (cpu_has_mips_r6)
c->ases |= MIPS_ASE_DSP3;
}
if (config3 & MIPS_CONF3_VINT)
c->options |= MIPS_CPU_VINT;
if (config3 & MIPS_CONF3_VEIC)
c->options |= MIPS_CPU_VEIC;
if (config3 & MIPS_CONF3_LPA)
c->options |= MIPS_CPU_LPA;
if (config3 & MIPS_CONF3_MT)
c->ases |= MIPS_ASE_MIPSMT;
if (config3 & MIPS_CONF3_ULRI)
@@ -695,6 +718,10 @@ static inline unsigned int decode_config3(struct cpuinfo_mips *c)
c->ases |= MIPS_ASE_VZ;
if (config3 & MIPS_CONF3_SC)
c->options |= MIPS_CPU_SEGMENTS;
if (config3 & MIPS_CONF3_BI)
c->options |= MIPS_CPU_BADINSTR;
if (config3 & MIPS_CONF3_BP)
c->options |= MIPS_CPU_BADINSTRP;
if (config3 & MIPS_CONF3_MSA)
c->ases |= MIPS_ASE_MSA;
if (config3 & MIPS_CONF3_PW) {
@@ -715,6 +742,7 @@ static inline unsigned int decode_config4(struct cpuinfo_mips *c)
unsigned int newcf4;
unsigned int mmuextdef;
unsigned int ftlb_page = MIPS_CONF4_FTLBPAGESIZE;
unsigned long asid_mask;
config4 = read_c0_config4();
@@ -773,7 +801,20 @@ static inline unsigned int decode_config4(struct cpuinfo_mips *c)
}
}
c->kscratch_mask = (config4 >> 16) & 0xff;
c->kscratch_mask = (config4 & MIPS_CONF4_KSCREXIST)
>> MIPS_CONF4_KSCREXIST_SHIFT;
asid_mask = MIPS_ENTRYHI_ASID;
if (config4 & MIPS_CONF4_AE)
asid_mask |= MIPS_ENTRYHI_ASIDX;
set_cpu_asid_mask(c, asid_mask);
/*
* Warn if the computed ASID mask doesn't match the mask the kernel
* is built for. This may indicate either a serious problem or an
* easy optimisation opportunity, but either way should be addressed.
*/
WARN_ON(asid_mask != cpu_asid_mask(c));
return config4 & MIPS_CONF_M;
}
@@ -796,6 +837,8 @@ static inline unsigned int decode_config5(struct cpuinfo_mips *c)
if (config5 & MIPS_CONF5_MVH)
c->options |= MIPS_CPU_XPA;
#endif
if (cpu_has_mips_r6 && (config5 & MIPS_CONF5_VP))
c->options |= MIPS_CPU_VP;
return config5 & MIPS_CONF_M;
}
@@ -826,17 +869,43 @@ static void decode_configs(struct cpuinfo_mips *c)
if (ok)
ok = decode_config5(c);
mips_probe_watch_registers(c);
/* Probe the EBase.WG bit */
if (cpu_has_mips_r2_r6) {
u64 ebase;
unsigned int status;
if (cpu_has_rixi) {
/* Enable the RIXI exceptions */
set_c0_pagegrain(PG_IEC);
back_to_back_c0_hazard();
/* Verify the IEC bit is set */
if (read_c0_pagegrain() & PG_IEC)
c->options |= MIPS_CPU_RIXIEX;
/* {read,write}_c0_ebase_64() may be UNDEFINED prior to r6 */
ebase = cpu_has_mips64r6 ? read_c0_ebase_64()
: (s32)read_c0_ebase();
if (ebase & MIPS_EBASE_WG) {
/* WG bit already set, we can avoid the clumsy probe */
c->options |= MIPS_CPU_EBASE_WG;
} else {
/* It's UNDEFINED to change EBase while BEV=0 */
status = read_c0_status();
write_c0_status(status | ST0_BEV);
irq_enable_hazard();
/*
* On pre-r6 cores, this may well clobber the upper bits
* of EBase. This is hard to avoid without potentially
* hitting UNDEFINED dm*c0 behaviour if EBase is 32-bit.
*/
if (cpu_has_mips64r6)
write_c0_ebase_64(ebase | MIPS_EBASE_WG);
else
write_c0_ebase(ebase | MIPS_EBASE_WG);
back_to_back_c0_hazard();
/* Restore BEV */
write_c0_status(status);
if (read_c0_ebase() & MIPS_EBASE_WG) {
c->options |= MIPS_CPU_EBASE_WG;
write_c0_ebase(ebase);
}
}
}
mips_probe_watch_registers(c);
#ifndef CONFIG_MIPS_CPS
if (cpu_has_mips_r2_r6) {
c->core = get_ebase_cpunum();
@@ -846,6 +915,235 @@ static void decode_configs(struct cpuinfo_mips *c)
#endif
}
/*
* Probe for certain guest capabilities by writing config bits and reading back.
* Finally write back the original value.
*/
#define probe_gc0_config(name, maxconf, bits) \
do { \
unsigned int tmp; \
tmp = read_gc0_##name(); \
write_gc0_##name(tmp | (bits)); \
back_to_back_c0_hazard(); \
maxconf = read_gc0_##name(); \
write_gc0_##name(tmp); \
} while (0)
/*
* Probe for dynamic guest capabilities by changing certain config bits and
* reading back to see if they change. Finally write back the original value.
*/
#define probe_gc0_config_dyn(name, maxconf, dynconf, bits) \
do { \
maxconf = read_gc0_##name(); \
write_gc0_##name(maxconf ^ (bits)); \
back_to_back_c0_hazard(); \
dynconf = maxconf ^ read_gc0_##name(); \
write_gc0_##name(maxconf); \
maxconf |= dynconf; \
} while (0)
static inline unsigned int decode_guest_config0(struct cpuinfo_mips *c)
{
unsigned int config0;
probe_gc0_config(config, config0, MIPS_CONF_M);
if (config0 & MIPS_CONF_M)
c->guest.conf |= BIT(1);
return config0 & MIPS_CONF_M;
}
static inline unsigned int decode_guest_config1(struct cpuinfo_mips *c)
{
unsigned int config1, config1_dyn;
probe_gc0_config_dyn(config1, config1, config1_dyn,
MIPS_CONF_M | MIPS_CONF1_PC | MIPS_CONF1_WR |
MIPS_CONF1_FP);
if (config1 & MIPS_CONF1_FP)
c->guest.options |= MIPS_CPU_FPU;
if (config1_dyn & MIPS_CONF1_FP)
c->guest.options_dyn |= MIPS_CPU_FPU;
if (config1 & MIPS_CONF1_WR)
c->guest.options |= MIPS_CPU_WATCH;
if (config1_dyn & MIPS_CONF1_WR)
c->guest.options_dyn |= MIPS_CPU_WATCH;
if (config1 & MIPS_CONF1_PC)
c->guest.options |= MIPS_CPU_PERF;
if (config1_dyn & MIPS_CONF1_PC)
c->guest.options_dyn |= MIPS_CPU_PERF;
if (config1 & MIPS_CONF_M)
c->guest.conf |= BIT(2);
return config1 & MIPS_CONF_M;
}
static inline unsigned int decode_guest_config2(struct cpuinfo_mips *c)
{
unsigned int config2;
probe_gc0_config(config2, config2, MIPS_CONF_M);
if (config2 & MIPS_CONF_M)
c->guest.conf |= BIT(3);
return config2 & MIPS_CONF_M;
}
static inline unsigned int decode_guest_config3(struct cpuinfo_mips *c)
{
unsigned int config3, config3_dyn;
probe_gc0_config_dyn(config3, config3, config3_dyn,
MIPS_CONF_M | MIPS_CONF3_MSA | MIPS_CONF3_CTXTC);
if (config3 & MIPS_CONF3_CTXTC)
c->guest.options |= MIPS_CPU_CTXTC;
if (config3_dyn & MIPS_CONF3_CTXTC)
c->guest.options_dyn |= MIPS_CPU_CTXTC;
if (config3 & MIPS_CONF3_PW)
c->guest.options |= MIPS_CPU_HTW;
if (config3 & MIPS_CONF3_SC)
c->guest.options |= MIPS_CPU_SEGMENTS;
if (config3 & MIPS_CONF3_BI)
c->guest.options |= MIPS_CPU_BADINSTR;
if (config3 & MIPS_CONF3_BP)
c->guest.options |= MIPS_CPU_BADINSTRP;
if (config3 & MIPS_CONF3_MSA)
c->guest.ases |= MIPS_ASE_MSA;
if (config3_dyn & MIPS_CONF3_MSA)
c->guest.ases_dyn |= MIPS_ASE_MSA;
if (config3 & MIPS_CONF_M)
c->guest.conf |= BIT(4);
return config3 & MIPS_CONF_M;
}
static inline unsigned int decode_guest_config4(struct cpuinfo_mips *c)
{
unsigned int config4;
probe_gc0_config(config4, config4,
MIPS_CONF_M | MIPS_CONF4_KSCREXIST);
c->guest.kscratch_mask = (config4 & MIPS_CONF4_KSCREXIST)
>> MIPS_CONF4_KSCREXIST_SHIFT;
if (config4 & MIPS_CONF_M)
c->guest.conf |= BIT(5);
return config4 & MIPS_CONF_M;
}
static inline unsigned int decode_guest_config5(struct cpuinfo_mips *c)
{
unsigned int config5, config5_dyn;
probe_gc0_config_dyn(config5, config5, config5_dyn,
MIPS_CONF_M | MIPS_CONF5_MRP);
if (config5 & MIPS_CONF5_MRP)
c->guest.options |= MIPS_CPU_MAAR;
if (config5_dyn & MIPS_CONF5_MRP)
c->guest.options_dyn |= MIPS_CPU_MAAR;
if (config5 & MIPS_CONF5_LLB)
c->guest.options |= MIPS_CPU_RW_LLB;
if (config5 & MIPS_CONF_M)
c->guest.conf |= BIT(6);
return config5 & MIPS_CONF_M;
}
static inline void decode_guest_configs(struct cpuinfo_mips *c)
{
unsigned int ok;
ok = decode_guest_config0(c);
if (ok)
ok = decode_guest_config1(c);
if (ok)
ok = decode_guest_config2(c);
if (ok)
ok = decode_guest_config3(c);
if (ok)
ok = decode_guest_config4(c);
if (ok)
decode_guest_config5(c);
}
static inline void cpu_probe_guestctl0(struct cpuinfo_mips *c)
{
unsigned int guestctl0, temp;
guestctl0 = read_c0_guestctl0();
if (guestctl0 & MIPS_GCTL0_G0E)
c->options |= MIPS_CPU_GUESTCTL0EXT;
if (guestctl0 & MIPS_GCTL0_G1)
c->options |= MIPS_CPU_GUESTCTL1;
if (guestctl0 & MIPS_GCTL0_G2)
c->options |= MIPS_CPU_GUESTCTL2;
if (!(guestctl0 & MIPS_GCTL0_RAD)) {
c->options |= MIPS_CPU_GUESTID;
/*
* Probe for Direct Root to Guest (DRG). Set GuestCtl1.RID = 0
* first, otherwise all data accesses will be fully virtualised
* as if they were performed by guest mode.
*/
write_c0_guestctl1(0);
tlbw_use_hazard();
write_c0_guestctl0(guestctl0 | MIPS_GCTL0_DRG);
back_to_back_c0_hazard();
temp = read_c0_guestctl0();
if (temp & MIPS_GCTL0_DRG) {
write_c0_guestctl0(guestctl0);
c->options |= MIPS_CPU_DRG;
}
}
}
static inline void cpu_probe_guestctl1(struct cpuinfo_mips *c)
{
if (cpu_has_guestid) {
/* determine the number of bits of GuestID available */
write_c0_guestctl1(MIPS_GCTL1_ID);
back_to_back_c0_hazard();
c->guestid_mask = (read_c0_guestctl1() & MIPS_GCTL1_ID)
>> MIPS_GCTL1_ID_SHIFT;
write_c0_guestctl1(0);
}
}
static inline void cpu_probe_gtoffset(struct cpuinfo_mips *c)
{
/* determine the number of bits of GTOffset available */
write_c0_gtoffset(0xffffffff);
back_to_back_c0_hazard();
c->gtoffset_mask = read_c0_gtoffset();
write_c0_gtoffset(0);
}
static inline void cpu_probe_vz(struct cpuinfo_mips *c)
{
cpu_probe_guestctl0(c);
if (cpu_has_guestctl1)
cpu_probe_guestctl1(c);
cpu_probe_gtoffset(c);
decode_guest_configs(c);
}
#define R4K_OPTS (MIPS_CPU_TLB | MIPS_CPU_4KEX | MIPS_CPU_4K_CACHE \
| MIPS_CPU_COUNTER)
@@ -1172,7 +1470,7 @@ static inline void cpu_probe_legacy(struct cpuinfo_mips *c, unsigned int cpu)
set_isa(c, MIPS_CPU_ISA_III);
c->fpu_msk31 |= FPU_CSR_CONDX;
break;
case PRID_REV_LOONGSON3A:
case PRID_REV_LOONGSON3A_R1:
c->cputype = CPU_LOONGSON3;
__cpu_name[cpu] = "ICT Loongson-3";
set_elf_platform(cpu, "loongson3a");
@@ -1314,6 +1612,10 @@ static inline void cpu_probe_mips(struct cpuinfo_mips *c, unsigned int cpu)
c->cputype = CPU_P5600;
__cpu_name[cpu] = "MIPS P5600";
break;
case PRID_IMP_P6600:
c->cputype = CPU_P6600;
__cpu_name[cpu] = "MIPS P6600";
break;
case PRID_IMP_I6400:
c->cputype = CPU_I6400;
__cpu_name[cpu] = "MIPS I6400";
@@ -1322,6 +1624,10 @@ static inline void cpu_probe_mips(struct cpuinfo_mips *c, unsigned int cpu)
c->cputype = CPU_M5150;
__cpu_name[cpu] = "MIPS M5150";
break;
case PRID_IMP_M6250:
c->cputype = CPU_M6250;
__cpu_name[cpu] = "MIPS M6250";
break;
}
decode_configs(c);
@@ -1435,6 +1741,7 @@ static inline void cpu_probe_broadcom(struct cpuinfo_mips *c, unsigned int cpu)
c->cputype = CPU_BMIPS4380;
__cpu_name[cpu] = "Broadcom BMIPS4380";
set_elf_platform(cpu, "bmips4380");
c->options |= MIPS_CPU_RIXI;
} else {
c->cputype = CPU_BMIPS4350;
__cpu_name[cpu] = "Broadcom BMIPS4350";
@@ -1445,9 +1752,12 @@ static inline void cpu_probe_broadcom(struct cpuinfo_mips *c, unsigned int cpu)
case PRID_IMP_BMIPS5000:
case PRID_IMP_BMIPS5200:
c->cputype = CPU_BMIPS5000;
__cpu_name[cpu] = "Broadcom BMIPS5000";
if ((c->processor_id & PRID_IMP_MASK) == PRID_IMP_BMIPS5200)
__cpu_name[cpu] = "Broadcom BMIPS5200";
else
__cpu_name[cpu] = "Broadcom BMIPS5000";
set_elf_platform(cpu, "bmips5000");
c->options |= MIPS_CPU_ULRI;
c->options |= MIPS_CPU_ULRI | MIPS_CPU_RIXI;
break;
}
}
@@ -1481,6 +1791,8 @@ platform:
set_elf_platform(cpu, "octeon2");
break;
case PRID_IMP_CAVIUM_CN70XX:
case PRID_IMP_CAVIUM_CN73XX:
case PRID_IMP_CAVIUM_CNF75XX:
case PRID_IMP_CAVIUM_CN78XX:
c->cputype = CPU_CAVIUM_OCTEON3;
__cpu_name[cpu] = "Cavium Octeon III";
@@ -1493,6 +1805,29 @@ platform:
}
}
static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
{
switch (c->processor_id & PRID_IMP_MASK) {
case PRID_IMP_LOONGSON_64: /* Loongson-2/3 */
switch (c->processor_id & PRID_REV_MASK) {
case PRID_REV_LOONGSON3A_R2:
c->cputype = CPU_LOONGSON3;
__cpu_name[cpu] = "ICT Loongson-3";
set_elf_platform(cpu, "loongson3a");
set_isa(c, MIPS_CPU_ISA_M64R2);
break;
}
decode_configs(c);
c->options |= MIPS_CPU_TLBINV | MIPS_CPU_LDPTE;
c->writecombine = _CACHE_UNCACHED_ACCELERATED;
break;
default:
panic("Unknown Loongson Processor ID!");
break;
}
}
static inline void cpu_probe_ingenic(struct cpuinfo_mips *c, unsigned int cpu)
{
decode_configs(c);
@@ -1640,6 +1975,9 @@ void cpu_probe(void)
case PRID_COMP_CAVIUM:
cpu_probe_cavium(c, cpu);
break;
case PRID_COMP_LOONGSON:
cpu_probe_loongson(c, cpu);
break;
case PRID_COMP_INGENIC_D0:
case PRID_COMP_INGENIC_D1:
case PRID_COMP_INGENIC_E1:
@@ -1660,6 +1998,15 @@ void cpu_probe(void)
*/
BUG_ON(current_cpu_type() != c->cputype);
if (cpu_has_rixi) {
/* Enable the RIXI exceptions */
set_c0_pagegrain(PG_IEC);
back_to_back_c0_hazard();
/* Verify the IEC bit is set */
if (read_c0_pagegrain() & PG_IEC)
c->options |= MIPS_CPU_RIXIEX;
}
if (mips_fpu_disabled)
c->options &= ~MIPS_CPU_FPU;
@@ -1699,6 +2046,9 @@ void cpu_probe(void)
elf_hwcap |= HWCAP_MIPS_MSA;
}
if (cpu_has_vz)
cpu_probe_vz(c);
cpu_probe_vmbits(c);
#ifdef CONFIG_64BIT
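The probe_gc0_config() and probe_gc0_config_dyn() macros above use a common hardware-probing idiom: write a bit pattern, read back to see which bits stuck, then restore the original value. A generic sketch of that idiom, against a hypothetical register with reg_read()/reg_write() accessors (both names are placeholders for the real read_gc0_*()/write_gc0_*() pairs):

#include <stdint.h>
#include <stdbool.h>

extern uint32_t reg_read(void);
extern void reg_write(uint32_t val);

/* A capability bit is present if it reads back as set after being written. */
static bool bit_implemented(uint32_t bit)
{
	uint32_t orig = reg_read();
	bool set;

	reg_write(orig | bit);
	set = reg_read() & bit;
	reg_write(orig);		/* restore the original value */

	return set;
}

/* A bit is dynamic if it can be toggled away from its current state. */
static bool bit_dynamic(uint32_t bit)
{
	uint32_t orig = reg_read();
	bool toggled;

	reg_write(orig ^ bit);
	toggled = (reg_read() ^ orig) & bit;
	reg_write(orig);

	return toggled;
}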

View file

@@ -14,12 +14,22 @@ static int crashing_cpu = -1;
static cpumask_t cpus_in_crash = CPU_MASK_NONE;
#ifdef CONFIG_SMP
static void crash_shutdown_secondary(void *ignore)
static void crash_shutdown_secondary(void *passed_regs)
{
struct pt_regs *regs;
struct pt_regs *regs = passed_regs;
int cpu = smp_processor_id();
regs = task_pt_regs(current);
/*
* If we are passed registers, use those. Otherwise get the
* regs from the last interrupt, which should be correct, as
* we are in an interrupt. But if the regs are not there,
* pull them from the top of the stack. They are probably
* wrong, but we need something to keep from crashing again.
*/
if (!regs)
regs = get_irq_regs();
if (!regs)
regs = task_pt_regs(current);
if (!cpu_online(cpu))
return;

View file

@@ -130,7 +130,7 @@ LEAF(__r4k_wait)
/* end of rollback region (the region size must be power of two) */
1:
jr ra
nop
nop
.set pop
END(__r4k_wait)
@@ -172,7 +172,7 @@ NESTED(handle_int, PT_SIZE, sp)
mfc0 k0, CP0_EPC
.set noreorder
j k0
rfe
rfe
#else
and k0, ST0_IE
bnez k0, 1f
@@ -189,7 +189,7 @@ NESTED(handle_int, PT_SIZE, sp)
LONG_L s0, TI_REGS($28)
LONG_S sp, TI_REGS($28)
PTR_LA ra, ret_from_irq
PTR_LA v0, plat_irq_dispatch
PTR_LA v0, plat_irq_dispatch
jr v0
#ifdef CONFIG_CPU_MICROMIPS
nop
@@ -292,7 +292,7 @@ ejtag_return:
MFC0 k0, CP0_DESAVE
.set mips32
deret
.set pop
.set pop
END(ejtag_debug_handler)
/*
@@ -329,10 +329,10 @@ NESTED(nmi_handler, PT_SIZE, sp)
* Clear BEV - required for page fault exception handler to work
*/
mfc0 k0, CP0_STATUS
ori k0, k0, ST0_EXL
ori k0, k0, ST0_EXL
li k1, ~(ST0_BEV | ST0_ERL)
and k0, k0, k1
mtc0 k0, CP0_STATUS
and k0, k0, k1
mtc0 k0, CP0_STATUS
_ehb
SAVE_ALL
move a0, sp
@@ -396,7 +396,7 @@ NESTED(nmi_handler, PT_SIZE, sp)
.macro __BUILD_count exception
LONG_L t0,exception_count_\exception
LONG_ADDIU t0, 1
LONG_ADDIU t0, 1
LONG_S t0,exception_count_\exception
.comm exception_count\exception, 8, 8
.endm
@@ -455,10 +455,10 @@ NESTED(nmi_handler, PT_SIZE, sp)
.set noreorder
/* check if TLB contains a entry for EPC */
MFC0 k1, CP0_ENTRYHI
andi k1, 0xff /* ASID_MASK */
andi k1, MIPS_ENTRYHI_ASID | MIPS_ENTRYHI_ASIDX
MFC0 k0, CP0_EPC
PTR_SRL k0, _PAGE_SHIFT + 1
PTR_SLL k0, _PAGE_SHIFT + 1
PTR_SRL k0, _PAGE_SHIFT + 1
PTR_SLL k0, _PAGE_SHIFT + 1
or k1, k0
MTC0 k1, CP0_ENTRYHI
mtc0_tlbw_hazard
@@ -478,27 +478,27 @@ NESTED(nmi_handler, PT_SIZE, sp)
/* microMIPS: 0x007d6b3c: rdhwr v1,$29 */
MFC0 k1, CP0_EPC
#if defined(CONFIG_CPU_MICROMIPS) || defined(CONFIG_CPU_MIPS32_R2) || defined(CONFIG_CPU_MIPS64_R2)
and k0, k1, 1
beqz k0, 1f
xor k1, k0
lhu k0, (k1)
lhu k1, 2(k1)
ins k1, k0, 16, 16
lui k0, 0x007d
b docheck
ori k0, 0x6b3c
and k0, k1, 1
beqz k0, 1f
xor k1, k0
lhu k0, (k1)
lhu k1, 2(k1)
ins k1, k0, 16, 16
lui k0, 0x007d
b docheck
ori k0, 0x6b3c
1:
lui k0, 0x7c03
lw k1, (k1)
ori k0, 0xe83b
lui k0, 0x7c03
lw k1, (k1)
ori k0, 0xe83b
#else
andi k0, k1, 1
bnez k0, handle_ri
lui k0, 0x7c03
lw k1, (k1)
ori k0, 0xe83b
andi k0, k1, 1
bnez k0, handle_ri
lui k0, 0x7c03
lw k1, (k1)
ori k0, 0xe83b
#endif
.set reorder
.set reorder
docheck:
bne k0, k1, handle_ri /* if not ours */

View file

@@ -21,7 +21,6 @@
#include <asm/asmmacro.h>
#include <asm/irqflags.h>
#include <asm/regdef.h>
#include <asm/pgtable-bits.h>
#include <asm/mipsregs.h>
#include <asm/stackframe.h>
@@ -132,7 +131,27 @@ not_found:
set_saved_sp sp, t0, t1
PTR_SUBU sp, 4 * SZREG # init stack pointer
#ifdef CONFIG_RELOCATABLE
/* Copy kernel and apply the relocations */
jal relocate_kernel
/* Repoint the sp into the new kernel image */
PTR_LI sp, _THREAD_SIZE - 32 - PT_SIZE
PTR_ADDU sp, $28
set_saved_sp sp, t0, t1
PTR_SUBU sp, 4 * SZREG # init stack pointer
/*
* relocate_kernel returns the entry point either
* in the relocated kernel or the original if for
* some reason relocation failed - jump there now
* with instruction hazard barrier because of the
* newly sync'd icache.
*/
jr.hb v0
#else
j start_kernel
#endif
END(kernel_entry)
#ifdef CONFIG_SMP

View file

@@ -181,6 +181,11 @@ void __init check_wait(void)
case CPU_XLP:
cpu_wait = r4k_wait;
break;
case CPU_LOONGSON3:
if ((c->processor_id & PRID_REV_MASK) >= PRID_REV_LOONGSON3A_R2)
cpu_wait = r4k_wait;
break;
case CPU_BMIPS5000:
cpu_wait = r4k_wait_irqoff;
break;

View file

@@ -28,6 +28,7 @@
#include <asm/inst.h>
#include <asm/mips-r2-to-r6-emul.h>
#include <asm/local.h>
#include <asm/mipsregs.h>
#include <asm/ptrace.h>
#include <asm/uaccess.h>
@@ -1251,10 +1252,10 @@ fpu_emul:
" j 10b\n"
" .previous\n"
" .section __ex_table,\"a\"\n"
" .word 1b,8b\n"
" .word 2b,8b\n"
" .word 3b,8b\n"
" .word 4b,8b\n"
STR(PTR) " 1b,8b\n"
STR(PTR) " 2b,8b\n"
STR(PTR) " 3b,8b\n"
STR(PTR) " 4b,8b\n"
" .previous\n"
" .set pop\n"
: "+&r"(rt), "=&r"(rs),
@@ -1326,10 +1327,10 @@ fpu_emul:
" j 10b\n"
" .previous\n"
" .section __ex_table,\"a\"\n"
" .word 1b,8b\n"
" .word 2b,8b\n"
" .word 3b,8b\n"
" .word 4b,8b\n"
STR(PTR) " 1b,8b\n"
STR(PTR) " 2b,8b\n"
STR(PTR) " 3b,8b\n"
STR(PTR) " 4b,8b\n"
" .previous\n"
" .set pop\n"
: "+&r"(rt), "=&r"(rs),
@@ -1397,10 +1398,10 @@ fpu_emul:
" j 9b\n"
" .previous\n"
" .section __ex_table,\"a\"\n"
" .word 1b,8b\n"
" .word 2b,8b\n"
" .word 3b,8b\n"
" .word 4b,8b\n"
STR(PTR) " 1b,8b\n"
STR(PTR) " 2b,8b\n"
STR(PTR) " 3b,8b\n"
STR(PTR) " 4b,8b\n"
" .previous\n"
" .set pop\n"
: "+&r"(rt), "=&r"(rs),
@@ -1467,10 +1468,10 @@ fpu_emul:
" j 9b\n"
" .previous\n"
" .section __ex_table,\"a\"\n"
" .word 1b,8b\n"
" .word 2b,8b\n"
" .word 3b,8b\n"
" .word 4b,8b\n"
STR(PTR) " 1b,8b\n"
STR(PTR) " 2b,8b\n"
STR(PTR) " 3b,8b\n"
STR(PTR) " 4b,8b\n"
" .previous\n"
" .set pop\n"
: "+&r"(rt), "=&r"(rs),
@@ -1582,14 +1583,14 @@ fpu_emul:
" j 9b\n"
" .previous\n"
" .section __ex_table,\"a\"\n"
" .word 1b,8b\n"
" .word 2b,8b\n"
" .word 3b,8b\n"
" .word 4b,8b\n"
" .word 5b,8b\n"
" .word 6b,8b\n"
" .word 7b,8b\n"
" .word 0b,8b\n"
STR(PTR) " 1b,8b\n"
STR(PTR) " 2b,8b\n"
STR(PTR) " 3b,8b\n"
STR(PTR) " 4b,8b\n"
STR(PTR) " 5b,8b\n"
STR(PTR) " 6b,8b\n"
STR(PTR) " 7b,8b\n"
STR(PTR) " 0b,8b\n"
" .previous\n"
" .set pop\n"
: "+&r"(rt), "=&r"(rs),
@@ -1701,14 +1702,14 @@ fpu_emul:
" j 9b\n"
" .previous\n"
" .section __ex_table,\"a\"\n"
" .word 1b,8b\n"
" .word 2b,8b\n"
" .word 3b,8b\n"
" .word 4b,8b\n"
" .word 5b,8b\n"
" .word 6b,8b\n"
" .word 7b,8b\n"
" .word 0b,8b\n"
STR(PTR) " 1b,8b\n"
STR(PTR) " 2b,8b\n"
STR(PTR) " 3b,8b\n"
STR(PTR) " 4b,8b\n"
STR(PTR) " 5b,8b\n"
STR(PTR) " 6b,8b\n"
STR(PTR) " 7b,8b\n"
STR(PTR) " 0b,8b\n"
" .previous\n"
" .set pop\n"
: "+&r"(rt), "=&r"(rs),
@@ -1820,14 +1821,14 @@ fpu_emul:
" j 9b\n"
" .previous\n"
" .section __ex_table,\"a\"\n"
" .word 1b,8b\n"
" .word 2b,8b\n"
" .word 3b,8b\n"
" .word 4b,8b\n"
" .word 5b,8b\n"
" .word 6b,8b\n"
" .word 7b,8b\n"
" .word 0b,8b\n"
STR(PTR) " 1b,8b\n"
STR(PTR) " 2b,8b\n"
STR(PTR) " 3b,8b\n"
STR(PTR) " 4b,8b\n"
STR(PTR) " 5b,8b\n"
STR(PTR) " 6b,8b\n"
STR(PTR) " 7b,8b\n"
STR(PTR) " 0b,8b\n"
" .previous\n"
" .set pop\n"
: "+&r"(rt), "=&r"(rs),
@@ -1938,14 +1939,14 @@ fpu_emul:
" j 9b\n"
" .previous\n"
" .section __ex_table,\"a\"\n"
" .word 1b,8b\n"
" .word 2b,8b\n"
" .word 3b,8b\n"
" .word 4b,8b\n"
" .word 5b,8b\n"
" .word 6b,8b\n"
" .word 7b,8b\n"
" .word 0b,8b\n"
STR(PTR) " 1b,8b\n"
STR(PTR) " 2b,8b\n"
STR(PTR) " 3b,8b\n"
STR(PTR) " 4b,8b\n"
STR(PTR) " 5b,8b\n"
STR(PTR) " 6b,8b\n"
STR(PTR) " 7b,8b\n"
STR(PTR) " 0b,8b\n"
" .previous\n"
" .set pop\n"
: "+&r"(rt), "=&r"(rs),
@@ -2000,7 +2001,7 @@ fpu_emul:
"j 2b\n"
".previous\n"
".section __ex_table,\"a\"\n"
".word 1b, 3b\n"
STR(PTR) " 1b,3b\n"
".previous\n"
: "=&r"(res), "+&r"(err)
: "r"(vaddr), "i"(SIGSEGV)
@@ -2058,7 +2059,7 @@ fpu_emul:
"j 2b\n"
".previous\n"
".section __ex_table,\"a\"\n"
".word 1b, 3b\n"
STR(PTR) " 1b,3b\n"
".previous\n"
: "+&r"(res), "+&r"(err)
: "r"(vaddr), "i"(SIGSEGV));
@@ -2119,7 +2120,7 @@ fpu_emul:
"j 2b\n"
".previous\n"
".section __ex_table,\"a\"\n"
".word 1b, 3b\n"
STR(PTR) " 1b,3b\n"
".previous\n"
: "=&r"(res), "+&r"(err)
: "r"(vaddr), "i"(SIGSEGV)
@@ -2182,7 +2183,7 @@ fpu_emul:
"j 2b\n"
".previous\n"
".section __ex_table,\"a\"\n"
".word 1b, 3b\n"
STR(PTR) " 1b,3b\n"
".previous\n"
: "+&r"(res), "+&r"(err)
: "r"(vaddr), "i"(SIGSEGV));


@@ -16,6 +16,7 @@
* Copyright (C) 2001 Rusty Russell.
* Copyright (C) 2003, 2004 Ralf Baechle (ralf@linux-mips.org)
* Copyright (C) 2005 Thiemo Seufer
* Copyright (C) 2015 Imagination Technologies Ltd.
*/
#include <linux/elf.h>
@@ -35,15 +36,13 @@ static int apply_r_mips_32_rela(struct module *me, u32 *location, Elf_Addr v)
static int apply_r_mips_26_rela(struct module *me, u32 *location, Elf_Addr v)
{
if (v % 4) {
pr_err("module %s: dangerous R_MIPS_26 RELArelocation\n",
pr_err("module %s: dangerous R_MIPS_26 RELA relocation\n",
me->name);
return -ENOEXEC;
}
if ((v & 0xf0000000) != (((unsigned long)location + 4) & 0xf0000000)) {
printk(KERN_ERR
"module %s: relocation overflow\n",
me->name);
pr_err("module %s: relocation overflow\n", me->name);
return -ENOEXEC;
}
@@ -67,6 +66,48 @@ static int apply_r_mips_lo16_rela(struct module *me, u32 *location, Elf_Addr v)
return 0;
}
static int apply_r_mips_pc_rela(struct module *me, u32 *location, Elf_Addr v,
unsigned bits)
{
unsigned long mask = GENMASK(bits - 1, 0);
unsigned long se_bits;
long offset;
if (v % 4) {
pr_err("module %s: dangerous R_MIPS_PC%u RELA relocation\n",
me->name, bits);
return -ENOEXEC;
}
offset = ((long)v - (long)location) >> 2;
/* check the sign bit onwards are identical - ie. we didn't overflow */
se_bits = (offset & BIT(bits - 1)) ? ~0ul : 0;
if ((offset & ~mask) != (se_bits & ~mask)) {
pr_err("module %s: relocation overflow\n", me->name);
return -ENOEXEC;
}
*location = (*location & ~mask) | (offset & mask);
return 0;
}
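
The overflow test above can be exercised in isolation. A standalone restatement of the same arithmetic, with GENMASK()/BIT() spelled out so it compiles in userspace (test values hypothetical):

#include <stdio.h>

/* A PC-relative word offset fits in `bits` bits iff every bit from the
 * sign bit upward is identical -- the same check as the kernel code. */
static int fits(long offset, unsigned bits)
{
	unsigned long mask = (1ul << bits) - 1;		/* GENMASK(bits - 1, 0) */
	unsigned long se_bits = (offset & (1ul << (bits - 1))) ? ~0ul : 0;

	return ((unsigned long)offset & ~mask) == (se_bits & ~mask);
}

int main(void)
{
	printf("%d\n", fits(0x7fff, 16));	/* 1: largest positive PC16 */
	printf("%d\n", fits(-0x8000, 16));	/* 1: most negative PC16 */
	printf("%d\n", fits(0x8000, 16));	/* 0: overflows, would be rejected */
	return 0;
}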
static int apply_r_mips_pc16_rela(struct module *me, u32 *location, Elf_Addr v)
{
return apply_r_mips_pc_rela(me, location, v, 16);
}
static int apply_r_mips_pc21_rela(struct module *me, u32 *location, Elf_Addr v)
{
return apply_r_mips_pc_rela(me, location, v, 21);
}
static int apply_r_mips_pc26_rela(struct module *me, u32 *location, Elf_Addr v)
{
return apply_r_mips_pc_rela(me, location, v, 26);
}
static int apply_r_mips_64_rela(struct module *me, u32 *location, Elf_Addr v)
{
*(Elf_Addr *)location = v;
@@ -99,9 +140,12 @@ static int (*reloc_handlers_rela[]) (struct module *me, u32 *location,
[R_MIPS_26] = apply_r_mips_26_rela,
[R_MIPS_HI16] = apply_r_mips_hi16_rela,
[R_MIPS_LO16] = apply_r_mips_lo16_rela,
[R_MIPS_PC16] = apply_r_mips_pc16_rela,
[R_MIPS_64] = apply_r_mips_64_rela,
[R_MIPS_HIGHER] = apply_r_mips_higher_rela,
[R_MIPS_HIGHEST] = apply_r_mips_highest_rela
[R_MIPS_HIGHEST] = apply_r_mips_highest_rela,
[R_MIPS_PC21_S2] = apply_r_mips_pc21_rela,
[R_MIPS_PC26_S2] = apply_r_mips_pc26_rela,
};
int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab,
@@ -126,11 +170,11 @@ int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab,
/* This is the symbol it is referring to */
sym = (Elf_Sym *)sechdrs[symindex].sh_addr
+ ELF_MIPS_R_SYM(rel[i]);
if (IS_ERR_VALUE(sym->st_value)) {
if (sym->st_value >= -MAX_ERRNO) {
/* Ignore unresolved weak symbol */
if (ELF_ST_BIND(sym->st_info) == STB_WEAK)
continue;
printk(KERN_WARNING "%s: Unknown symbol %s\n",
pr_warn("%s: Unknown symbol %s\n",
me->name, strtab + sym->st_name);
return -ENOENT;
}


@@ -73,8 +73,7 @@ static int apply_r_mips_26_rel(struct module *me, u32 *location, Elf_Addr v)
}
if ((v & 0xf0000000) != (((unsigned long)location + 4) & 0xf0000000)) {
printk(KERN_ERR
"module %s: relocation overflow\n",
pr_err("module %s: relocation overflow\n",
me->name);
return -ENOEXEC;
}
@@ -183,13 +182,62 @@ out_danger:
return -ENOEXEC;
}
static int apply_r_mips_pc_rel(struct module *me, u32 *location, Elf_Addr v,
unsigned bits)
{
unsigned long mask = GENMASK(bits - 1, 0);
unsigned long se_bits;
long offset;
if (v % 4) {
pr_err("module %s: dangerous R_MIPS_PC%u REL relocation\n",
me->name, bits);
return -ENOEXEC;
}
/* retrieve & sign extend implicit addend */
offset = *location & mask;
offset |= (offset & BIT(bits - 1)) ? ~mask : 0;
offset += ((long)v - (long)location) >> 2;
/* check the sign bit onwards are identical - ie. we didn't overflow */
se_bits = (offset & BIT(bits - 1)) ? ~0ul : 0;
if ((offset & ~mask) != (se_bits & ~mask)) {
pr_err("module %s: relocation overflow\n", me->name);
return -ENOEXEC;
}
*location = (*location & ~mask) | (offset & mask);
return 0;
}
static int apply_r_mips_pc16_rel(struct module *me, u32 *location, Elf_Addr v)
{
return apply_r_mips_pc_rel(me, location, v, 16);
}
static int apply_r_mips_pc21_rel(struct module *me, u32 *location, Elf_Addr v)
{
return apply_r_mips_pc_rel(me, location, v, 21);
}
static int apply_r_mips_pc26_rel(struct module *me, u32 *location, Elf_Addr v)
{
return apply_r_mips_pc_rel(me, location, v, 26);
}
static int (*reloc_handlers_rel[]) (struct module *me, u32 *location,
Elf_Addr v) = {
[R_MIPS_NONE] = apply_r_mips_none,
[R_MIPS_32] = apply_r_mips_32_rel,
[R_MIPS_26] = apply_r_mips_26_rel,
[R_MIPS_HI16] = apply_r_mips_hi16_rel,
[R_MIPS_LO16] = apply_r_mips_lo16_rel
[R_MIPS_LO16] = apply_r_mips_lo16_rel,
[R_MIPS_PC16] = apply_r_mips_pc16_rel,
[R_MIPS_PC21_S2] = apply_r_mips_pc21_rel,
[R_MIPS_PC26_S2] = apply_r_mips_pc26_rel,
};
int apply_relocate(Elf_Shdr *sechdrs, const char *strtab,
@@ -215,12 +263,12 @@ int apply_relocate(Elf_Shdr *sechdrs, const char *strtab,
/* This is the symbol it is referring to */
sym = (Elf_Sym *)sechdrs[symindex].sh_addr
+ ELF_MIPS_R_SYM(rel[i]);
if (IS_ERR_VALUE(sym->st_value)) {
if (sym->st_value >= -MAX_ERRNO) {
/* Ignore unresolved weak symbol */
if (ELF_ST_BIND(sym->st_info) == STB_WEAK)
continue;
printk(KERN_WARNING "%s: Unknown symbol %s\n",
me->name, strtab + sym->st_name);
pr_warn("%s: Unknown symbol %s\n",
me->name, strtab + sym->st_name);
return -ENOENT;
}


@@ -101,8 +101,6 @@ struct mips_pmu {
static struct mips_pmu mipspmu;
#define M_CONFIG1_PC (1 << 4)
#define M_PERFCTL_EXL (1 << 0)
#define M_PERFCTL_KERNEL (1 << 1)
#define M_PERFCTL_SUPERVISOR (1 << 2)
@@ -754,7 +752,7 @@ static void handle_associated_event(struct cpu_hw_events *cpuc,
static int __n_counters(void)
{
if (!(read_c0_config1() & M_CONFIG1_PC))
if (!cpu_has_perf)
return 0;
if (!(read_c0_perfctrl0() & M_PERFCTL_MORE))
return 1;
@@ -825,6 +823,16 @@ static const struct mips_perf_event mipsxxcore_event_map2
[PERF_COUNT_HW_BRANCH_MISSES] = { 0x27, CNTR_ODD, T },
};
static const struct mips_perf_event i6400_event_map[PERF_COUNT_HW_MAX] = {
[PERF_COUNT_HW_CPU_CYCLES] = { 0x00, CNTR_EVEN | CNTR_ODD },
[PERF_COUNT_HW_INSTRUCTIONS] = { 0x01, CNTR_EVEN | CNTR_ODD },
/* These only count dcache, not icache */
[PERF_COUNT_HW_CACHE_REFERENCES] = { 0x45, CNTR_EVEN | CNTR_ODD },
[PERF_COUNT_HW_CACHE_MISSES] = { 0x48, CNTR_EVEN | CNTR_ODD },
[PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = { 0x15, CNTR_EVEN | CNTR_ODD },
[PERF_COUNT_HW_BRANCH_MISSES] = { 0x16, CNTR_EVEN | CNTR_ODD },
};
static const struct mips_perf_event loongson3_event_map[PERF_COUNT_HW_MAX] = {
[PERF_COUNT_HW_CPU_CYCLES] = { 0x00, CNTR_EVEN },
[PERF_COUNT_HW_INSTRUCTIONS] = { 0x00, CNTR_ODD },
@@ -1015,6 +1023,46 @@ static const struct mips_perf_event mipsxxcore_cache_map2
},
};
static const struct mips_perf_event i6400_cache_map
[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
[C(L1D)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = { 0x46, CNTR_EVEN | CNTR_ODD },
[C(RESULT_MISS)] = { 0x49, CNTR_EVEN | CNTR_ODD },
},
[C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = { 0x47, CNTR_EVEN | CNTR_ODD },
[C(RESULT_MISS)] = { 0x4a, CNTR_EVEN | CNTR_ODD },
},
},
[C(L1I)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = { 0x84, CNTR_EVEN | CNTR_ODD },
[C(RESULT_MISS)] = { 0x85, CNTR_EVEN | CNTR_ODD },
},
},
[C(DTLB)] = {
/* Can't distinguish read & write */
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = { 0x40, CNTR_EVEN | CNTR_ODD },
[C(RESULT_MISS)] = { 0x41, CNTR_EVEN | CNTR_ODD },
},
[C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = { 0x40, CNTR_EVEN | CNTR_ODD },
[C(RESULT_MISS)] = { 0x41, CNTR_EVEN | CNTR_ODD },
},
},
[C(BPU)] = {
/* Conditional branches / mispredicted */
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = { 0x15, CNTR_EVEN | CNTR_ODD },
[C(RESULT_MISS)] = { 0x16, CNTR_EVEN | CNTR_ODD },
},
},
};
static const struct mips_perf_event loongson3_cache_map
[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
@@ -1556,6 +1604,7 @@ static const struct mips_perf_event *mipsxx_pmu_map_raw_event(u64 config)
#endif
break;
case CPU_P5600:
case CPU_P6600:
case CPU_I6400:
/* 8-bit event numbers */
raw_id = config & 0x1ff;
@@ -1718,11 +1767,16 @@ init_hw_perf_events(void)
mipspmu.general_event_map = &mipsxxcore_event_map2;
mipspmu.cache_event_map = &mipsxxcore_cache_map2;
break;
case CPU_I6400:
mipspmu.name = "mips/I6400";
case CPU_P6600:
mipspmu.name = "mips/P6600";
mipspmu.general_event_map = &mipsxxcore_event_map2;
mipspmu.cache_event_map = &mipsxxcore_cache_map2;
break;
case CPU_I6400:
mipspmu.name = "mips/I6400";
mipspmu.general_event_map = &i6400_event_map;
mipspmu.cache_event_map = &i6400_cache_map;
break;
case CPU_1004K:
mipspmu.name = "mips/1004K";
mipspmu.general_event_map = &mipsxxcore_event_map;


@@ -224,11 +224,18 @@ static void __init cps_gen_cache_routine(u32 **pp, struct uasm_label **pl,
uasm_build_label(pl, *pp, lbl);
/* Generate the cache ops */
for (i = 0; i < unroll_lines; i++)
uasm_i_cache(pp, op, i * cache->linesz, t0);
for (i = 0; i < unroll_lines; i++) {
if (cpu_has_mips_r6) {
uasm_i_cache(pp, op, 0, t0);
uasm_i_addiu(pp, t0, t0, cache->linesz);
} else {
uasm_i_cache(pp, op, i * cache->linesz, t0);
}
}
/* Update the base address */
uasm_i_addiu(pp, t0, t0, unroll_lines * cache->linesz);
if (!cpu_has_mips_r6)
/* Update the base address */
uasm_i_addiu(pp, t0, t0, unroll_lines * cache->linesz);
/* Loop if we haven't reached the end address yet */
uasm_il_bne(pp, pr, t0, t1, lbl);
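
The reason for the split is that R6 narrowed the cache instruction's offset field (9 bits, down from 16), so the unrolled immediate offsets used pre-R6 may no longer encode for larger line sizes and unroll factors. Illustratively, with a 32-byte line and 4-line unroll (values assumed, not taken from the source), the generated sequences differ as:

	/* pre-R6: immediate offsets, one base update per iteration */
	uasm_i_cache(pp, op,  0, t0);
	uasm_i_cache(pp, op, 32, t0);
	uasm_i_cache(pp, op, 64, t0);
	uasm_i_cache(pp, op, 96, t0);
	uasm_i_addiu(pp, t0, t0, 128);

	/* R6: zero offset, base bumped after every cache op */
	uasm_i_cache(pp, op, 0, t0);
	uasm_i_addiu(pp, t0, t0, 32);	/* ...repeated once per line */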


@@ -56,7 +56,7 @@ static void mips_cpu_restore(void)
write_c0_userlocal(current_thread_info()->tp_value);
/* Restore watch registers */
__restore_watch();
__restore_watch(current);
}
/**


@@ -114,6 +114,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
if (cpu_has_smartmips) seq_printf(m, "%s", " smartmips");
if (cpu_has_dsp) seq_printf(m, "%s", " dsp");
if (cpu_has_dsp2) seq_printf(m, "%s", " dsp2");
if (cpu_has_dsp3) seq_printf(m, "%s", " dsp3");
if (cpu_has_mipsmt) seq_printf(m, "%s", " mt");
if (cpu_has_mmips) seq_printf(m, "%s", " micromips");
if (cpu_has_vz) seq_printf(m, "%s", " vz");


@@ -77,10 +77,6 @@ void exit_thread(void)
{
}
void flush_thread(void)
{
}
int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
{
/*
@@ -455,7 +451,7 @@ unsigned long notrace unwind_stack_by_address(unsigned long stack_page,
*sp + sizeof(*regs) <= stack_page + THREAD_SIZE - 32) {
regs = (struct pt_regs *)*sp;
pc = regs->cp0_epc;
if (__kernel_text_address(pc)) {
if (!user_mode(regs) && __kernel_text_address(pc)) {
*sp = regs->regs[29];
*ra = regs->regs[31];
return pc;
@@ -580,11 +576,19 @@ int mips_get_process_fp_mode(struct task_struct *task)
return value;
}
static void prepare_for_fp_mode_switch(void *info)
{
struct mm_struct *mm = info;
if (current->mm == mm)
lose_fpu(1);
}
int mips_set_process_fp_mode(struct task_struct *task, unsigned int value)
{
const unsigned int known_bits = PR_FP_MODE_FR | PR_FP_MODE_FRE;
unsigned long switch_count;
struct task_struct *t;
int max_users;
/* Check the value is valid */
if (value & ~known_bits)
@@ -601,6 +605,9 @@ int mips_set_process_fp_mode(struct task_struct *task, unsigned int value)
if (!(value & PR_FP_MODE_FR) && cpu_has_fpu && cpu_has_mips_r6)
return -EOPNOTSUPP;
/* Proceed with the mode switch */
preempt_disable();
/* Save FP & vector context, then disable FPU & MSA */
if (task->signal == current->signal)
lose_fpu(1);
@@ -610,31 +617,17 @@ int mips_set_process_fp_mode(struct task_struct *task, unsigned int value)
smp_mb__after_atomic();
/*
* If there are multiple online CPUs then wait until all threads whose
* FP mode is about to change have been context switched. This approach
* allows us to only worry about whether an FP mode switch is in
* progress when FP is first used in a tasks time slice. Pretty much all
* of the mode switch overhead can thus be confined to cases where mode
* switches are actually occurring. That is, to here. However for the
* thread performing the mode switch it may take a while...
* If there are multiple online CPUs then force any which are running
* threads in this process to lose their FPU context, which they can't
* regain until fp_mode_switching is cleared later.
*/
if (num_online_cpus() > 1) {
spin_lock_irq(&task->sighand->siglock);
/* No need to send an IPI for the local CPU */
max_users = (task->mm == current->mm) ? 1 : 0;
for_each_thread(task, t) {
if (t == current)
continue;
switch_count = t->nvcsw + t->nivcsw;
do {
spin_unlock_irq(&task->sighand->siglock);
cond_resched();
spin_lock_irq(&task->sighand->siglock);
} while ((t->nvcsw + t->nivcsw) == switch_count);
}
spin_unlock_irq(&task->sighand->siglock);
if (atomic_read(&current->mm->mm_users) > max_users)
smp_call_function(prepare_for_fp_mode_switch,
(void *)current->mm, 1);
}
/*
@@ -659,6 +652,7 @@ int mips_set_process_fp_mode(struct task_struct *task, unsigned int value)
/* Allow threads to use FP again */
atomic_set(&task->mm->context.fp_mode_switching, 0);
preempt_enable();
return 0;
}


@@ -57,8 +57,7 @@ static void init_fp_ctx(struct task_struct *target)
/* Begin with data registers set to all 1s... */
memset(&target->thread.fpu.fpr, ~0, sizeof(target->thread.fpu.fpr));
/* ...and FCSR zeroed */
target->thread.fpu.fcr31 = 0;
/* FCSR has been preset by `mips_set_personality_nan'. */
/*
* Record that the target has "used" math, such that the context
@@ -79,6 +78,22 @@ void ptrace_disable(struct task_struct *child)
clear_tsk_thread_flag(child, TIF_LOAD_WATCH);
}
/*
* Poke at FCSR according to its mask. Don't set the cause bits as
* this is currently not handled correctly in FP context restoration
* and will cause an oops if a corresponding enable bit is set.
*/
static void ptrace_setfcr31(struct task_struct *child, u32 value)
{
u32 fcr31;
u32 mask;
value &= ~FPU_CSR_ALL_X;
fcr31 = child->thread.fpu.fcr31;
mask = boot_cpu_data.fpu_msk31;
child->thread.fpu.fcr31 = (value & ~mask) | (fcr31 & mask);
}
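
The masking keeps the read-only FCSR bits intact: bits set in fpu_msk31 are preserved from the old register value, everything else is taken from the caller. The same arithmetic standalone (both register values hypothetical):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t fcr31 = 0x00c00000;	/* current FCSR, hypothetical */
	uint32_t mask  = 0xfe000000;	/* fpu_msk31: read-only bits, hypothetical */
	uint32_t value = 0xffffffff;	/* what the debugger asked to write */

	/* read-only bits survive from fcr31, writable bits come from value */
	uint32_t out = (value & ~mask) | (fcr31 & mask);

	printf("0x%08x\n", out);	/* 0x01ffffff with these values */
	return 0;
}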
/*
* Read a general register set. We always use the 64-bit format, even
* for 32-bit kernels and for 32-bit processes on a 64-bit kernel.
@@ -159,9 +174,7 @@ int ptrace_setfpregs(struct task_struct *child, __u32 __user *data)
{
union fpureg *fregs;
u64 fpr_val;
u32 fcr31;
u32 value;
u32 mask;
int i;
if (!access_ok(VERIFY_READ, data, 33 * 8))
@@ -176,9 +189,7 @@ int ptrace_setfpregs(struct task_struct *child, __u32 __user *data)
}
__get_user(value, data + 64);
fcr31 = child->thread.fpu.fcr31;
mask = boot_cpu_data.fpu_msk31;
child->thread.fpu.fcr31 = (value & ~mask) | (fcr31 & mask);
ptrace_setfcr31(child, value);
/* FIR may not be written. */
@@ -210,7 +221,8 @@ int ptrace_get_watch_regs(struct task_struct *child,
for (i = 0; i < boot_cpu_data.watch_reg_use_cnt; i++) {
__put_user(child->thread.watch.mips3264.watchlo[i],
&addr->WATCH_STYLE.watchlo[i]);
__put_user(child->thread.watch.mips3264.watchhi[i] & 0xfff,
__put_user(child->thread.watch.mips3264.watchhi[i] &
(MIPS_WATCHHI_MASK | MIPS_WATCHHI_IRW),
&addr->WATCH_STYLE.watchhi[i]);
__put_user(boot_cpu_data.watch_reg_masks[i],
&addr->WATCH_STYLE.watch_masks[i]);
@@ -252,12 +264,12 @@ int ptrace_set_watch_regs(struct task_struct *child,
}
#endif
__get_user(ht[i], &addr->WATCH_STYLE.watchhi[i]);
if (ht[i] & ~0xff8)
if (ht[i] & ~MIPS_WATCHHI_MASK)
return -EINVAL;
}
/* Install them. */
for (i = 0; i < boot_cpu_data.watch_reg_use_cnt; i++) {
if (lt[i] & 7)
if (lt[i] & MIPS_WATCHLO_IRW)
watch_active = 1;
child->thread.watch.mips3264.watchlo[i] = lt[i];
/* Set the G bit. */
@@ -805,7 +817,7 @@ long arch_ptrace(struct task_struct *child, long request,
break;
#endif
case FPC_CSR:
child->thread.fpu.fcr31 = data & ~FPU_CSR_ALL_X;
ptrace_setfcr31(child, data);
break;
case DSP_BASE ... DSP_BASE + 5: {
dspreg_t *dregs;


@@ -244,17 +244,17 @@ LEAF(\name)
.set push
.set noat
#ifdef CONFIG_64BIT
copy_u_d \wr, 1
copy_s_d \wr, 1
EX sd $1, \off(\base)
#elif defined(CONFIG_CPU_LITTLE_ENDIAN)
copy_u_w \wr, 2
copy_s_w \wr, 2
EX sw $1, \off(\base)
copy_u_w \wr, 3
copy_s_w \wr, 3
EX sw $1, (\off+4)(\base)
#else /* CONFIG_CPU_BIG_ENDIAN */
copy_u_w \wr, 2
copy_s_w \wr, 2
EX sw $1, (\off+4)(\base)
copy_u_w \wr, 3
copy_s_w \wr, 3
EX sw $1, \off(\base)
#endif
.set pop


@@ -15,7 +15,6 @@
#include <asm/fpregdef.h>
#include <asm/mipsregs.h>
#include <asm/asm-offsets.h>
#include <asm/pgtable-bits.h>
#include <asm/regdef.h>
#include <asm/stackframe.h>
#include <asm/thread_info.h>

arch/mips/kernel/relocate.c (new file, 386 lines)

@@ -0,0 +1,386 @@
/*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
* Support for Kernel relocation at boot time
*
* Copyright (C) 2015, Imagination Technologies Ltd.
* Authors: Matt Redfearn (matt.redfearn@imgtec.com)
*/
#include <asm/bootinfo.h>
#include <asm/cacheflush.h>
#include <asm/fw/fw.h>
#include <asm/sections.h>
#include <asm/setup.h>
#include <asm/timex.h>
#include <linux/elf.h>
#include <linux/kernel.h>
#include <linux/libfdt.h>
#include <linux/of_fdt.h>
#include <linux/sched.h>
#include <linux/start_kernel.h>
#include <linux/string.h>
#include <linux/printk.h>
#define RELOCATED(x) ((void *)((long)x + offset))
extern u32 _relocation_start[]; /* End kernel image / start relocation table */
extern u32 _relocation_end[]; /* End relocation table */
extern long __start___ex_table; /* Start exception table */
extern long __stop___ex_table; /* End exception table */
static inline u32 __init get_synci_step(void)
{
u32 res;
__asm__("rdhwr %0, $1" : "=r" (res));
return res;
}
static void __init sync_icache(void *kbase, unsigned long kernel_length)
{
void *kend = kbase + kernel_length;
u32 step = get_synci_step();
do {
__asm__ __volatile__(
"synci 0(%0)"
: /* no output */
: "r" (kbase));
kbase += step;
} while (kbase < kend);
/* Completion barrier */
__sync();
}
static int __init apply_r_mips_64_rel(u32 *loc_orig, u32 *loc_new, long offset)
{
*(u64 *)loc_new += offset;
return 0;
}
static int __init apply_r_mips_32_rel(u32 *loc_orig, u32 *loc_new, long offset)
{
*loc_new += offset;
return 0;
}
static int __init apply_r_mips_26_rel(u32 *loc_orig, u32 *loc_new, long offset)
{
unsigned long target_addr = (*loc_orig) & 0x03ffffff;
if (offset % 4) {
pr_err("Dangerous R_MIPS_26 REL relocation\n");
return -ENOEXEC;
}
/* Original target address */
target_addr <<= 2;
target_addr += (unsigned long)loc_orig & ~0x03ffffff;
/* Get the new target address */
target_addr += offset;
if ((target_addr & 0xf0000000) != ((unsigned long)loc_new & 0xf0000000)) {
pr_err("R_MIPS_26 REL relocation overflow\n");
return -ENOEXEC;
}
target_addr -= (unsigned long)loc_new & ~0x03ffffff;
target_addr >>= 2;
*loc_new = (*loc_new & ~0x03ffffff) | (target_addr & 0x03ffffff);
return 0;
}
static int __init apply_r_mips_hi16_rel(u32 *loc_orig, u32 *loc_new, long offset)
{
unsigned long insn = *loc_orig;
unsigned long target = (insn & 0xffff) << 16; /* high 16bits of target */
target += offset;
*loc_new = (insn & ~0xffff) | ((target >> 16) & 0xffff);
return 0;
}
static int (*reloc_handlers_rel[]) (u32 *, u32 *, long) __initdata = {
[R_MIPS_64] = apply_r_mips_64_rel,
[R_MIPS_32] = apply_r_mips_32_rel,
[R_MIPS_26] = apply_r_mips_26_rel,
[R_MIPS_HI16] = apply_r_mips_hi16_rel,
};
int __init do_relocations(void *kbase_old, void *kbase_new, long offset)
{
u32 *r;
u32 *loc_orig;
u32 *loc_new;
int type;
int res;
for (r = _relocation_start; r < _relocation_end; r++) {
/* Sentinel for last relocation */
if (*r == 0)
break;
type = (*r >> 24) & 0xff;
loc_orig = (void *)(kbase_old + ((*r & 0x00ffffff) << 2));
loc_new = RELOCATED(loc_orig);
if (reloc_handlers_rel[type] == NULL) {
/* Unsupported relocation */
pr_err("Unhandled relocation type %d at 0x%pK\n",
type, loc_orig);
return -ENOEXEC;
}
res = reloc_handlers_rel[type](loc_orig, loc_new, offset);
if (res)
return res;
}
return 0;
}
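
Each 32-bit table entry packs the relocation type in its top byte and a word offset from the start of the kernel in the low 24 bits. Decoding one hypothetical entry standalone:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t r = 0x04000123;			/* hypothetical table entry */
	int type = (r >> 24) & 0xff;			/* 4 = R_MIPS_26 */
	unsigned long byte_off = (r & 0x00ffffff) << 2;	/* word -> byte offset */

	printf("type=%d offset=0x%lx\n", type, byte_off);
	/* A zero entry terminates the table; the linker-script hunk later
	 * in this series seeds the table with 0xFFFFFFFF so an unfilled
	 * table decodes to an unsupported type and relocation aborts
	 * rather than corrupting the image. */
	return 0;
}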
/*
* The exception table is filled in by the relocs tool after vmlinux is linked.
* It must be relocated separately since there will not be any relocation
* information for it filled in by the linker.
*/
static int __init relocate_exception_table(long offset)
{
unsigned long *etable_start, *etable_end, *e;
etable_start = RELOCATED(&__start___ex_table);
etable_end = RELOCATED(&__stop___ex_table);
for (e = etable_start; e < etable_end; e++)
*e += offset;
return 0;
}
#ifdef CONFIG_RANDOMIZE_BASE
static inline __init unsigned long rotate_xor(unsigned long hash,
const void *area, size_t size)
{
size_t i;
unsigned long *ptr = (unsigned long *)area;
for (i = 0; i < size / sizeof(hash); i++) {
/* Rotate by odd number of bits and XOR. */
hash = (hash << ((sizeof(hash) * 8) - 7)) | (hash >> 7);
hash ^= ptr[i];
}
return hash;
}
static inline __init unsigned long get_random_boot(void)
{
unsigned long entropy = random_get_entropy();
unsigned long hash = 0;
/* Attempt to create a simple but unpredictable starting entropy. */
hash = rotate_xor(hash, linux_banner, strlen(linux_banner));
/* Add in any runtime entropy we can get */
hash = rotate_xor(hash, &entropy, sizeof(entropy));
#if defined(CONFIG_USE_OF)
/* Get any additional entropy passed in device tree */
{
int node, len;
u64 *prop;
node = fdt_path_offset(initial_boot_params, "/chosen");
if (node >= 0) {
prop = fdt_getprop_w(initial_boot_params, node,
"kaslr-seed", &len);
if (prop && (len == sizeof(u64)))
hash = rotate_xor(hash, prop, sizeof(*prop));
}
}
#endif /* CONFIG_USE_OF */
return hash;
}
static inline __init bool kaslr_disabled(void)
{
char *str;
#if defined(CONFIG_CMDLINE_BOOL)
const char *builtin_cmdline = CONFIG_CMDLINE;
str = strstr(builtin_cmdline, "nokaslr");
if (str == builtin_cmdline ||
(str > builtin_cmdline && *(str - 1) == ' '))
return true;
#endif
str = strstr(arcs_cmdline, "nokaslr");
if (str == arcs_cmdline || (str > arcs_cmdline && *(str - 1) == ' '))
return true;
return false;
}
static inline void __init *determine_relocation_address(void)
{
/* Choose a new address for the kernel */
unsigned long kernel_length;
void *dest = &_text;
unsigned long offset;
if (kaslr_disabled())
return dest;
kernel_length = (long)_end - (long)(&_text);
offset = get_random_boot() << 16;
offset &= (CONFIG_RANDOMIZE_BASE_MAX_OFFSET - 1);
if (offset < kernel_length)
offset += ALIGN(kernel_length, 0xffff);
return RELOCATED(dest);
}
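
The slide computation can be checked in isolation: the hash is shifted so the kernel moves in 64 KiB granules, masked into the configured window, and pushed past the image length if it would overlap the running kernel. A standalone sketch, 64-bit longs assumed (config value and hash hypothetical):

#include <stdio.h>

#define RANDOMIZE_BASE_MAX_OFFSET	(1ul << 24)	/* hypothetical config */

int main(void)
{
	unsigned long hash = 0x123456789abcdef0ul;	/* from get_random_boot() */
	unsigned long kernel_length = 0x800000;		/* 8 MiB image, assumed */
	unsigned long offset;

	offset = hash << 16;				/* 64 KiB granularity */
	offset &= RANDOMIZE_BASE_MAX_OFFSET - 1;	/* bound the slide */
	if (offset < kernel_length)			/* don't overlap ourselves */
		offset += (kernel_length + 0xffff) & ~0xfffful;	/* ~ALIGN() */

	printf("slide = 0x%lx\n", offset);
	return 0;
}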
#else
static inline void __init *determine_relocation_address(void)
{
/*
* Choose a new address for the kernel
* For now we'll hard code the destination
*/
return (void *)0xffffffff81000000;
}
#endif
static inline int __init relocation_addr_valid(void *loc_new)
{
if ((unsigned long)loc_new & 0x0000ffff) {
/* Inappropriately aligned new location */
return 0;
}
if ((unsigned long)loc_new < (unsigned long)&_end) {
/* New location overlaps original kernel */
return 0;
}
return 1;
}
void *__init relocate_kernel(void)
{
void *loc_new;
unsigned long kernel_length;
unsigned long bss_length;
long offset = 0;
int res = 1;
/* Default to original kernel entry point */
void *kernel_entry = start_kernel;
/* Get the command line */
fw_init_cmdline();
#if defined(CONFIG_USE_OF)
/* Deal with the device tree */
early_init_dt_scan(plat_get_fdt());
if (boot_command_line[0]) {
/* Boot command line was passed in device tree */
strlcpy(arcs_cmdline, boot_command_line, COMMAND_LINE_SIZE);
}
#endif /* CONFIG_USE_OF */
kernel_length = (long)(&_relocation_start) - (long)(&_text);
bss_length = (long)&__bss_stop - (long)&__bss_start;
loc_new = determine_relocation_address();
/* Sanity check relocation address */
if (relocation_addr_valid(loc_new))
offset = (unsigned long)loc_new - (unsigned long)(&_text);
/* Reset the command line now so we don't end up with a duplicate */
arcs_cmdline[0] = '\0';
if (offset) {
/* Copy the kernel to its new location */
memcpy(loc_new, &_text, kernel_length);
/* Perform relocations on the new kernel */
res = do_relocations(&_text, loc_new, offset);
if (res < 0)
goto out;
/* Sync the caches ready for execution of new kernel */
sync_icache(loc_new, kernel_length);
res = relocate_exception_table(offset);
if (res < 0)
goto out;
/*
* The original .bss has already been cleared, and
* some variables such as command line parameters
* stored to it so make a copy in the new location.
*/
memcpy(RELOCATED(&__bss_start), &__bss_start, bss_length);
/* The current thread is now within the relocated image */
__current_thread_info = RELOCATED(&init_thread_union);
/* Return the new kernel's entry point */
kernel_entry = RELOCATED(start_kernel);
}
out:
return kernel_entry;
}
/*
* Show relocation information on panic.
*/
void show_kernel_relocation(const char *level)
{
unsigned long offset;
offset = __pa_symbol(_text) - __pa_symbol(VMLINUX_LOAD_ADDRESS);
if (IS_ENABLED(CONFIG_RELOCATABLE) && offset > 0) {
printk(level);
pr_cont("Kernel relocated by 0x%pK\n", (void *)offset);
pr_cont(" .text @ 0x%pK\n", _text);
pr_cont(" .data @ 0x%pK\n", _sdata);
pr_cont(" .bss @ 0x%pK\n", __bss_start);
}
}
static int kernel_location_notifier_fn(struct notifier_block *self,
unsigned long v, void *p)
{
show_kernel_relocation(KERN_EMERG);
return NOTIFY_DONE;
}
static struct notifier_block kernel_location_notifier = {
.notifier_call = kernel_location_notifier_fn
};
static int __init register_kernel_offset_dumper(void)
{
atomic_notifier_chain_register(&panic_notifier_list,
&kernel_location_notifier);
return 0;
}
__initcall(register_kernel_offset_dumper);


@@ -35,7 +35,6 @@ NESTED(handle_sys, PT_SIZE, sp)
lw t1, PT_EPC(sp) # skip syscall on return
subu v0, v0, __NR_O32_Linux # check syscall number
addiu t1, 4 # skip to next instruction
sw t1, PT_EPC(sp)
@@ -89,6 +88,7 @@ loads_done:
and t0, t1
bnez t0, syscall_trace_entry # -> yes
syscall_common:
subu v0, v0, __NR_O32_Linux # check syscall number
sltiu t0, v0, __NR_O32_Linux_syscalls + 1
beqz t0, illegal_syscall
@@ -118,24 +118,23 @@ o32_syscall_exit:
syscall_trace_entry:
SAVE_STATIC
move s0, v0
move a0, sp
/*
* syscall number is in v0 unless we called syscall(__NR_###)
* where the real syscall number is in a0
*/
addiu a1, v0, __NR_O32_Linux
bnez v0, 1f /* __NR_syscall at offset 0 */
move a1, v0
subu t2, v0, __NR_O32_Linux
bnez t2, 1f /* __NR_syscall at offset 0 */
lw a1, PT_R4(sp)
1: jal syscall_trace_enter
bltz v0, 1f # seccomp failed? Skip syscall
move v0, s0 # restore syscall
RESTORE_STATIC
lw v0, PT_R2(sp) # Restore syscall (maybe modified)
lw a0, PT_R4(sp) # Restore argument registers
lw a1, PT_R5(sp)
lw a2, PT_R6(sp)


@@ -82,15 +82,14 @@ n64_syscall_exit:
syscall_trace_entry:
SAVE_STATIC
move s0, v0
move a0, sp
move a1, v0
jal syscall_trace_enter
bltz v0, 1f # seccomp failed? Skip syscall
move v0, s0
RESTORE_STATIC
ld v0, PT_R2(sp) # Restore syscall (maybe modified)
ld a0, PT_R4(sp) # Restore argument registers
ld a1, PT_R5(sp)
ld a2, PT_R6(sp)


@@ -42,9 +42,6 @@ NESTED(handle_sysn32, PT_SIZE, sp)
#endif
beqz t0, not_n32_scall
dsll t0, v0, 3 # offset into table
ld t2, (sysn32_call_table - (__NR_N32_Linux * 8))(t0)
sd a3, PT_R26(sp) # save a3 for syscall restarting
li t1, _TIF_WORK_SYSCALL_ENTRY
@@ -53,6 +50,9 @@ NESTED(handle_sysn32, PT_SIZE, sp)
bnez t0, n32_syscall_trace_entry
syscall_common:
dsll t0, v0, 3 # offset into table
ld t2, (sysn32_call_table - (__NR_N32_Linux * 8))(t0)
jalr t2 # Do The Real Thing (TM)
li t0, -EMAXERRNO - 1 # error?
@@ -71,21 +71,25 @@ syscall_common:
n32_syscall_trace_entry:
SAVE_STATIC
move s0, t2
move a0, sp
move a1, v0
jal syscall_trace_enter
bltz v0, 1f # seccomp failed? Skip syscall
move t2, s0
RESTORE_STATIC
ld v0, PT_R2(sp) # Restore syscall (maybe modified)
ld a0, PT_R4(sp) # Restore argument registers
ld a1, PT_R5(sp)
ld a2, PT_R6(sp)
ld a3, PT_R7(sp)
ld a4, PT_R8(sp)
ld a5, PT_R9(sp)
dsubu t2, v0, __NR_N32_Linux # check (new) syscall number
sltiu t0, t2, __NR_N32_Linux_syscalls + 1
beqz t0, not_n32_scall
j syscall_common
1: j syscall_exit


@@ -52,9 +52,6 @@ NESTED(handle_sys, PT_SIZE, sp)
sll a2, a2, 0
sll a3, a3, 0
dsll t0, v0, 3 # offset into table
ld t2, (sys32_call_table - (__NR_O32_Linux * 8))(t0)
sd a3, PT_R26(sp) # save a3 for syscall restarting
/*
@@ -88,6 +85,9 @@ loads_done:
bnez t0, trace_a_syscall
syscall_common:
dsll t0, v0, 3 # offset into table
ld t2, (sys32_call_table - (__NR_O32_Linux * 8))(t0)
jalr t2 # Do The Real Thing (TM)
li t0, -EMAXERRNO - 1 # error?
@@ -112,7 +112,6 @@ trace_a_syscall:
sd a6, PT_R10(sp)
sd a7, PT_R11(sp) # For indirect syscalls
move s0, t2 # Save syscall pointer
move a0, sp
/*
* absolute syscall number is in v0 unless we called syscall(__NR_###)
@@ -133,8 +132,8 @@ trace_a_syscall:
bltz v0, 1f # seccomp failed? Skip syscall
move t2, s0
RESTORE_STATIC
ld v0, PT_R2(sp) # Restore syscall (maybe modified)
ld a0, PT_R4(sp) # Restore argument registers
ld a1, PT_R5(sp)
ld a2, PT_R6(sp)
@@ -143,6 +142,11 @@ trace_a_syscall:
ld a5, PT_R9(sp)
ld a6, PT_R10(sp)
ld a7, PT_R11(sp) # For indirect syscalls
dsubu t0, v0, __NR_O32_Linux # check (new) syscall number
sltiu t0, t0, __NR_O32_Linux_syscalls + 1
beqz t0, not_o32_scall
j syscall_common
1: j syscall_exit
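
All four syscall-entry paths get the same treatment: after syscall_trace_enter() returns, the syscall number and arguments are reloaded from the saved pt_regs because ptrace or seccomp may have rewritten them, and the possibly-new number is validated again before dispatch. A C-flavoured sketch of that control flow (structure and constants illustrative only, not the kernel's types):

#include <stdio.h>

struct pt_regs { long regs[32]; };	/* minimal stand-in */

#define NR_BASE		4000		/* __NR_O32_Linux */
#define NR_SYSCALLS	360		/* illustrative table size */

/* pretend tracer: rewrites the syscall number in the saved registers */
static long syscall_trace_enter(struct pt_regs *regs)
{
	regs->regs[2] = NR_BASE + 5;	/* hypothetical rewrite */
	return 0;			/* >= 0: run the syscall */
}

int main(void)
{
	struct pt_regs regs = { .regs = { [2] = NR_BASE + 3 } };

	if (syscall_trace_enter(&regs) < 0)
		return 0;		/* seccomp denied: skip the syscall */

	/* reload v0 (and a0..a7 in the real code) from pt_regs, then
	 * re-check the possibly-modified number against the table bounds */
	long nr = regs.regs[2] - NR_BASE;
	if (nr >= 0 && nr <= NR_SYSCALLS)
		printf("dispatching syscall %ld\n", nr);
	return 0;
}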


@@ -26,6 +26,7 @@
#include <linux/sizes.h>
#include <linux/device.h>
#include <linux/dma-contiguous.h>
#include <linux/decompress/generic.h>
#include <asm/addrspace.h>
#include <asm/bootinfo.h>
@@ -51,13 +52,6 @@ EXPORT_SYMBOL(cpu_data);
struct screen_info screen_info;
#endif
/*
* Despite it's name this variable is even if we don't have PCI
*/
unsigned int PCI_DMA_BUS_IS_PHYS;
EXPORT_SYMBOL(PCI_DMA_BUS_IS_PHYS);
/*
* Setup information
*
@@ -250,6 +244,35 @@ disable:
return 0;
}
/* In some conditions (e.g. big endian bootloader with a little endian
kernel), the initrd might appear byte swapped. Try to detect this and
byte swap it if needed. */
static void __init maybe_bswap_initrd(void)
{
#if defined(CONFIG_CPU_CAVIUM_OCTEON)
u64 buf;
/* Check for CPIO signature */
if (!memcmp((void *)initrd_start, "070701", 6))
return;
/* Check for compressed initrd */
if (decompress_method((unsigned char *)initrd_start, 8, NULL))
return;
/* Try again with a byte swapped header */
buf = swab64p((u64 *)initrd_start);
if (!memcmp(&buf, "070701", 6) ||
decompress_method((unsigned char *)(&buf), 8, NULL)) {
unsigned long i;
pr_info("Byteswapped initrd detected\n");
for (i = initrd_start; i < ALIGN(initrd_end, 8); i += 8)
swab64s((u64 *)i);
}
#endif
}
static void __init finalize_initrd(void)
{
unsigned long size = initrd_end - initrd_start;
@@ -263,6 +286,8 @@ static void __init finalize_initrd(void)
goto disable;
}
maybe_bswap_initrd();
reserve_bootmem(__pa(initrd_start), size, BOOTMEM_DEFAULT);
initrd_below_start_ok = 1;
@@ -469,6 +494,29 @@ static void __init bootmem_init(void)
*/
reserve_bootmem(PFN_PHYS(mapstart), bootmap_size, BOOTMEM_DEFAULT);
#ifdef CONFIG_RELOCATABLE
/*
* The kernel reserves all memory below its _end symbol as bootmem,
* but the kernel may now be at a much higher address. The memory
* between the original and new locations may be returned to the system.
*/
if (__pa_symbol(_text) > __pa_symbol(VMLINUX_LOAD_ADDRESS)) {
unsigned long offset;
extern void show_kernel_relocation(const char *level);
offset = __pa_symbol(_text) - __pa_symbol(VMLINUX_LOAD_ADDRESS);
free_bootmem(__pa_symbol(VMLINUX_LOAD_ADDRESS), offset);
#if defined(CONFIG_DEBUG_KERNEL) && defined(CONFIG_DEBUG_INFO)
/*
* This information is necessary when debugging the kernel
* But is a security vulnerability otherwise!
*/
show_kernel_relocation(KERN_INFO);
#endif
}
#endif
/*
* Reserve initrd memory if needed.
*/
@@ -624,6 +672,8 @@ static void __init request_crashkernel(struct resource *res)
#define USE_PROM_CMDLINE IS_ENABLED(CONFIG_MIPS_CMDLINE_FROM_BOOTLOADER)
#define USE_DTB_CMDLINE IS_ENABLED(CONFIG_MIPS_CMDLINE_FROM_DTB)
#define EXTEND_WITH_PROM IS_ENABLED(CONFIG_MIPS_CMDLINE_DTB_EXTEND)
#define BUILTIN_EXTEND_WITH_PROM \
IS_ENABLED(CONFIG_MIPS_CMDLINE_BUILTIN_EXTEND)
static void __init arch_mem_init(char **cmdline_p)
{
@@ -657,15 +707,23 @@ static void __init arch_mem_init(char **cmdline_p)
strlcpy(boot_command_line, arcs_cmdline, COMMAND_LINE_SIZE);
if (EXTEND_WITH_PROM && arcs_cmdline[0]) {
strlcat(boot_command_line, " ", COMMAND_LINE_SIZE);
if (boot_command_line[0])
strlcat(boot_command_line, " ", COMMAND_LINE_SIZE);
strlcat(boot_command_line, arcs_cmdline, COMMAND_LINE_SIZE);
}
#if defined(CONFIG_CMDLINE_BOOL)
if (builtin_cmdline[0]) {
strlcat(boot_command_line, " ", COMMAND_LINE_SIZE);
if (boot_command_line[0])
strlcat(boot_command_line, " ", COMMAND_LINE_SIZE);
strlcat(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
}
if (BUILTIN_EXTEND_WITH_PROM && arcs_cmdline[0]) {
if (boot_command_line[0])
strlcat(boot_command_line, " ", COMMAND_LINE_SIZE);
strlcat(boot_command_line, arcs_cmdline, COMMAND_LINE_SIZE);
}
#endif
#endif
strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE);
@@ -706,6 +764,9 @@ static void __init arch_mem_init(char **cmdline_p)
for_each_memblock(reserved, reg)
if (reg->size != 0)
reserve_bootmem(reg->base, reg->size, BOOTMEM_DEFAULT);
reserve_bootmem_region(__pa_symbol(&__nosave_begin),
__pa_symbol(&__nosave_end)); /* Reserve for hibernation */
}
static void __init resource_init(void)


@@ -195,6 +195,9 @@ static int restore_msa_extcontext(void __user *buf, unsigned int size)
unsigned int csr;
int i, err;
if (!config_enabled(CONFIG_CPU_HAS_MSA))
return SIGSYS;
if (size != sizeof(*msa))
return -EINVAL;
@@ -398,8 +401,8 @@ int protected_restore_fp_context(void __user *sc)
}
fp_done:
if (used & USED_EXTCONTEXT)
err |= restore_extcontext(sc_to_extcontext(sc));
if (!err && (used & USED_EXTCONTEXT))
err = restore_extcontext(sc_to_extcontext(sc));
return err ?: sig;
}
@@ -798,7 +801,7 @@ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
regs->regs[0] = 0; /* Don't deal with this again. */
}
if (sig_uses_siginfo(&ksig->ka))
if (sig_uses_siginfo(&ksig->ka, abi))
ret = abi->setup_rt_frame(vdso + abi->vdso->off_rt_sigreturn,
ksig, regs, oldset);
else


@@ -227,6 +227,12 @@ int copy_siginfo_to_user32(compat_siginfo_t __user *to, const siginfo_t *from)
err |= __put_user(from->si_uid, &to->si_uid);
err |= __put_user(from->si_int, &to->si_int);
break;
case __SI_SYS >> 16:
err |= __copy_to_user(&to->si_call_addr, &from->si_call_addr,
sizeof(compat_uptr_t));
err |= __put_user(from->si_syscall, &to->si_syscall);
err |= __put_user(from->si_arch, &to->si_arch);
break;
}
}
return err;


@@ -243,6 +243,7 @@ static void bmips_init_secondary(void)
break;
case CPU_BMIPS5000:
write_c0_brcm_action(ACTION_CLR_IPI(smp_processor_id(), 0));
current_cpu_data.core = (read_c0_brcm_config() >> 25) & 3;
break;
}
}
@@ -565,3 +566,90 @@ asmlinkage void __weak plat_wired_tlb_setup(void)
* once the wired entries are present.
*/
}
void __init bmips_cpu_setup(void)
{
void __iomem __maybe_unused *cbr = BMIPS_GET_CBR();
u32 __maybe_unused cfg;
switch (current_cpu_type()) {
case CPU_BMIPS3300:
/* Set BIU to async mode */
set_c0_brcm_bus_pll(BIT(22));
__sync();
/* put the BIU back in sync mode */
clear_c0_brcm_bus_pll(BIT(22));
/* clear BHTD to enable branch history table */
clear_c0_brcm_reset(BIT(16));
/* Flush and enable RAC */
cfg = __raw_readl(cbr + BMIPS_RAC_CONFIG);
__raw_writel(cfg | 0x100, BMIPS_RAC_CONFIG);
__raw_readl(cbr + BMIPS_RAC_CONFIG);
cfg = __raw_readl(cbr + BMIPS_RAC_CONFIG);
__raw_writel(cfg | 0xf, BMIPS_RAC_CONFIG);
__raw_readl(cbr + BMIPS_RAC_CONFIG);
cfg = __raw_readl(cbr + BMIPS_RAC_ADDRESS_RANGE);
__raw_writel(cfg | 0x0fff0000, cbr + BMIPS_RAC_ADDRESS_RANGE);
__raw_readl(cbr + BMIPS_RAC_ADDRESS_RANGE);
break;
case CPU_BMIPS4380:
/* CBG workaround for early BMIPS4380 CPUs */
switch (read_c0_prid()) {
case 0x2a040:
case 0x2a042:
case 0x2a044:
case 0x2a060:
cfg = __raw_readl(cbr + BMIPS_L2_CONFIG);
__raw_writel(cfg & ~0x07000000, cbr + BMIPS_L2_CONFIG);
__raw_readl(cbr + BMIPS_L2_CONFIG);
}
/* clear BHTD to enable branch history table */
clear_c0_brcm_config_0(BIT(21));
/* XI/ROTR enable */
set_c0_brcm_config_0(BIT(23));
set_c0_brcm_cmt_ctrl(BIT(15));
break;
case CPU_BMIPS5000:
/* enable RDHWR, BRDHWR */
set_c0_brcm_config(BIT(17) | BIT(21));
/* Disable JTB */
__asm__ __volatile__(
" .set noreorder\n"
" li $8, 0x5a455048\n"
" .word 0x4088b00f\n" /* mtc0 t0, $22, 15 */
" .word 0x4008b008\n" /* mfc0 t0, $22, 8 */
" li $9, 0x00008000\n"
" or $8, $8, $9\n"
" .word 0x4088b008\n" /* mtc0 t0, $22, 8 */
" sync\n"
" li $8, 0x0\n"
" .word 0x4088b00f\n" /* mtc0 t0, $22, 15 */
" .set reorder\n"
: : : "$8", "$9");
/* XI enable */
set_c0_brcm_config(BIT(27));
/* enable MIPS32R2 ROR instruction for XI TLB handlers */
__asm__ __volatile__(
" li $8, 0x5a455048\n"
" .word 0x4088b00f\n" /* mtc0 $8, $22, 15 */
" nop; nop; nop\n"
" .word 0x4008b008\n" /* mfc0 $8, $22, 8 */
" lui $9, 0x0100\n"
" or $8, $9\n"
" .word 0x4088b008\n" /* mtc0 $8, $22, 8 */
: : : "$8", "$9");
break;
}
}


@@ -27,15 +27,27 @@
#include <asm/time.h>
#include <asm/uasm.h>
static bool threads_disabled;
static DECLARE_BITMAP(core_power, NR_CPUS);
struct core_boot_config *mips_cps_core_bootcfg;
static int __init setup_nothreads(char *s)
{
threads_disabled = true;
return 0;
}
early_param("nothreads", setup_nothreads);
static unsigned core_vpe_count(unsigned core)
{
unsigned cfg;
if (!config_enabled(CONFIG_MIPS_MT_SMP) || !cpu_has_mipsmt)
if (threads_disabled)
return 1;
if ((!config_enabled(CONFIG_MIPS_MT_SMP) || !cpu_has_mipsmt)
&& (!config_enabled(CONFIG_CPU_MIPSR6) || !cpu_has_vp))
return 1;
mips_cm_lock_other(core, 0);
@@ -47,11 +59,12 @@ static unsigned core_vpe_count(unsigned core)
static void __init cps_smp_setup(void)
{
unsigned int ncores, nvpes, core_vpes;
unsigned long core_entry;
int c, v;
/* Detect & record VPE topology */
ncores = mips_cm_numcores();
pr_info("VPE topology ");
pr_info("%s topology ", cpu_has_mips_r6 ? "VP" : "VPE");
for (c = nvpes = 0; c < ncores; c++) {
core_vpes = core_vpe_count(c);
pr_cont("%c%u", c ? ',' : '{', core_vpes);
@@ -62,7 +75,7 @@ static void __init cps_smp_setup(void)
for (v = 0; v < min_t(int, core_vpes, NR_CPUS - nvpes); v++) {
cpu_data[nvpes + v].core = c;
#ifdef CONFIG_MIPS_MT_SMP
#if defined(CONFIG_MIPS_MT_SMP) || defined(CONFIG_CPU_MIPSR6)
cpu_data[nvpes + v].vpe_id = v;
#endif
}
@@ -91,6 +104,11 @@ static void __init cps_smp_setup(void)
/* Make core 0 coherent with everything */
write_gcr_cl_coherence(0xff);
if (mips_cm_revision() >= CM_REV_CM3) {
core_entry = CKSEG1ADDR((unsigned long)mips_cps_core_entry);
write_gcr_bev_base(core_entry);
}
#ifdef CONFIG_MIPS_MT_FPAFF
/* If we have an FPU, enroll ourselves in the FPU-full mask */
if (cpu_has_fpu)
@@ -213,6 +231,18 @@ static void boot_core(unsigned core)
if (mips_cpc_present()) {
/* Reset the core */
mips_cpc_lock_other(core);
if (mips_cm_revision() >= CM_REV_CM3) {
/* Run VP0 following the reset */
write_cpc_co_vp_run(0x1);
/*
* Ensure that the VP_RUN register is written before the
* core leaves reset.
*/
wmb();
}
write_cpc_co_cmd(CPC_Cx_CMD_RESET);
timeout = 100;
@@ -250,7 +280,10 @@ static void boot_core(unsigned core)
static void remote_vpe_boot(void *dummy)
{
mips_cps_boot_vpes();
unsigned core = current_cpu_data.core;
struct core_boot_config *core_cfg = &mips_cps_core_bootcfg[core];
mips_cps_boot_vpes(core_cfg, cpu_vpe_id(&current_cpu_data));
}
static void cps_boot_secondary(int cpu, struct task_struct *idle)
@@ -259,6 +292,7 @@ static void cps_boot_secondary(int cpu, struct task_struct *idle)
unsigned vpe_id = cpu_vpe_id(&cpu_data[cpu]);
struct core_boot_config *core_cfg = &mips_cps_core_bootcfg[core];
struct vpe_boot_config *vpe_cfg = &core_cfg->vpe_config[vpe_id];
unsigned long core_entry;
unsigned int remote;
int err;
@@ -276,6 +310,13 @@ static void cps_boot_secondary(int cpu, struct task_struct *idle)
goto out;
}
if (cpu_has_vp) {
mips_cm_lock_other(core, vpe_id);
core_entry = CKSEG1ADDR((unsigned long)mips_cps_core_entry);
write_gcr_co_reset_base(core_entry);
mips_cm_unlock_other();
}
if (core != current_cpu_data.core) {
/* Boot a VPE on another powered up core */
for (remote = 0; remote < NR_CPUS; remote++) {
@@ -293,10 +334,10 @@ static void cps_boot_secondary(int cpu, struct task_struct *idle)
goto out;
}
BUG_ON(!cpu_has_mipsmt);
BUG_ON(!cpu_has_mipsmt && !cpu_has_vp);
/* Boot a VPE on this core */
mips_cps_boot_vpes();
mips_cps_boot_vpes(core_cfg, vpe_id);
out:
preempt_enable();
}
@@ -307,6 +348,17 @@ static void cps_init_secondary(void)
if (cpu_has_mipsmt)
dmt();
if (mips_cm_revision() >= CM_REV_CM3) {
unsigned ident = gic_read_local_vp_id();
/*
* Ensure that our calculation of the VP ID matches up with
* what the GIC reports, otherwise we'll have configured
* interrupts incorrectly.
*/
BUG_ON(ident != mips_cm_vp_id(smp_processor_id()));
}
change_c0_status(ST0_IM, STATUSF_IP2 | STATUSF_IP3 | STATUSF_IP4 |
STATUSF_IP5 | STATUSF_IP6 | STATUSF_IP7);
}


@@ -243,18 +243,6 @@ static int __init mips_smp_ipi_init(void)
struct irq_domain *ipidomain;
struct device_node *node;
/*
* In some cases like qemu-malta, it is desired to try SMP with
* a single core. Qemu-malta has no GIC, so an attempt to set any IPIs
* would cause a BUG_ON() to be triggered since there's no ipidomain.
*
* Since for a single core system IPIs aren't required really, skip the
* initialisation which should generally keep any such configurations
* happy and only fail hard when trying to truely run SMP.
*/
if (cpumask_weight(cpu_possible_mask) == 1)
return 0;
node = of_irq_find_parent(of_root);
ipidomain = irq_find_matching_host(node, DOMAIN_BUS_IPI);
@@ -266,7 +254,17 @@ static int __init mips_smp_ipi_init(void)
if (node && !ipidomain)
ipidomain = irq_find_matching_host(NULL, DOMAIN_BUS_IPI);
BUG_ON(!ipidomain);
/*
* There are systems which only use IPI domains some of the time,
* depending upon configuration we don't know until runtime. An
* example is Malta where we may compile in support for GIC & the
* MT ASE, but run on a system which has multiple VPEs in a single
* core and doesn't include a GIC. Until all IPI implementations
* have been converted to use IPI domains the best we can do here
* is to return & hope some other code sets up the IPIs.
*/
if (!ipidomain)
return 0;
call_virq = irq_reserve_ipi(ipidomain, cpu_possible_mask);
BUG_ON(!call_virq);


@@ -210,6 +210,7 @@ void spram_config(void)
case CPU_P5600:
case CPU_QEMU_GENERIC:
case CPU_I6400:
case CPU_P6600:
config0 = read_c0_config();
/* FIXME: addresses are Malta specific */
if (config0 & (1<<24)) {


@@ -145,7 +145,7 @@ static void show_backtrace(struct task_struct *task, const struct pt_regs *regs)
if (!task)
task = current;
if (raw_show_trace || !__kernel_text_address(pc)) {
if (raw_show_trace || user_mode(regs) || !__kernel_text_address(pc)) {
show_raw_backtrace(sp);
return;
}
@@ -399,11 +399,8 @@ void __noreturn die(const char *str, struct pt_regs *regs)
if (in_interrupt())
panic("Fatal exception in interrupt");
if (panic_on_oops) {
printk(KERN_EMERG "Fatal exception: panic in 5 seconds");
ssleep(5);
if (panic_on_oops)
panic("Fatal exception");
}
if (regs && kexec_should_crash(current))
crash_kexec(regs);
@@ -1249,7 +1246,7 @@ static int enable_restore_fp_context(int msa)
err = init_fpu();
if (msa && !err) {
enable_msa();
_init_msa_upper();
init_msa_upper();
set_thread_flag(TIF_USEDMSA);
set_thread_flag(TIF_MSA_CTX_LIVE);
}
@@ -1312,7 +1309,7 @@ static int enable_restore_fp_context(int msa)
*/
prior_msa = test_and_set_thread_flag(TIF_MSA_CTX_LIVE);
if (!prior_msa && was_fpu_owner) {
_init_msa_upper();
init_msa_upper();
goto out;
}
@@ -1329,7 +1326,7 @@ static int enable_restore_fp_context(int msa)
* of each vector register such that it cannot see data left
* behind by another task.
*/
_init_msa_upper();
init_msa_upper();
} else {
/* We need to restore the vector context. */
restore_msa(current);
@@ -1356,7 +1353,6 @@ asmlinkage void do_cpu(struct pt_regs *regs)
unsigned long fcr31;
unsigned int cpid;
int status, err;
unsigned long __maybe_unused flags;
int sig;
prev_state = exception_enter();
@@ -1501,16 +1497,13 @@ asmlinkage void do_watch(struct pt_regs *regs)
{
siginfo_t info = { .si_signo = SIGTRAP, .si_code = TRAP_HWBKPT };
enum ctx_state prev_state;
u32 cause;
prev_state = exception_enter();
/*
* Clear WP (bit 22) bit of cause register so we don't loop
* forever.
*/
cause = read_c0_cause();
cause &= ~(1 << 22);
write_c0_cause(cause);
clear_c0_cause(CAUSEF_WP);
/*
* If the current thread has the watch registers loaded, save
@@ -1647,6 +1640,7 @@ static inline void parity_protection_init(void)
case CPU_P5600:
case CPU_QEMU_GENERIC:
case CPU_I6400:
case CPU_P6600:
{
#define ERRCTL_PE 0x80000000
#define ERRCTL_L2P 0x00800000
@@ -1777,7 +1771,8 @@ asmlinkage void do_ftlb(void)
/* For the moment, report the problem and hang. */
if ((cpu_has_mips_r2_r6) &&
((current_cpu_data.processor_id & 0xff0000) == PRID_COMP_MIPS)) {
(((current_cpu_data.processor_id & 0xff0000) == PRID_COMP_MIPS) ||
((current_cpu_data.processor_id & 0xff0000) == PRID_COMP_LOONGSON))) {
pr_err("FTLB error exception, cp0_ecc=0x%08x:\n",
read_c0_ecc());
pr_err("cp0_errorepc == %0*lx\n", field, read_c0_errorepc());
@@ -2119,6 +2114,13 @@ void per_cpu_trap_init(bool is_boot_cpu)
* o read IntCtl.IPFDC to determine the fast debug channel interrupt
*/
if (cpu_has_mips_r2_r6) {
/*
* We shouldn't trust a secondary core has a sane EBASE register
* so use the one calculated by the boot CPU.
*/
if (!is_boot_cpu)
write_c0_ebase(ebase);
cp0_compare_irq_shift = CAUSEB_TI - CAUSEB_IP;
cp0_compare_irq = (read_c0_intctl() >> INTCTLB_IPTI) & 7;
cp0_perfcount_irq = (read_c0_intctl() >> INTCTLB_IPPCI) & 7;
@@ -2134,7 +2136,7 @@ void per_cpu_trap_init(bool is_boot_cpu)
}
if (!cpu_data[cpu].asid_cache)
cpu_data[cpu].asid_cache = ASID_FIRST_VERSION;
cpu_data[cpu].asid_cache = asid_first_version(cpu);
atomic_inc(&init_mm.mm_count);
current->active_mm = &init_mm;


@@ -1191,6 +1191,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
case ldc1_op:
case swc1_op:
case sdc1_op:
case cop1x_op:
die_if_kernel("Unaligned FP access in kernel code", regs);
BUG_ON(!used_math());


@@ -136,6 +136,27 @@ SECTIONS
#ifdef CONFIG_SMP
PERCPU_SECTION(1 << CONFIG_MIPS_L1_CACHE_SHIFT)
#endif
#ifdef CONFIG_RELOCATABLE
. = ALIGN(4);
.data.reloc : {
_relocation_start = .;
/*
* Space for relocation table
* This needs to be filled so that the
* relocs tool can overwrite the content.
* An invalid value is left at the start of the
* section to abort relocation if the table
* has not been filled in.
*/
LONG(0xFFFFFFFF);
FILL(0);
. += CONFIG_RELOCATION_TABLE_SIZE - 4;
_relocation_end = .;
}
#endif
#ifdef CONFIG_MIPS_RAW_APPENDED_DTB
__appended_dtb = .;
/* leave space for appended DTB */


@@ -15,10 +15,9 @@
* Install the watch registers for the current thread. A maximum of
* four registers are installed although the machine may have more.
*/
void mips_install_watch_registers(void)
void mips_install_watch_registers(struct task_struct *t)
{
struct mips3264_watch_reg_state *watches =
&current->thread.watch.mips3264;
struct mips3264_watch_reg_state *watches = &t->thread.watch.mips3264;
switch (current_cpu_data.watch_reg_use_cnt) {
default:
BUG();
@@ -26,16 +25,20 @@ void mips_install_watch_registers(void)
write_c0_watchlo3(watches->watchlo[3]);
/* Write 1 to the I, R, and W bits to clear them, and
1 to G so all ASIDs are trapped. */
write_c0_watchhi3(0x40000007 | watches->watchhi[3]);
write_c0_watchhi3(MIPS_WATCHHI_G | MIPS_WATCHHI_IRW |
watches->watchhi[3]);
case 3:
write_c0_watchlo2(watches->watchlo[2]);
write_c0_watchhi2(0x40000007 | watches->watchhi[2]);
write_c0_watchhi2(MIPS_WATCHHI_G | MIPS_WATCHHI_IRW |
watches->watchhi[2]);
case 2:
write_c0_watchlo1(watches->watchlo[1]);
write_c0_watchhi1(0x40000007 | watches->watchhi[1]);
write_c0_watchhi1(MIPS_WATCHHI_G | MIPS_WATCHHI_IRW |
watches->watchhi[1]);
case 1:
write_c0_watchlo0(watches->watchlo[0]);
write_c0_watchhi0(0x40000007 | watches->watchhi[0]);
write_c0_watchhi0(MIPS_WATCHHI_G | MIPS_WATCHHI_IRW |
watches->watchhi[0]);
}
}
@@ -52,22 +55,26 @@ void mips_read_watch_registers(void)
default:
BUG();
case 4:
watches->watchhi[3] = (read_c0_watchhi3() & 0x0fff);
watches->watchhi[3] = (read_c0_watchhi3() &
(MIPS_WATCHHI_MASK | MIPS_WATCHHI_IRW));
case 3:
watches->watchhi[2] = (read_c0_watchhi2() & 0x0fff);
watches->watchhi[2] = (read_c0_watchhi2() &
(MIPS_WATCHHI_MASK | MIPS_WATCHHI_IRW));
case 2:
watches->watchhi[1] = (read_c0_watchhi1() & 0x0fff);
watches->watchhi[1] = (read_c0_watchhi1() &
(MIPS_WATCHHI_MASK | MIPS_WATCHHI_IRW));
case 1:
watches->watchhi[0] = (read_c0_watchhi0() & 0x0fff);
watches->watchhi[0] = (read_c0_watchhi0() &
(MIPS_WATCHHI_MASK | MIPS_WATCHHI_IRW));
}
if (current_cpu_data.watch_reg_use_cnt == 1 &&
(watches->watchhi[0] & 7) == 0) {
(watches->watchhi[0] & MIPS_WATCHHI_IRW) == 0) {
/* Pathological case of release 1 architecture that
* doesn't set the condition bits. We assume that
* since we got here, the watch condition was met and
* signal that the conditions requested in watchlo
* were met. */
watches->watchhi[0] |= (watches->watchlo[0] & 7);
watches->watchhi[0] |= (watches->watchlo[0] & MIPS_WATCHHI_IRW);
}
}
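
The literals these hunks replace decompose cleanly under the standard watchhi/watchlo layout (stated here as background, not taken from the patch itself): M at bit 31 flags that another watch register pair follows, G at bit 30 matches any ASID, bits 11:3 are the address mask, and bits 2:0 are the I/R/W condition bits. A standalone check:

#include <stdio.h>

#define MIPS_WATCHHI_M		(1u << 31)	/* another register follows */
#define MIPS_WATCHHI_G		(1u << 30)	/* match any ASID */
#define MIPS_WATCHHI_MASK	0x0ff8u		/* address mask, bits 11:3 */
#define MIPS_WATCHHI_IRW	0x0007u		/* I/R/W condition bits */
#define MIPS_WATCHLO_IRW	0x0007u		/* enable bits in watchlo */

int main(void)
{
	/* the magic numbers the patch removes: */
	printf("0x%08x\n", MIPS_WATCHHI_G | MIPS_WATCHHI_IRW);		/* 0x40000007 */
	printf("0x%08x\n", MIPS_WATCHHI_MASK | MIPS_WATCHHI_IRW);	/* 0x00000fff */
	printf("0x%08x\n", MIPS_WATCHHI_MASK);				/* 0x00000ff8 */
	return 0;
}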
@@ -110,86 +117,86 @@ void mips_probe_watch_registers(struct cpuinfo_mips *c)
* Check which of the I,R and W bits are supported, then
* disable the register.
*/
write_c0_watchlo0(7);
write_c0_watchlo0(MIPS_WATCHLO_IRW);
back_to_back_c0_hazard();
t = read_c0_watchlo0();
write_c0_watchlo0(0);
c->watch_reg_masks[0] = t & 7;
c->watch_reg_masks[0] = t & MIPS_WATCHLO_IRW;
/* Write the mask bits and read them back to determine which
* can be used. */
c->watch_reg_count = 1;
c->watch_reg_use_cnt = 1;
t = read_c0_watchhi0();
write_c0_watchhi0(t | 0xff8);
write_c0_watchhi0(t | MIPS_WATCHHI_MASK);
back_to_back_c0_hazard();
t = read_c0_watchhi0();
c->watch_reg_masks[0] |= (t & 0xff8);
if ((t & 0x80000000) == 0)
c->watch_reg_masks[0] |= (t & MIPS_WATCHHI_MASK);
if ((t & MIPS_WATCHHI_M) == 0)
return;
write_c0_watchlo1(7);
write_c0_watchlo1(MIPS_WATCHLO_IRW);
back_to_back_c0_hazard();
t = read_c0_watchlo1();
write_c0_watchlo1(0);
c->watch_reg_masks[1] = t & 7;
c->watch_reg_masks[1] = t & MIPS_WATCHLO_IRW;
c->watch_reg_count = 2;
c->watch_reg_use_cnt = 2;
t = read_c0_watchhi1();
write_c0_watchhi1(t | 0xff8);
write_c0_watchhi1(t | MIPS_WATCHHI_MASK);
back_to_back_c0_hazard();
t = read_c0_watchhi1();
c->watch_reg_masks[1] |= (t & 0xff8);
if ((t & 0x80000000) == 0)
c->watch_reg_masks[1] |= (t & MIPS_WATCHHI_MASK);
if ((t & MIPS_WATCHHI_M) == 0)
return;
write_c0_watchlo2(7);
write_c0_watchlo2(MIPS_WATCHLO_IRW);
back_to_back_c0_hazard();
t = read_c0_watchlo2();
write_c0_watchlo2(0);
c->watch_reg_masks[2] = t & 7;
c->watch_reg_masks[2] = t & MIPS_WATCHLO_IRW;
c->watch_reg_count = 3;
c->watch_reg_use_cnt = 3;
t = read_c0_watchhi2();
write_c0_watchhi2(t | 0xff8);
write_c0_watchhi2(t | MIPS_WATCHHI_MASK);
back_to_back_c0_hazard();
t = read_c0_watchhi2();
c->watch_reg_masks[2] |= (t & 0xff8);
if ((t & 0x80000000) == 0)
c->watch_reg_masks[2] |= (t & MIPS_WATCHHI_MASK);
if ((t & MIPS_WATCHHI_M) == 0)
return;
write_c0_watchlo3(7);
write_c0_watchlo3(MIPS_WATCHLO_IRW);
back_to_back_c0_hazard();
t = read_c0_watchlo3();
write_c0_watchlo3(0);
c->watch_reg_masks[3] = t & 7;
c->watch_reg_masks[3] = t & MIPS_WATCHLO_IRW;
c->watch_reg_count = 4;
c->watch_reg_use_cnt = 4;
t = read_c0_watchhi3();
write_c0_watchhi3(t | 0xff8);
write_c0_watchhi3(t | MIPS_WATCHHI_MASK);
back_to_back_c0_hazard();
t = read_c0_watchhi3();
c->watch_reg_masks[3] |= (t & 0xff8);
if ((t & 0x80000000) == 0)
c->watch_reg_masks[3] |= (t & MIPS_WATCHHI_MASK);
if ((t & MIPS_WATCHHI_M) == 0)
return;
/* We use at most 4, but probe and report up to 8. */
c->watch_reg_count = 5;
t = read_c0_watchhi4();
if ((t & 0x80000000) == 0)
if ((t & MIPS_WATCHHI_M) == 0)
return;
c->watch_reg_count = 6;
t = read_c0_watchhi5();
if ((t & 0x80000000) == 0)
if ((t & MIPS_WATCHHI_M) == 0)
return;
c->watch_reg_count = 7;
t = read_c0_watchhi6();
if ((t & 0x80000000) == 0)
if ((t & MIPS_WATCHHI_M) == 0)
return;
c->watch_reg_count = 8;