Merge tag 'powerpc-4.9-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Michael Ellerman:
 "Highlights:
   - Major rework of Book3S 64-bit exception vectors (Nicholas Piggin)
   - Use gas sections for arranging exception vectors et al.
   - Large set of TM cleanups and selftests (Cyril Bur)
   - Enable transactional memory (TM) lazily for userspace (Cyril Bur)
   - Support for XZ compression in the zImage wrapper (Oliver
     O'Halloran)
   - Add support for bpf constant blinding (Naveen N. Rao)
   - Beginnings of upstream support for PA Semi Nemo motherboards
     (Darren Stevens)

  Fixes:
   - Ensure .mem(init|exit).text are within _stext/_etext (Michael
     Ellerman)
   - xmon: Don't use ld on 32-bit (Michael Ellerman)
   - vdso64: Use double word compare on pointers (Anton Blanchard)
   - powerpc/nvram: Fix an incorrect partition merge (Pan Xinhui)
   - powerpc: Fix usage of _PAGE_RO in hugepage (Christophe Leroy)
   - powerpc/mm: Update FORCE_MAX_ZONEORDER range to allow hugetlb w/4K
     (Aneesh Kumar K.V)
   - Fix memory leak in queue_hotplug_event() error path (Andrew
     Donnellan)
   - Replay hypervisor maintenance interrupt first (Nicholas Piggin)

  Various performance optimisations (Anton Blanchard):
   - Align hot loops of memset() and backwards_memcpy()
   - During context switch, check before setting mm_cpumask
   - Remove static branch prediction in atomic{,64}_add_unless
   - Only disable HAVE_EFFICIENT_UNALIGNED_ACCESS on POWER7 little
     endian
   - Set default CPU type to POWER8 for little endian builds

  Cleanups & features:
   - Sparse fixes/cleanups (Daniel Axtens)
   - Preserve CFAR value on SLB miss caused by access to bogus address
     (Paul Mackerras)
   - Radix MMU fixups for POWER9 (Aneesh Kumar K.V)
   - Support for setting used_(vsr|vr|spe) in sigreturn path (for CRIU)
     (Simon Guo)
   - Optimise syscall entry for virtual, relocatable case (Nicholas
     Piggin)
   - Optimise MSR handling in exception handling (Nicholas Piggin)
   - Support for kexec with Radix MMU (Benjamin Herrenschmidt)
   - powernv EEH fixes (Russell Currey)
   - Surprise PCI hotplug support for powernv (Gavin Shan)
   - Endian/sparse fixes for powernv PCI (Gavin Shan)
   - Defconfig updates (Anton Blanchard)
   - KVM: PPC: Book3S HV: Migrate pinned pages out of CMA (Balbir Singh)
   - cxl: Flush PSL cache before resetting the adapter (Frederic Barrat)
   - cxl: replace loop with for_each_child_of_node(), remove unneeded
     of_node_put() (Andrew Donnellan)
   - Fix HV facility unavailable to use correct handler (Nicholas
     Piggin)
   - Remove unnecessary syscall trampoline (Nicholas Piggin)
   - fadump: Fix build break when CONFIG_PROC_VMCORE=n (Michael
     Ellerman)
   - Quieten EEH message when no adapters are found (Anton Blanchard)
   - powernv: Add PHB register dump debugfs handle (Russell Currey)
   - Use kprobe blacklist for exception handlers & asm functions
     (Nicholas Piggin)
   - Document the syscall ABI (Nicholas Piggin)
   - MAINTAINERS: Update cxl maintainers (Michael Neuling)
   - powerpc: Remove all usages of NO_IRQ (Michael Ellerman)

  Minor cleanups:
   - Andrew Donnellan, Christophe Leroy, Colin Ian King, Cyril Bur,
     Frederic Barrat, Pan Xinhui, PrasannaKumar Muralidharan, Rui Teng,
     Simon Guo"

* tag 'powerpc-4.9-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (156 commits)
  powerpc/bpf: Add support for bpf constant blinding
  powerpc/bpf: Implement support for tail calls
  powerpc/bpf: Introduce accessors for using the tmp local stack space
  powerpc/fadump: Fix build break when CONFIG_PROC_VMCORE=n
  powerpc: tm: Enable transactional memory (TM) lazily for userspace
  powerpc/tm: Add TM Unavailable Exception
  powerpc: Remove do_load_up_transact_{fpu,altivec}
  powerpc: tm: Rename transct_(*) to ck(\1)_state
  powerpc: tm: Always use fp_state and vr_state to store live registers
  selftests/powerpc: Add checks for transactional VSXs in signal contexts
  selftests/powerpc: Add checks for transactional VMXs in signal contexts
  selftests/powerpc: Add checks for transactional FPUs in signal contexts
  selftests/powerpc: Add checks for transactional GPRs in signal contexts
  selftests/powerpc: Check that signals always get delivered
  selftests/powerpc: Add TM tcheck helpers in C
  selftests/powerpc: Allow tests to extend their kill timeout
  selftests/powerpc: Introduce GPR asm helper header file
  selftests/powerpc: Move VMX stack frame macros to header file
  selftests/powerpc: Rework FPU stack placement macros and move to header file
  selftests/powerpc: Check for VSX preservation across userspace preemption
  ...
Linus Torvalds committed 2016-10-07 20:19:31 -07:00
227 changed files with 5471 additions and 2931 deletions


@@ -31,8 +31,7 @@ obj-y := cputable.o ptrace.o syscalls.o \
process.o systbl.o idle.o \
signal.o sysfs.o cacheinfo.o time.o \
prom.o traps.o setup-common.o \
udbg.o misc.o io.o dma.o \
misc_$(CONFIG_WORD_SIZE).o \
udbg.o misc.o io.o dma.o misc_$(BITS).o \
of_platform.o prom_parse.o
obj-$(CONFIG_PPC64) += setup_64.o sys_ppc32.o \
signal_64.o ptrace32.o \
@@ -70,23 +69,23 @@ obj-$(CONFIG_HIBERNATION) += swsusp.o suspend.o
ifeq ($(CONFIG_FSL_BOOKE),y)
obj-$(CONFIG_HIBERNATION) += swsusp_booke.o
else
obj-$(CONFIG_HIBERNATION) += swsusp_$(CONFIG_WORD_SIZE).o
obj-$(CONFIG_HIBERNATION) += swsusp_$(BITS).o
endif
obj64-$(CONFIG_HIBERNATION) += swsusp_asm64.o
obj-$(CONFIG_MODULES) += module.o module_$(CONFIG_WORD_SIZE).o
obj-$(CONFIG_MODULES) += module.o module_$(BITS).o
obj-$(CONFIG_44x) += cpu_setup_44x.o
obj-$(CONFIG_PPC_FSL_BOOK3E) += cpu_setup_fsl_booke.o
obj-$(CONFIG_PPC_DOORBELL) += dbell.o
obj-$(CONFIG_JUMP_LABEL) += jump_label.o
extra-y := head_$(CONFIG_WORD_SIZE).o
extra-y := head_$(BITS).o
extra-$(CONFIG_40x) := head_40x.o
extra-$(CONFIG_44x) := head_44x.o
extra-$(CONFIG_FSL_BOOKE) := head_fsl_booke.o
extra-$(CONFIG_8xx) := head_8xx.o
extra-y += vmlinux.lds
obj-$(CONFIG_RELOCATABLE) += reloc_$(CONFIG_WORD_SIZE).o
obj-$(CONFIG_RELOCATABLE) += reloc_$(BITS).o
obj-$(CONFIG_PPC32) += entry_32.o setup_32.o
obj-$(CONFIG_PPC64) += dma-iommu.o iommu.o
@@ -104,11 +103,11 @@ obj-$(CONFIG_STACKTRACE) += stacktrace.o
obj-$(CONFIG_SWIOTLB) += dma-swiotlb.o
pci64-$(CONFIG_PPC64) += pci_dn.o pci-hotplug.o isa-bridge.o
obj-$(CONFIG_PCI) += pci_$(CONFIG_WORD_SIZE).o $(pci64-y) \
obj-$(CONFIG_PCI) += pci_$(BITS).o $(pci64-y) \
pci-common.o pci_of_scan.o
obj-$(CONFIG_PCI_MSI) += msi.o
obj-$(CONFIG_KEXEC) += machine_kexec.o crash.o \
machine_kexec_$(CONFIG_WORD_SIZE).o
machine_kexec_$(BITS).o
obj-$(CONFIG_AUDIT) += audit.o
obj64-$(CONFIG_AUDIT) += compat_audit.o


@@ -142,12 +142,12 @@ int main(void)
DEFINE(THREAD_TM_PPR, offsetof(struct thread_struct, tm_ppr));
DEFINE(THREAD_TM_DSCR, offsetof(struct thread_struct, tm_dscr));
DEFINE(PT_CKPT_REGS, offsetof(struct thread_struct, ckpt_regs));
DEFINE(THREAD_TRANSACT_VRSTATE, offsetof(struct thread_struct,
transact_vr));
DEFINE(THREAD_TRANSACT_VRSAVE, offsetof(struct thread_struct,
transact_vrsave));
DEFINE(THREAD_TRANSACT_FPSTATE, offsetof(struct thread_struct,
transact_fp));
DEFINE(THREAD_CKVRSTATE, offsetof(struct thread_struct,
ckvr_state));
DEFINE(THREAD_CKVRSAVE, offsetof(struct thread_struct,
ckvrsave));
DEFINE(THREAD_CKFPSTATE, offsetof(struct thread_struct,
ckfp_state));
/* Local pt_regs on stack for Transactional Memory funcs. */
DEFINE(TM_FRAME_SIZE, STACK_FRAME_OVERHEAD +
sizeof(struct pt_regs) + 16);


@@ -506,6 +506,25 @@ static struct cpu_spec __initdata cpu_specs[] = {
.machine_check_early = __machine_check_early_realmode_p8,
.platform = "power8",
},
{ /* Power9 DD1*/
.pvr_mask = 0xffffff00,
.pvr_value = 0x004e0100,
.cpu_name = "POWER9 (raw)",
.cpu_features = CPU_FTRS_POWER9_DD1,
.cpu_user_features = COMMON_USER_POWER9,
.cpu_user_features2 = COMMON_USER2_POWER9,
.mmu_features = MMU_FTRS_POWER9,
.icache_bsize = 128,
.dcache_bsize = 128,
.num_pmcs = 6,
.pmc_type = PPC_PMC_IBM,
.oprofile_cpu_type = "ppc64/power9",
.oprofile_type = PPC_OPROFILE_INVALID,
.cpu_setup = __setup_cpu_power9,
.cpu_restore = __restore_cpu_power9,
.flush_tlb = __flush_tlb_power9,
.platform = "power9",
},
{ /* Power9 */
.pvr_mask = 0xffff0000,
.pvr_value = 0x004e0000,


@@ -116,6 +116,7 @@ struct eeh_ops *eeh_ops = NULL;
/* Lock to avoid races due to multiple reports of an error */
DEFINE_RAW_SPINLOCK(confirm_error_lock);
EXPORT_SYMBOL_GPL(confirm_error_lock);
/* Lock to protect passed flags */
static DEFINE_MUTEX(eeh_dev_mutex);
@@ -1044,7 +1045,7 @@ int eeh_init(void)
if (eeh_enabled())
pr_info("EEH: PCI Enhanced I/O Error Handling Enabled\n");
else
pr_warn("EEH: No capable adapters found\n");
pr_info("EEH: No capable adapters found\n");
return ret;
}
@@ -1502,6 +1503,7 @@ int eeh_pe_set_option(struct eeh_pe *pe, int option)
break;
case EEH_OPT_THAW_MMIO:
case EEH_OPT_THAW_DMA:
case EEH_OPT_FREEZE_PE:
if (!eeh_ops || !eeh_ops->set_option) {
ret = -ENOENT;
break;


@@ -993,9 +993,17 @@ static void eeh_handle_special_event(void)
/* Notify all devices to be down */
eeh_pe_state_clear(pe, EEH_PE_PRI_BUS);
bus = eeh_pe_bus_get(phb_pe);
eeh_pe_dev_traverse(pe,
eeh_report_failure, NULL);
bus = eeh_pe_bus_get(phb_pe);
if (!bus) {
pr_err("%s: Cannot find PCI bus for "
"PHB#%d-PE#%x\n",
__func__,
pe->phb->global_number,
pe->addr);
break;
}
pci_hp_remove_devices(bus);
}
pci_unlock_rescan_remove();


@@ -581,6 +581,7 @@ void eeh_pe_state_mark(struct eeh_pe *pe, int state)
{
eeh_pe_traverse(pe, __eeh_pe_state_mark, &state);
}
EXPORT_SYMBOL_GPL(eeh_pe_state_mark);
static void *__eeh_pe_dev_mode_mark(void *data, void *flag)
{


@@ -654,7 +654,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_SPE)
#endif /* CONFIG_SMP */
tophys(r0,r4)
CLR_TOP32(r0)
mtspr SPRN_SPRG_THREAD,r0 /* Update current THREAD phys addr */
lwz r1,KSP(r4) /* Load new stack pointer */


@@ -139,7 +139,7 @@ END_FW_FTR_SECTION_IFSET(FW_FEATURE_SPLPAR)
#ifdef CONFIG_PPC_BOOK3E
wrteei 1
#else
ld r11,PACAKMSR(r13)
li r11,MSR_RI
ori r11,r11,MSR_EE
mtmsrd r11,1
#endif /* CONFIG_PPC_BOOK3E */
@@ -195,7 +195,6 @@ system_call: /* label this so stack traces look sane */
#ifdef CONFIG_PPC_BOOK3E
wrteei 0
#else
ld r10,PACAKMSR(r13)
/*
* For performance reasons we clear RI the same time that we
* clear EE. We only need to clear RI just before we restore r13
@@ -203,8 +202,7 @@ system_call: /* label this so stack traces look sane */
* We have to be careful to restore RI if we branch anywhere from
* here (eg syscall_exit_work).
*/
li r9,MSR_RI
andc r11,r10,r9
li r11,0
mtmsrd r11,1
#endif /* CONFIG_PPC_BOOK3E */
@@ -221,13 +219,12 @@ system_call: /* label this so stack traces look sane */
#endif
2: addi r3,r1,STACK_FRAME_OVERHEAD
#ifdef CONFIG_PPC_BOOK3S
li r10,MSR_RI
mtmsrd r10,1 /* Restore RI */
#endif
bl restore_math
#ifdef CONFIG_PPC_BOOK3S
ld r10,PACAKMSR(r13)
li r9,MSR_RI
andc r11,r10,r9 /* Re-clear RI */
li r11,0
mtmsrd r11,1
#endif
ld r8,_MSR(r1)
@@ -308,6 +305,7 @@ syscall_enosys:
syscall_exit_work:
#ifdef CONFIG_PPC_BOOK3S
li r10,MSR_RI
mtmsrd r10,1 /* Restore RI */
#endif
/* If TIF_RESTOREALL is set, don't scribble on either r3 or ccr.
@@ -354,7 +352,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
#ifdef CONFIG_PPC_BOOK3E
wrteei 1
#else
ld r10,PACAKMSR(r13)
li r10,MSR_RI
ori r10,r10,MSR_EE
mtmsrd r10,1
#endif /* CONFIG_PPC_BOOK3E */
@@ -619,7 +617,7 @@ _GLOBAL(ret_from_except_lite)
#ifdef CONFIG_PPC_BOOK3E
wrteei 0
#else
ld r10,PACAKMSR(r13) /* Get kernel MSR without EE */
li r10,MSR_RI
mtmsrd r10,1 /* Update machine state */
#endif /* CONFIG_PPC_BOOK3E */
@@ -751,7 +749,7 @@ resume_kernel:
#ifdef CONFIG_PPC_BOOK3E
wrteei 0
#else
ld r10,PACAKMSR(r13) /* Get kernel MSR without EE */
li r10,MSR_RI
mtmsrd r10,1 /* Update machine state */
#endif /* CONFIG_PPC_BOOK3E */
#endif /* CONFIG_PREEMPT */
@@ -841,8 +839,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
* userspace and we take an exception after restoring r13,
* we end up corrupting the userspace r13 value.
*/
ld r4,PACAKMSR(r13) /* Get kernel MSR without EE */
andc r4,r4,r0 /* r0 contains MSR_RI here */
li r4,0
mtmsrd r4,1
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM

[File diff suppressed because it is too large]


@@ -778,7 +778,11 @@ static int fadump_init_elfcore_header(char *bufp)
elf->e_entry = 0;
elf->e_phoff = sizeof(struct elfhdr);
elf->e_shoff = 0;
elf->e_flags = ELF_CORE_EFLAGS;
#if defined(_CALL_ELF)
elf->e_flags = _CALL_ELF;
#else
elf->e_flags = 0;
#endif
elf->e_ehsize = sizeof(struct elfhdr);
elf->e_phentsize = sizeof(struct elf_phdr);
elf->e_phnum = 0;
@@ -1104,7 +1108,9 @@ static ssize_t fadump_release_memory_store(struct kobject *kobj,
* Take away the '/proc/vmcore'. We are releasing the dump
* memory, hence it will not be valid anymore.
*/
#ifdef CONFIG_PROC_VMCORE
vmcore_cleanup();
#endif
fadump_invalidate_release_mem();
} else


@@ -50,32 +50,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX); \
#define REST_32FPVSRS(n,c,base) __REST_32FPVSRS(n,__REG_##c,__REG_##base)
#define SAVE_32FPVSRS(n,c,base) __SAVE_32FPVSRS(n,__REG_##c,__REG_##base)
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
/* void do_load_up_transact_fpu(struct thread_struct *thread)
*
* This is similar to load_up_fpu but for the transactional version of the FP
* register set. It doesn't mess with the task MSR or valid flags.
* Furthermore, we don't do lazy FP with TM currently.
*/
_GLOBAL(do_load_up_transact_fpu)
mfmsr r6
ori r5,r6,MSR_FP
#ifdef CONFIG_VSX
BEGIN_FTR_SECTION
oris r5,r5,MSR_VSX@h
END_FTR_SECTION_IFSET(CPU_FTR_VSX)
#endif
SYNC
MTMSRD(r5)
addi r7,r3,THREAD_TRANSACT_FPSTATE
lfd fr0,FPSTATE_FPSCR(r7)
MTFSF_L(fr0)
REST_32FPVSRS(0, R4, R7)
blr
#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
/*
* Load state from memory into FP registers including FPSCR.
* Assumes the caller has enabled FP in the MSR.


@@ -266,7 +266,6 @@ __secondary_hold_acknowledge:
#define EXCEPTION_PROLOG_2 \
CLR_TOP32(r11); \
stw r10,_CCR(r11); /* save registers */ \
stw r12,GPR12(r11); \
stw r9,GPR9(r11); \
@@ -862,7 +861,6 @@ __secondary_start:
/* ptr to phys current thread */
tophys(r4,r2)
addi r4,r4,THREAD /* phys address of our thread_struct */
CLR_TOP32(r4)
mtspr SPRN_SPRG_THREAD,r4
li r3,0
mtspr SPRN_SPRG_RTAS,r3 /* 0 => not in RTAS */
@@ -949,7 +947,6 @@ start_here:
/* ptr to phys current thread */
tophys(r4,r2)
addi r4,r4,THREAD /* init task's THREAD */
CLR_TOP32(r4)
mtspr SPRN_SPRG_THREAD,r4
li r3,0
mtspr SPRN_SPRG_RTAS,r3 /* 0 => not in RTAS */


@@ -28,6 +28,7 @@
#include <asm/page.h>
#include <asm/mmu.h>
#include <asm/ppc_asm.h>
#include <asm/head-64.h>
#include <asm/asm-offsets.h>
#include <asm/bug.h>
#include <asm/cputable.h>
@@ -65,9 +66,14 @@
* 2. The kernel is entered at __start
*/
.text
.globl _stext
_stext:
OPEN_FIXED_SECTION(first_256B, 0x0, 0x100)
USE_FIXED_SECTION(first_256B)
/*
* Offsets are relative from the start of fixed section, and
* first_256B starts at 0. Offsets are a bit easier to use here
* than the fixed section entry macros.
*/
. = 0x0
_GLOBAL(__start)
/* NOP this out unconditionally */
BEGIN_FTR_SECTION
@@ -104,6 +110,7 @@ __secondary_hold_acknowledge:
. = 0x5c
.globl __run_at_load
__run_at_load:
DEFINE_FIXED_SYMBOL(__run_at_load)
.long 0x72756e30 /* "run0" -- relocate to 0 by default */
#endif
@@ -133,7 +140,7 @@ __secondary_hold:
/* Tell the master cpu we're here */
/* Relocation is off & we are located at an address less */
/* than 0x100, so only need to grab low order offset. */
std r24,__secondary_hold_acknowledge-_stext(0)
std r24,(ABS_ADDR(__secondary_hold_acknowledge))(0)
sync
li r26,0
@@ -141,7 +148,7 @@ __secondary_hold:
tovirt(r26,r26)
#endif
/* All secondary cpus wait here until told to start. */
100: ld r12,__secondary_hold_spinloop-_stext(r26)
100: ld r12,(ABS_ADDR(__secondary_hold_spinloop))(r26)
cmpdi 0,r12,0
beq 100b
@@ -166,12 +173,13 @@ __secondary_hold:
#else
BUG_OPCODE
#endif
CLOSE_FIXED_SECTION(first_256B)
/* This value is used to mark exception frames on the stack. */
.section ".toc","aw"
exception_marker:
.tc ID_72656773_68657265[TC],0x7265677368657265
.text
.previous
/*
* On server, we include the exception vectors code here as it
@@ -180,8 +188,12 @@ exception_marker:
*/
#ifdef CONFIG_PPC_BOOK3S
#include "exceptions-64s.S"
#else
OPEN_TEXT_SECTION(0x100)
#endif
USE_TEXT_SECTION()
#ifdef CONFIG_PPC_BOOK3E
/*
* The booting_thread_hwid holds the thread id we want to boot in cpu
@@ -558,7 +570,7 @@ __after_prom_start:
#if defined(CONFIG_PPC_BOOK3E)
tovirt(r26,r26) /* on booke, we already run at PAGE_OFFSET */
#endif
lwz r7,__run_at_load-_stext(r26)
lwz r7,(FIXED_SYMBOL_ABS_ADDR(__run_at_load))(r26)
#if defined(CONFIG_PPC_BOOK3E)
tophys(r26,r26)
#endif
@@ -601,7 +613,7 @@ __after_prom_start:
#if defined(CONFIG_PPC_BOOK3E)
tovirt(r26,r26) /* on booke, we already run at PAGE_OFFSET */
#endif
lwz r7,__run_at_load-_stext(r26)
lwz r7,(FIXED_SYMBOL_ABS_ADDR(__run_at_load))(r26)
cmplwi cr0,r7,1
bne 3f
@@ -611,28 +623,35 @@ __after_prom_start:
sub r5,r5,r11
#else
/* just copy interrupts */
LOAD_REG_IMMEDIATE(r5, __end_interrupts - _stext)
LOAD_REG_IMMEDIATE(r5, FIXED_SYMBOL_ABS_ADDR(__end_interrupts))
#endif
b 5f
3:
#endif
lis r5,(copy_to_here - _stext)@ha
addi r5,r5,(copy_to_here - _stext)@l /* # bytes of memory to copy */
/* # bytes of memory to copy */
lis r5,(ABS_ADDR(copy_to_here))@ha
addi r5,r5,(ABS_ADDR(copy_to_here))@l
bl copy_and_flush /* copy the first n bytes */
/* this includes the code being */
/* executed here. */
addis r8,r3,(4f - _stext)@ha /* Jump to the copy of this code */
addi r12,r8,(4f - _stext)@l /* that we just made */
/* Jump to the copy of this code that we just made */
addis r8,r3,(ABS_ADDR(4f))@ha
addi r12,r8,(ABS_ADDR(4f))@l
mtctr r12
bctr
.balign 8
p_end: .llong _end - _stext
p_end: .llong _end - copy_to_here
4: /* Now copy the rest of the kernel up to _end */
addis r5,r26,(p_end - _stext)@ha
ld r5,(p_end - _stext)@l(r5) /* get _end */
4:
/*
* Now copy the rest of the kernel up to _end, add
* _end - copy_to_here to the copy limit and run again.
*/
addis r8,r26,(ABS_ADDR(p_end))@ha
ld r8,(ABS_ADDR(p_end))@l(r8)
add r5,r5,r8
5: bl copy_and_flush /* copy the rest */
9: b start_here_multiplatform


@@ -151,7 +151,6 @@ turn_on_mmu:
#define EXCEPTION_PROLOG_2 \
CLR_TOP32(r11); \
stw r10,_CCR(r11); /* save registers */ \
stw r12,GPR12(r11); \
stw r9,GPR9(r11); \


@@ -206,7 +206,7 @@ void thread_change_pc(struct task_struct *tsk, struct pt_regs *regs)
/*
* Handle debug exception notifications.
*/
int __kprobes hw_breakpoint_handler(struct die_args *args)
int hw_breakpoint_handler(struct die_args *args)
{
int rc = NOTIFY_STOP;
struct perf_event *bp;
@@ -290,11 +290,12 @@ out:
rcu_read_unlock();
return rc;
}
NOKPROBE_SYMBOL(hw_breakpoint_handler);
/*
* Handle single-step exceptions following a DABR hit.
*/
static int __kprobes single_step_dabr_instruction(struct die_args *args)
static int single_step_dabr_instruction(struct die_args *args)
{
struct pt_regs *regs = args->regs;
struct perf_event *bp = NULL;
@@ -329,11 +330,12 @@ static int __kprobes single_step_dabr_instruction(struct die_args *args)
return NOTIFY_STOP;
}
NOKPROBE_SYMBOL(single_step_dabr_instruction);
/*
* Handle debug exception notifications.
*/
int __kprobes hw_breakpoint_exceptions_notify(
int hw_breakpoint_exceptions_notify(
struct notifier_block *unused, unsigned long val, void *data)
{
int ret = NOTIFY_DONE;
@@ -349,6 +351,7 @@ int __kprobes hw_breakpoint_exceptions_notify(
return ret;
}
NOKPROBE_SYMBOL(hw_breakpoint_exceptions_notify);
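
The pattern above generalises; a minimal sketch with a hypothetical notifier, assuming only the real NOKPROBE_SYMBOL() macro from <linux/kprobes.h>:

#include <linux/kprobes.h>
#include <linux/notifier.h>

/* __kprobes moved the function into the special .kprobes.text section;
 * NOKPROBE_SYMBOL() instead records it in a blacklist table, so it stays
 * in ordinary .text while remaining off-limits to kprobes. */
static int example_die_handler(struct notifier_block *nb,
			       unsigned long val, void *data)
{
	return NOTIFY_DONE;	/* hypothetical: real handlers do work here */
}
NOKPROBE_SYMBOL(example_die_handler);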
/*
* Release the user breakpoints used by ptrace


@@ -227,7 +227,7 @@ int ibmebus_request_irq(u32 ist, irq_handler_t handler,
{
unsigned int irq = irq_create_mapping(NULL, ist);
if (irq == NO_IRQ)
if (!irq)
return -EINVAL;
return request_irq(irq, handler, irq_flags, devname, dev_id);
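
The NO_IRQ conversion here (and in the irq, legacy_serial and PCI hunks below) is purely mechanical; a minimal sketch of the idiom, with a hypothetical wrapper around the kernel's real irq_create_mapping():

#include <linux/errno.h>
#include <linux/irqdomain.h>

/* On powerpc NO_IRQ was defined as 0, and irq_create_mapping() returns 0
 * when no virtual interrupt could be created, so the plain truth test is
 * equivalent and the macro can simply be dropped. */
static int request_mapped_irq_sketch(struct irq_domain *domain,
				     irq_hw_number_t hwirq)
{
	unsigned int virq = irq_create_mapping(domain, hwirq);

	if (!virq)		/* was: if (virq == NO_IRQ) */
		return -EINVAL;
	return virq;
}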


@@ -67,6 +67,7 @@
#include <asm/smp.h>
#include <asm/debug.h>
#include <asm/livepatch.h>
#include <asm/asm-prototypes.h>
#ifdef CONFIG_PPC64
#include <asm/paca.h>
@@ -155,6 +156,15 @@ notrace unsigned int __check_irq_replay(void)
lv1_get_version_info(&tmp, &tmp2);
}
/*
* Check if an hypervisor Maintenance interrupt happened.
* This is a higher priority interrupt than the others, so
* replay it first.
*/
local_paca->irq_happened &= ~PACA_IRQ_HMI;
if (happened & PACA_IRQ_HMI)
return 0xe60;
/*
* We may have missed a decrementer interrupt. We check the
* decrementer itself rather than the paca irq_happened field
@@ -190,11 +200,6 @@ notrace unsigned int __check_irq_replay(void)
}
#endif /* CONFIG_PPC_BOOK3E */
/* Check if an hypervisor Maintenance interrupt happened */
local_paca->irq_happened &= ~PACA_IRQ_HMI;
if (happened & PACA_IRQ_HMI)
return 0xe60;
/* There should be nothing left ! */
BUG_ON(local_paca->irq_happened != 0);
@@ -514,7 +519,7 @@ void __do_irq(struct pt_regs *regs)
may_hard_irq_enable();
/* And finally process it */
if (unlikely(irq == NO_IRQ))
if (unlikely(!irq))
__this_cpu_inc(irq_stat.spurious_irqs);
else
generic_handle_irq(irq);
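
The reordering above makes the replay priority explicit; a condensed paraphrase, not the full __check_irq_replay(), which also handles external and doorbell interrupts:

/* HMI is checked before anything else because it is the highest-priority
 * event; 0xe60 and 0x900 are the Book3S vector numbers used above. */
static unsigned int replay_priority_sketch(unsigned char happened)
{
	if (happened & PACA_IRQ_HMI)
		return 0xe60;	/* hypervisor maintenance, replayed first */
	if (happened & PACA_IRQ_DEC)
		return 0x900;	/* decrementer */
	return 0;		/* remaining sources elided */
}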


@@ -193,10 +193,10 @@ static int __init add_legacy_soc_port(struct device_node *np,
*/
if (tsi && !strcmp(tsi->type, "tsi-bridge"))
return add_legacy_port(np, -1, UPIO_TSI, addr, addr,
NO_IRQ, legacy_port_flags, 0);
0, legacy_port_flags, 0);
else
return add_legacy_port(np, -1, UPIO_MEM, addr, addr,
NO_IRQ, legacy_port_flags, 0);
0, legacy_port_flags, 0);
}
static int __init add_legacy_isa_port(struct device_node *np,
@@ -242,7 +242,7 @@ static int __init add_legacy_isa_port(struct device_node *np,
/* Add port, irq will be dealt with later */
return add_legacy_port(np, index, UPIO_PORT, be32_to_cpu(reg[1]),
taddr, NO_IRQ, legacy_port_flags, 0);
taddr, 0, legacy_port_flags, 0);
}
@@ -314,7 +314,7 @@ static int __init add_legacy_pci_port(struct device_node *np,
/* Add port, irq will be dealt with later. We passed a translated
* IO port value. It will be fixed up later along with the irq
*/
return add_legacy_port(np, index, iotype, base, addr, NO_IRQ,
return add_legacy_port(np, index, iotype, base, addr, 0,
legacy_port_flags, np != pci_dev);
}
#endif
@@ -462,14 +462,14 @@ static void __init fixup_port_irq(int index,
DBG("fixup_port_irq(%d)\n", index);
virq = irq_of_parse_and_map(np, 0);
if (virq == NO_IRQ && legacy_serial_infos[index].irq_check_parent) {
if (!virq && legacy_serial_infos[index].irq_check_parent) {
np = of_get_parent(np);
if (np == NULL)
return;
virq = irq_of_parse_and_map(np, 0);
of_node_put(np);
}
if (virq == NO_IRQ)
if (!virq)
return;
port->irq = virq;
@@ -543,7 +543,7 @@ static int __init serial_dev_init(void)
struct plat_serial8250_port *port = &legacy_serial_ports[i];
struct device_node *np = legacy_serial_infos[i].np;
if (port->irq == NO_IRQ)
if (!port->irq)
fixup_port_irq(i, np, port);
if (port->iotype == UPIO_PORT)
fixup_port_pio(i, np, port);


@@ -23,6 +23,7 @@
#include <asm/current.h>
#include <asm/machdep.h>
#include <asm/cacheflush.h>
#include <asm/firmware.h>
#include <asm/paca.h>
#include <asm/mmu.h>
#include <asm/sections.h> /* _end */
@@ -31,21 +32,6 @@
#include <asm/hw_breakpoint.h>
#include <asm/asm-prototypes.h>
#ifdef CONFIG_PPC_BOOK3E
int default_machine_kexec_prepare(struct kimage *image)
{
int i;
/*
* Since we use the kernel fault handlers and paging code to
* handle the virtual mode, we must make sure no destination
* overlaps kernel static data or bss.
*/
for (i = 0; i < image->nr_segments; i++)
if (image->segment[i].mem < __pa(_end))
return -ETXTBSY;
return 0;
}
#else
int default_machine_kexec_prepare(struct kimage *image)
{
int i;
@@ -55,9 +41,6 @@ int default_machine_kexec_prepare(struct kimage *image)
const unsigned long *basep;
const unsigned int *sizep;
if (!mmu_hash_ops.hpte_clear_all)
return -ENOENT;
/*
* Since we use the kernel fault handlers and paging code to
* handle the virtual mode, we must make sure no destination
@@ -67,31 +50,6 @@ int default_machine_kexec_prepare(struct kimage *image)
if (image->segment[i].mem < __pa(_end))
return -ETXTBSY;
/*
* For non-LPAR, we absolutely can not overwrite the mmu hash
* table, since we are still using the bolted entries in it to
* do the copy. Check that here.
*
* It is safe if the end is below the start of the blocked
* region (end <= low), or if the beginning is after the
* end of the blocked region (begin >= high). Use the
* boolean identity !(a || b) === (!a && !b).
*/
#ifdef CONFIG_PPC_STD_MMU_64
if (htab_address) {
low = __pa(htab_address);
high = low + htab_size_bytes;
for (i = 0; i < image->nr_segments; i++) {
begin = image->segment[i].mem;
end = begin + image->segment[i].memsz;
if ((begin < high) && (end > low))
return -ETXTBSY;
}
}
#endif /* CONFIG_PPC_STD_MMU_64 */
/* We also should not overwrite the tce tables */
for_each_node_by_type(node, "pci") {
basep = of_get_property(node, "linux,tce-base", NULL);
@@ -113,7 +71,6 @@ int default_machine_kexec_prepare(struct kimage *image)
return 0;
}
#endif /* !CONFIG_PPC_BOOK3E */
static void copy_segments(unsigned long ind)
{
@@ -332,11 +289,14 @@ struct paca_struct kexec_paca;
/* Our assembly helper, in misc_64.S */
extern void kexec_sequence(void *newstack, unsigned long start,
void *image, void *control,
void (*clear_all)(void)) __noreturn;
void (*clear_all)(void),
bool copy_with_mmu_off) __noreturn;
/* too late to fail here */
void default_machine_kexec(struct kimage *image)
{
bool copy_with_mmu_off;
/* prepare control code if any */
/*
@@ -374,18 +334,29 @@ void default_machine_kexec(struct kimage *image)
/* XXX: If anyone does 'dynamic lppacas' this will also need to be
* switched to a static version!
*/
/*
* On Book3S, the copy must happen with the MMU off if we are either
* using Radix page tables or we are not in an LPAR since we can
* overwrite the page tables while copying.
*
* In an LPAR, we keep the MMU on otherwise we can't access beyond
* the RMA. On BookE there is no real MMU off mode, so we have to
* keep it enabled as well (but then we have bolted TLB entries).
*/
#ifdef CONFIG_PPC_BOOK3E
copy_with_mmu_off = false;
#else
copy_with_mmu_off = radix_enabled() ||
!(firmware_has_feature(FW_FEATURE_LPAR) ||
firmware_has_feature(FW_FEATURE_PS3_LV1));
#endif
/* Some things are best done in assembly. Finding globals with
* a toc is easier in C, so pass in what we can.
*/
kexec_sequence(&kexec_stack, image->start, image,
page_address(image->control_code_page),
#ifdef CONFIG_PPC_STD_MMU
mmu_hash_ops.hpte_clear_all
#else
NULL
#endif
);
page_address(image->control_code_page),
mmu_cleanup_all, copy_with_mmu_off);
/* NOTREACHED */
}
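
The comment removed above (the hash-table overlap check that Radix and BookE no longer need) encodes a standard interval test; spelled out as a standalone predicate, with illustrative names:

#include <linux/types.h>

/* A segment [begin, end) is safe iff it ends at or before the blocked
 * region [low, high) starts, or begins at or after it ends:
 *	safe = (end <= low) || (begin >= high)
 * so by the identity !(a || b) == (!a && !b) it collides iff: */
static bool overlaps_blocked_region(unsigned long begin, unsigned long end,
				    unsigned long low, unsigned long high)
{
	return (begin < high) && (end > low);
}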


@@ -328,7 +328,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_UNIFIED_ID_CACHE)
*
* flush_icache_range(unsigned long start, unsigned long stop)
*/
_KPROBE(flush_icache_range)
_GLOBAL(flush_icache_range)
BEGIN_FTR_SECTION
PURGE_PREFETCHED_INS
blr /* for 601, do nothing */
@@ -358,6 +358,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE)
sync /* additional sync needed on g4 */
isync
blr
_ASM_NOKPROBE_SYMBOL(flush_icache_range)
/*
* Flush a particular page from the data cache to RAM.
* Note: this is necessary because the instruction cache does *not*


@@ -66,7 +66,7 @@ PPC64_CACHES:
* flush all bytes from start through stop-1 inclusive
*/
_KPROBE(flush_icache_range)
_GLOBAL(flush_icache_range)
BEGIN_FTR_SECTION
PURGE_PREFETCHED_INS
blr
@@ -109,7 +109,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE)
bdnz 2b
isync
blr
.previous .text
_ASM_NOKPROBE_SYMBOL(flush_icache_range)
/*
* Like above, but only do the D-cache.
*
@@ -591,7 +592,8 @@ real_mode: /* assume normal blr return */
#endif
/*
* kexec_sequence(newstack, start, image, control, clear_all())
* kexec_sequence(newstack, start, image, control, clear_all(),
copy_with_mmu_off)
*
* does the grungy work with stack switching and real mode switches
* also does simple calls to other code
@@ -627,7 +629,7 @@ _GLOBAL(kexec_sequence)
mr r29,r5 /* image (virt) */
mr r28,r6 /* control, unused */
mr r27,r7 /* clear_all() fn desc */
mr r26,r8 /* spare */
mr r26,r8 /* copy_with_mmu_off */
lhz r25,PACAHWCPUID(r13) /* get our phys cpu from paca */
/* disable interrupts, we are overwriting kernel data next */
@@ -639,15 +641,24 @@ _GLOBAL(kexec_sequence)
mtmsrd r3,1
#endif
/* We need to turn the MMU off unless we are in hash mode
* under a hypervisor
*/
cmpdi r26,0
beq 1f
bl real_mode
1:
/* copy dest pages, flush whole dest image */
mr r3,r29
bl kexec_copy_flush /* (image) */
/* turn off mmu */
/* turn off mmu now if not done earlier */
cmpdi r26,0
bne 1f
bl real_mode
/* copy 0x100 bytes starting at start to 0 */
li r3,0
1: li r3,0
mr r4,r30 /* start, aka phys mem offset */
li r5,0x100
li r6,0
@@ -659,7 +670,9 @@ _GLOBAL(kexec_sequence)
li r6,1
stw r6,kexec_flag-1b(5)
#ifndef CONFIG_PPC_BOOK3E
cmpdi r27,0
beq 1f
/* clear out hardware hash page table and tlb */
#ifdef PPC64_ELF_ABI_v1
ld r12,0(r27) /* deref function descriptor */
@@ -668,7 +681,6 @@ _GLOBAL(kexec_sequence)
#endif
mtctr r12
bctrl /* mmu_hash_ops.hpte_clear_all(void); */
#endif /* !CONFIG_PPC_BOOK3E */
/*
* kexec image calling is:
@@ -695,7 +707,7 @@ _GLOBAL(kexec_sequence)
* are the boot cpu ?????
* other device tree differences (prop sizes, va vs pa, etc)...
*/
mr r3,r25 # my phys cpu
1: mr r3,r25 # my phys cpu
mr r4,r30 # start, aka phys mem offset
mtlr 4
li r5,0
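
kexec_sequence() now takes the copy_with_mmu_off flag in r26 and branches around real_mode accordingly; read together with the C hunk above, the whole decision condenses to one predicate (helper name illustrative, the feature tests are the kernel's own):

static bool kexec_copy_with_mmu_off_sketch(void)
{
#ifdef CONFIG_PPC_BOOK3E
	/* BookE has no real MMU-off mode; rely on bolted TLB entries. */
	return false;
#else
	/* Radix page tables, or bare metal (neither PAPR LPAR nor PS3 lv1):
	 * the copy may overwrite the page tables, so turn the MMU off.
	 * In an LPAR the MMU stays on so memory beyond the RMA remains
	 * reachable. */
	return radix_enabled() ||
	       !(firmware_has_feature(FW_FEATURE_LPAR) ||
		 firmware_has_feature(FW_FEATURE_PS3_LV1));
#endif
}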


@@ -27,7 +27,7 @@
#include <linux/sort.h>
#include <asm/setup.h>
LIST_HEAD(module_bug_list);
static LIST_HEAD(module_bug_list);
static const Elf_Shdr *find_section(const Elf_Ehdr *hdr,
const Elf_Shdr *sechdrs,


@@ -542,9 +542,9 @@ static ssize_t nvram_pstore_read(u64 *id, enum pstore_type_id *type,
time->tv_nsec = 0;
}
*buf = kmemdup(buff + hdr_size, length, GFP_KERNEL);
kfree(buff);
if (*buf == NULL)
return -ENOMEM;
kfree(buff);
*ecc_notice_size = 0;
if (err_type == ERR_TYPE_KERNEL_PANIC_GZ)
@@ -851,7 +851,7 @@ static long dev_nvram_ioctl(struct file *file, unsigned int cmd,
}
}
const struct file_operations nvram_fops = {
static const struct file_operations nvram_fops = {
.owner = THIS_MODULE,
.llseek = dev_nvram_llseek,
.read = dev_nvram_read,
@@ -956,7 +956,7 @@ int __init nvram_remove_partition(const char *name, int sig,
/* Make partition a free partition */
part->header.signature = NVRAM_SIG_FREE;
strncpy(part->header.name, "wwwwwwwwwwww", 12);
memset(part->header.name, 'w', 12);
part->header.checksum = nvram_checksum(&part->header);
rc = nvram_write_header(part);
if (rc <= 0) {
@@ -974,8 +974,8 @@ int __init nvram_remove_partition(const char *name, int sig,
}
if (prev) {
prev->header.length += part->header.length;
prev->header.checksum = nvram_checksum(&part->header);
rc = nvram_write_header(part);
prev->header.checksum = nvram_checksum(&prev->header);
rc = nvram_write_header(prev);
if (rc <= 0) {
printk(KERN_ERR "nvram_remove_partition: nvram_write failed (%d)\n", rc);
return rc;


@@ -360,7 +360,7 @@ static int pci_read_irq_line(struct pci_dev *pci_dev)
line, pin);
virq = irq_create_mapping(NULL, line);
if (virq != NO_IRQ)
if (virq)
irq_set_irq_type(virq, IRQ_TYPE_LEVEL_LOW);
} else {
pr_debug(" Got one, spec %d cells (0x%08x 0x%08x...) on %s\n",
@@ -369,7 +369,8 @@ static int pci_read_irq_line(struct pci_dev *pci_dev)
virq = irq_create_of_mapping(&oirq);
}
if(virq == NO_IRQ) {
if (!virq) {
pr_debug(" Failed to map !\n");
return -1;
}


@@ -178,7 +178,7 @@ struct pci_dev *of_create_pci_dev(struct device_node *node,
dev->hdr_type = PCI_HEADER_TYPE_NORMAL;
dev->rom_base_reg = PCI_ROM_ADDRESS;
/* Maybe do a default OF mapping here */
dev->irq = NO_IRQ;
dev->irq = 0;
}
of_pci_parse_addrs(node, dev);


@@ -59,6 +59,7 @@
#include <asm/exec.h>
#include <asm/livepatch.h>
#include <asm/cpu_has_feature.h>
#include <asm/asm-prototypes.h>
#include <linux/kprobes.h>
#include <linux/kdebug.h>
@@ -88,7 +89,13 @@ static void check_if_tm_restore_required(struct task_struct *tsk)
set_thread_flag(TIF_RESTORE_TM);
}
}
static inline bool msr_tm_active(unsigned long msr)
{
return MSR_TM_ACTIVE(msr);
}
#else
static inline bool msr_tm_active(unsigned long msr) { return false; }
static inline void check_if_tm_restore_required(struct task_struct *tsk) { }
#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
@@ -104,7 +111,7 @@ static int __init enable_strict_msr_control(char *str)
}
early_param("ppc_strict_facility_enable", enable_strict_msr_control);
void msr_check_and_set(unsigned long bits)
unsigned long msr_check_and_set(unsigned long bits)
{
unsigned long oldmsr = mfmsr();
unsigned long newmsr;
@@ -118,6 +125,8 @@ void msr_check_and_set(unsigned long bits)
if (oldmsr != newmsr)
mtmsr_isync(newmsr);
return newmsr;
}
void __msr_check_and_clear(unsigned long bits)
@@ -196,19 +205,30 @@ EXPORT_SYMBOL_GPL(flush_fp_to_thread);
void enable_kernel_fp(void)
{
unsigned long cpumsr;
WARN_ON(preemptible());
msr_check_and_set(MSR_FP);
cpumsr = msr_check_and_set(MSR_FP);
if (current->thread.regs && (current->thread.regs->msr & MSR_FP)) {
check_if_tm_restore_required(current);
/*
* If a thread has already been reclaimed then the
* checkpointed registers are on the CPU but have definitely
* been saved by the reclaim code. Don't need to and *cannot*
* giveup as this would save to the 'live' structure not the
* checkpointed structure.
*/
if(!msr_tm_active(cpumsr) && msr_tm_active(current->thread.regs->msr))
return;
__giveup_fpu(current);
}
}
EXPORT_SYMBOL(enable_kernel_fp);
static int restore_fp(struct task_struct *tsk) {
if (tsk->thread.load_fp) {
if (tsk->thread.load_fp || msr_tm_active(tsk->thread.regs->msr)) {
load_fp_state(&current->thread.fp_state);
current->thread.load_fp++;
return 1;
@@ -248,12 +268,23 @@ EXPORT_SYMBOL(giveup_altivec);
void enable_kernel_altivec(void)
{
unsigned long cpumsr;
WARN_ON(preemptible());
msr_check_and_set(MSR_VEC);
cpumsr = msr_check_and_set(MSR_VEC);
if (current->thread.regs && (current->thread.regs->msr & MSR_VEC)) {
check_if_tm_restore_required(current);
/*
* If a thread has already been reclaimed then the
* checkpointed registers are on the CPU but have definitely
* been saved by the reclaim code. Don't need to and *cannot*
* giveup as this would save to the 'live' structure not the
* checkpointed structure.
*/
if(!msr_tm_active(cpumsr) && msr_tm_active(current->thread.regs->msr))
return;
__giveup_altivec(current);
}
}
@@ -278,7 +309,8 @@ EXPORT_SYMBOL_GPL(flush_altivec_to_thread);
static int restore_altivec(struct task_struct *tsk)
{
if (cpu_has_feature(CPU_FTR_ALTIVEC) && tsk->thread.load_vec) {
if (cpu_has_feature(CPU_FTR_ALTIVEC) &&
(tsk->thread.load_vec || msr_tm_active(tsk->thread.regs->msr))) {
load_vr_state(&tsk->thread.vr_state);
tsk->thread.used_vr = 1;
tsk->thread.load_vec++;
@@ -321,12 +353,23 @@ static void save_vsx(struct task_struct *tsk)
void enable_kernel_vsx(void)
{
unsigned long cpumsr;
WARN_ON(preemptible());
msr_check_and_set(MSR_FP|MSR_VEC|MSR_VSX);
cpumsr = msr_check_and_set(MSR_FP|MSR_VEC|MSR_VSX);
if (current->thread.regs && (current->thread.regs->msr & MSR_VSX)) {
check_if_tm_restore_required(current);
/*
* If a thread has already been reclaimed then the
* checkpointed registers are on the CPU but have definitely
* been saved by the reclaim code. Don't need to and *cannot*
* giveup as this would save to the 'live' structure not the
* checkpointed structure.
*/
if(!msr_tm_active(cpumsr) && msr_tm_active(current->thread.regs->msr))
return;
if (current->thread.regs->msr & MSR_FP)
__giveup_fpu(current);
if (current->thread.regs->msr & MSR_VEC)
@@ -438,6 +481,7 @@ void giveup_all(struct task_struct *tsk)
return;
msr_check_and_set(msr_all_available);
check_if_tm_restore_required(tsk);
#ifdef CONFIG_PPC_FPU
if (usermsr & MSR_FP)
@@ -464,7 +508,8 @@ void restore_math(struct pt_regs *regs)
{
unsigned long msr;
if (!current->thread.load_fp && !loadvec(current->thread))
if (!msr_tm_active(regs->msr) &&
!current->thread.load_fp && !loadvec(current->thread))
return;
msr = regs->msr;
@@ -767,29 +812,15 @@ static inline bool hw_brk_match(struct arch_hw_breakpoint *a,
}
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
static inline bool tm_enabled(struct task_struct *tsk)
{
return tsk && tsk->thread.regs && (tsk->thread.regs->msr & MSR_TM);
}
static void tm_reclaim_thread(struct thread_struct *thr,
struct thread_info *ti, uint8_t cause)
{
unsigned long msr_diff = 0;
/*
* If FP/VSX registers have been already saved to the
* thread_struct, move them to the transact_fp array.
* We clear the TIF_RESTORE_TM bit since after the reclaim
* the thread will no longer be transactional.
*/
if (test_ti_thread_flag(ti, TIF_RESTORE_TM)) {
msr_diff = thr->ckpt_regs.msr & ~thr->regs->msr;
if (msr_diff & MSR_FP)
memcpy(&thr->transact_fp, &thr->fp_state,
sizeof(struct thread_fp_state));
if (msr_diff & MSR_VEC)
memcpy(&thr->transact_vr, &thr->vr_state,
sizeof(struct thread_vr_state));
clear_ti_thread_flag(ti, TIF_RESTORE_TM);
msr_diff &= MSR_FP | MSR_VEC | MSR_VSX | MSR_FE0 | MSR_FE1;
}
/*
* Use the current MSR TM suspended bit to track if we have
* checkpointed state outstanding.
@@ -808,15 +839,9 @@ static void tm_reclaim_thread(struct thread_struct *thr,
if (!MSR_TM_SUSPENDED(mfmsr()))
return;
tm_reclaim(thr, thr->regs->msr, cause);
giveup_all(container_of(thr, struct task_struct, thread));
/* Having done the reclaim, we now have the checkpointed
* FP/VSX values in the registers. These might be valid
* even if we have previously called enable_kernel_fp() or
* flush_fp_to_thread(), so update thr->regs->msr to
* indicate their current validity.
*/
thr->regs->msr |= msr_diff;
tm_reclaim(thr, thr->ckpt_regs.msr, cause);
}
void tm_reclaim_current(uint8_t cause)
@@ -832,8 +857,8 @@ static inline void tm_reclaim_task(struct task_struct *tsk)
*
* In switching we need to maintain a 2nd register state as
* oldtask->thread.ckpt_regs. We tm_reclaim(oldproc); this saves the
* checkpointed (tbegin) state in ckpt_regs and saves the transactional
* (current) FPRs into oldtask->thread.transact_fpr[].
* checkpointed (tbegin) state in ckpt_regs, ckfp_state and
* ckvr_state
*
* We also context switch (save) TFHAR/TEXASR/TFIAR in here.
*/
@@ -845,14 +870,6 @@ static inline void tm_reclaim_task(struct task_struct *tsk)
if (!MSR_TM_ACTIVE(thr->regs->msr))
goto out_and_saveregs;
/* Stash the original thread MSR, as giveup_fpu et al will
* modify it. We hold onto it to see whether the task used
* FP & vector regs. If the TIF_RESTORE_TM flag is set,
* ckpt_regs.msr is already set.
*/
if (!test_ti_thread_flag(task_thread_info(tsk), TIF_RESTORE_TM))
thr->ckpt_regs.msr = thr->regs->msr;
TM_DEBUG("--- tm_reclaim on pid %d (NIP=%lx, "
"ccr=%lx, msr=%lx, trap=%lx)\n",
tsk->pid, thr->regs->nip,
@@ -881,6 +898,9 @@ void tm_recheckpoint(struct thread_struct *thread,
{
unsigned long flags;
if (!(thread->regs->msr & MSR_TM))
return;
/* We really can't be interrupted here as the TEXASR registers can't
* change and later in the trecheckpoint code, we have a userspace R1.
* So let's hard disable over this region.
@@ -910,10 +930,10 @@ static inline void tm_recheckpoint_new_task(struct task_struct *new)
* If the task was using FP, we non-lazily reload both the original and
* the speculative FP register states. This is because the kernel
* doesn't see if/when a TM rollback occurs, so if we take an FP
* unavoidable later, we are unable to determine which set of FP regs
* unavailable later, we are unable to determine which set of FP regs
* need to be restored.
*/
if (!new->thread.regs)
if (!tm_enabled(new))
return;
if (!MSR_TM_ACTIVE(new->thread.regs->msr)){
@@ -926,35 +946,35 @@ static inline void tm_recheckpoint_new_task(struct task_struct *new)
"(new->msr 0x%lx, new->origmsr 0x%lx)\n",
new->pid, new->thread.regs->msr, msr);
/* This loads the checkpointed FP/VEC state, if used */
tm_recheckpoint(&new->thread, msr);
/* This loads the speculative FP/VEC state, if used */
if (msr & MSR_FP) {
do_load_up_transact_fpu(&new->thread);
new->thread.regs->msr |=
(MSR_FP | new->thread.fpexc_mode);
}
#ifdef CONFIG_ALTIVEC
if (msr & MSR_VEC) {
do_load_up_transact_altivec(&new->thread);
new->thread.regs->msr |= MSR_VEC;
}
#endif
/* We may as well turn on VSX too since all the state is restored now */
if (msr & MSR_VSX)
new->thread.regs->msr |= MSR_VSX;
/*
* The checkpointed state has been restored but the live state has
* not, ensure all the math functionality is turned off to trigger
* restore_math() to reload.
*/
new->thread.regs->msr &= ~(MSR_FP | MSR_VEC | MSR_VSX);
TM_DEBUG("*** tm_recheckpoint of pid %d complete "
"(kernel msr 0x%lx)\n",
new->pid, mfmsr());
}
static inline void __switch_to_tm(struct task_struct *prev)
static inline void __switch_to_tm(struct task_struct *prev,
struct task_struct *new)
{
if (cpu_has_feature(CPU_FTR_TM)) {
tm_enable();
tm_reclaim_task(prev);
if (tm_enabled(prev) || tm_enabled(new))
tm_enable();
if (tm_enabled(prev)) {
prev->thread.load_tm++;
tm_reclaim_task(prev);
if (!MSR_TM_ACTIVE(prev->thread.regs->msr) && prev->thread.load_tm == 0)
prev->thread.regs->msr &= ~MSR_TM;
}
tm_recheckpoint_new_task(new);
}
}
@@ -976,6 +996,12 @@ void restore_tm_state(struct pt_regs *regs)
{
unsigned long msr_diff;
/*
* This is the only moment we should clear TIF_RESTORE_TM as
* it is here that ckpt_regs.msr and pt_regs.msr become the same
* again, anything else could lead to an incorrect ckpt_msr being
* saved and therefore incorrect signal contexts.
*/
clear_thread_flag(TIF_RESTORE_TM);
if (!MSR_TM_ACTIVE(regs->msr))
return;
@@ -983,6 +1009,13 @@ void restore_tm_state(struct pt_regs *regs)
msr_diff = current->thread.ckpt_regs.msr & ~regs->msr;
msr_diff &= MSR_FP | MSR_VEC | MSR_VSX;
/* Ensure that restore_math() will restore */
if (msr_diff & MSR_FP)
current->thread.load_fp = 1;
#ifdef CONFIG_ALTIVEC
if (cpu_has_feature(CPU_FTR_ALTIVEC) && msr_diff & MSR_VEC)
current->thread.load_vec = 1;
#endif
restore_math(regs);
regs->msr |= msr_diff;
@@ -990,7 +1023,7 @@ void restore_tm_state(struct pt_regs *regs)
#else
#define tm_recheckpoint_new_task(new)
#define __switch_to_tm(prev)
#define __switch_to_tm(prev, new)
#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
static inline void save_sprs(struct thread_struct *t)
@@ -1131,11 +1164,11 @@ struct task_struct *__switch_to(struct task_struct *prev,
*/
save_sprs(&prev->thread);
__switch_to_tm(prev);
/* Save FPU, Altivec, VSX and SPE state */
giveup_all(prev);
__switch_to_tm(prev, new);
/*
* We can't take a PMU exception inside _switch() since there is a
* window where the kernel stack SLB and the kernel stack are out
@@ -1143,8 +1176,6 @@ struct task_struct *__switch_to(struct task_struct *prev,
*/
hard_irq_disable();
tm_recheckpoint_new_task(new);
/*
* Call restore_sprs() before calling _switch(). If we move it after
* _switch() then we miss out on calling it for new tasks. The reason
@@ -1379,9 +1410,11 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
* transitions the CPU out of TM mode. Hence we need to call
* tm_recheckpoint_new_task() (on the same task) to restore the
* checkpointed state back and the TM mode.
*
* Can't pass dst because it isn't ready. Doesn't matter, passing
* dst is only important for __switch_to()
*/
__switch_to_tm(src);
tm_recheckpoint_new_task(src);
__switch_to_tm(src, src);
*dst = *src;
@@ -1623,8 +1656,6 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp)
current->thread.used_spe = 0;
#endif /* CONFIG_SPE */
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
if (cpu_has_feature(CPU_FTR_TM))
regs->msr |= MSR_TM;
current->thread.tm_tfhar = 0;
current->thread.tm_texasr = 0;
current->thread.tm_tfiar = 0;
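
Several hunks in this file add the same guard after msr_check_and_set(), which now returns the updated MSR; the reasoning from the repeated comment condenses to one predicate (helper name illustrative):

/* True when the transaction was already reclaimed: TM is no longer active
 * in the hardware MSR (cpumsr) but is still active in the thread's saved
 * MSR. In that window the checkpointed registers are on the CPU yet have
 * already been saved, so a giveup would overwrite the live structures
 * with checkpointed state. */
static bool tm_already_reclaimed_sketch(unsigned long cpumsr)
{
	return !MSR_TM_ACTIVE(cpumsr) &&
	       MSR_TM_ACTIVE(current->thread.regs->msr);
}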


@@ -42,6 +42,7 @@
#include <asm/sections.h>
#include <asm/machdep.h>
#include <asm/opal.h>
#include <asm/asm-prototypes.h>
#include <linux/linux_logo.h>
@@ -2643,6 +2644,86 @@ static void __init fixup_device_tree_efika(void)
#define fixup_device_tree_efika()
#endif
#ifdef CONFIG_PPC_PASEMI_NEMO
/*
* CFE supplied on Nemo is broken in several ways, biggest
* problem is that it reassigns ISA interrupts to unused mpic ints.
* Add an interrupt-controller property for the io-bridge to use
* and correct the ints so we can attach them to an irq_domain
*/
static void __init fixup_device_tree_pasemi(void)
{
u32 interrupts[2], parent, rval, val = 0;
char *name, *pci_name;
phandle iob, node;
/* Find the root pci node */
name = "/pxp@0,e0000000";
iob = call_prom("finddevice", 1, 1, ADDR(name));
if (!PHANDLE_VALID(iob))
return;
/* check if interrupt-controller node set yet */
if (prom_getproplen(iob, "interrupt-controller") !=PROM_ERROR)
return;
prom_printf("adding interrupt-controller property for SB600...\n");
prom_setprop(iob, name, "interrupt-controller", &val, 0);
pci_name = "/pxp@0,e0000000/pci@11";
node = call_prom("finddevice", 1, 1, ADDR(pci_name));
parent = ADDR(iob);
for( ; prom_next_node(&node); ) {
/* scan each node for one with an interrupt */
if (!PHANDLE_VALID(node))
continue;
rval = prom_getproplen(node, "interrupts");
if (rval == 0 || rval == PROM_ERROR)
continue;
prom_getprop(node, "interrupts", &interrupts, sizeof(interrupts));
if ((interrupts[0] < 212) || (interrupts[0] > 222))
continue;
/* found a node, update both interrupts and interrupt-parent */
if ((interrupts[0] >= 212) && (interrupts[0] <= 215))
interrupts[0] -= 203;
if ((interrupts[0] >= 216) && (interrupts[0] <= 220))
interrupts[0] -= 213;
if (interrupts[0] == 221)
interrupts[0] = 14;
if (interrupts[0] == 222)
interrupts[0] = 8;
prom_setprop(node, pci_name, "interrupts", interrupts,
sizeof(interrupts));
prom_setprop(node, pci_name, "interrupt-parent", &parent,
sizeof(parent));
}
/*
* The io-bridge has device_type set to 'io-bridge' change it to 'isa'
* so that generic isa-bridge code can add the SB600 and its on-board
* peripherals.
*/
name = "/pxp@0,e0000000/io-bridge@0";
iob = call_prom("finddevice", 1, 1, ADDR(name));
if (!PHANDLE_VALID(iob))
return;
/* device_type is already set, just change it. */
prom_printf("Changing device_type of SB600 node...\n");
prom_setprop(iob, name, "device_type", "isa", sizeof("isa"));
}
#else /* !CONFIG_PPC_PASEMI_NEMO */
static inline void fixup_device_tree_pasemi(void) { }
#endif
static void __init fixup_device_tree(void)
{
fixup_device_tree_maple();
@@ -2650,6 +2731,7 @@ static void __init fixup_device_tree(void)
fixup_device_tree_chrp();
fixup_device_tree_pmac();
fixup_device_tree_efika();
fixup_device_tree_pasemi();
}
static void __init prom_find_boot_cpu(void)
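
The interrupt rewriting in fixup_device_tree_pasemi() above reads most clearly as a pure function of the interrupt number; a standalone restatement (function name illustrative, values from the hunk):

#include <linux/types.h>

/* SB600 ISA interrupts misassigned by CFE (212..222), mapped back to the
 * inputs the kernel's irq_domain expects. */
static u32 nemo_remap_isa_irq_sketch(u32 intr)
{
	if (intr >= 212 && intr <= 215)
		return intr - 203;	/* 212..215 -> 9..12 */
	if (intr >= 216 && intr <= 220)
		return intr - 213;	/* 216..220 -> 3..7 */
	if (intr == 221)
		return 14;
	if (intr == 222)
		return 8;
	return intr;			/* others left untouched */
}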


@@ -39,6 +39,7 @@
#include <asm/pgtable.h>
#include <asm/switch_to.h>
#include <asm/tm.h>
#include <asm/asm-prototypes.h>
#define CREATE_TRACE_POINTS
#include <trace/events/syscalls.h>
@@ -402,13 +403,9 @@ static int gpr_set(struct task_struct *target, const struct user_regset *regset,
}
/*
* When the transaction is active, 'transact_fp' holds the current running
* value of all FPR registers and 'fp_state' holds the last checkpointed
* value of all FPR registers for the current transaction. When transaction
* is not active 'fp_state' holds the current running state of all the FPR
* registers. So this function which returns the current running values of
* all the FPR registers, needs to know whether any transaction is active
* or not.
* Regardless of transactions, 'fp_state' holds the current running
* value of all FPR registers and 'ckfp_state' holds the last checkpointed
* value of all FPR registers for the current transaction.
*
* Userspace interface buffer layout:
*
@@ -416,13 +413,6 @@ static int gpr_set(struct task_struct *target, const struct user_regset *regset,
* u64 fpr[32];
* u64 fpscr;
* };
*
* There are two config options CONFIG_VSX and CONFIG_PPC_TRANSACTIONAL_MEM
* which determines the final code in this function. All the combinations of
* these two config options are possible except the one below as transactional
* memory config pulls in CONFIG_VSX automatically.
*
* !defined(CONFIG_VSX) && defined(CONFIG_PPC_TRANSACTIONAL_MEM)
*/
static int fpr_get(struct task_struct *target, const struct user_regset *regset,
unsigned int pos, unsigned int count,
@@ -431,50 +421,29 @@ static int fpr_get(struct task_struct *target, const struct user_regset *regset,
#ifdef CONFIG_VSX
u64 buf[33];
int i;
#endif
flush_fp_to_thread(target);
#if defined(CONFIG_VSX) && defined(CONFIG_PPC_TRANSACTIONAL_MEM)
/* copy to local buffer then write that out */
if (MSR_TM_ACTIVE(target->thread.regs->msr)) {
flush_altivec_to_thread(target);
flush_tmregs_to_thread(target);
for (i = 0; i < 32 ; i++)
buf[i] = target->thread.TS_TRANS_FPR(i);
buf[32] = target->thread.transact_fp.fpscr;
} else {
for (i = 0; i < 32 ; i++)
buf[i] = target->thread.TS_FPR(i);
buf[32] = target->thread.fp_state.fpscr;
}
return user_regset_copyout(&pos, &count, &kbuf, &ubuf, buf, 0, -1);
#endif
#if defined(CONFIG_VSX) && !defined(CONFIG_PPC_TRANSACTIONAL_MEM)
/* copy to local buffer then write that out */
for (i = 0; i < 32 ; i++)
buf[i] = target->thread.TS_FPR(i);
buf[32] = target->thread.fp_state.fpscr;
return user_regset_copyout(&pos, &count, &kbuf, &ubuf, buf, 0, -1);
#endif
#if !defined(CONFIG_VSX) && !defined(CONFIG_PPC_TRANSACTIONAL_MEM)
#else
BUILD_BUG_ON(offsetof(struct thread_fp_state, fpscr) !=
offsetof(struct thread_fp_state, fpr[32]));
flush_fp_to_thread(target);
return user_regset_copyout(&pos, &count, &kbuf, &ubuf,
&target->thread.fp_state, 0, -1);
#endif
}
/*
* When the transaction is active, 'transact_fp' holds the current running
* value of all FPR registers and 'fp_state' holds the last checkpointed
* value of all FPR registers for the current transaction. When transaction
* is not active 'fp_state' holds the current running state of all the FPR
* registers. So this function which setss the current running values of
* all the FPR registers, needs to know whether any transaction is active
* or not.
* Regardless of transactions, 'fp_state' holds the current running
* value of all FPR registers and 'ckfp_state' holds the last checkpointed
* value of all FPR registers for the current transaction.
*
* Userspace interface buffer layout:
*
@@ -483,12 +452,6 @@ static int fpr_get(struct task_struct *target, const struct user_regset *regset,
* u64 fpscr;
* };
*
* There are two config options CONFIG_VSX and CONFIG_PPC_TRANSACTIONAL_MEM
* which determines the final code in this function. All the combinations of
* these two config options are possible except the one below as transactional
* memory config pulls in CONFIG_VSX automatically.
*
* !defined(CONFIG_VSX) && defined(CONFIG_PPC_TRANSACTIONAL_MEM)
*/
static int fpr_set(struct task_struct *target, const struct user_regset *regset,
unsigned int pos, unsigned int count,
@@ -497,44 +460,24 @@ static int fpr_set(struct task_struct *target, const struct user_regset *regset,
#ifdef CONFIG_VSX
u64 buf[33];
int i;
#endif
flush_fp_to_thread(target);
#if defined(CONFIG_VSX) && defined(CONFIG_PPC_TRANSACTIONAL_MEM)
/* copy to local buffer then write that out */
i = user_regset_copyin(&pos, &count, &kbuf, &ubuf, buf, 0, -1);
if (i)
return i;
if (MSR_TM_ACTIVE(target->thread.regs->msr)) {
flush_altivec_to_thread(target);
flush_tmregs_to_thread(target);
for (i = 0; i < 32 ; i++)
target->thread.TS_TRANS_FPR(i) = buf[i];
target->thread.transact_fp.fpscr = buf[32];
} else {
for (i = 0; i < 32 ; i++)
target->thread.TS_FPR(i) = buf[i];
target->thread.fp_state.fpscr = buf[32];
}
return 0;
#endif
#if defined(CONFIG_VSX) && !defined(CONFIG_PPC_TRANSACTIONAL_MEM)
/* copy to local buffer then write that out */
i = user_regset_copyin(&pos, &count, &kbuf, &ubuf, buf, 0, -1);
if (i)
return i;
for (i = 0; i < 32 ; i++)
target->thread.TS_FPR(i) = buf[i];
target->thread.fp_state.fpscr = buf[32];
return 0;
#endif
#if !defined(CONFIG_VSX) && !defined(CONFIG_PPC_TRANSACTIONAL_MEM)
#else
BUILD_BUG_ON(offsetof(struct thread_fp_state, fpscr) !=
offsetof(struct thread_fp_state, fpr[32]));
flush_fp_to_thread(target);
return user_regset_copyin(&pos, &count, &kbuf, &ubuf,
&target->thread.fp_state, 0, -1);
#endif
@@ -562,13 +505,10 @@ static int vr_active(struct task_struct *target,
}
/*
* When the transaction is active, 'transact_vr' holds the current running
* value of all the VMX registers and 'vr_state' holds the last checkpointed
* value of all the VMX registers for the current transaction to fall back
* on in case it aborts. When transaction is not active 'vr_state' holds
* the current running state of all the VMX registers. So this function which
* gets the current running values of all the VMX registers, needs to know
* whether any transaction is active or not.
* Regardless of transactions, 'vr_state' holds the current running
* value of all the VMX registers and 'ckvr_state' holds the last
* checkpointed value of all the VMX registers for the current
* transaction to fall back on in case it aborts.
*
* Userspace interface buffer layout:
*
@@ -582,7 +522,6 @@ static int vr_get(struct task_struct *target, const struct user_regset *regset,
unsigned int pos, unsigned int count,
void *kbuf, void __user *ubuf)
{
struct thread_vr_state *addr;
int ret;
flush_altivec_to_thread(target);
@@ -590,19 +529,8 @@ static int vr_get(struct task_struct *target, const struct user_regset *regset,
BUILD_BUG_ON(offsetof(struct thread_vr_state, vscr) !=
offsetof(struct thread_vr_state, vr[32]));
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
if (MSR_TM_ACTIVE(target->thread.regs->msr)) {
flush_fp_to_thread(target);
flush_tmregs_to_thread(target);
addr = &target->thread.transact_vr;
} else {
addr = &target->thread.vr_state;
}
#else
addr = &target->thread.vr_state;
#endif
ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf,
addr, 0,
&target->thread.vr_state, 0,
33 * sizeof(vector128));
if (!ret) {
/*
@@ -614,14 +542,7 @@ static int vr_get(struct task_struct *target, const struct user_regset *regset,
} vrsave;
memset(&vrsave, 0, sizeof(vrsave));
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
if (MSR_TM_ACTIVE(target->thread.regs->msr))
vrsave.word = target->thread.transact_vrsave;
else
vrsave.word = target->thread.vrsave;
#else
vrsave.word = target->thread.vrsave;
#endif
ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf, &vrsave,
33 * sizeof(vector128), -1);
@@ -631,13 +552,10 @@ static int vr_get(struct task_struct *target, const struct user_regset *regset,
}
/*
* When the transaction is active, 'transact_vr' holds the current running
* value of all the VMX registers and 'vr_state' holds the last checkpointed
* value of all the VMX registers for the current transaction to fall back
* on in case it aborts. When transaction is not active 'vr_state' holds
* the current running state of all the VMX registers. So this function which
* sets the current running values of all the VMX registers, needs to know
* whether any transaction is active or not.
* Regardless of transactions, 'vr_state' holds the current running
* value of all the VMX registers and 'ckvr_state' holds the last
* checkpointed value of all the VMX registers for the current
* transaction to fall back on in case it aborts.
*
* Userspace interface buffer layout:
*
@@ -651,7 +569,6 @@ static int vr_set(struct task_struct *target, const struct user_regset *regset,
unsigned int pos, unsigned int count,
const void *kbuf, const void __user *ubuf)
{
struct thread_vr_state *addr;
int ret;
flush_altivec_to_thread(target);
@@ -659,19 +576,8 @@ static int vr_set(struct task_struct *target, const struct user_regset *regset,
BUILD_BUG_ON(offsetof(struct thread_vr_state, vscr) !=
offsetof(struct thread_vr_state, vr[32]));
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
if (MSR_TM_ACTIVE(target->thread.regs->msr)) {
flush_fp_to_thread(target);
flush_tmregs_to_thread(target);
addr = &target->thread.transact_vr;
} else {
addr = &target->thread.vr_state;
}
#else
addr = &target->thread.vr_state;
#endif
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
addr, 0,
&target->thread.vr_state, 0,
33 * sizeof(vector128));
if (!ret && count > 0) {
/*
@@ -683,27 +589,12 @@ static int vr_set(struct task_struct *target, const struct user_regset *regset,
} vrsave;
memset(&vrsave, 0, sizeof(vrsave));
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
if (MSR_TM_ACTIVE(target->thread.regs->msr))
vrsave.word = target->thread.transact_vrsave;
else
vrsave.word = target->thread.vrsave;
#else
vrsave.word = target->thread.vrsave;
#endif
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &vrsave,
33 * sizeof(vector128), -1);
if (!ret) {
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
if (MSR_TM_ACTIVE(target->thread.regs->msr))
target->thread.transact_vrsave = vrsave.word;
else
target->thread.vrsave = vrsave.word;
#else
if (!ret)
target->thread.vrsave = vrsave.word;
#endif
}
}
return ret;
@@ -725,13 +616,10 @@ static int vsr_active(struct task_struct *target,
}
/*
- * When the transaction is active, 'transact_fp' holds the current running
- * value of all FPR registers and 'fp_state' holds the last checkpointed
- * value of all FPR registers for the current transaction. When transaction
- * is not active 'fp_state' holds the current running state of all the FPR
- * registers. So this function which returns the current running values of
- * all the FPR registers, needs to know whether any transaction is active
- * or not.
+ * Regardless of transactions, 'fp_state' holds the current running
+ * value of all FPR registers and 'ckfp_state' holds the last
+ * checkpointed value of all FPR registers for the current
+ * transaction.
*
* Userspace interface buffer layout:
*
@@ -746,27 +634,14 @@ static int vsr_get(struct task_struct *target, const struct user_regset *regset,
u64 buf[32];
int ret, i;
- #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+ flush_tmregs_to_thread(target);
flush_fp_to_thread(target);
flush_altivec_to_thread(target);
- flush_tmregs_to_thread(target);
- #endif
flush_vsx_to_thread(target);
- #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
- if (MSR_TM_ACTIVE(target->thread.regs->msr)) {
- for (i = 0; i < 32 ; i++)
- buf[i] = target->thread.
- transact_fp.fpr[i][TS_VSRLOWOFFSET];
- } else {
- for (i = 0; i < 32 ; i++)
- buf[i] = target->thread.
- fp_state.fpr[i][TS_VSRLOWOFFSET];
- }
- #else
for (i = 0; i < 32 ; i++)
buf[i] = target->thread.fp_state.fpr[i][TS_VSRLOWOFFSET];
- #endif
ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf,
buf, 0, 32 * sizeof(double));
@@ -774,12 +649,10 @@ static int vsr_get(struct task_struct *target, const struct user_regset *regset,
}
/*
- * When the transaction is active, 'transact_fp' holds the current running
- * value of all FPR registers and 'fp_state' holds the last checkpointed
- * value of all FPR registers for the current transaction. When transaction
- * is not active 'fp_state' holds the current running state of all the FPR
- * registers. So this function which sets the current running values of all
- * the FPR registers, needs to know whether any transaction is active or not.
+ * Regardless of transactions, 'fp_state' holds the current running
+ * value of all FPR registers and 'ckfp_state' holds the last
+ * checkpointed value of all FPR registers for the current
+ * transaction.
*
* Userspace interface buffer layout:
*
@@ -794,31 +667,16 @@ static int vsr_set(struct task_struct *target, const struct user_regset *regset,
u64 buf[32];
int ret,i;
- #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+ flush_tmregs_to_thread(target);
flush_fp_to_thread(target);
flush_altivec_to_thread(target);
- flush_tmregs_to_thread(target);
- #endif
flush_vsx_to_thread(target);
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
buf, 0, 32 * sizeof(double));
- #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
- if (MSR_TM_ACTIVE(target->thread.regs->msr)) {
- if (!ret)
- for (i = 0; i < 32 ; i++)
- target->thread.transact_fp.
- fpr[i][TS_VSRLOWOFFSET] = buf[i];
- } else {
- for (i = 0; i < 32 ; i++)
- target->thread.fp_state.
- fpr[i][TS_VSRLOWOFFSET] = buf[i];
- }
- #else
- for (i = 0; i < 32 ; i++)
- target->thread.fp_state.fpr[i][TS_VSRLOWOFFSET] = buf[i];
- #endif
+ if (!ret)
+ for (i = 0; i < 32 ; i++)
+ target->thread.fp_state.fpr[i][TS_VSRLOWOFFSET] = buf[i];
return ret;
}
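vsr_get and vsr_set move only the low halves of VSR0-31; the high halves are already exposed through the FP regset because each FPR aliases the upper 64 bits of the corresponding VSR. A self-contained model of that aliasing (names hypothetical):

#include <stdio.h>

#define TS_VSRLOW 1	/* low-doubleword index, standing in for TS_VSRLOWOFFSET */

struct vsx_model {
	/* fpr[i][0] acts as the FPR (high half); fpr[i][1] is the VSR low half */
	unsigned long long fpr[32][2];
};

int main(void)
{
	struct vsx_model m = { 0 };
	unsigned long long buf[32];
	int i;

	m.fpr[5][TS_VSRLOW] = 0xdeadbeefULL;

	/* the VSX regset copies only the low halves, as vsr_get does above */
	for (i = 0; i < 32; i++)
		buf[i] = m.fpr[i][TS_VSRLOW];

	printf("vsr5 low half: %#llx\n", buf[5]);
	return 0;
}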
@@ -944,9 +802,9 @@ static int tm_cgpr_get(struct task_struct *target,
if (!MSR_TM_ACTIVE(target->thread.regs->msr))
return -ENODATA;
+ flush_tmregs_to_thread(target);
flush_fp_to_thread(target);
flush_altivec_to_thread(target);
- flush_tmregs_to_thread(target);
ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf,
&target->thread.ckpt_regs,
@@ -1009,9 +867,9 @@ static int tm_cgpr_set(struct task_struct *target,
if (!MSR_TM_ACTIVE(target->thread.regs->msr))
return -ENODATA;
+ flush_tmregs_to_thread(target);
flush_fp_to_thread(target);
flush_altivec_to_thread(target);
- flush_tmregs_to_thread(target);
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
&target->thread.ckpt_regs,
@@ -1087,7 +945,7 @@ static int tm_cfpr_active(struct task_struct *target,
*
* This function gets in transaction checkpointed FPR registers.
*
- * When the transaction is active 'fp_state' holds the checkpointed
+ * When the transaction is active 'ckfp_state' holds the checkpointed
* values for the current transaction to fall back on if it aborts
* in between. This function gets those checkpointed FPR registers.
* The userspace interface buffer layout is as follows.
@@ -1111,14 +969,14 @@ static int tm_cfpr_get(struct task_struct *target,
if (!MSR_TM_ACTIVE(target->thread.regs->msr))
return -ENODATA;
+ flush_tmregs_to_thread(target);
flush_fp_to_thread(target);
flush_altivec_to_thread(target);
- flush_tmregs_to_thread(target);
/* copy to local buffer then write that out */
for (i = 0; i < 32 ; i++)
- buf[i] = target->thread.TS_FPR(i);
- buf[32] = target->thread.fp_state.fpscr;
+ buf[i] = target->thread.TS_CKFPR(i);
+ buf[32] = target->thread.ckfp_state.fpscr;
return user_regset_copyout(&pos, &count, &kbuf, &ubuf, buf, 0, -1);
}
@@ -1133,7 +991,7 @@ static int tm_cfpr_get(struct task_struct *target,
*
* This function sets in transaction checkpointed FPR registers.
*
- * When the transaction is active 'fp_state' holds the checkpointed
+ * When the transaction is active 'ckfp_state' holds the checkpointed
* FPR register values for the current transaction to fall back on
* if it aborts in between. This function sets these checkpointed
* FPR registers. The userspace interface buffer layout is as follows.
@@ -1157,17 +1015,17 @@ static int tm_cfpr_set(struct task_struct *target,
if (!MSR_TM_ACTIVE(target->thread.regs->msr))
return -ENODATA;
+ flush_tmregs_to_thread(target);
flush_fp_to_thread(target);
flush_altivec_to_thread(target);
- flush_tmregs_to_thread(target);
/* copy to local buffer then write that out */
i = user_regset_copyin(&pos, &count, &kbuf, &ubuf, buf, 0, -1);
if (i)
return i;
for (i = 0; i < 32 ; i++)
- target->thread.TS_FPR(i) = buf[i];
- target->thread.fp_state.fpscr = buf[32];
+ target->thread.TS_CKFPR(i) = buf[i];
+ target->thread.ckfp_state.fpscr = buf[32];
return 0;
}
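Every tm_c*_get/set accessor here first checks MSR_TM_ACTIVE and returns -ENODATA when no transaction is running, so a debugger asking for checkpointed state of an ordinary thread gets a clean error rather than stale data. A compact user-space model of that contract (hypothetical names):

#include <errno.h>
#include <stdio.h>

struct thread_model {
	int msr_tm_active;	/* stands in for MSR_TM_ACTIVE(regs->msr) */
	double ckfp_state[32];	/* checkpointed FPRs */
};

static int tm_cfpr_get_model(const struct thread_model *t, double *out)
{
	if (!t->msr_tm_active)
		return -ENODATA;	/* nothing checkpointed to report */
	for (int i = 0; i < 32; i++)
		out[i] = t->ckfp_state[i];
	return 0;
}

int main(void)
{
	struct thread_model t = { .msr_tm_active = 0 };
	double buf[32];

	printf("inactive: %d\n", tm_cfpr_get_model(&t, buf));	/* -ENODATA */
	t.msr_tm_active = 1;
	printf("active:   %d\n", tm_cfpr_get_model(&t, buf));	/* 0 */
	return 0;
}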
@@ -1202,7 +1060,7 @@ static int tm_cvmx_active(struct task_struct *target,
*
* This function gets in transaction checkpointed VMX registers.
*
- * When the transaction is active 'vr_state' and 'vr_save' hold
+ * When the transaction is active 'ckvr_state' and 'ckvrsave' hold
* the checkpointed values for the current transaction to fall
* back on if it aborts in between. The userspace interface buffer
* layout is as follows.
@@ -1229,12 +1087,12 @@ static int tm_cvmx_get(struct task_struct *target,
return -ENODATA;
/* Flush the state */
+ flush_tmregs_to_thread(target);
flush_fp_to_thread(target);
flush_altivec_to_thread(target);
- flush_tmregs_to_thread(target);
ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf,
- &target->thread.vr_state, 0,
+ &target->thread.ckvr_state, 0,
33 * sizeof(vector128));
if (!ret) {
/*
@@ -1245,7 +1103,7 @@ static int tm_cvmx_get(struct task_struct *target,
u32 word;
} vrsave;
memset(&vrsave, 0, sizeof(vrsave));
- vrsave.word = target->thread.vrsave;
+ vrsave.word = target->thread.ckvrsave;
ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf, &vrsave,
33 * sizeof(vector128), -1);
}
@@ -1264,7 +1122,7 @@ static int tm_cvmx_get(struct task_struct *target,
*
* This function sets in transaction checkpointed VMX registers.
*
- * When the transaction is active 'vr_state' and 'vr_save' hold
+ * When the transaction is active 'ckvr_state' and 'ckvrsave' hold
* the checkpointed values for the current transaction to fall
* back on if it aborts in between. The userspace interface buffer
* layout is as follows.
@@ -1290,12 +1148,12 @@ static int tm_cvmx_set(struct task_struct *target,
if (!MSR_TM_ACTIVE(target->thread.regs->msr))
return -ENODATA;
+ flush_tmregs_to_thread(target);
flush_fp_to_thread(target);
flush_altivec_to_thread(target);
- flush_tmregs_to_thread(target);
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
- &target->thread.vr_state, 0,
+ &target->thread.ckvr_state, 0,
33 * sizeof(vector128));
if (!ret && count > 0) {
/*
@@ -1306,11 +1164,11 @@ static int tm_cvmx_set(struct task_struct *target,
u32 word;
} vrsave;
memset(&vrsave, 0, sizeof(vrsave));
- vrsave.word = target->thread.vrsave;
+ vrsave.word = target->thread.ckvrsave;
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &vrsave,
33 * sizeof(vector128), -1);
if (!ret)
- target->thread.vrsave = vrsave.word;
+ target->thread.ckvrsave = vrsave.word;
}
return ret;
@@ -1348,7 +1206,7 @@ static int tm_cvsx_active(struct task_struct *target,
*
* This function gets in transaction checkpointed VSX registers.
*
- * When the transaction is active 'fp_state' holds the checkpointed
+ * When the transaction is active 'ckfp_state' holds the checkpointed
* values for the current transaction to fall back on if it aborts
* in between. This function gets those checkpointed VSX registers.
* The userspace interface buffer layout is as follows.
@@ -1372,13 +1230,13 @@ static int tm_cvsx_get(struct task_struct *target,
return -ENODATA;
/* Flush the state */
+ flush_tmregs_to_thread(target);
flush_fp_to_thread(target);
flush_altivec_to_thread(target);
- flush_tmregs_to_thread(target);
flush_vsx_to_thread(target);
for (i = 0; i < 32 ; i++)
- buf[i] = target->thread.fp_state.fpr[i][TS_VSRLOWOFFSET];
+ buf[i] = target->thread.ckfp_state.fpr[i][TS_VSRLOWOFFSET];
ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf,
buf, 0, 32 * sizeof(double));
@@ -1396,7 +1254,7 @@ static int tm_cvsx_get(struct task_struct *target,
*
* This function sets in transaction checkpointed VSX registers.
*
- * When the transaction is active 'fp_state' holds the checkpointed
+ * When the transaction is active 'ckfp_state' holds the checkpointed
* VSX register values for the current transaction to fall back on
* if it aborts in between. This function sets these checkpointed
* FPR registers. The userspace interface buffer layout is as follows.
@@ -1420,15 +1278,16 @@ static int tm_cvsx_set(struct task_struct *target,
return -ENODATA;
/* Flush the state */
+ flush_tmregs_to_thread(target);
flush_fp_to_thread(target);
flush_altivec_to_thread(target);
- flush_tmregs_to_thread(target);
flush_vsx_to_thread(target);
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
buf, 0, 32 * sizeof(double));
- for (i = 0; i < 32 ; i++)
- target->thread.fp_state.fpr[i][TS_VSRLOWOFFSET] = buf[i];
+ if (!ret)
+ for (i = 0; i < 32 ; i++)
+ target->thread.ckfp_state.fpr[i][TS_VSRLOWOFFSET] = buf[i];
return ret;
}
@@ -1484,9 +1343,9 @@ static int tm_spr_get(struct task_struct *target,
return -ENODEV;
/* Flush the states */
+ flush_tmregs_to_thread(target);
flush_fp_to_thread(target);
flush_altivec_to_thread(target);
- flush_tmregs_to_thread(target);
/* TFHAR register */
ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf,
@@ -1540,9 +1399,9 @@ static int tm_spr_set(struct task_struct *target,
return -ENODEV;
/* Flush the states */
+ flush_tmregs_to_thread(target);
flush_fp_to_thread(target);
flush_altivec_to_thread(target);
- flush_tmregs_to_thread(target);
/* TFHAR register */
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
@@ -2065,33 +1924,12 @@ static const struct user_regset_view user_ppc_native_view = {
static int gpr32_get_common(struct task_struct *target,
const struct user_regset *regset,
unsigned int pos, unsigned int count,
- void *kbuf, void __user *ubuf, bool tm_active)
+ void *kbuf, void __user *ubuf,
+ unsigned long *regs)
{
- const unsigned long *regs = &target->thread.regs->gpr[0];
- const unsigned long *ckpt_regs;
compat_ulong_t *k = kbuf;
compat_ulong_t __user *u = ubuf;
compat_ulong_t reg;
- int i;
- #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
- ckpt_regs = &target->thread.ckpt_regs.gpr[0];
- #endif
- if (tm_active) {
- regs = ckpt_regs;
- } else {
- if (target->thread.regs == NULL)
- return -EIO;
- if (!FULL_REGS(target->thread.regs)) {
- /*
- * We have a partial register set.
- * Fill 14-31 with bogus values.
- */
- for (i = 14; i < 32; i++)
- target->thread.regs->gpr[i] = NV_REG_POISON;
- }
- }
pos /= sizeof(reg);
count /= sizeof(reg);
@@ -2133,29 +1971,13 @@ static int gpr32_get_common(struct task_struct *target,
static int gpr32_set_common(struct task_struct *target,
const struct user_regset *regset,
unsigned int pos, unsigned int count,
- const void *kbuf, const void __user *ubuf, bool tm_active)
+ const void *kbuf, const void __user *ubuf,
+ unsigned long *regs)
{
- unsigned long *regs = &target->thread.regs->gpr[0];
- unsigned long *ckpt_regs;
const compat_ulong_t *k = kbuf;
const compat_ulong_t __user *u = ubuf;
compat_ulong_t reg;
- #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
- ckpt_regs = &target->thread.ckpt_regs.gpr[0];
- #endif
- if (tm_active) {
- regs = ckpt_regs;
- } else {
- regs = &target->thread.regs->gpr[0];
- if (target->thread.regs == NULL)
- return -EIO;
- CHECK_FULL_REGS(target->thread.regs);
- }
pos /= sizeof(reg);
count /= sizeof(reg);
@@ -2220,7 +2042,8 @@ static int tm_cgpr32_get(struct task_struct *target,
unsigned int pos, unsigned int count,
void *kbuf, void __user *ubuf)
{
- return gpr32_get_common(target, regset, pos, count, kbuf, ubuf, 1);
+ return gpr32_get_common(target, regset, pos, count, kbuf, ubuf,
+ &target->thread.ckpt_regs.gpr[0]);
}
static int tm_cgpr32_set(struct task_struct *target,
@@ -2228,7 +2051,8 @@ static int tm_cgpr32_set(struct task_struct *target,
unsigned int pos, unsigned int count,
const void *kbuf, const void __user *ubuf)
{
- return gpr32_set_common(target, regset, pos, count, kbuf, ubuf, 1);
+ return gpr32_set_common(target, regset, pos, count, kbuf, ubuf,
+ &target->thread.ckpt_regs.gpr[0]);
}
#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
@@ -2237,7 +2061,21 @@ static int gpr32_get(struct task_struct *target,
unsigned int pos, unsigned int count,
void *kbuf, void __user *ubuf)
{
- return gpr32_get_common(target, regset, pos, count, kbuf, ubuf, 0);
+ int i;
+ if (target->thread.regs == NULL)
+ return -EIO;
+ if (!FULL_REGS(target->thread.regs)) {
+ /*
+ * We have a partial register set.
+ * Fill 14-31 with bogus values.
+ */
+ for (i = 14; i < 32; i++)
+ target->thread.regs->gpr[i] = NV_REG_POISON;
+ }
+ return gpr32_get_common(target, regset, pos, count, kbuf, ubuf,
+ &target->thread.regs->gpr[0]);
}
static int gpr32_set(struct task_struct *target,
@@ -2245,7 +2083,12 @@ static int gpr32_set(struct task_struct *target,
unsigned int pos, unsigned int count,
const void *kbuf, const void __user *ubuf)
{
- return gpr32_set_common(target, regset, pos, count, kbuf, ubuf, 0);
+ if (target->thread.regs == NULL)
+ return -EIO;
+ CHECK_FULL_REGS(target->thread.regs);
+ return gpr32_set_common(target, regset, pos, count, kbuf, ubuf,
+ &target->thread.regs->gpr[0]);
}
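The gpr32_* rework replaces the bool tm_active flag with an explicit pointer to the GPR array the caller wants, so the NULL/FULL_REGS fix-ups stay in gpr32_get/gpr32_set and the common helpers become pure copy loops. A minimal model of the refactor (hypothetical names):

#include <stdio.h>

struct thread_model {
	unsigned long gpr[32];		/* live GPRs */
	unsigned long ckpt_gpr[32];	/* checkpointed GPRs while TM active */
};

/* after the rework: the helper just indexes whatever array it was given */
static unsigned long gpr32_get_common_model(const unsigned long *regs, int i)
{
	return regs[i];
}

int main(void)
{
	struct thread_model t = { .gpr = { [3] = 7 }, .ckpt_gpr = { [3] = 42 } };

	/* gpr32_get passes the live array, tm_cgpr32_get the checkpointed one */
	printf("live r3: %lu\n", gpr32_get_common_model(t.gpr, 3));
	printf("ckpt r3: %lu\n", gpr32_get_common_model(t.ckpt_gpr, 3));
	return 0;
}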
@@ -99,22 +99,24 @@ static void check_syscall_restart(struct pt_regs *regs, struct k_sigaction *ka,
}
}
- static void do_signal(struct pt_regs *regs)
+ static void do_signal(struct task_struct *tsk)
{
sigset_t *oldset = sigmask_to_save();
struct ksignal ksig;
int ret;
int is32 = is_32bit_task();
BUG_ON(tsk != current);
get_signal(&ksig);
/* Is there any syscall restart business here ? */
check_syscall_restart(regs, &ksig.ka, ksig.sig > 0);
check_syscall_restart(tsk->thread.regs, &ksig.ka, ksig.sig > 0);
if (ksig.sig <= 0) {
/* No signal to deliver -- put the saved sigmask back */
restore_saved_sigmask();
regs->trap = 0;
tsk->thread.regs->trap = 0;
return; /* no signals delivered */
}
@@ -124,23 +126,22 @@ static void do_signal(struct pt_regs *regs)
* user space. The DABR will have been cleared if it
* triggered inside the kernel.
*/
- if (current->thread.hw_brk.address &&
- current->thread.hw_brk.type)
- __set_breakpoint(&current->thread.hw_brk);
+ if (tsk->thread.hw_brk.address && tsk->thread.hw_brk.type)
+ __set_breakpoint(&tsk->thread.hw_brk);
#endif
/* Re-enable the breakpoints for the signal stack */
thread_change_pc(current, regs);
thread_change_pc(tsk, tsk->thread.regs);
if (is32) {
if (ksig.ka.sa.sa_flags & SA_SIGINFO)
- ret = handle_rt_signal32(&ksig, oldset, regs);
+ ret = handle_rt_signal32(&ksig, oldset, tsk);
else
- ret = handle_signal32(&ksig, oldset, regs);
+ ret = handle_signal32(&ksig, oldset, tsk);
} else {
- ret = handle_rt_signal64(&ksig, oldset, regs);
+ ret = handle_rt_signal64(&ksig, oldset, tsk);
}
regs->trap = 0;
tsk->thread.regs->trap = 0;
signal_setup_done(ret, &ksig, test_thread_flag(TIF_SINGLESTEP));
}
@@ -151,8 +152,10 @@ void do_notify_resume(struct pt_regs *regs, unsigned long thread_info_flags)
if (thread_info_flags & _TIF_UPROBE)
uprobe_notify_resume(regs);
- if (thread_info_flags & _TIF_SIGPENDING)
- do_signal(regs);
+ if (thread_info_flags & _TIF_SIGPENDING) {
+ BUG_ON(regs != current->thread.regs);
+ do_signal(current);
+ }
if (thread_info_flags & _TIF_NOTIFY_RESUME) {
clear_thread_flag(TIF_NOTIFY_RESUME);
@@ -162,7 +165,7 @@ void do_notify_resume(struct pt_regs *regs, unsigned long thread_info_flags)
user_enter();
}
unsigned long get_tm_stackpointer(struct pt_regs *regs)
unsigned long get_tm_stackpointer(struct task_struct *tsk)
{
/* When in an active transaction that takes a signal, we need to be
* careful with the stack. It's possible that the stack has moved back
@@ -187,11 +190,13 @@ unsigned long get_tm_stackpointer(struct pt_regs *regs)
*/
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
- if (MSR_TM_ACTIVE(regs->msr)) {
+ BUG_ON(tsk != current);
+ if (MSR_TM_ACTIVE(tsk->thread.regs->msr)) {
tm_reclaim_current(TM_CAUSE_SIGNAL);
- if (MSR_TM_TRANSACTIONAL(regs->msr))
- return current->thread.ckpt_regs.gpr[1];
+ if (MSR_TM_TRANSACTIONAL(tsk->thread.regs->msr))
+ return tsk->thread.ckpt_regs.gpr[1];
}
#endif
return regs->gpr[1];
return tsk->thread.regs->gpr[1];
}
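get_tm_stackpointer picks the checkpointed r1 when the interrupted context was transactional, because the speculative stack can be rolled back underneath a signal frame; otherwise it uses the live r1. Modelled compactly (hypothetical names; the real code also reclaims the transaction first):

#include <stdio.h>

struct thread_model {
	unsigned long gpr1;		/* live stack pointer, regs->gpr[1] */
	unsigned long ckpt_gpr1;	/* checkpointed stack pointer */
	int msr_tm_transactional;	/* MSR_TM_TRANSACTIONAL(msr) stand-in */
};

static unsigned long get_tm_stackpointer_model(const struct thread_model *t)
{
	if (t->msr_tm_transactional)
		return t->ckpt_gpr1;	/* the frame must survive a rollback */
	return t->gpr1;
}

int main(void)
{
	struct thread_model t = { .gpr1 = 0x2000, .ckpt_gpr1 = 0x3000,
				  .msr_tm_transactional = 1 };
	printf("frame goes at %#lx\n", get_tm_stackpointer_model(&t));
	return 0;
}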


@@ -16,39 +16,41 @@ extern void __user *get_sigframe(struct ksignal *ksig, unsigned long sp,
size_t frame_size, int is_32);
extern int handle_signal32(struct ksignal *ksig, sigset_t *oldset,
struct pt_regs *regs);
struct task_struct *tsk);
extern int handle_rt_signal32(struct ksignal *ksig, sigset_t *oldset,
struct pt_regs *regs);
struct task_struct *tsk);
extern unsigned long copy_fpr_to_user(void __user *to,
struct task_struct *task);
extern unsigned long copy_transact_fpr_to_user(void __user *to,
extern unsigned long copy_ckfpr_to_user(void __user *to,
struct task_struct *task);
extern unsigned long copy_fpr_from_user(struct task_struct *task,
void __user *from);
extern unsigned long copy_transact_fpr_from_user(struct task_struct *task,
extern unsigned long copy_ckfpr_from_user(struct task_struct *task,
void __user *from);
extern unsigned long get_tm_stackpointer(struct task_struct *tsk);
#ifdef CONFIG_VSX
extern unsigned long copy_vsx_to_user(void __user *to,
struct task_struct *task);
extern unsigned long copy_transact_vsx_to_user(void __user *to,
extern unsigned long copy_ckvsx_to_user(void __user *to,
struct task_struct *task);
extern unsigned long copy_vsx_from_user(struct task_struct *task,
void __user *from);
extern unsigned long copy_transact_vsx_from_user(struct task_struct *task,
extern unsigned long copy_ckvsx_from_user(struct task_struct *task,
void __user *from);
#endif
#ifdef CONFIG_PPC64
extern int handle_rt_signal64(struct ksignal *ksig, sigset_t *set,
struct pt_regs *regs);
struct task_struct *tsk);
#else /* CONFIG_PPC64 */
static inline int handle_rt_signal64(struct ksignal *ksig, sigset_t *set,
struct pt_regs *regs)
struct task_struct *tsk)
{
return -EFAULT;
}


@@ -44,6 +44,7 @@
#include <asm/vdso.h>
#include <asm/switch_to.h>
#include <asm/tm.h>
#include <asm/asm-prototypes.h>
#ifdef CONFIG_PPC64
#include "ppc32.h"
#include <asm/unistd.h>
@@ -315,7 +316,7 @@ unsigned long copy_vsx_from_user(struct task_struct *task,
}
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
unsigned long copy_transact_fpr_to_user(void __user *to,
unsigned long copy_ckfpr_to_user(void __user *to,
struct task_struct *task)
{
u64 buf[ELF_NFPREG];
@@ -323,12 +324,12 @@ unsigned long copy_transact_fpr_to_user(void __user *to,
/* save FPR copy to local buffer then write to the thread_struct */
for (i = 0; i < (ELF_NFPREG - 1) ; i++)
- buf[i] = task->thread.TS_TRANS_FPR(i);
- buf[i] = task->thread.transact_fp.fpscr;
+ buf[i] = task->thread.TS_CKFPR(i);
+ buf[i] = task->thread.ckfp_state.fpscr;
return __copy_to_user(to, buf, ELF_NFPREG * sizeof(double));
}
unsigned long copy_transact_fpr_from_user(struct task_struct *task,
unsigned long copy_ckfpr_from_user(struct task_struct *task,
void __user *from)
{
u64 buf[ELF_NFPREG];
@@ -337,13 +338,13 @@ unsigned long copy_transact_fpr_from_user(struct task_struct *task,
if (__copy_from_user(buf, from, ELF_NFPREG * sizeof(double)))
return 1;
for (i = 0; i < (ELF_NFPREG - 1) ; i++)
- task->thread.TS_TRANS_FPR(i) = buf[i];
- task->thread.transact_fp.fpscr = buf[i];
+ task->thread.TS_CKFPR(i) = buf[i];
+ task->thread.ckfp_state.fpscr = buf[i];
return 0;
}
unsigned long copy_transact_vsx_to_user(void __user *to,
unsigned long copy_ckvsx_to_user(void __user *to,
struct task_struct *task)
{
u64 buf[ELF_NVSRHALFREG];
@@ -351,11 +352,11 @@ unsigned long copy_transact_vsx_to_user(void __user *to,
/* save FPR copy to local buffer then write to the thread_struct */
for (i = 0; i < ELF_NVSRHALFREG; i++)
buf[i] = task->thread.transact_fp.fpr[i][TS_VSRLOWOFFSET];
buf[i] = task->thread.ckfp_state.fpr[i][TS_VSRLOWOFFSET];
return __copy_to_user(to, buf, ELF_NVSRHALFREG * sizeof(double));
}
unsigned long copy_transact_vsx_from_user(struct task_struct *task,
unsigned long copy_ckvsx_from_user(struct task_struct *task,
void __user *from)
{
u64 buf[ELF_NVSRHALFREG];
@@ -364,7 +365,7 @@ unsigned long copy_transact_vsx_from_user(struct task_struct *task,
if (__copy_from_user(buf, from, ELF_NVSRHALFREG * sizeof(double)))
return 1;
for (i = 0; i < ELF_NVSRHALFREG ; i++)
task->thread.transact_fp.fpr[i][TS_VSRLOWOFFSET] = buf[i];
task->thread.ckfp_state.fpr[i][TS_VSRLOWOFFSET] = buf[i];
return 0;
}
#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
@@ -384,17 +385,17 @@ inline unsigned long copy_fpr_from_user(struct task_struct *task,
}
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
inline unsigned long copy_transact_fpr_to_user(void __user *to,
inline unsigned long copy_ckfpr_to_user(void __user *to,
struct task_struct *task)
{
return __copy_to_user(to, task->thread.transact_fp.fpr,
return __copy_to_user(to, task->thread.ckfp_state.fpr,
ELF_NFPREG * sizeof(double));
}
inline unsigned long copy_transact_fpr_from_user(struct task_struct *task,
inline unsigned long copy_ckfpr_from_user(struct task_struct *task,
void __user *from)
{
return __copy_from_user(task->thread.transact_fp.fpr, from,
return __copy_from_user(task->thread.ckfp_state.fpr, from,
ELF_NFPREG * sizeof(double));
}
#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
@@ -525,9 +526,6 @@ static int save_tm_user_regs(struct pt_regs *regs,
*/
regs->msr &= ~MSR_TS_MASK;
- /* Make sure floating point registers are stored in regs */
- flush_fp_to_thread(current);
/* Save both sets of general registers */
if (save_general_regs(&current->thread.ckpt_regs, frame)
|| save_general_regs(regs, tm_frame))
@@ -545,18 +543,17 @@ static int save_tm_user_regs(struct pt_regs *regs,
#ifdef CONFIG_ALTIVEC
/* save altivec registers */
if (current->thread.used_vr) {
- flush_altivec_to_thread(current);
- if (__copy_to_user(&frame->mc_vregs, &current->thread.vr_state,
+ if (__copy_to_user(&frame->mc_vregs, &current->thread.ckvr_state,
ELF_NVRREG * sizeof(vector128)))
return 1;
if (msr & MSR_VEC) {
if (__copy_to_user(&tm_frame->mc_vregs,
&current->thread.transact_vr,
&current->thread.vr_state,
ELF_NVRREG * sizeof(vector128)))
return 1;
} else {
if (__copy_to_user(&tm_frame->mc_vregs,
&current->thread.vr_state,
&current->thread.ckvr_state,
ELF_NVRREG * sizeof(vector128)))
return 1;
}
@@ -573,28 +570,28 @@ static int save_tm_user_regs(struct pt_regs *regs,
* most significant bits of that same vector. --BenH
*/
if (cpu_has_feature(CPU_FTR_ALTIVEC))
- current->thread.vrsave = mfspr(SPRN_VRSAVE);
- if (__put_user(current->thread.vrsave,
+ current->thread.ckvrsave = mfspr(SPRN_VRSAVE);
+ if (__put_user(current->thread.ckvrsave,
(u32 __user *)&frame->mc_vregs[32]))
return 1;
if (msr & MSR_VEC) {
if (__put_user(current->thread.transact_vrsave,
if (__put_user(current->thread.vrsave,
(u32 __user *)&tm_frame->mc_vregs[32]))
return 1;
} else {
if (__put_user(current->thread.vrsave,
if (__put_user(current->thread.ckvrsave,
(u32 __user *)&tm_frame->mc_vregs[32]))
return 1;
}
#endif /* CONFIG_ALTIVEC */
if (copy_fpr_to_user(&frame->mc_fregs, current))
if (copy_ckfpr_to_user(&frame->mc_fregs, current))
return 1;
if (msr & MSR_FP) {
if (copy_transact_fpr_to_user(&tm_frame->mc_fregs, current))
if (copy_fpr_to_user(&tm_frame->mc_fregs, current))
return 1;
} else {
if (copy_fpr_to_user(&tm_frame->mc_fregs, current))
if (copy_ckfpr_to_user(&tm_frame->mc_fregs, current))
return 1;
}
@@ -606,15 +603,14 @@ static int save_tm_user_regs(struct pt_regs *regs,
* contains valid data
*/
if (current->thread.used_vsr) {
- flush_vsx_to_thread(current);
- if (copy_vsx_to_user(&frame->mc_vsregs, current))
+ if (copy_ckvsx_to_user(&frame->mc_vsregs, current))
return 1;
if (msr & MSR_VSX) {
if (copy_transact_vsx_to_user(&tm_frame->mc_vsregs,
if (copy_vsx_to_user(&tm_frame->mc_vsregs,
current))
return 1;
} else {
if (copy_vsx_to_user(&tm_frame->mc_vsregs, current))
if (copy_ckvsx_to_user(&tm_frame->mc_vsregs, current))
return 1;
}
@@ -698,6 +694,7 @@ static long restore_user_regs(struct pt_regs *regs,
if (__copy_from_user(&current->thread.vr_state, &sr->mc_vregs,
sizeof(sr->mc_vregs)))
return 1;
current->thread.used_vr = true;
} else if (current->thread.used_vr)
memset(&current->thread.vr_state, 0,
ELF_NVRREG * sizeof(vector128));
@@ -724,6 +721,7 @@ static long restore_user_regs(struct pt_regs *regs,
*/
if (copy_vsx_from_user(current, &sr->mc_vsregs))
return 1;
current->thread.used_vsr = true;
} else if (current->thread.used_vsr)
for (i = 0; i < 32 ; i++)
current->thread.fp_state.fpr[i][TS_VSRLOWOFFSET] = 0;
@@ -743,6 +741,7 @@ static long restore_user_regs(struct pt_regs *regs,
if (__copy_from_user(current->thread.evr, &sr->mc_vregs,
ELF_NEVRREG * sizeof(u32)))
return 1;
current->thread.used_spe = true;
} else if (current->thread.used_spe)
memset(current->thread.evr, 0, ELF_NEVRREG * sizeof(u32));
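The added current->thread.used_* = true lines matter for CRIU-style restore: a restored task injects register state through sigreturn without ever touching the facility, so the flag must be set here or the freshly written state would be treated as unused. A minimal model (hypothetical names):

#include <stdio.h>
#include <string.h>

struct thread_model {
	unsigned long vr[4];
	int used_vr;	/* consulted when deciding what to save or report */
};

/* sigreturn-style restore: copying state in must also mark it used */
static void restore_vr(struct thread_model *t, const unsigned long *frame)
{
	memcpy(t->vr, frame, sizeof(t->vr));
	t->used_vr = 1;	/* the added line; without it the state is ignored */
}

int main(void)
{
	struct thread_model t = { 0 };
	const unsigned long frame[4] = { 5, 6, 7, 8 };

	restore_vr(&t, frame);
	printf("used_vr=%d vr0=%lu\n", t.used_vr, t.vr[0]);
	return 0;
}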
@@ -793,33 +792,34 @@ static long restore_tm_user_regs(struct pt_regs *regs,
regs->msr &= ~MSR_VEC;
if (msr & MSR_VEC) {
/* restore altivec registers from the stack */
- if (__copy_from_user(&current->thread.vr_state, &sr->mc_vregs,
+ if (__copy_from_user(&current->thread.ckvr_state, &sr->mc_vregs,
sizeof(sr->mc_vregs)) ||
- __copy_from_user(&current->thread.transact_vr,
+ __copy_from_user(&current->thread.vr_state,
&tm_sr->mc_vregs,
sizeof(sr->mc_vregs)))
return 1;
current->thread.used_vr = true;
} else if (current->thread.used_vr) {
memset(&current->thread.vr_state, 0,
ELF_NVRREG * sizeof(vector128));
memset(&current->thread.transact_vr, 0,
memset(&current->thread.ckvr_state, 0,
ELF_NVRREG * sizeof(vector128));
}
/* Always get VRSAVE back */
- if (__get_user(current->thread.vrsave,
+ if (__get_user(current->thread.ckvrsave,
(u32 __user *)&sr->mc_vregs[32]) ||
- __get_user(current->thread.transact_vrsave,
+ __get_user(current->thread.vrsave,
(u32 __user *)&tm_sr->mc_vregs[32]))
return 1;
if (cpu_has_feature(CPU_FTR_ALTIVEC))
mtspr(SPRN_VRSAVE, current->thread.vrsave);
mtspr(SPRN_VRSAVE, current->thread.ckvrsave);
#endif /* CONFIG_ALTIVEC */
regs->msr &= ~(MSR_FP | MSR_FE0 | MSR_FE1);
if (copy_fpr_from_user(current, &sr->mc_fregs) ||
copy_transact_fpr_from_user(current, &tm_sr->mc_fregs))
copy_ckfpr_from_user(current, &tm_sr->mc_fregs))
return 1;
#ifdef CONFIG_VSX
@@ -829,13 +829,14 @@ static long restore_tm_user_regs(struct pt_regs *regs,
* Restore altivec registers from the stack to a local
* buffer, then write this out to the thread_struct
*/
- if (copy_vsx_from_user(current, &sr->mc_vsregs) ||
- copy_transact_vsx_from_user(current, &tm_sr->mc_vsregs))
+ if (copy_vsx_from_user(current, &tm_sr->mc_vsregs) ||
+ copy_ckvsx_from_user(current, &sr->mc_vsregs))
return 1;
current->thread.used_vsr = true;
} else if (current->thread.used_vsr)
for (i = 0; i < 32 ; i++) {
current->thread.fp_state.fpr[i][TS_VSRLOWOFFSET] = 0;
current->thread.transact_fp.fpr[i][TS_VSRLOWOFFSET] = 0;
current->thread.ckfp_state.fpr[i][TS_VSRLOWOFFSET] = 0;
}
#endif /* CONFIG_VSX */
@@ -848,6 +849,7 @@ static long restore_tm_user_regs(struct pt_regs *regs,
if (__copy_from_user(current->thread.evr, &sr->mc_vregs,
ELF_NEVRREG * sizeof(u32)))
return 1;
current->thread.used_spe = true;
} else if (current->thread.used_spe)
memset(current->thread.evr, 0, ELF_NEVRREG * sizeof(u32));
@@ -877,13 +879,14 @@ static long restore_tm_user_regs(struct pt_regs *regs,
tm_recheckpoint(&current->thread, msr);
/* This loads the speculative FP/VEC state, if used */
+ msr_check_and_set(msr & (MSR_FP | MSR_VEC));
if (msr & MSR_FP) {
- do_load_up_transact_fpu(&current->thread);
+ load_fp_state(&current->thread.fp_state);
regs->msr |= (MSR_FP | current->thread.fpexc_mode);
}
#ifdef CONFIG_ALTIVEC
if (msr & MSR_VEC) {
- do_load_up_transact_altivec(&current->thread);
+ load_vr_state(&current->thread.vr_state);
regs->msr |= MSR_VEC;
}
#endif
@@ -971,7 +974,7 @@ int copy_siginfo_from_user32(siginfo_t *to, struct compat_siginfo __user *from)
* (one which gets siginfo).
*/
int handle_rt_signal32(struct ksignal *ksig, sigset_t *oldset,
struct pt_regs *regs)
struct task_struct *tsk)
{
struct rt_sigframe __user *rt_sf;
struct mcontext __user *frame;
@@ -980,10 +983,13 @@ int handle_rt_signal32(struct ksignal *ksig, sigset_t *oldset,
unsigned long newsp = 0;
int sigret;
unsigned long tramp;
struct pt_regs *regs = tsk->thread.regs;
BUG_ON(tsk != current);
/* Set up Signal Frame */
/* Put a Real Time Context onto stack */
rt_sf = get_sigframe(ksig, get_tm_stackpointer(regs), sizeof(*rt_sf), 1);
rt_sf = get_sigframe(ksig, get_tm_stackpointer(tsk), sizeof(*rt_sf), 1);
addr = rt_sf;
if (unlikely(rt_sf == NULL))
goto badframe;
@@ -1000,9 +1006,9 @@ int handle_rt_signal32(struct ksignal *ksig, sigset_t *oldset,
/* Save user registers on the stack */
frame = &rt_sf->uc.uc_mcontext;
addr = frame;
- if (vdso32_rt_sigtramp && current->mm->context.vdso_base) {
+ if (vdso32_rt_sigtramp && tsk->mm->context.vdso_base) {
sigret = 0;
- tramp = current->mm->context.vdso_base + vdso32_rt_sigtramp;
+ tramp = tsk->mm->context.vdso_base + vdso32_rt_sigtramp;
} else {
sigret = __NR_rt_sigreturn;
tramp = (unsigned long) frame->tramp;
@@ -1029,7 +1035,7 @@ int handle_rt_signal32(struct ksignal *ksig, sigset_t *oldset,
}
regs->link = tramp;
current->thread.fp_state.fpscr = 0; /* turn off all fp exceptions */
tsk->thread.fp_state.fpscr = 0; /* turn off all fp exceptions */
/* create a stack frame for the caller of the handler */
newsp = ((unsigned long)rt_sf) - (__SIGNAL_FRAMESIZE + 16);
@@ -1054,7 +1060,7 @@ badframe:
printk_ratelimited(KERN_INFO
"%s[%d]: bad frame in handle_rt_signal32: "
"%p nip %08lx lr %08lx\n",
current->comm, current->pid,
tsk->comm, tsk->pid,
addr, regs->nip, regs->link);
return 1;
@@ -1410,7 +1416,8 @@ int sys_debug_setcontext(struct ucontext __user *ctx,
/*
* OK, we're invoking a handler
*/
- int handle_signal32(struct ksignal *ksig, sigset_t *oldset, struct pt_regs *regs)
+ int handle_signal32(struct ksignal *ksig, sigset_t *oldset,
+ struct task_struct *tsk)
{
struct sigcontext __user *sc;
struct sigframe __user *frame;
@@ -1418,9 +1425,12 @@ int handle_signal32(struct ksignal *ksig, sigset_t *oldset, struct pt_regs *regs
unsigned long newsp = 0;
int sigret;
unsigned long tramp;
struct pt_regs *regs = tsk->thread.regs;
BUG_ON(tsk != current);
/* Set up Signal Frame */
frame = get_sigframe(ksig, get_tm_stackpointer(regs), sizeof(*frame), 1);
frame = get_sigframe(ksig, get_tm_stackpointer(tsk), sizeof(*frame), 1);
if (unlikely(frame == NULL))
goto badframe;
sc = (struct sigcontext __user *) &frame->sctx;
@@ -1439,9 +1449,9 @@ int handle_signal32(struct ksignal *ksig, sigset_t *oldset, struct pt_regs *regs
|| __put_user(ksig->sig, &sc->signal))
goto badframe;
if (vdso32_sigtramp && current->mm->context.vdso_base) {
if (vdso32_sigtramp && tsk->mm->context.vdso_base) {
sigret = 0;
tramp = current->mm->context.vdso_base + vdso32_sigtramp;
tramp = tsk->mm->context.vdso_base + vdso32_sigtramp;
} else {
sigret = __NR_sigreturn;
tramp = (unsigned long) frame->mctx.tramp;
@@ -1463,7 +1473,7 @@ int handle_signal32(struct ksignal *ksig, sigset_t *oldset, struct pt_regs *regs
regs->link = tramp;
current->thread.fp_state.fpscr = 0; /* turn off all fp exceptions */
tsk->thread.fp_state.fpscr = 0; /* turn off all fp exceptions */
/* create a stack frame for the caller of the handler */
newsp = ((unsigned long)frame) - __SIGNAL_FRAMESIZE;
@@ -1483,7 +1493,7 @@ badframe:
printk_ratelimited(KERN_INFO
"%s[%d]: bad frame in handle_signal32: "
"%p nip %08lx lr %08lx\n",
current->comm, current->pid,
tsk->comm, tsk->pid,
frame, regs->nip, regs->link);
return 1;
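In the reworked 32-bit TM path, the regular mcontext carries the checkpointed (ck*) state and the transactional mcontext the live state whenever the matching MSR bit was set; restore mirrors the same pairing. A toy round-trip under that assumption (user-space model, hypothetical names, MSR conditional elided):

#include <assert.h>
#include <stdio.h>

struct vr_model { unsigned long v[4]; };

struct thread_model {
	struct vr_model vr_state;	/* live */
	struct vr_model ckvr_state;	/* checkpointed */
};

struct frame_model {
	struct vr_model mc_vregs;	/* checkpointed state lands here */
	struct vr_model tm_mc_vregs;	/* live state lands here */
};

static void save_tm_frame(const struct thread_model *t, struct frame_model *f)
{
	f->mc_vregs = t->ckvr_state;
	f->tm_mc_vregs = t->vr_state;
}

static void restore_tm_frame(struct thread_model *t, const struct frame_model *f)
{
	t->ckvr_state = f->mc_vregs;
	t->vr_state = f->tm_mc_vregs;
}

int main(void)
{
	struct thread_model t = { .vr_state = {{ 9 }}, .ckvr_state = {{ 1 }} };
	struct thread_model t2 = { 0 };
	struct frame_model f;

	save_tm_frame(&t, &f);
	restore_tm_frame(&t2, &f);
	assert(t2.vr_state.v[0] == 9 && t2.ckvr_state.v[0] == 1);
	printf("round-trip ok\n");
	return 0;
}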


@@ -35,6 +35,7 @@
#include <asm/vdso.h>
#include <asm/switch_to.h>
#include <asm/tm.h>
#include <asm/asm-prototypes.h>
#include "signal.h"
@@ -90,9 +91,9 @@ static elf_vrreg_t __user *sigcontext_vmx_regs(struct sigcontext __user *sc)
* Set up the sigcontext for the signal frame.
*/
- static long setup_sigcontext(struct sigcontext __user *sc, struct pt_regs *regs,
- int signr, sigset_t *set, unsigned long handler,
- int ctx_has_vsx_region)
+ static long setup_sigcontext(struct sigcontext __user *sc,
+ struct task_struct *tsk, int signr, sigset_t *set,
+ unsigned long handler, int ctx_has_vsx_region)
{
/* When CONFIG_ALTIVEC is set, we _always_ setup v_regs even if the
* process never used altivec yet (MSR_VEC is zero in pt_regs of
@@ -106,17 +107,20 @@ static long setup_sigcontext(struct sigcontext __user *sc, struct pt_regs *regs,
elf_vrreg_t __user *v_regs = sigcontext_vmx_regs(sc);
unsigned long vrsave;
#endif
struct pt_regs *regs = tsk->thread.regs;
unsigned long msr = regs->msr;
long err = 0;
BUG_ON(tsk != current);
#ifdef CONFIG_ALTIVEC
err |= __put_user(v_regs, &sc->v_regs);
/* save altivec registers */
- if (current->thread.used_vr) {
- flush_altivec_to_thread(current);
+ if (tsk->thread.used_vr) {
+ flush_altivec_to_thread(tsk);
/* Copy 33 vec registers (vr0..31 and vscr) to the stack */
err |= __copy_to_user(v_regs, &current->thread.vr_state,
err |= __copy_to_user(v_regs, &tsk->thread.vr_state,
33 * sizeof(vector128));
/* set MSR_VEC in the MSR value in the frame to indicate that sc->v_reg)
* contains valid data.
@@ -129,16 +133,16 @@ static long setup_sigcontext(struct sigcontext __user *sc, struct pt_regs *regs,
vrsave = 0;
if (cpu_has_feature(CPU_FTR_ALTIVEC)) {
vrsave = mfspr(SPRN_VRSAVE);
current->thread.vrsave = vrsave;
tsk->thread.vrsave = vrsave;
}
err |= __put_user(vrsave, (u32 __user *)&v_regs[33]);
#else /* CONFIG_ALTIVEC */
err |= __put_user(0, &sc->v_regs);
#endif /* CONFIG_ALTIVEC */
flush_fp_to_thread(current);
flush_fp_to_thread(tsk);
/* copy fpr regs and fpscr */
err |= copy_fpr_to_user(&sc->fp_regs, current);
err |= copy_fpr_to_user(&sc->fp_regs, tsk);
/*
* Clear the MSR VSX bit to indicate there is no valid state attached
@@ -151,10 +155,10 @@ static long setup_sigcontext(struct sigcontext __user *sc, struct pt_regs *regs,
* then out to userspace. Update v_regs to point after the
* VMX data.
*/
- if (current->thread.used_vsr && ctx_has_vsx_region) {
- flush_vsx_to_thread(current);
+ if (tsk->thread.used_vsr && ctx_has_vsx_region) {
+ flush_vsx_to_thread(tsk);
v_regs += ELF_NVRREG;
- err |= copy_vsx_to_user(v_regs, current);
+ err |= copy_vsx_to_user(v_regs, tsk);
/* set MSR_VSX in the MSR value in the frame to
* indicate that sc->vs_reg) contains valid data.
*/
@@ -187,7 +191,7 @@ static long setup_sigcontext(struct sigcontext __user *sc, struct pt_regs *regs,
*/
static long setup_tm_sigcontexts(struct sigcontext __user *sc,
struct sigcontext __user *tm_sc,
struct pt_regs *regs,
struct task_struct *tsk,
int signr, sigset_t *set, unsigned long handler)
{
/* When CONFIG_ALTIVEC is set, we _always_ setup v_regs even if the
@@ -202,9 +206,12 @@ static long setup_tm_sigcontexts(struct sigcontext __user *sc,
elf_vrreg_t __user *v_regs = sigcontext_vmx_regs(sc);
elf_vrreg_t __user *tm_v_regs = sigcontext_vmx_regs(tm_sc);
#endif
- unsigned long msr = regs->msr;
+ struct pt_regs *regs = tsk->thread.regs;
+ unsigned long msr = tsk->thread.ckpt_regs.msr;
long err = 0;
BUG_ON(tsk != current);
BUG_ON(!MSR_TM_ACTIVE(regs->msr));
/* Remove TM bits from thread's MSR. The MSR in the sigcontext
@@ -214,28 +221,25 @@ static long setup_tm_sigcontexts(struct sigcontext __user *sc,
*/
regs->msr &= ~MSR_TS_MASK;
flush_fp_to_thread(current);
#ifdef CONFIG_ALTIVEC
err |= __put_user(v_regs, &sc->v_regs);
err |= __put_user(tm_v_regs, &tm_sc->v_regs);
/* save altivec registers */
if (current->thread.used_vr) {
flush_altivec_to_thread(current);
if (tsk->thread.used_vr) {
/* Copy 33 vec registers (vr0..31 and vscr) to the stack */
err |= __copy_to_user(v_regs, &current->thread.vr_state,
err |= __copy_to_user(v_regs, &tsk->thread.ckvr_state,
33 * sizeof(vector128));
/* If VEC was enabled there are transactional VRs valid too,
* else they're a copy of the checkpointed VRs.
*/
if (msr & MSR_VEC)
err |= __copy_to_user(tm_v_regs,
&current->thread.transact_vr,
&tsk->thread.vr_state,
33 * sizeof(vector128));
else
err |= __copy_to_user(tm_v_regs,
&current->thread.vr_state,
&tsk->thread.ckvr_state,
33 * sizeof(vector128));
/* set MSR_VEC in the MSR value in the frame to indicate
@@ -247,13 +251,13 @@ static long setup_tm_sigcontexts(struct sigcontext __user *sc,
* use altivec.
*/
if (cpu_has_feature(CPU_FTR_ALTIVEC))
- current->thread.vrsave = mfspr(SPRN_VRSAVE);
- err |= __put_user(current->thread.vrsave, (u32 __user *)&v_regs[33]);
+ tsk->thread.ckvrsave = mfspr(SPRN_VRSAVE);
+ err |= __put_user(tsk->thread.ckvrsave, (u32 __user *)&v_regs[33]);
if (msr & MSR_VEC)
err |= __put_user(current->thread.transact_vrsave,
err |= __put_user(tsk->thread.vrsave,
(u32 __user *)&tm_v_regs[33]);
else
err |= __put_user(current->thread.vrsave,
err |= __put_user(tsk->thread.ckvrsave,
(u32 __user *)&tm_v_regs[33]);
#else /* CONFIG_ALTIVEC */
@@ -262,11 +266,11 @@ static long setup_tm_sigcontexts(struct sigcontext __user *sc,
#endif /* CONFIG_ALTIVEC */
/* copy fpr regs and fpscr */
err |= copy_fpr_to_user(&sc->fp_regs, current);
err |= copy_ckfpr_to_user(&sc->fp_regs, tsk);
if (msr & MSR_FP)
err |= copy_transact_fpr_to_user(&tm_sc->fp_regs, current);
err |= copy_fpr_to_user(&tm_sc->fp_regs, tsk);
else
err |= copy_fpr_to_user(&tm_sc->fp_regs, current);
err |= copy_ckfpr_to_user(&tm_sc->fp_regs, tsk);
#ifdef CONFIG_VSX
/*
@@ -274,17 +278,16 @@ static long setup_tm_sigcontexts(struct sigcontext __user *sc,
* then out to userspace. Update v_regs to point after the
* VMX data.
*/
if (current->thread.used_vsr) {
flush_vsx_to_thread(current);
if (tsk->thread.used_vsr) {
v_regs += ELF_NVRREG;
tm_v_regs += ELF_NVRREG;
err |= copy_vsx_to_user(v_regs, current);
err |= copy_ckvsx_to_user(v_regs, tsk);
if (msr & MSR_VSX)
err |= copy_transact_vsx_to_user(tm_v_regs, current);
err |= copy_vsx_to_user(tm_v_regs, tsk);
else
err |= copy_vsx_to_user(tm_v_regs, current);
err |= copy_ckvsx_to_user(tm_v_regs, tsk);
/* set MSR_VSX in the MSR value in the frame to
* indicate that sc->vs_reg) contains valid data.
@@ -298,7 +301,7 @@ static long setup_tm_sigcontexts(struct sigcontext __user *sc,
WARN_ON(!FULL_REGS(regs));
err |= __copy_to_user(&tm_sc->gp_regs, regs, GP_REGS_SIZE);
err |= __copy_to_user(&sc->gp_regs,
&current->thread.ckpt_regs, GP_REGS_SIZE);
&tsk->thread.ckpt_regs, GP_REGS_SIZE);
err |= __put_user(msr, &tm_sc->gp_regs[PT_MSR]);
err |= __put_user(msr, &sc->gp_regs[PT_MSR]);
err |= __put_user(signr, &sc->signal);
@@ -314,7 +317,7 @@ static long setup_tm_sigcontexts(struct sigcontext __user *sc,
* Restore the sigcontext from the signal frame.
*/
static long restore_sigcontext(struct pt_regs *regs, sigset_t *set, int sig,
static long restore_sigcontext(struct task_struct *tsk, sigset_t *set, int sig,
struct sigcontext __user *sc)
{
#ifdef CONFIG_ALTIVEC
@@ -323,10 +326,13 @@ static long restore_sigcontext(struct pt_regs *regs, sigset_t *set, int sig,
unsigned long err = 0;
unsigned long save_r13 = 0;
unsigned long msr;
struct pt_regs *regs = tsk->thread.regs;
#ifdef CONFIG_VSX
int i;
#endif
BUG_ON(tsk != current);
/* If this is not a signal return, we preserve the TLS in r13 */
if (!sig)
save_r13 = regs->gpr[13];
@@ -356,7 +362,7 @@ static long restore_sigcontext(struct pt_regs *regs, sigset_t *set, int sig,
/*
* Force reload of FP/VEC.
* This has to be done before copying stuff into current->thread.fpr/vr
* This has to be done before copying stuff into tsk->thread.fpr/vr
* for the reasons explained in the previous comment.
*/
regs->msr &= ~(MSR_FP | MSR_FE0 | MSR_FE1 | MSR_VEC | MSR_VSX);
@@ -368,21 +374,23 @@ static long restore_sigcontext(struct pt_regs *regs, sigset_t *set, int sig,
if (v_regs && !access_ok(VERIFY_READ, v_regs, 34 * sizeof(vector128)))
return -EFAULT;
/* Copy 33 vec registers (vr0..31 and vscr) from the stack */
- if (v_regs != NULL && (msr & MSR_VEC) != 0)
- err |= __copy_from_user(&current->thread.vr_state, v_regs,
+ if (v_regs != NULL && (msr & MSR_VEC) != 0) {
+ err |= __copy_from_user(&tsk->thread.vr_state, v_regs,
33 * sizeof(vector128));
- else if (current->thread.used_vr)
- memset(&current->thread.vr_state, 0, 33 * sizeof(vector128));
+ tsk->thread.used_vr = true;
+ } else if (tsk->thread.used_vr) {
+ memset(&tsk->thread.vr_state, 0, 33 * sizeof(vector128));
+ }
/* Always get VRSAVE back */
if (v_regs != NULL)
err |= __get_user(current->thread.vrsave, (u32 __user *)&v_regs[33]);
err |= __get_user(tsk->thread.vrsave, (u32 __user *)&v_regs[33]);
else
current->thread.vrsave = 0;
tsk->thread.vrsave = 0;
if (cpu_has_feature(CPU_FTR_ALTIVEC))
mtspr(SPRN_VRSAVE, current->thread.vrsave);
mtspr(SPRN_VRSAVE, tsk->thread.vrsave);
#endif /* CONFIG_ALTIVEC */
/* restore floating point */
err |= copy_fpr_from_user(current, &sc->fp_regs);
err |= copy_fpr_from_user(tsk, &sc->fp_regs);
#ifdef CONFIG_VSX
/*
* Get additional VSX data. Update v_regs to point after the
@@ -390,11 +398,13 @@ static long restore_sigcontext(struct pt_regs *regs, sigset_t *set, int sig,
* buffer for formatting, then into the taskstruct.
*/
v_regs += ELF_NVRREG;
- if ((msr & MSR_VSX) != 0)
- err |= copy_vsx_from_user(current, v_regs);
- else
+ if ((msr & MSR_VSX) != 0) {
+ err |= copy_vsx_from_user(tsk, v_regs);
+ tsk->thread.used_vsr = true;
+ } else {
for (i = 0; i < 32 ; i++)
- current->thread.fp_state.fpr[i][TS_VSRLOWOFFSET] = 0;
+ tsk->thread.fp_state.fpr[i][TS_VSRLOWOFFSET] = 0;
+ }
#endif
return err;
}
@@ -404,7 +414,7 @@ static long restore_sigcontext(struct pt_regs *regs, sigset_t *set, int sig,
* Restore the two sigcontexts from the frame of a transactional process.
*/
static long restore_tm_sigcontexts(struct pt_regs *regs,
static long restore_tm_sigcontexts(struct task_struct *tsk,
struct sigcontext __user *sc,
struct sigcontext __user *tm_sc)
{
@@ -413,12 +423,16 @@ static long restore_tm_sigcontexts(struct pt_regs *regs,
#endif
unsigned long err = 0;
unsigned long msr;
struct pt_regs *regs = tsk->thread.regs;
#ifdef CONFIG_VSX
int i;
#endif
BUG_ON(tsk != current);
/* copy the GPRs */
err |= __copy_from_user(regs->gpr, tm_sc->gp_regs, sizeof(regs->gpr));
err |= __copy_from_user(&current->thread.ckpt_regs, sc->gp_regs,
err |= __copy_from_user(&tsk->thread.ckpt_regs, sc->gp_regs,
sizeof(regs->gpr));
/*
@@ -430,7 +444,7 @@ static long restore_tm_sigcontexts(struct pt_regs *regs,
* we don't need to re-copy them here.
*/
err |= __get_user(regs->nip, &tm_sc->gp_regs[PT_NIP]);
err |= __get_user(current->thread.tm_tfhar, &sc->gp_regs[PT_NIP]);
err |= __get_user(tsk->thread.tm_tfhar, &sc->gp_regs[PT_NIP]);
/* get MSR separately, transfer the LE bit if doing signal return */
err |= __get_user(msr, &sc->gp_regs[PT_MSR]);
@@ -449,13 +463,13 @@ static long restore_tm_sigcontexts(struct pt_regs *regs,
err |= __get_user(regs->link, &tm_sc->gp_regs[PT_LNK]);
err |= __get_user(regs->xer, &tm_sc->gp_regs[PT_XER]);
err |= __get_user(regs->ccr, &tm_sc->gp_regs[PT_CCR]);
- err |= __get_user(current->thread.ckpt_regs.ctr,
+ err |= __get_user(tsk->thread.ckpt_regs.ctr,
&sc->gp_regs[PT_CTR]);
- err |= __get_user(current->thread.ckpt_regs.link,
+ err |= __get_user(tsk->thread.ckpt_regs.link,
&sc->gp_regs[PT_LNK]);
- err |= __get_user(current->thread.ckpt_regs.xer,
+ err |= __get_user(tsk->thread.ckpt_regs.xer,
&sc->gp_regs[PT_XER]);
- err |= __get_user(current->thread.ckpt_regs.ccr,
+ err |= __get_user(tsk->thread.ckpt_regs.ccr,
&sc->gp_regs[PT_CCR]);
/* These regs are not checkpointed; they can go in 'regs'. */
@@ -466,7 +480,7 @@ static long restore_tm_sigcontexts(struct pt_regs *regs,
/*
* Force reload of FP/VEC.
* This has to be done before copying stuff into current->thread.fpr/vr
* This has to be done before copying stuff into tsk->thread.fpr/vr
* for the reasons explained in the previous comment.
*/
regs->msr &= ~(MSR_FP | MSR_FE0 | MSR_FE1 | MSR_VEC | MSR_VSX);
@@ -483,32 +497,33 @@ static long restore_tm_sigcontexts(struct pt_regs *regs,
return -EFAULT;
/* Copy 33 vec registers (vr0..31 and vscr) from the stack */
if (v_regs != NULL && tm_v_regs != NULL && (msr & MSR_VEC) != 0) {
- err |= __copy_from_user(&current->thread.vr_state, v_regs,
+ err |= __copy_from_user(&tsk->thread.ckvr_state, v_regs,
33 * sizeof(vector128));
- err |= __copy_from_user(&current->thread.transact_vr, tm_v_regs,
+ err |= __copy_from_user(&tsk->thread.vr_state, tm_v_regs,
33 * sizeof(vector128));
current->thread.used_vr = true;
}
- else if (current->thread.used_vr) {
- memset(&current->thread.vr_state, 0, 33 * sizeof(vector128));
- memset(&current->thread.transact_vr, 0, 33 * sizeof(vector128));
+ else if (tsk->thread.used_vr) {
+ memset(&tsk->thread.vr_state, 0, 33 * sizeof(vector128));
+ memset(&tsk->thread.ckvr_state, 0, 33 * sizeof(vector128));
}
/* Always get VRSAVE back */
if (v_regs != NULL && tm_v_regs != NULL) {
err |= __get_user(current->thread.vrsave,
err |= __get_user(tsk->thread.ckvrsave,
(u32 __user *)&v_regs[33]);
err |= __get_user(current->thread.transact_vrsave,
err |= __get_user(tsk->thread.vrsave,
(u32 __user *)&tm_v_regs[33]);
}
else {
- current->thread.vrsave = 0;
- current->thread.transact_vrsave = 0;
+ tsk->thread.vrsave = 0;
+ tsk->thread.ckvrsave = 0;
}
if (cpu_has_feature(CPU_FTR_ALTIVEC))
mtspr(SPRN_VRSAVE, current->thread.vrsave);
mtspr(SPRN_VRSAVE, tsk->thread.vrsave);
#endif /* CONFIG_ALTIVEC */
/* restore floating point */
- err |= copy_fpr_from_user(current, &sc->fp_regs);
- err |= copy_transact_fpr_from_user(current, &tm_sc->fp_regs);
+ err |= copy_fpr_from_user(tsk, &tm_sc->fp_regs);
+ err |= copy_ckfpr_from_user(tsk, &sc->fp_regs);
#ifdef CONFIG_VSX
/*
* Get additional VSX data. Update v_regs to point after the
@@ -518,32 +533,31 @@ static long restore_tm_sigcontexts(struct pt_regs *regs,
if (v_regs && ((msr & MSR_VSX) != 0)) {
v_regs += ELF_NVRREG;
tm_v_regs += ELF_NVRREG;
- err |= copy_vsx_from_user(current, v_regs);
- err |= copy_transact_vsx_from_user(current, tm_v_regs);
+ err |= copy_vsx_from_user(tsk, tm_v_regs);
+ err |= copy_ckvsx_from_user(tsk, v_regs);
+ tsk->thread.used_vsr = true;
} else {
for (i = 0; i < 32 ; i++) {
- current->thread.fp_state.fpr[i][TS_VSRLOWOFFSET] = 0;
- current->thread.transact_fp.fpr[i][TS_VSRLOWOFFSET] = 0;
+ tsk->thread.fp_state.fpr[i][TS_VSRLOWOFFSET] = 0;
+ tsk->thread.ckfp_state.fpr[i][TS_VSRLOWOFFSET] = 0;
}
}
#endif
tm_enable();
/* Make sure the transaction is marked as failed */
current->thread.tm_texasr |= TEXASR_FS;
tsk->thread.tm_texasr |= TEXASR_FS;
/* This loads the checkpointed FP/VEC state, if used */
tm_recheckpoint(&current->thread, msr);
tm_recheckpoint(&tsk->thread, msr);
/* This loads the speculative FP/VEC state, if used */
+ msr_check_and_set(msr & (MSR_FP | MSR_VEC));
if (msr & MSR_FP) {
- do_load_up_transact_fpu(&current->thread);
- regs->msr |= (MSR_FP | current->thread.fpexc_mode);
+ load_fp_state(&tsk->thread.fp_state);
+ regs->msr |= (MSR_FP | tsk->thread.fpexc_mode);
}
#ifdef CONFIG_ALTIVEC
if (msr & MSR_VEC) {
- do_load_up_transact_altivec(&current->thread);
+ load_vr_state(&tsk->thread.vr_state);
regs->msr |= MSR_VEC;
}
#endif
return err;
}
@@ -594,6 +608,8 @@ int sys_swapcontext(struct ucontext __user *old_ctx,
unsigned long new_msr = 0;
int ctx_has_vsx_region = 0;
BUG_ON(regs != current->thread.regs);
if (new_ctx &&
get_user(new_msr, &new_ctx->uc_mcontext.gp_regs[PT_MSR]))
return -EFAULT;
@@ -616,7 +632,7 @@ int sys_swapcontext(struct ucontext __user *old_ctx,
if (old_ctx != NULL) {
if (!access_ok(VERIFY_WRITE, old_ctx, ctx_size)
|| setup_sigcontext(&old_ctx->uc_mcontext, regs, 0, NULL, 0,
|| setup_sigcontext(&old_ctx->uc_mcontext, current, 0, NULL, 0,
ctx_has_vsx_region)
|| __copy_to_user(&old_ctx->uc_sigmask,
&current->blocked, sizeof(sigset_t)))
@@ -644,7 +660,7 @@ int sys_swapcontext(struct ucontext __user *old_ctx,
if (__copy_from_user(&set, &new_ctx->uc_sigmask, sizeof(set)))
do_exit(SIGSEGV);
set_current_blocked(&set);
if (restore_sigcontext(regs, NULL, 0, &new_ctx->uc_mcontext))
if (restore_sigcontext(current, NULL, 0, &new_ctx->uc_mcontext))
do_exit(SIGSEGV);
/* This returns like rt_sigreturn */
@@ -667,6 +683,8 @@ int sys_rt_sigreturn(unsigned long r3, unsigned long r4, unsigned long r5,
unsigned long msr;
#endif
BUG_ON(current->thread.regs != regs);
/* Always make any pending restarted system calls return -EINTR */
current->restart_block.fn = do_no_restart_syscall;
@@ -698,14 +716,14 @@ int sys_rt_sigreturn(unsigned long r3, unsigned long r4, unsigned long r5,
struct ucontext __user *uc_transact;
if (__get_user(uc_transact, &uc->uc_link))
goto badframe;
if (restore_tm_sigcontexts(regs, &uc->uc_mcontext,
if (restore_tm_sigcontexts(current, &uc->uc_mcontext,
&uc_transact->uc_mcontext))
goto badframe;
}
else
/* Fall through, for non-TM restore */
#endif
if (restore_sigcontext(regs, NULL, 1, &uc->uc_mcontext))
if (restore_sigcontext(current, NULL, 1, &uc->uc_mcontext))
goto badframe;
if (restore_altstack(&uc->uc_stack))
@@ -724,13 +742,17 @@ badframe:
return 0;
}
- int handle_rt_signal64(struct ksignal *ksig, sigset_t *set, struct pt_regs *regs)
+ int handle_rt_signal64(struct ksignal *ksig, sigset_t *set,
+ struct task_struct *tsk)
{
struct rt_sigframe __user *frame;
unsigned long newsp = 0;
long err = 0;
+ struct pt_regs *regs = tsk->thread.regs;
+ BUG_ON(tsk != current);
- frame = get_sigframe(ksig, get_tm_stackpointer(regs), sizeof(*frame), 0);
+ frame = get_sigframe(ksig, get_tm_stackpointer(tsk), sizeof(*frame), 0);
if (unlikely(frame == NULL))
goto badframe;
@@ -751,14 +773,13 @@ int handle_rt_signal64(struct ksignal *ksig, sigset_t *set, struct pt_regs *regs
err |= __put_user(&frame->uc_transact, &frame->uc.uc_link);
err |= setup_tm_sigcontexts(&frame->uc.uc_mcontext,
&frame->uc_transact.uc_mcontext,
- regs, ksig->sig,
- NULL,
+ tsk, ksig->sig, NULL,
(unsigned long)ksig->ka.sa.sa_handler);
} else
#endif
{
err |= __put_user(0, &frame->uc.uc_link);
err |= setup_sigcontext(&frame->uc.uc_mcontext, regs, ksig->sig,
err |= setup_sigcontext(&frame->uc.uc_mcontext, tsk, ksig->sig,
NULL, (unsigned long)ksig->ka.sa.sa_handler,
1);
}
@@ -767,11 +788,11 @@ int handle_rt_signal64(struct ksignal *ksig, sigset_t *set, struct pt_regs *regs
goto badframe;
/* Make sure signal handler doesn't get spurious FP exceptions */
current->thread.fp_state.fpscr = 0;
tsk->thread.fp_state.fpscr = 0;
/* Set up to return from userspace. */
- if (vdso64_rt_sigtramp && current->mm->context.vdso_base) {
- regs->link = current->mm->context.vdso_base + vdso64_rt_sigtramp;
+ if (vdso64_rt_sigtramp && tsk->mm->context.vdso_base) {
+ regs->link = tsk->mm->context.vdso_base + vdso64_rt_sigtramp;
} else {
err |= setup_trampoline(__NR_rt_sigreturn, &frame->tramp[0]);
if (err)
@@ -821,7 +842,7 @@ int handle_rt_signal64(struct ksignal *ksig, sigset_t *set, struct pt_regs *regs
badframe:
if (show_unhandled_signals)
printk_ratelimited(regs->msr & MSR_64BIT ? fmt64 : fmt32,
current->comm, current->pid, "setup_rt_frame",
tsk->comm, tsk->pid, "setup_rt_frame",
(long)frame, regs->nip, regs->link);
return 1;
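The tail of restore_tm_sigcontexts is order-sensitive: mark the transaction failed in TEXASR, recheckpoint the ck* state, and only then load the live FP/VEC state supplied by the frame. A rough model of that ordering (the TEXASR_FS value here is a stand-in, not the real bit position):

#include <stdio.h>

#define TEXASR_FS_MODEL (1UL << 0)	/* 'failure summary' stand-in */

struct tm_model {
	unsigned long texasr;
	int ck_loaded, live_loaded;
};

static void restore_tm_model(struct tm_model *t, int msr_fp)
{
	t->texasr |= TEXASR_FS_MODEL;	/* transaction must read as failed */
	t->ck_loaded = 1;		/* tm_recheckpoint(): checkpointed regs */
	if (msr_fp)
		t->live_loaded = 1;	/* load_fp_state(): live regs last */
}

int main(void)
{
	struct tm_model t = { 0 };

	restore_tm_model(&t, 1);
	printf("texasr=%#lx ck=%d live=%d\n", t.texasr, t.ck_loaded, t.live_loaded);
	return 0;
}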


@@ -40,6 +40,7 @@
#include <asm/syscalls.h>
#include <asm/time.h>
#include <asm/unistd.h>
#include <asm/asm-prototypes.h>
static inline unsigned long do_mmap2(unsigned long addr, size_t len,
unsigned long prot, unsigned long flags,


@@ -73,6 +73,7 @@
#include <asm/vdso_datapage.h>
#include <asm/firmware.h>
#include <asm/cputime.h>
#include <asm/asm-prototypes.h>
/* powerpc clocksource/clockevent code */


@@ -108,6 +108,7 @@ _GLOBAL(tm_reclaim)
/* We've a struct pt_regs at [r1+STACK_FRAME_OVERHEAD]. */
std r3, STK_PARAM(R3)(r1)
std r4, STK_PARAM(R4)(r1)
SAVE_NVGPRS(r1)
/* We need to setup MSR for VSX register save instructions. */
@@ -126,43 +127,6 @@ _GLOBAL(tm_reclaim)
mtmsrd r15
std r14, TM_FRAME_L0(r1)
- /* Stash the stack pointer away for use after reclaim */
- std r1, PACAR1(r13)
- /* ******************** FPR/VR/VSRs ************
- * Before reclaiming, capture the current/transactional FPR/VR
- * versions /if used/.
- *
- * (If VSX used, FP and VMX are implied. Or, we don't need to look
- * at MSR.VSX as copying FP regs if .FP, vector regs if .VMX covers it.)
- *
- * We're passed the thread's MSR as parameter 2.
- *
- * We enabled VEC/FP/VSX in the msr above, so we can execute these
- * instructions!
- */
- andis. r0, r4, MSR_VEC@h
- beq dont_backup_vec
- addi r7, r3, THREAD_TRANSACT_VRSTATE
- SAVE_32VRS(0, r6, r7) /* r6 scratch, r7 transact vr state */
- mfvscr v0
- li r6, VRSTATE_VSCR
- stvx v0, r7, r6
- dont_backup_vec:
- mfspr r0, SPRN_VRSAVE
- std r0, THREAD_TRANSACT_VRSAVE(r3)
- andi. r0, r4, MSR_FP
- beq dont_backup_fp
- addi r7, r3, THREAD_TRANSACT_FPSTATE
- SAVE_32FPRS_VSRS(0, R6, R7) /* r6 scratch, r7 transact fp state */
- mffs fr0
- stfd fr0,FPSTATE_FPSCR(r7)
- dont_backup_fp:
/* Do sanity check on MSR to make sure we are suspended */
li r7, (MSR_TS_S)@higher
srdi r6, r14, 32
@@ -170,6 +134,9 @@ dont_backup_fp:
1: tdeqi r6, 0
EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,0
+ /* Stash the stack pointer away for use after reclaim */
+ std r1, PACAR1(r13)
/* Clear MSR RI since we are about to change r1, EE is already off. */
li r4, 0
mtmsrd r4, 1
@@ -273,6 +240,43 @@ dont_backup_fp:
* MSR.
*/
+ /* ******************** FPR/VR/VSRs ************
+ * After reclaiming, capture the checkpointed FPRs/VRs /if used/.
+ *
+ * (If VSX used, FP and VMX are implied. Or, we don't need to look
+ * at MSR.VSX as copying FP regs if .FP, vector regs if .VMX covers it.)
+ *
+ * We're passed the thread's MSR as the second parameter
+ *
+ * We enabled VEC/FP/VSX in the msr above, so we can execute these
+ * instructions!
+ */
+ ld r4, STK_PARAM(R4)(r1) /* Second parameter, MSR * */
+ mr r3, r12
+ andis. r0, r4, MSR_VEC@h
+ beq dont_backup_vec
+ addi r7, r3, THREAD_CKVRSTATE
+ SAVE_32VRS(0, r6, r7) /* r6 scratch, r7 transact vr state */
+ mfvscr v0
+ li r6, VRSTATE_VSCR
+ stvx v0, r7, r6
+ dont_backup_vec:
+ mfspr r0, SPRN_VRSAVE
+ std r0, THREAD_CKVRSAVE(r3)
+ andi. r0, r4, MSR_FP
+ beq dont_backup_fp
+ addi r7, r3, THREAD_CKFPSTATE
+ SAVE_32FPRS_VSRS(0, R6, R7) /* r6 scratch, r7 transact fp state */
+ mffs fr0
+ stfd fr0,FPSTATE_FPSCR(r7)
+ dont_backup_fp:
/* TM regs, incl TEXASR -- these live in thread_struct. Note they've
* been updated by the treclaim, to explain to userland the failure
* cause (aborted).
@@ -288,6 +292,7 @@ dont_backup_fp:
/* Restore original MSR/IRQ state & clear TM mode */
ld r14, TM_FRAME_L0(r1) /* Orig MSR */
li r15, 0
rldimi r14, r15, MSR_TS_LG, (63-MSR_TS_LG)-1
mtmsrd r14
@@ -356,28 +361,29 @@ _GLOBAL(__tm_recheckpoint)
mtmsr r5
#ifdef CONFIG_ALTIVEC
- /* FP and VEC registers: These are recheckpointed from thread.fpr[]
- * and thread.vr[] respectively. The thread.transact_fpr[] version
- * is more modern, and will be loaded subsequently by any FPUnavailable
- * trap.
+ /*
+ * FP and VEC registers: These are recheckpointed from
+ * thread.ckfp_state and thread.ckvr_state respectively. The
+ * thread.fp_state[] version holds the 'live' (transactional)
+ * and will be loaded subsequently by any FPUnavailable trap.
*/
andis. r0, r4, MSR_VEC@h
beq dont_restore_vec
- addi r8, r3, THREAD_VRSTATE
+ addi r8, r3, THREAD_CKVRSTATE
li r5, VRSTATE_VSCR
lvx v0, r8, r5
mtvscr v0
REST_32VRS(0, r5, r8) /* r5 scratch, r8 ptr */
dont_restore_vec:
- ld r5, THREAD_VRSAVE(r3)
+ ld r5, THREAD_CKVRSAVE(r3)
mtspr SPRN_VRSAVE, r5
#endif
andi. r0, r4, MSR_FP
beq dont_restore_fp
- addi r8, r3, THREAD_FPSTATE
+ addi r8, r3, THREAD_CKFPSTATE
lfd fr0, FPSTATE_FPSCR(r8)
MTFSF_L(fr0)
REST_32FPRS_VSRS(0, R4, R8)
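The tm.S reshuffle moves the FPR/VR capture from before treclaim to after it: once the hardware has reclaimed, the architected registers hold the checkpointed values, which is exactly what belongs in ck{fp,vr}_state. The same idea in C (hypothetical model of the hardware, not kernel code):

#include <stdio.h>
#include <string.h>

struct cpu_model {
	unsigned long fpr[4];	/* architected register file */
	unsigned long ck[4];	/* hardware-internal checkpoint */
};

/* treclaim: hardware rolls the register file back to the checkpoint */
static void treclaim_model(struct cpu_model *c)
{
	memcpy(c->fpr, c->ck, sizeof(c->ck));
}

int main(void)
{
	struct cpu_model c = { .fpr = { 7, 7, 7, 7 }, .ck = { 1, 1, 1, 1 } };
	unsigned long ckfp_state[4];

	treclaim_model(&c);
	/* capturing AFTER reclaim yields the checkpointed values */
	memcpy(ckfp_state, c.fpr, sizeof(ckfp_state));
	printf("ckfp_state[0] = %lu\n", ckfp_state[0]);	/* 1 */
	return 0;
}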


@@ -117,7 +117,7 @@ static int die_owner = -1;
static unsigned int die_nest_count;
static int die_counter;
- static unsigned __kprobes long oops_begin(struct pt_regs *regs)
+ static unsigned long oops_begin(struct pt_regs *regs)
{
int cpu;
unsigned long flags;
@@ -144,8 +144,9 @@ static unsigned __kprobes long oops_begin(struct pt_regs *regs)
pmac_backlight_unblank();
return flags;
}
NOKPROBE_SYMBOL(oops_begin);
- static void __kprobes oops_end(unsigned long flags, struct pt_regs *regs,
+ static void oops_end(unsigned long flags, struct pt_regs *regs,
int signr)
{
bust_spinlocks(0);
@@ -196,8 +197,9 @@ static void __kprobes oops_end(unsigned long flags, struct pt_regs *regs,
panic("Fatal exception");
do_exit(signr);
}
NOKPROBE_SYMBOL(oops_end);
- static int __kprobes __die(const char *str, struct pt_regs *regs, long err)
+ static int __die(const char *str, struct pt_regs *regs, long err)
{
printk("Oops: %s, sig: %ld [#%d]\n", str, err, ++die_counter);
#ifdef CONFIG_PREEMPT
@@ -221,6 +223,7 @@ static int __kprobes __die(const char *str, struct pt_regs *regs, long err)
return 0;
}
NOKPROBE_SYMBOL(__die);
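The pattern here swaps the old __kprobes section attribute for NOKPROBE_SYMBOL(), which records the symbol in a blacklist table instead of relocating the function. A self-contained imitation of the mechanism (GCC/Clang constructor trick; the real kernel uses a dedicated ELF section):

#include <stdio.h>

static const char *nokprobe_blacklist[16];
static int nokprobe_count;

/* toy stand-in for NOKPROBE_SYMBOL(): register the name at startup */
#define NOKPROBE_SYMBOL_MODEL(fn) \
	static void __attribute__((constructor)) reg_##fn(void) \
	{ nokprobe_blacklist[nokprobe_count++] = #fn; }

static int my_handler(void) { return 0; }
NOKPROBE_SYMBOL_MODEL(my_handler)

int main(void)
{
	for (int i = 0; i < nokprobe_count; i++)
		printf("blacklisted: %s\n", nokprobe_blacklist[i]);
	return my_handler();
}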
void die(const char *str, struct pt_regs *regs, long err)
{
@@ -802,7 +805,7 @@ void RunModeException(struct pt_regs *regs)
_exception(SIGTRAP, regs, 0, 0);
}
- void __kprobes single_step_exception(struct pt_regs *regs)
+ void single_step_exception(struct pt_regs *regs)
{
enum ctx_state prev_state = exception_enter();
@@ -819,6 +822,7 @@ void __kprobes single_step_exception(struct pt_regs *regs)
bail:
exception_exit(prev_state);
}
+NOKPROBE_SYMBOL(single_step_exception);
/*
* After we have successfully emulated an instruction, we have to
@@ -1140,7 +1144,7 @@ static int emulate_math(struct pt_regs *regs)
static inline int emulate_math(struct pt_regs *regs) { return -1; }
#endif
-void __kprobes program_check_exception(struct pt_regs *regs)
+void program_check_exception(struct pt_regs *regs)
{
enum ctx_state prev_state = exception_enter();
unsigned int reason = get_reason(regs);
@@ -1260,16 +1264,18 @@ sigill:
bail:
exception_exit(prev_state);
}
+NOKPROBE_SYMBOL(program_check_exception);
/*
* This occurs when running in hypervisor mode on POWER6 or later
* and an illegal instruction is encountered.
*/
-void __kprobes emulation_assist_interrupt(struct pt_regs *regs)
+void emulation_assist_interrupt(struct pt_regs *regs)
{
regs->msr |= REASON_ILLEGAL;
program_check_exception(regs);
}
+NOKPROBE_SYMBOL(emulation_assist_interrupt);
void alignment_exception(struct pt_regs *regs)
{
@@ -1310,6 +1316,18 @@ bail:
exception_exit(prev_state);
}
+void slb_miss_bad_addr(struct pt_regs *regs)
+{
+	enum ctx_state prev_state = exception_enter();
+
+	if (user_mode(regs))
+		_exception(SIGSEGV, regs, SEGV_BNDERR, regs->dar);
+	else
+		bad_page_fault(regs, regs->dar, SIGSEGV);
+
+	exception_exit(prev_state);
+}
void StackOverflow(struct pt_regs *regs)
{
printk(KERN_CRIT "Kernel stack overflow in process %p, r1=%lx\n",
@@ -1372,6 +1390,22 @@ void vsx_unavailable_exception(struct pt_regs *regs)
}
#ifdef CONFIG_PPC64
+static void tm_unavailable(struct pt_regs *regs)
+{
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+	if (user_mode(regs)) {
+		current->thread.load_tm++;
+		regs->msr |= MSR_TM;
+		tm_enable();
+		tm_restore_sprs(&current->thread);
+		return;
+	}
+#endif
+	pr_emerg("Unrecoverable TM Unavailable Exception "
+			"%lx at %lx\n", regs->trap, regs->nip);
+	die("Unrecoverable TM Unavailable Exception", regs, SIGABRT);
+}
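This is the lazy-TM path: MSR[TM] is left off until a thread actually
executes a TM instruction, the facility-unavailable interrupt fires, and
the handler re-enables TM and returns to retry the faulting instruction.
A hedged userspace illustration using GCC's HTM builtins (build with
-mhtm; the snippet is mine, not part of the patch) of code that takes
exactly this path on first use:

    #include <htmintrin.h>

    /* The first tbegin. after TM has been lazily disabled traps to
     * tm_unavailable() above and is transparently retried; on a kernel
     * without CPU_FTR_TM the same instruction raises SIGILL instead. */
    int try_increment(volatile int *ctr)
    {
        if (__builtin_tbegin(0)) {
            (*ctr)++;
            __builtin_tend(0);
            return 1;   /* transaction committed */
        }
        return 0;       /* aborted; caller falls back to a lock */
    }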
void facility_unavailable_exception(struct pt_regs *regs)
{
static char *facility_strings[] = {
@@ -1451,6 +1485,27 @@ void facility_unavailable_exception(struct pt_regs *regs)
return;
}
+	if (status == FSCR_TM_LG) {
+		/*
+		 * If we're here then the hardware is TM aware because it
+		 * generated an exception with FSCR_TM set.
+		 *
+		 * If cpu_has_feature(CPU_FTR_TM) is false, then either firmware
+		 * told us not to do TM, or the kernel is not built with TM
+		 * support.
+		 *
+		 * If both of those things are true, then userspace can spam the
+		 * console by triggering the printk() below just by continually
+		 * doing tbegin (or any TM instruction). So in that case just
+		 * send the process a SIGILL immediately.
+		 */
+		if (!cpu_has_feature(CPU_FTR_TM))
+			goto out;
+
+		tm_unavailable(regs);
+		return;
+	}
if ((status < ARRAY_SIZE(facility_strings)) &&
facility_strings[status])
facility = facility_strings[status];
@@ -1463,6 +1518,7 @@ void facility_unavailable_exception(struct pt_regs *regs)
"%sFacility '%s' unavailable, exception at 0x%lx, MSR=%lx\n",
hv ? "Hypervisor " : "", facility, regs->nip, regs->msr);
+out:
if (user_mode(regs)) {
_exception(SIGILL, regs, ILL_ILLOPC, regs->nip);
return;
@@ -1504,7 +1560,8 @@ void fp_unavailable_tm(struct pt_regs *regs)
/* If VMX is in use, get the transactional values back */
if (regs->msr & MSR_VEC) {
-do_load_up_transact_altivec(&current->thread);
+msr_check_and_set(MSR_VEC);
+load_vr_state(&current->thread.vr_state);
/* At this point all the VSX state is loaded, so enable it */
regs->msr |= MSR_VSX;
}
@@ -1525,7 +1582,8 @@ void altivec_unavailable_tm(struct pt_regs *regs)
current->thread.used_vr = 1;
if (regs->msr & MSR_FP) {
-do_load_up_transact_fpu(&current->thread);
+msr_check_and_set(MSR_FP);
+load_fp_state(&current->thread.fp_state);
regs->msr |= MSR_VSX;
}
}
@@ -1564,10 +1622,12 @@ void vsx_unavailable_tm(struct pt_regs *regs)
*/
tm_recheckpoint(&current->thread, regs->msr & ~orig_msr);
+msr_check_and_set(orig_msr & (MSR_FP | MSR_VEC));
if (orig_msr & MSR_FP)
-do_load_up_transact_fpu(&current->thread);
+load_fp_state(&current->thread.fp_state);
if (orig_msr & MSR_VEC)
-do_load_up_transact_altivec(&current->thread);
+load_vr_state(&current->thread.vr_state);
}
#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
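All three *_unavailable_tm paths now share the same two-step idiom in
place of the dedicated do_load_up_transact_* assembly helpers (the VMX
one is deleted from vector.S further down): set the facility bit in the
kernel's MSR, then load the live register image. A sketch of the idiom;
kernel context assumed, and the header locations are my assumption:

    #include <asm/reg.h>        /* msr_check_and_set(), MSR_FP, MSR_VEC */
    #include <asm/processor.h>  /* load_fp_state(), load_vr_state() */

    static void load_live_fp_vmx(struct thread_struct *t, unsigned long msr)
    {
        msr_check_and_set(msr & (MSR_FP | MSR_VEC));
        if (msr & MSR_FP)
            load_fp_state(&t->fp_state);    /* FPRs + FPSCR */
        if (msr & MSR_VEC)
            load_vr_state(&t->vr_state);    /* VRs + VSCR */
    }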
@@ -1656,7 +1716,7 @@ static void handle_debug(struct pt_regs *regs, unsigned long debug_status)
mtspr(SPRN_DBCR0, current->thread.debug.dbcr0);
}
-void __kprobes DebugException(struct pt_regs *regs, unsigned long debug_status)
+void DebugException(struct pt_regs *regs, unsigned long debug_status)
{
current->thread.debug.dbsr = debug_status;
@@ -1717,6 +1777,7 @@ void __kprobes DebugException(struct pt_regs *regs, unsigned long debug_status)
} else
handle_debug(regs, debug_status);
}
+NOKPROBE_SYMBOL(DebugException);
#endif /* CONFIG_PPC_ADV_DEBUG_REGS */
#if !defined(CONFIG_TAU_INT)

--- a/arch/powerpc/kernel/vdso64/Makefile
+++ b/arch/powerpc/kernel/vdso64/Makefile

@@ -31,15 +31,9 @@ $(obj)/%.so: OBJCOPYFLAGS := -S
$(obj)/%.so: $(obj)/%.so.dbg FORCE
$(call if_changed,objcopy)
# assembly rules for the .S files
$(obj-vdso64): %.o: %.S FORCE
$(call if_changed_dep,vdso64as)
# actual build commands
quiet_cmd_vdso64ld = VDSO64L $@
cmd_vdso64ld = $(CC) $(c_flags) -o $@ -Wl,-T$(filter %.lds,$^) $(filter %.o,$^)
quiet_cmd_vdso64as = VDSO64A $@
cmd_vdso64as = $(CC) $(a_flags) -c -o $@ $<
# install commands for the unstripped file
quiet_cmd_vdso_install = INSTALL $@
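For readers unfamiliar with the kbuild idiom in this Makefile: each step
is a quiet_cmd_*/cmd_* pair run through if_changed (or if_changed_dep,
which additionally tracks include-file dependencies), so a target is
rebuilt when either its prerequisites or the command line itself change.
A generic sketch with a made-up tool name, not a rule from this file:

    # "gen" is hypothetical; the pattern is the kbuild-standard one.
    quiet_cmd_gen = GEN     $@
          cmd_gen = gen -o $@ $<

    $(obj)/%.bin: $(src)/%.txt FORCE
    	$(call if_changed,gen)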

--- a/arch/powerpc/kernel/vdso64/datapage.S
+++ b/arch/powerpc/kernel/vdso64/datapage.S

@@ -59,7 +59,7 @@ V_FUNCTION_BEGIN(__kernel_get_syscall_map)
bl V_LOCAL_FUNC(__get_datapage)
mtlr r12
addi r3,r3,CFG_SYSCALL_MAP64
-cmpli cr0,r4,0
+cmpldi cr0,r4,0
crclr cr0*4+so
beqlr
li r0,NR_syscalls

--- a/arch/powerpc/kernel/vdso64/gettimeofday.S
+++ b/arch/powerpc/kernel/vdso64/gettimeofday.S

@@ -145,7 +145,7 @@ V_FUNCTION_BEGIN(__kernel_clock_getres)
bne cr0,99f
li r3,0
-cmpli cr0,r4,0
+cmpldi cr0,r4,0
crclr cr0*4+so
beqlr
lis r5,CLOCK_REALTIME_RES@h
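Both vdso64 hunks fix the same latent bug: cmpli performs a 32-bit word
comparison, so a 64-bit pointer whose low word happens to be zero would
be mistaken for NULL, while cmpldi compares the full doubleword. A small
C rendering of the difference (assumes a 64-bit build; the snippet is
illustrative, not from the patch):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        void *p = (void *)(uintptr_t)0x100000000ULL; /* non-NULL, low word 0 */

        int word_null  = ((uint32_t)(uintptr_t)p == 0); /* cmpli-like: 1, wrong */
        int dword_null = (p == NULL);                   /* cmpldi-like: 0, right */

        printf("word: %d, doubleword: %d\n", word_null, dword_null);
        return 0;
    }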

--- a/arch/powerpc/kernel/vector.S
+++ b/arch/powerpc/kernel/vector.S

@@ -7,31 +7,6 @@
#include <asm/page.h>
#include <asm/ptrace.h>
-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-/* void do_load_up_transact_altivec(struct thread_struct *thread)
- *
- * This is similar to load_up_altivec but for the transactional version of the
- * vector regs. It doesn't mess with the task MSR or valid flags.
- * Furthermore, VEC laziness is not supported with TM currently.
- */
-_GLOBAL(do_load_up_transact_altivec)
-	mfmsr	r6
-	oris	r5,r6,MSR_VEC@h
-	MTMSRD(r5)
-	isync
-
-	li	r4,1
-	stw	r4,THREAD_USED_VR(r3)
-
-	li	r10,THREAD_TRANSACT_VRSTATE+VRSTATE_VSCR
-	lvx	v0,r10,r3
-	mtvscr	v0
-	addi	r10,r3,THREAD_TRANSACT_VRSTATE
-	REST_32VRS(0,r4,r10)
-
-	blr
-#endif
/*
* Load state from memory into VMX registers including VSCR.
* Assumes the caller has enabled VMX in the MSR.

--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S

@@ -44,11 +44,58 @@ SECTIONS
* Text, read only data and other permanent read-only sections
*/
/* Text and gots */
+	_text = .;
+	_stext = .;
+
+	/*
+	 * Head text.
+	 * This needs to be in its own output section to avoid ld placing
+	 * branch trampoline stubs randomly throughout the fixed sections,
+	 * which it will do (even if the branch comes from another section)
+	 * in order to optimize stub generation.
+	 */
+	.head.text : AT(ADDR(.head.text) - LOAD_OFFSET) {
+#ifdef CONFIG_PPC64
+		KEEP(*(.head.text.first_256B));
+#ifdef CONFIG_PPC_BOOK3E
+# define END_FIXED	0x100
+#else
+		KEEP(*(.head.text.real_vectors));
+		*(.head.text.real_trampolines);
+		KEEP(*(.head.text.virt_vectors));
+		*(.head.text.virt_trampolines);
+# if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV)
+		KEEP(*(.head.data.fwnmi_page));
+#  define END_FIXED	0x8000
+# else
+#  define END_FIXED	0x7000
+# endif
+#endif
+		ASSERT((. == END_FIXED), "vmlinux.lds.S: fixed section overflow error");
+#else /* !CONFIG_PPC64 */
+		HEAD_TEXT
+#endif
+	} :kernel
+
+	/*
+	 * If the build dies here, it's likely code in head_64.S is referencing
+	 * labels it can't reach, and the linker inserting stubs without the
+	 * assembler's knowledge. To debug, remove the above assert and
+	 * rebuild. Look for branch stubs in the fixed section region.
+	 *
+	 * Linker stub generation could be allowed in "trampoline"
+	 * sections if absolutely necessary, but this would require
+	 * some rework of the fixed sections. Before resorting to this,
+	 * consider references that have sufficient addressing range
+	 * (e.g., hand-coded trampolines) so the linker does not have
+	 * to add stubs.
+	 *
+	 * Linker stubs at the top of the main text section are currently not
+	 * detected, and will result in a crash at boot due to offsets being
+	 * wrong.
+	 */
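If the assert does fire, one way to locate the offending stubs (assuming
binutils' powerpc64 ld, whose --emit-stub-syms option gives each stub a
symbol whose name contains ".long_branch." or ".plt_branch."):

    # Link with --emit-stub-syms, then look for stubs that landed in the
    # fixed region (offsets below END_FIXED):
    nm vmlinux | grep -e '\.long_branch\.' -e '\.plt_branch\.' | sort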
.text : AT(ADDR(.text) - LOAD_OFFSET) {
ALIGN_FUNCTION();
-HEAD_TEXT
-_text = .;
/* careful! __ftr_alt_* sections need to be close to .text */
*(.text .fixup __ftr_alt_* .ref.text)
SCHED_TEXT
@@ -56,6 +103,8 @@ SECTIONS
KPROBES_TEXT
IRQENTRY_TEXT
SOFTIRQENTRY_TEXT
+MEM_KEEP(init.text)
+MEM_KEEP(exit.text)
#ifdef CONFIG_PPC32
*(.got1)