powerpc/85xx: Load all early TLB entries at once

Use an AS=1 trampoline TLB entry to allow all normal TLB1 entries to
be loaded at once.  This avoids the need to keep the translation that
code is executing from in the same TLB entry in the final TLB
configuration as during early boot, which in turn is helpful for
relocatable kernels (e.g. kdump) where the kernel is not running from
what would be the first TLB entry.
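
As a minimal sketch of what "loaded at once" means at the C level: map_mem_in_cams() stages each entry in the TLBCAM[] array via settlbcam() and then writes the whole set with a single loadcam_multi() call, which runs from a temporary AS=1 trampoline entry while it rewrites the bolted slots. The body below is a simplified assumption of the fsl_booke_mmu.c side of this change, not the exact diff:

    unsigned long map_mem_in_cams(unsigned long ram, int max_cam_idx)
    {
            unsigned long virt = PAGE_OFFSET;
            phys_addr_t phys = memstart_addr;
            unsigned long amount_mapped = 0;
            int i;

            for (i = 0; ram && i < max_cam_idx; i++) {
                    unsigned long cam_sz = calc_cam_sz(ram, virt, phys);

                    /* Stage the entry in TLBCAM[]; nothing is written
                     * to the hardware TLB yet. */
                    settlbcam(i, virt, phys, cam_sz,
                              pgprot_val(PAGE_KERNEL_X), 0);

                    ram -= cam_sz;
                    amount_mapped += cam_sz;
                    virt += cam_sz;
                    phys += cam_sz;
            }

            /* One pass over TLB1: install entries 0..i-1 while
             * executing from a trampoline mapping in a slot outside
             * the range being loaded, so even the entry we are
             * currently running from can be replaced. */
            loadcam_multi(0, i, max_cam_idx);

            return amount_mapped;
    }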

On e6500, we limit map_mem_in_cams() to the primary hwthread of a
core (the boot cpu is always considered primary, as a kdump kernel
can be entered on any cpu).  Each TLB only needs to be set up once,
and when we do, we don't want another thread to be running when we
create a temporary trampoline TLB1 entry.
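
The tlb_nohash.c hunk below encodes this gate with cpu_first_thread_sibling(). As a worked illustration, here is the same check in positive form; should_map_cams() is a hypothetical helper name and the two-threads-per-core layout in the comment is assumed for the example, not taken from this commit:

    #include <asm/cputhreads.h>     /* cpu_first_thread_sibling() */
    #include <asm/smp.h>            /* boot_cpuid */

    /*
     * Assumed e6500-style layout: core0 = cpus {0,1}, core1 = cpus {2,3}.
     * With boot cpu 0:
     *   cpu 0: boot cpu           -> map (sets up core0's TLB)
     *   cpu 1: secondary of core0 -> skip (core0 already done)
     *   cpu 2: primary of core1   -> map
     *   cpu 3: secondary of core1 -> skip
     * If a kdump kernel is entered on cpu 3 instead, cpu 3 maps as the
     * boot cpu, and cpu 2 is skipped because it shares core1's TLB
     * with the boot cpu.
     */
    static bool should_map_cams(int cpu)
    {
            return cpu == boot_cpuid ||
                   (cpu == cpu_first_thread_sibling(cpu) &&
                    cpu != cpu_first_thread_sibling(boot_cpuid));
    }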

Signed-off-by: Scott Wood <scottwood@freescale.com>
Scott Wood
2015-10-06 22:48:09 -05:00
parent 1930bb5ccf
commit d9e1831a42
5 changed files with 92 additions and 3 deletions

arch/powerpc/mm/tlb_nohash.c

@@ -42,6 +42,7 @@
 #include <asm/tlbflush.h>
 #include <asm/tlb.h>
 #include <asm/code-patching.h>
+#include <asm/cputhreads.h>
 #include <asm/hugetlb.h>
 #include <asm/paca.h>
@@ -628,10 +629,26 @@ static void early_init_this_mmu(void)
 #ifdef CONFIG_PPC_FSL_BOOK3E
         if (mmu_has_feature(MMU_FTR_TYPE_FSL_E)) {
                 unsigned int num_cams;
+                int __maybe_unused cpu = smp_processor_id();
+                bool map = true;
 
                 /* use a quarter of the TLBCAM for bolted linear map */
                 num_cams = (mfspr(SPRN_TLB1CFG) & TLBnCFG_N_ENTRY) / 4;
-                linear_map_top = map_mem_in_cams(linear_map_top, num_cams);
+
+                /*
+                 * Only do the mapping once per core, or else the
+                 * transient mapping would cause problems.
+                 */
+#ifdef CONFIG_SMP
+                if (cpu != boot_cpuid &&
+                    (cpu != cpu_first_thread_sibling(cpu) ||
+                     cpu == cpu_first_thread_sibling(boot_cpuid)))
+                        map = false;
+#endif
+
+                if (map)
+                        linear_map_top = map_mem_in_cams(linear_map_top,
+                                                         num_cams);
         }
 #endif