Yazen Ghannam
3410200958
x86/mce/AMD: Log Deferred Errors using SMCA MCA_DE{STAT,ADDR} registers
...
Scalable MCA provides new registers for all banks for logging deferred
errors: MCA_DESTAT and MCA_DEADDR. Deferred errors are always logged to
these registers.
Update the AMD deferred error handler to use these registers, if
available.
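An illustrative sketch of the resulting selection logic (not the literal
hunk; the MSR macros and flag names are the SMCA ones assumed to exist in
the tree at this point, the helper name is made up):

  static void log_deferred_error(unsigned int bank)
  {
      u64 status, addr = 0;
      /* SMCA parts always log deferred errors in MCA_DE{STAT,ADDR}: */
      u32 msr_stat = mce_flags.smca ? MSR_AMD64_SMCA_MCx_DESTAT(bank)
                                    : msr_ops.status(bank);
      u32 msr_addr = mce_flags.smca ? MSR_AMD64_SMCA_MCx_DEADDR(bank)
                                    : msr_ops.addr(bank);

      rdmsrl(msr_stat, status);
      if (!(status & MCI_STATUS_VAL))
          return;
      if (status & MCI_STATUS_ADDRV)
          rdmsrl(msr_addr, addr);

      /* ... fill in a struct mce with status/addr and log it ... */

      wrmsrl(msr_stat, 0);    /* clear the logged deferred error */
  }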
Signed-off-by: Yazen Ghannam <Yazen.Ghannam@amd.com >
[ Sanity-check __log_error() args, massage a bit. ]
Signed-off-by: Borislav Petkov <bp@suse.de >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Aravind Gopalakrishnan <aravindksg.lkml@gmail.com >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Tony Luck <tony.luck@intel.com >
Cc: linux-edac <linux-edac@vger.kernel.org >
Link: http://lkml.kernel.org/r/1462971509-3856-2-git-send-email-bp@alien8.de
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-12 09:08:19 +02:00
Mathias Krause
50c73890d3
x86/extable: ensure entries are swapped completely when sorting
...
The x86 exception table sorting was changed in commit 29934b0fb8
("x86/extable: use generic search and sort routines") to use the arch
independent code in lib/extable.c. However, the patch was mangled
somehow on its way into the kernel from the last version posted at [1].
The committed version kind of attempted to incorporate the changes of
commit 548acf1923
("x86/mm: Expand the exception table logic to allow
new handling options") as in _completely_ _ignoring_ the x86 specific
'handler' member of struct exception_table_entry. This effectively
broke the sorting as entries will only partly be swapped now.
Fortunately, the x86 Kconfig selects BUILDTIME_EXTABLE_SORT, so the
exception table doesn't need to be sorted at runtime. However, in case
that ever changes, we better not break the exception table sorting just
because of that.
[ Ard Biesheuvel points out that BUILDTIME_EXTABLE_SORT applies to the
core image only, but we still rely on the sorting routines for modules
in that case - Linus ]
Fix this by providing a swap_ex_entry_fixup() macro that takes care of
the 'handler' member.
[1] https://lkml.org/lkml/2016/1/27/232
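A sketch of the macro (the in-tree version may differ in minor details);
since 'fixup' and 'handler' are relative offsets, each has to be re-biased
by the distance the entries move, just as the generic code already does
for 'insn':

  #define swap_ex_entry_fixup(a, b, tmp, delta)            \
      do {                                                 \
          (a)->fixup   = (b)->fixup   + (delta);           \
          (b)->fixup   = (tmp).fixup  - (delta);           \
          (a)->handler = (b)->handler + (delta);           \
          (b)->handler = (tmp).handler - (delta);          \
      } while (0)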
Signed-off-by: Mathias Krause <minipli@googlemail.com >
Fixes: 29934b0fb8
("x86/extable: use generic search and sort routines")
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org >
Cc: Andrew Morton <akpm@linux-foundation.org >
Cc: Andy Lutomirski <luto@kernel.org >
Cc: Borislav Petkov <bp@suse.de >
Cc: H. Peter Anvin <hpa@linux.intel.com >
Cc: Ingo Molnar <mingo@kernel.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Tony Luck <tony.luck@intel.com >
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org >
2016-05-11 11:17:47 -07:00
Mathias Krause
67d7a982ba
x86/extable: Ensure entries are swapped completely when sorting
...
The x86 exception table sorting was changed in this recent commit:
29934b0fb8
("x86/extable: use generic search and sort routines")
... to use the arch independent code in lib/extable.c. However, the
patch was mangled somehow on its way into the kernel from the last
version posted at:
https://lkml.org/lkml/2016/1/27/232
The committed version kind of attempted to incorporate the changes of
a contemporary commit done in the x86 tree:
548acf1923
("x86/mm: Expand the exception table logic to allow new handling options")
... as in _completely_ _ignoring_ the x86 specific 'handler' member of
struct exception_table_entry. This effectively broke the sorting as
entries will only be partly swapped now.
Fortunately, the x86 Kconfig selects BUILDTIME_EXTABLE_SORT, so the
exception table doesn't need to be sorted at runtime. However, in case
that ever changes, we better not break the exception table sorting just
because of that.
Fix this by providing a swap_ex_entry_fixup() macro that takes care of
the 'handler' member.
Signed-off-by: Mathias Krause <minipli@googlemail.com >
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org >
Cc: Andrew Morton <akpm@linux-foundation.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Andy Lutomirski <luto@kernel.org >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Borislav Petkov <bp@suse.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Tony Luck <tony.luck@intel.com >
Link: http://lkml.kernel.org/r/1462914422-2911-1-git-send-email-minipli@googlemail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-11 11:14:06 +02:00
Borislav Petkov
3dbe345885
x86/kvm: Do not use BIT() in user-exported header
...
Apparently, we're not exporting BIT() to userspace.
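In practice this means any define in a uapi header has to open-code the
shift; a hypothetical example (the macro name here is made up):

  /* Not usable in include/uapi/: #define KVM_EXAMPLE_FLAG  BIT(3) */
  #define KVM_EXAMPLE_FLAG    (1 << 3)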
Reported-by: Brooks Moses <bmoses@google.com >
Signed-off-by: Borislav Petkov <bp@suse.de >
Cc: Paolo Bonzini <pbonzini@redhat.com >
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com >
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com >
2016-05-09 16:38:54 +02:00
Kees Cook
3a94707d7a
x86/KASLR: Build identity mappings on demand
...
Currently KASLR only supports relocation in a small physical range (from
16M to 1G), due to using the initial kernel page table identity mapping.
To support ranges above this, we need to have an identity mapping for the
desired memory range before we can decompress (and later run) the kernel.
32-bit kernels already have the needed identity mapping. This patch adds
identity mappings for the needed memory ranges on 64-bit kernels. This
happens in two possible boot paths:
If loaded via startup_32(), we need to set up the needed identity map.
If loaded from a 64-bit bootloader, the bootloader will have already
set up an identity mapping, and we'll start via the compressed kernel's
startup_64(). In this case, the bootloader's page tables need to be
avoided while selecting the new uncompressed kernel location. If not,
the decompressor could overwrite them during decompression.
To accomplish this, we could walk the pagetable and find every page
that is used, and add them to mem_avoid, but this needs extra code and
will require increasing the size of the mem_avoid array.
Instead, we can create a new set of page tables for our own identity
mapping. The pages for the new page table will come from the
_pagetable section of the compressed kernel, which means they are
already contained in the mem_avoid array. To do this, we reuse the code
from the uncompressed kernel's identity mapping routines.
The _pgtable will be shared by both the 32-bit and 64-bit paths to reduce
init_size, as now the compressed kernel's _rodata to _end will contribute
to init_size.
To handle the possible mappings, we need to increase the existing page
table buffer size:
When booting via startup_64(), we need to cover the old VO, params,
cmdline and uncompressed kernel. In an extreme case we could have them
all beyond the 512G boundary, which needs (2+2)*4 pages with 2M mappings.
We also need 2 pages for the first 2M (VGA RAM), and one more page for the level4 table.
This gets us to 19 pages total.
When booting via startup_32(), KASLR could move the uncompressed kernel
above 4G, so we need to create extra identity mappings, which should only
need (2+2) pages at most when it is beyond the 512G boundary. So 19
pages is sufficient for this case as well.
The resulting BOOT_*PGT_SIZE defines use the "_SIZE" suffix on their
names to maintain logical consistency with the existing BOOT_HEAP_SIZE
and BOOT_STACK_SIZE defines.
This patch is based on earlier patches from Yinghai Lu and Baoquan He.
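A sketch of the resulting asm/boot.h defines under the assumptions above
(values follow the 19-page argument; the in-tree version differs in
details such as the non-KASLR fallback):

  #ifdef CONFIG_X86_64
  # define BOOT_INIT_PGT_SIZE    (6*4096)
  # ifdef CONFIG_RANDOMIZE_BASE
     /*
      * (2+2)*4 pages for VO/params/cmdline/kernel beyond 512G,
      * 2 pages for the first 2M (VGA RAM), 1 page for level4: 19 total.
      */
  #  define BOOT_PGT_SIZE        (19*4096)
  # else
  #  define BOOT_PGT_SIZE        BOOT_INIT_PGT_SIZE
  # endif
  #endif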
Signed-off-by: Kees Cook <keescook@chromium.org >
Cc: Andrew Morton <akpm@linux-foundation.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Andy Lutomirski <luto@kernel.org >
Cc: Baoquan He <bhe@redhat.com >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Borislav Petkov <bp@suse.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Dave Young <dyoung@redhat.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Jiri Kosina <jkosina@suse.cz >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Vivek Goyal <vgoyal@redhat.com >
Cc: Yinghai Lu <yinghai@kernel.org >
Cc: kernel-hardening@lists.openwall.com
Cc: lasse.collin@tukaani.org
Link: http://lkml.kernel.org/r/1462572095-11754-4-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-07 07:38:39 +02:00
Yinghai Lu
cf4fb15b31
x86/boot: Split out kernel_ident_mapping_init()
...
In order to support on-demand page table creation when moving the
kernel for KASLR, we need to use kernel_ident_mapping_init() in the
decompression code.
This splits it out into its own file for use outside of init_64.c.
Additionally, checking for __pa/__va defines is added since they
need to be overridden in the decompression code.
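The check amounts to guards of roughly this shape (a sketch; in the
decompressor, physical and virtual addresses are identical, so trivial
definitions are enough):

  /* Only define these if the including code hasn't already overridden them: */
  #ifndef __pa
  #define __pa(x)    ((unsigned long)(x))
  #endif
  #ifndef __va
  #define __va(x)    ((void *)((unsigned long)(x)))
  #endif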
[kees: rewrote changelog]
Signed-off-by: Yinghai Lu <yinghai@kernel.org >
Signed-off-by: Kees Cook <keescook@chromium.org >
Cc: Andrew Morton <akpm@linux-foundation.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Andy Lutomirski <luto@kernel.org >
Cc: Baoquan He <bhe@redhat.com >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Borislav Petkov <bp@suse.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Dave Young <dyoung@redhat.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Vivek Goyal <vgoyal@redhat.com >
Cc: kernel-hardening@lists.openwall.com
Cc: lasse.collin@tukaani.org
Link: http://lkml.kernel.org/r/1462572095-11754-3-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-07 07:38:39 +02:00
Kees Cook
8665e6ff21
x86/boot: Clean up indenting for asm/boot.h
...
Before adding more defines to asm/boot.h, this cleans up the existing
indenting for readability.
Suggested-by: Ingo Molnar <mingo@kernel.org >
Signed-off-by: Kees Cook <keescook@chromium.org >
Cc: Andrew Morton <akpm@linux-foundation.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Andy Lutomirski <luto@kernel.org >
Cc: Baoquan He <bhe@redhat.com >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Borislav Petkov <bp@suse.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Dave Young <dyoung@redhat.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Vivek Goyal <vgoyal@redhat.com >
Cc: Yinghai Lu <yinghai@kernel.org >
Cc: kernel-hardening@lists.openwall.com
Cc: lasse.collin@tukaani.org
Link: http://lkml.kernel.org/r/1462572095-11754-2-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-07 07:38:39 +02:00
Ingo Molnar
35dc9ec107
Merge branch 'linus' into efi/core, to pick up fixes
...
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-07 07:00:07 +02:00
Julia Lawall
775d054aba
intel_telemetry: Constify telemetry_core_ops structures
...
The telemetry_core_ops structures are never modified, so declare them as
const.
Done with the help of Coccinelle.
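The change is the standard constification pattern; a sketch (the field
and function names are illustrative):

  static const struct telemetry_core_ops telm_pltops = {
      .get_trace_verbosity    = telemetry_plt_get_trace_verbosity,
      .set_trace_verbosity    = telemetry_plt_set_trace_verbosity,
      /* ... */
  };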
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr >
Signed-off-by: Darren Hart <dvhart@linux.intel.com >
2016-05-05 13:58:55 -07:00
Alexander Shishkin
f127fa098d
perf/x86/intel/pt: Add IP filtering register/CPUID bits
...
New versions of Intel PT support address range-based filtering. Add
the new registers, bit definitions and relevant CPUID bits.
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org >
Cc: Arnaldo Carvalho de Melo <acme@infradead.org >
Cc: Arnaldo Carvalho de Melo <acme@redhat.com >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Jiri Olsa <jolsa@redhat.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Mathieu Poirier <mathieu.poirier@linaro.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Stephane Eranian <eranian@google.com >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Vince Weaver <vincent.weaver@maine.edu >
Cc: vince@deater.net
Link: http://lkml.kernel.org/r/1461771888-10409-4-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-05 10:13:56 +02:00
Alexander Shishkin
0dd28e2cda
perf/x86/intel/pt: Move PT specific MSR bit definitions to a private header
...
Nothing outside of the Intel PT driver should ever care about its MSR
bits, so there is no reason to keep them in msr-index.h. This patch
moves them to a pt-local header.
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org >
Cc: Arnaldo Carvalho de Melo <acme@infradead.org >
Cc: Arnaldo Carvalho de Melo <acme@redhat.com >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Jiri Olsa <jolsa@redhat.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Mathieu Poirier <mathieu.poirier@linaro.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Stephane Eranian <eranian@google.com >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Vince Weaver <vincent.weaver@maine.edu >
Cc: vince@deater.net
Link: http://lkml.kernel.org/r/1461771888-10409-3-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-05 10:13:55 +02:00
Ingo Molnar
1a618c2cfe
Merge branch 'perf/urgent' into perf/core, to pick up fixes
...
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-05 10:12:37 +02:00
Ingo Molnar
64b7aad579
Merge branch 'sched/urgent' into sched/core, to pick up fixes before applying new changes
...
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-05 09:01:49 +02:00
Sudeep Holla
3282e6b8f8
x86/topology: Remove redundant ENABLE_TOPO_DEFINES
...
Commit c8e56d20f2
("x86: Kill CONFIG_X86_HT") removed CONFIG_X86_HT
and defined ENABLE_TOPO_DEFINES whenever CONFIG_SMP is set, which makes
ENABLE_TOPO_DEFINES redundant.
This patch removes the redundant ENABLE_TOPO_DEFINES and uses
CONFIG_SMP directly instead.
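The net effect in asm/topology.h is roughly this (sketch):

  -#ifdef ENABLE_TOPO_DEFINES
  +#ifdef CONFIG_SMP
   #define topology_core_cpumask(cpu)       (per_cpu(cpu_core_map, cpu))
   #define topology_sibling_cpumask(cpu)    (per_cpu(cpu_sibling_map, cpu))
   #endif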
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com >
Acked-by: Borislav Petkov <bp@suse.de >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/1462380659-5968-1-git-send-email-sudeep.holla@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-05 08:42:48 +02:00
Ingo Molnar
f3391a160b
Merge tag 'v4.6-rc6' into x86/cpu, to refresh the tree
...
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-05 08:41:36 +02:00
Brian Gerst
0676b4e0a1
x86/entry/32: Remove asmlinkage_protect()
...
Now that syscalls are called from C code, which copies the args to
new stack slots instead of overlaying pt_regs, asmlinkage_protect()
is no longer needed.
Signed-off-by: Brian Gerst <brgerst@gmail.com >
Acked-by: Andy Lutomirski <luto@kernel.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Borislav Petkov <bp@suse.de >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/1462416278-11974-4-git-send-email-brgerst@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-05 08:37:31 +02:00
Brian Gerst
092c74e420
x86/entry, sched/x86: Don't save/restore EFLAGS on task switch
...
Now that NT is filtered by the SYSENTER entry code, it is safe to skip saving and
restoring flags on task switch. Also remove a leftover reset of flags on 64-bit
fork.
Signed-off-by: Brian Gerst <brgerst@gmail.com >
Acked-by: Andy Lutomirski <luto@kernel.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Borislav Petkov <bp@suse.de >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/1462416278-11974-2-git-send-email-brgerst@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-05 08:37:30 +02:00
Ingo Molnar
1fb48f8e54
Merge tag 'v4.6-rc6' into x86/asm, to refresh the tree
...
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-05 08:35:00 +02:00
Dimitri Sivanich
40bfb8eedf
x86/platform/UV: Remove Obsolete GRU MMR address translation
...
Use no-op messages in place of cross-partition interrupts when nacking a
put message in the GRU. This allows us to remove MMRs as a destination
from the GRU driver.
Tested-by: John Estabrook <estabrook@sgi.com >
Tested-by: Gary Kroening <gfk@sgi.com >
Tested-by: Nathan Zimmer <nzimmer@sgi.com >
Signed-off-by: Dimitri Sivanich <sivanich@sgi.com >
Signed-off-by: Mike Travis <travis@sgi.com >
Cc: Andrew Banman <abanman@sgi.com >
Cc: Andrew Morton <akpm@linux-foundation.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Len Brown <len.brown@intel.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Russ Anderson <rja@sgi.com >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/20160429215406.012228480@asylum.americas.sgi.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-04 08:48:51 +02:00
Mike Travis
c85375cd19
x86/platform/UV: Update physical address conversions for UV4
...
This patch builds support for the new conversions of physical addresses
to and from sockets, pnodes and nodes in UV4. It is designed to be as
efficient as possible as lookups are done inside an interrupt context
in some cases. It will be further optimized when physical hardware is
available to measure execution time.
Tested-by: Dimitri Sivanich <sivanich@sgi.com >
Tested-by: John Estabrook <estabrook@sgi.com >
Tested-by: Gary Kroening <gfk@sgi.com >
Tested-by: Nathan Zimmer <nzimmer@sgi.com >
Signed-off-by: Mike Travis <travis@sgi.com >
Cc: Andrew Banman <abanman@sgi.com >
Cc: Andrew Morton <akpm@linux-foundation.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Len Brown <len.brown@intel.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Russ Anderson <rja@sgi.com >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/20160429215405.841051741@asylum.americas.sgi.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-04 08:48:50 +02:00
Mike Travis
6e27b91cf4
x86/platform/UV: Build GAM reference tables
...
An aspect of the UV4 system architecture changes involve changing the
way sockets, nodes, and pnodes are translated between one another.
Decode the information from the BIOS provided EFI system table to build
the needed conversion tables.
Tested-by: Dimitri Sivanich <sivanich@sgi.com >
Tested-by: John Estabrook <estabrook@sgi.com >
Tested-by: Gary Kroening <gfk@sgi.com >
Tested-by: Nathan Zimmer <nzimmer@sgi.com >
Signed-off-by: Mike Travis <travis@sgi.com >
Reviewed-by: Dimitri Sivanich <sivanich@sgi.com >
Cc: Andrew Banman <abanman@sgi.com >
Cc: Andrew Morton <akpm@linux-foundation.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Len Brown <len.brown@intel.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Russ Anderson <rja@sgi.com >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/20160429215405.673495324@asylum.americas.sgi.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-04 08:48:50 +02:00
Mike Travis
1de329c10d
x86/platform/UV: Support UV4 socket address changes
...
With the UV4 system architecture addressing changes, BIOS now provides
this information via an EFI system table. This is the initial decoding
of that system table. It also collects the sizing information for
later allocation of dynamic conversion tables.
Tested-by: Dimitri Sivanich <sivanich@sgi.com >
Tested-by: John Estabrook <estabrook@sgi.com >
Tested-by: Gary Kroening <gfk@sgi.com >
Tested-by: Nathan Zimmer <nzimmer@sgi.com >
Signed-off-by: Mike Travis <travis@sgi.com >
Reviewed-by: Dimitri Sivanich <sivanich@sgi.com >
Cc: Andrew Banman <abanman@sgi.com >
Cc: Andrew Morton <akpm@linux-foundation.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Len Brown <len.brown@intel.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Russ Anderson <rja@sgi.com >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/20160429215405.503022681@asylum.americas.sgi.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-04 08:48:50 +02:00
Mike Travis
ef93bf8039
x86/platform/UV: Add obtaining GAM Range Table from UV BIOS
...
UV4 uses a GAM (globally addressed memory) architecture that supports
variable sized memory per node. This replaces the old "M" value (number
of address bits per node) with a range table for conversions between
addresses and physical node (pnode) IDs. This table is obtained from UV
BIOS via the EFI UVsystab table. Support for older EFI UVsystab tables
is maintained.
Tested-by: Dimitri Sivanich <sivanich@sgi.com >
Tested-by: John Estabrook <estabrook@sgi.com >
Tested-by: Gary Kroening <gfk@sgi.com >
Tested-by: Nathan Zimmer <nzimmer@sgi.com >
Signed-off-by: Mike Travis <travis@sgi.com >
Cc: Andrew Banman <abanman@sgi.com >
Cc: Andrew Morton <akpm@linux-foundation.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Len Brown <len.brown@intel.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Russ Anderson <rja@sgi.com >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/20160429215405.329827545@asylum.americas.sgi.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-04 08:48:50 +02:00
Mike Travis
906f3b20da
x86/platform/UV: Fold blade info into per node hub info structs
...
Migrate references from the blade info structs to the per node hub info
structs. This phases out the allocation of the list of per blade info
structs on node 0, in favor of a per node hub info struct allocated on
the node's local memory.
There are also some minor cosmetic changes in the comments and whitespace
to clean things up a bit.
Tested-by: Dimitri Sivanich <sivanich@sgi.com >
Tested-by: John Estabrook <estabrook@sgi.com >
Tested-by: Gary Kroening <gfk@sgi.com >
Tested-by: Nathan Zimmer <nzimmer@sgi.com >
Signed-off-by: Mike Travis <travis@sgi.com >
Reviewed-by: Andrew Banman <abanman@sgi.com >
Cc: Andrew Morton <akpm@linux-foundation.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Len Brown <len.brown@intel.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Russ Anderson <rja@sgi.com >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/20160429215404.987204515@asylum.americas.sgi.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-04 08:48:49 +02:00
Mike Travis
3edcf2ff7a
x86/platform/UV: Allocate common per node hub info structs on local node
...
Allocate and setup per node hub info structs. CPU 0/Node 0 hub info
is statically allocated to be accessible early in system startup. The
remaining hub info structs are allocated on the node's local memory,
and shared among the CPU's on that node. This leaves the small amount
of info unique to each CPU in the per CPU info struct.
Memory is saved by combining the common per node info fields into common
node-local structs. In addition, since the info is read-only after
setup, it should stay in the L3 cache of the local processor socket.
This should therefore improve the cache hit rate when a group of CPUs
on a node are all interrupted for a common task.
Tested-by: John Estabrook <estabrook@sgi.com >
Tested-by: Gary Kroening <gfk@sgi.com >
Tested-by: Nathan Zimmer <nzimmer@sgi.com >
Signed-off-by: Mike Travis <travis@sgi.com >
Reviewed-by: Dimitri Sivanich <sivanich@sgi.com >
Reviewed-by: Andrew Banman <abanman@sgi.com >
Cc: Andrew Morton <akpm@linux-foundation.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Len Brown <len.brown@intel.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Russ Anderson <rja@sgi.com >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/20160429215404.813051625@asylum.americas.sgi.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-04 08:48:49 +02:00
Mike Travis
5627a8251f
x86/platform/UV: Move blade local processor ID to the per cpu info struct
...
Move references to blade local processor ID to the new per cpu info
structs. Create an access function that makes this move, and other
potential moves, opaque to callers of this function. Define a flag
that indicates to callers in external GPL modules that this function
replaces any local definition. This allows calling source code to be
built for both pre-UV4 kernels as well as post-UV4 kernels.
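A sketch of the accessor and the advertisement flag described above
(names approximate):

  /* The blade-local processor id now comes from the per-CPU info struct: */
  static inline int uv_cpu_blade_processor_id(int cpu)
  {
      return uv_cpu_info_per(cpu)->blade_cpu_id;
  }

  /* Tells external GPL modules that this accessor replaces any local copy: */
  #define _uv_cpu_blade_processor_id 1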
Tested-by: John Estabrook <estabrook@sgi.com >
Tested-by: Gary Kroening <gfk@sgi.com >
Tested-by: Nathan Zimmer <nzimmer@sgi.com >
Signed-off-by: Mike Travis <travis@sgi.com >
Reviewed-by: Dimitri Sivanich <sivanich@sgi.com >
Cc: Andrew Banman <abanman@sgi.com >
Cc: Andrew Morton <akpm@linux-foundation.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Len Brown <len.brown@intel.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Russ Anderson <rja@sgi.com >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/20160429215404.644173122@asylum.americas.sgi.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-04 08:48:49 +02:00
Mike Travis
d38bb135d8
x86/platform/UV: Move scir info to the per cpu info struct
...
Change the references to the SCIR fields to the new per cpu info structs.
Tested-by: John Estabrook <estabrook@sgi.com >
Tested-by: Gary Kroening <gfk@sgi.com >
Tested-by: Nathan Zimmer <nzimmer@sgi.com >
Signed-off-by: Mike Travis <travis@sgi.com >
Reviewed-by: Dimitri Sivanich <sivanich@sgi.com >
Cc: Andrew Banman <abanman@sgi.com >
Cc: Andrew Morton <akpm@linux-foundation.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Len Brown <len.brown@intel.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Russ Anderson <rja@sgi.com >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/20160429215404.452538234@asylum.americas.sgi.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-04 08:48:49 +02:00
Mike Travis
0045ddd23f
x86/platform/UV: Create per cpu info structs to replace per hub info structs
...
The major portion of the hub info is common to all cpus on that hub.
This is step one of moving the per cpu hub info to a per node hub info
struct. This patch creates the small per cpu info struct that will
contain only information specific to each CPU.
Tested-by: John Estabrook <estabrook@sgi.com >
Tested-by: Gary Kroening <gfk@sgi.com >
Tested-by: Nathan Zimmer <nzimmer@sgi.com >
Signed-off-by: Mike Travis <travis@sgi.com >
Reviewed-by: Dimitri Sivanich <sivanich@sgi.com >
Cc: Andrew Banman <abanman@sgi.com >
Cc: Andrew Morton <akpm@linux-foundation.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Len Brown <len.brown@intel.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Russ Anderson <rja@sgi.com >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/20160429215404.282265563@asylum.americas.sgi.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-04 08:48:48 +02:00
Mike Travis
b608f87fe8
x86/platform/UV: Clean up redundancies after merge of UV4 MMR definitions
...
Clean up any redundancies caused by new UV4 MMR definitions superseding
any previous definitions local to functions.
Tested-by: John Estabrook <estabrook@sgi.com >
Tested-by: Gary Kroening <gfk@sgi.com >
Tested-by: Nathan Zimmer <nzimmer@sgi.com >
Signed-off-by: Mike Travis <travis@sgi.com >
Reviewed-by: Dimitri Sivanich <sivanich@sgi.com >
Reviewed-by: Andrew Banman <abanman@sgi.com >
Cc: Andrew Morton <akpm@linux-foundation.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Len Brown <len.brown@intel.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Russ Anderson <rja@sgi.com >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/20160429215403.934728974@asylum.americas.sgi.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-04 08:48:48 +02:00
Mike Travis
0f0d84c08d
x86/platform/UV: Add UV4 Specific MMR definitions
...
This adds the MMR definitions for UV4 via an automated script that uses
the output from a hardware Verilog code-to-symbol converter. The large
number of insertions is caused by the UV4 design changing many
identically named fields across MMRs, which prompted the extra
production of architecture-dependent field defines.
Tested-by: John Estabrook <estabrook@sgi.com >
Tested-by: Gary Kroening <gfk@sgi.com >
Tested-by: Nathan Zimmer <nzimmer@sgi.com >
Signed-off-by: Mike Travis <travis@sgi.com >
Cc: Andrew Banman <abanman@sgi.com >
Cc: Andrew Morton <akpm@linux-foundation.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: Dimitri Sivanich <sivanich@sgi.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Len Brown <len.brown@intel.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Russ Anderson <rja@sgi.com >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/20160429215403.580158916@asylum.americas.sgi.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-04 08:48:48 +02:00
Mike Travis
c443c03dd0
x86/platform/UV: Prep for UV4 MMR updates
...
Cleanup patch to rearrange code and modify some defines so that the next
patch, the new UV4 MMR definitions, can be merged cleanly.
* Clean up the M/N related address constants (M is # of address bits per
blade, N is the # of blade selection bits per SSI/partition).
* Fix the lookup of the alias overlay addresses and NMI definitions to
allow for flexibility in newer UV architecture types.
Tested-by: John Estabrook <estabrook@sgi.com >
Tested-by: Gary Kroening <gfk@sgi.com >
Tested-by: Nathan Zimmer <nzimmer@sgi.com >
Signed-off-by: Mike Travis <travis@sgi.com >
Reviewed-by: Dimitri Sivanich <sivanich@sgi.com >
Cc: Andrew Banman <abanman@sgi.com >
Cc: Andrew Morton <akpm@linux-foundation.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Len Brown <len.brown@intel.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Russ Anderson <rja@sgi.com >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/20160429215403.401604203@asylum.americas.sgi.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-04 08:48:47 +02:00
Mike Travis
e0ee1c97c3
x86/platform/UV: Add UV Architecture Defines
...
Add defines to control which UV architectures are supported, and modify the
'if (is_uvX_*)' functions to return constant 0 for those not supported.
This will help optimize code paths when support for specific UV arches
is removed.
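The pattern looks roughly like this, shown for UV2 (a sketch; the real
checks compare hub_revision against the per-arch revision bases):

  #if defined(UV2_HUB_IS_SUPPORTED)
  static inline int is_uv2_hub(void)
  {
      return uv_hub_info->hub_revision >= UV2_HUB_REVISION_BASE &&
             uv_hub_info->hub_revision <  UV3_HUB_REVISION_BASE;
  }
  #else
  static inline int is_uv2_hub(void)
  {
      return 0;    /* constant 0 lets the compiler drop UV2-only paths */
  }
  #endif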
Tested-by: John Estabrook <estabrook@sgi.com >
Tested-by: Gary Kroening <gfk@sgi.com >
Tested-by: Nathan Zimmer <nzimmer@sgi.com >
Signed-off-by: Mike Travis <travis@sgi.com >
Reviewed-by: Dimitri Sivanich <sivanich@sgi.com >
Cc: Andrew Banman <abanman@sgi.com >
Cc: Andrew Morton <akpm@linux-foundation.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Len Brown <len.brown@intel.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Russ Anderson <rja@sgi.com >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/20160429215402.897143440@asylum.americas.sgi.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-04 08:48:47 +02:00
Mike Travis
eb1e3461b8
x86/platform/UV: Add Initial UV4 definitions
...
Add preliminary UV4 defines.
Tested-by: John Estabrook <estabrook@sgi.com >
Tested-by: Gary Kroening <gfk@sgi.com >
Tested-by: Nathan Zimmer <nzimmer@sgi.com >
Signed-off-by: Mike Travis <travis@sgi.com >
Cc: Andrew Banman <abanman@sgi.com >
Cc: Andrew Morton <akpm@linux-foundation.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: Dimitri Sivanich <sivanich@sgi.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Len Brown <len.brown@intel.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Russ Anderson <rja@sgi.com >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/20160429215402.703593187@asylum.americas.sgi.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-04 08:48:46 +02:00
Ingo Molnar
d63c214b0a
Merge tag 'v4.6-rc6' into x86/platform, to refresh the tree
...
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-04 08:48:26 +02:00
Yazen Ghannam
a9750a31ef
x86/mce: Define vendor-specific MSR accessors
...
Scalable MCA processors have a whole new range of MSR addresses to
obtain bank related info such as CTL, MISC, ADDR, STATUS. Therefore, we
need a way to abstract the MSR addresses per vendor.
Carved out from a patch by Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com >.
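The abstraction is a small ops structure selected at boot, along these
lines (a sketch):

  struct mca_msr_regs {
      u32 (*ctl)(int bank);       /* MCx_CTL or the SMCA equivalent */
      u32 (*status)(int bank);
      u32 (*addr)(int bank);
      u32 (*misc)(int bank);
  };

  /* Defaults to the legacy MCx_* layout; SMCA CPUs switch it at init time. */
  extern struct mca_msr_regs msr_ops;

  /* Callers then do, e.g.: rdmsrl(msr_ops.status(bank), status); */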
Signed-off-by: Yazen Ghannam <Yazen.Ghannam@amd.com >
Signed-off-by: Borislav Petkov <bp@suse.de >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Aravind Gopalakrishnan <aravindksg.lkml@gmail.com >
Cc: Ashok Raj <ashok.raj@intel.com >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Tony Luck <tony.luck@intel.com >
Cc: linux-edac <linux-edac@vger.kernel.org >
Link: http://lkml.kernel.org/r/1462019637-16474-5-git-send-email-bp@alien8.de
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-05-03 08:24:16 +02:00
Andy Lutomirski
296f781a4b
x86/asm/64: Rename thread_struct's fs and gs to fsbase and gsbase
...
Unlike ds and es, these are base addresses, not selectors. Rename
them so their meaning is more obvious.
On x86_32, the field is still called fs. Fixing that could make sense
as a future cleanup.
Signed-off-by: Andy Lutomirski <luto@kernel.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/69a18a51c4cba0ce29a241e570fc618ad721d908.1461698311.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-04-29 11:56:42 +02:00
Andy Lutomirski
731e33e39a
x86/arch_prctl/64: Remove FSBASE/GSBASE < 4G optimization
...
As far as I know, the optimization doesn't work on any modern distro
because modern distros use high addresses for ASLR. Remove it.
The ptrace code was either wrong or very strange, but the behavior
with this patch should be essentially identical to the behavior
without this patch unless user code goes out of its way to mislead
ptrace.
On newer CPUs, once the FSGSBASE instructions are enabled, we won't
want to use the optimized variant anyway.
This isn't actually much of a performance regression, it has no effect
on normal dynamically linked programs, and it's a considerable
simplification. It also removes some nasty special cases from code
that is already way too full of special cases for comfort.
Signed-off-by: Andy Lutomirski <luto@kernel.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/dd1599b08866961dba9d2458faa6bbd7fba471d7.1461698311.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-04-29 11:56:41 +02:00
Andy Lutomirski
45e876f794
x86/segments/64: When loadsegment(fs, ...) fails, clear the base
...
On AMD CPUs, a failed loadsegment currently may not clear the FS
base. Fix it.
While we're at it, prevent loadsegment(gs, xyz) from even compiling
on 64-bit kernels. It shouldn't be used.
Signed-off-by: Andy Lutomirski <luto@kernel.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/a084c1b93b7b1408b58d3fd0b5d6e47da8e7d7cf.1461698311.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-04-29 11:56:41 +02:00
Andy Lutomirski
f005f5d860
x86/asm: Make asm/alternative.h safe from assembly
...
asm/alternative.h isn't directly useful from assembly, but it
shouldn't break the build.
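The usual fix is to fence off the C-only parts (sketch):

  #ifndef _ASM_X86_ALTERNATIVE_H
  #define _ASM_X86_ALTERNATIVE_H

  #ifndef __ASSEMBLY__
  /* struct alt_instr, the alternative*() macros and friends live here. */
  #endif /* __ASSEMBLY__ */

  #endif /* _ASM_X86_ALTERNATIVE_H */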
Signed-off-by: Andy Lutomirski <luto@kernel.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/e5b693fcef99fe6e80341c9e97a002fb23871e91.1461698311.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-04-29 11:56:41 +02:00
Andy Lutomirski
35de5b0692
x86/asm: Stop depending on ptrace.h in alternative.h
...
alternative.h pulls in ptrace.h, which means that alternatives can't
be used in anything referenced from ptrace.h, which is a mess.
Break the dependency by pulling text patching helpers into their own
header.
Signed-off-by: Andy Lutomirski <luto@kernel.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Brian Gerst <brgerst@gmail.com >
Cc: Denys Vlasenko <dvlasenk@redhat.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/99b93b13f2c9eb671f5c98bba4c2cbdc061293a2.1461698311.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-04-29 11:56:40 +02:00
Linus Torvalds
814dd9481d
Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
...
Pull perf fixes from Ingo Molnar:
"x86 PMU driver fixes plus a core code race fix"
* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf/x86/intel: Fix incorrect lbr_sel_mask value
perf/x86/intel/pt: Don't die on VMXON
perf/core: Fix perf_event_open() vs. execve() race
perf/x86/amd: Set the size of event map array to PERF_COUNT_HW_MAX
perf/core: Make sysctl_perf_cpu_time_max_percent conform to documentation
perf/x86/intel/rapl: Add missing Haswell model
perf/x86/intel: Add model number for Skylake Server to perf
2016-04-28 20:19:04 -07:00
Andy Lutomirski
078194f8e9
x86/mm, sched/core: Turn off IRQs in switch_mm()
...
Potential races between switch_mm() and TLB-flush or LDT-flush IPIs
could be very messy. AFAICT the code is currently okay, whether by
accident or by careful design, but enabling PCID will make it
considerably more complicated and will no longer be obviously safe.
Fix it with a big hammer: run switch_mm() with IRQs off.
To avoid a performance hit in the scheduler, we take advantage of
our knowledge that the scheduler already has IRQs disabled when it
calls switch_mm().
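The shape of the change (sketch): the IRQs-on wrapper masks interrupts,
while the scheduler calls an *_irqs_off() variant directly because it
already runs with IRQs disabled:

  void switch_mm(struct mm_struct *prev, struct mm_struct *next,
                 struct task_struct *tsk)
  {
      unsigned long flags;

      local_irq_save(flags);
      switch_mm_irqs_off(prev, next, tsk);
      local_irq_restore(flags);
  }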
Signed-off-by: Andy Lutomirski <luto@kernel.org >
Reviewed-by: Borislav Petkov <bp@suse.de >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/f19baf759693c9dcae64bbff76189db77cb13398.1461688545.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-04-28 11:44:20 +02:00
Andy Lutomirski
69c0319aab
x86/mm, sched/core: Uninline switch_mm()
...
It's fairly large and it has quite a few callers. This may also
help untangle some headers down the road.
Signed-off-by: Andy Lutomirski <luto@kernel.org >
Reviewed-by: Borislav Petkov <bp@suse.de >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Link: http://lkml.kernel.org/r/54f3367803e7f80b2be62c8a21879aa74b1a5f57.1461688545.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-04-28 11:44:19 +02:00
Mark Rutland
9788375dc4
x86/efi: Enable runtime call flag checking
...
Define ARCH_EFI_IRQ_FLAGS_MASK for x86, which will enable the generic
runtime wrapper code to detect when firmware erroneously modifies flags
over a runtime services function call.
For x86 (both 32-bit and 64-bit), we only need to check the interrupt flag.
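On x86 the mask reduces to the interrupt flag, i.e. roughly:

  /* Catch firmware that returns with IF changed behind our back: */
  #define ARCH_EFI_IRQ_FLAGS_MASK    X86_EFLAGS_IF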
Signed-off-by: Mark Rutland <mark.rutland@arm.com >
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk >
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org >
Cc: Ben Hutchings <ben@decadent.org.uk >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: Christoph Hellwig <hch@lst.de >
Cc: Darren Hart <dvhart@infradead.org >
Cc: David Herrmann <dh.herrmann@gmail.com >
Cc: David Howells <dhowells@redhat.com >
Cc: Greg KH <gregkh@linuxfoundation.org >
Cc: Hannes Reinecke <hare@suse.de >
Cc: Harald Hoyer harald@redhat.com
Cc: James Bottomley <James.Bottomley@HansenPartnership.com >
Cc: Kweh Hock Leong <hock.leong.kweh@intel.com >
Cc: Leif Lindholm <leif.lindholm@linaro.org >
Cc: Peter Jones <pjones@redhat.com >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Raphael Hertzog <hertzog@debian.org >
Cc: Russell King <linux@arm.linux.org.uk >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Will Deacon <will.deacon@arm.com >
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1461614832-17633-40-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-04-28 11:34:13 +02:00
Mark Rutland
bc25f9dba1
x86/efi: Move to generic {__,}efi_call_virt()
...
Now that there's a common template for {__,}efi_call_virt(), remove the
duplicate logic from the x86 EFI code.
Signed-off-by: Mark Rutland <mark.rutland@arm.com >
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk >
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: Leif Lindholm <leif.lindholm@linaro.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Ricardo Neri <ricardo.neri-calderon@linux.intel.com >
Cc: Russell King <linux@arm.linux.org.uk >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Will Deacon <will.deacon@arm.com >
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1461614832-17633-35-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-04-28 11:34:09 +02:00
Ard Biesheuvel
21289ec02b
x86/efi/efifb: Move DMI based quirks handling out of generic code
...
The efifb quirks handling based on DMI identification of the platform is
specific to x86, so move it to x86 arch code.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org >
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk >
Acked-by: David Herrmann <dh.herrmann@gmail.com >
Acked-by: Peter Jones <pjones@redhat.com >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Will Deacon <will.deacon@arm.com >
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1461614832-17633-19-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-04-28 11:33:57 +02:00
Ard Biesheuvel
2c23b73c2d
x86/efi: Prepare GOP handling code for reuse as generic code
...
In preparation of moving this code to drivers/firmware/efi and reusing
it on ARM and arm64, apply any changes that will be required to make this
code build for other architectures. This should make it easier to track
down problems that this move may cause to its operation on x86.
Note that the generic version uses slightly different ways of casting the
protocol methods and some other variables to the correct types, since such
method calls are not loosely typed on ARM and arm64 as they are on x86.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org >
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk >
Cc: Borislav Petkov <bp@alien8.de >
Cc: David Herrmann <dh.herrmann@gmail.com >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Peter Jones <pjones@redhat.com >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Will Deacon <will.deacon@arm.com >
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1461614832-17633-17-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-04-28 11:33:56 +02:00
Ingo Molnar
0b20e59cef
Merge branch 'perf/urgent' into perf/core, to resolve conflict
...
Conflicts:
arch/x86/events/intel/pt.c
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-04-28 10:35:17 +02:00
Alexander Shishkin
1c5ac21a0e
perf/x86/intel/pt: Don't die on VMXON
...
Some versions of Intel PT do not support tracing across VMXON, more
specifically, VMXON will clear TraceEn control bit and any attempt to
set it before VMXOFF will throw a #GP, which in the current state of
things will crash the kernel. Namely:
$ perf record -e intel_pt// kvm -nographic
on such a machine will kill it.
To avoid this, notify the intel_pt driver before VMXON and after
VMXOFF so that it knows when not to enable itself.
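On the KVM side this ends up as a pair of notifications bracketing
VMXON/VMXOFF, roughly (simplified sketch):

  static void kvm_cpu_vmxon(u64 vmxon_pointer)
  {
      intel_pt_handle_vmx(1);    /* PT must not enable tracing from here on */
      asm volatile ("vmxon %0" : : "m" (vmxon_pointer));
  }

  static void kvm_cpu_vmxoff(void)
  {
      asm volatile ("vmxoff");
      intel_pt_handle_vmx(0);    /* tracing may be enabled again */
  }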
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org >
Cc: Arnaldo Carvalho de Melo <acme@infradead.org >
Cc: Arnaldo Carvalho de Melo <acme@redhat.com >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Gleb Natapov <gleb@kernel.org >
Cc: Jiri Olsa <jolsa@redhat.com >
Cc: Paolo Bonzini <pbonzini@redhat.com >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Stephane Eranian <eranian@google.com >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Vince Weaver <vincent.weaver@maine.edu >
Cc: hpa@zytor.com
Link: http://lkml.kernel.org/r/87oa9dwrfk.fsf@ashishki-desk.ger.corp.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-04-28 10:32:42 +02:00
Srinivas Pandruvada
dcee75b3b7
perf/x86/intel/rapl: Support Skylake RAPL domains
...
Add Skylake client support for RAPL domains. In addition to RAPL domains
in Broadwell clients, it has support for the platform domain (aka PSys). The
PSys domain controls the entire SoC instead of just a CPU package. Unlike
the package domain, PSys support requires more than just a processor-level
implementation. The other parts in the system need additional HW-level
signaling, which OEMs need to support. When not supported, the energy
counter register in the PSys domain returns 0.
Also corrected an error in the comment for the GPU counter, which was
previously labeled as the DRAM counter.
Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com >
[ Converted to model_match stuff. ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Arnaldo Carvalho de Melo <acme@redhat.com >
Cc: Jiri Olsa <jolsa@redhat.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Stephane Eranian <eranian@google.com >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Vince Weaver <vincent.weaver@maine.edu >
Cc: bp@alien8.de
Cc: hpa@zytor.com
Cc: jacob.jun.pan@linux.intel.com
Cc: rjw@rjwysocki.net
Link: http://lkml.kernel.org/r/1460930581-29748-2-git-send-email-srinivas.pandruvada@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2016-04-23 14:13:36 +02:00