Kan Liang
ec71a398c1
perf/x86/intel/ds: Handle PEBS overflow for fixed counters
...
The pebs_drain() code needs to support fixed counters. The DS Save Area now
includes "counter reset value" fields for each fixed counter.
Extend the related variables (e.g. mask, counters, error) to support
fixed counters. There is no extended PEBS in PEBS v2 and earlier PEBS
formats, so only the code for PEBS v3 and later formats needs to change.
Extend the pebs_event_reset[] logic to support the new "counter reset value" fields,
and increase the reserved space for fixed counters.
Based-on-code-from: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: acme@kernel.org
Link: http://lkml.kernel.org/r/20180309021542.11374-3-kan.liang@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-07-25 11:50:50 +02:00
Ingo Molnar
93081caaae
Merge branch 'perf/urgent' into perf/core, to pick up fixes
...
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-07-25 11:47:02 +02:00
Waiman Long
c0dc373a78
locking/pvqspinlock/x86: Use LOCK_PREFIX in __pv_queued_spin_unlock() assembly code
...
The LOCK_PREFIX macro should be used in the __raw_callee_save___pv_queued_spin_unlock()
assembly code, so that the lock prefix can be patched out on UP systems.
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Joe Mario <jmario@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/r/1531858560-21547-1-git-send-email-longman@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-07-25 11:22:20 +02:00
Linus Torvalds
43227e098c
Merge branch 'x86-pti-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
...
Pull x86 pti fixes from Ingo Molnar:
"An APM fix, and a BTS hardware-tracing fix related to PTI changes"
* 'x86-pti-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/apm: Don't access __preempt_count with zeroed fs
x86/events/intel/ds: Fix bts_interrupt_threshold alignment
2018-07-21 17:23:58 -07:00
Joerg Roedel
6df934b92a
x86/ldt: Enable LDT user-mapping for PAE
...
This adds the needed special case for PAE to get the LDT mapped into the
user page-table when PTI is enabled. The big difference from the other paging
modes is that on PAE there is no full top-level PGD entry available for the
LDT, only a PMD entry.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Pavel Machek <pavel@ucw.cz>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: linux-mm@kvack.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Waiman Long <llong@redhat.com>
Cc: "David H . Gutteridge" <dhgutteridge@sympatico.ca>
Cc: joro@8bytes.org
Link: https://lkml.kernel.org/r/1531906876-13451-37-git-send-email-joro@8bytes.org
2018-07-20 01:11:48 +02:00
Joerg Roedel
8195d869d1
x86/ldt: Define LDT_END_ADDR
...
It marks the end of the address-space range reserved for the LDT. The
LDT-code will use it when unmapping the LDT for user-space.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Pavel Machek <pavel@ucw.cz>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: linux-mm@kvack.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Waiman Long <llong@redhat.com>
Cc: "David H . Gutteridge" <dhgutteridge@sympatico.ca>
Cc: joro@8bytes.org
Link: https://lkml.kernel.org/r/1531906876-13451-35-git-send-email-joro@8bytes.org
2018-07-20 01:11:47 +02:00
Joerg Roedel
f3e48e546c
x86/ldt: Reserve address-space range on 32 bit for the LDT
...
Reserve 2MB/4MB of address-space for mapping the LDT to user-space on 32
bit PTI kernels.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Pavel Machek <pavel@ucw.cz>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: linux-mm@kvack.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Waiman Long <llong@redhat.com>
Cc: "David H . Gutteridge" <dhgutteridge@sympatico.ca>
Cc: joro@8bytes.org
Link: https://lkml.kernel.org/r/1531906876-13451-34-git-send-email-joro@8bytes.org
2018-07-20 01:11:47 +02:00
Joerg Roedel
b976690f5d
x86/mm/pti: Introduce pti_finalize()
...
Introduce a new function to finalize the kernel mappings for the userspace
page-table after all ro/nx protections have been applied to the kernel
mappings.
Also move the call to pti_clone_kernel_text() to that function so that it
will run on 32 bit kernels too.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Pavel Machek <pavel@ucw.cz>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: linux-mm@kvack.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Waiman Long <llong@redhat.com>
Cc: "David H . Gutteridge" <dhgutteridge@sympatico.ca>
Cc: joro@8bytes.org
Link: https://lkml.kernel.org/r/1531906876-13451-30-git-send-email-joro@8bytes.org
2018-07-20 01:11:45 +02:00
Joerg Roedel
39d668e04e
x86/mm/pti: Make pti_clone_kernel_text() compile on 32 bit
...
The pti_clone_kernel_text() function references __end_rodata_hpage_align,
which is only present on x86-64. This makes sense as the end of the rodata
section is not huge-page aligned on 32 bit.
Nevertheless a symbol is required for the function that points at the right
address for both 32 and 64 bit. Introduce __end_rodata_aligned for that
purpose and use it in pti_clone_kernel_text().
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Pavel Machek <pavel@ucw.cz>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: linux-mm@kvack.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Waiman Long <llong@redhat.com>
Cc: "David H . Gutteridge" <dhgutteridge@sympatico.ca>
Cc: joro@8bytes.org
Link: https://lkml.kernel.org/r/1531906876-13451-28-git-send-email-joro@8bytes.org
2018-07-20 01:11:44 +02:00
Joerg Roedel
2c1b9fbe83
x86/mm/pti: Define X86_CR3_PTI_PCID_USER_BIT on x86_32
...
Move it out of the X86_64-specific processor defines so that it's visible
for 32 bit too.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Pavel Machek <pavel@ucw.cz>
Reviewed-by: Andy Lutomirski <luto@kernel.org>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: linux-mm@kvack.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Waiman Long <llong@redhat.com>
Cc: "David H . Gutteridge" <dhgutteridge@sympatico.ca>
Cc: joro@8bytes.org
Link: https://lkml.kernel.org/r/1531906876-13451-26-git-send-email-joro@8bytes.org
2018-07-20 01:11:44 +02:00
Joerg Roedel
1f40a46cf4
x86/mm/legacy: Populate the user page-table with user pgd's
...
Also populate the user-space pgd's in the user page-table.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Pavel Machek <pavel@ucw.cz>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: linux-mm@kvack.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Waiman Long <llong@redhat.com>
Cc: "David H . Gutteridge" <dhgutteridge@sympatico.ca>
Cc: joro@8bytes.org
Link: https://lkml.kernel.org/r/1531906876-13451-24-git-send-email-joro@8bytes.org
2018-07-20 01:11:43 +02:00
Joerg Roedel
9b7b8bbd7f
x86/mm/pae: Populate the user page-table with user pgd's
...
When a PGD entry is populated, make sure to populate it in the user
page-table too.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Pavel Machek <pavel@ucw.cz>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: linux-mm@kvack.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Waiman Long <llong@redhat.com>
Cc: "David H . Gutteridge" <dhgutteridge@sympatico.ca>
Cc: joro@8bytes.org
Link: https://lkml.kernel.org/r/1531906876-13451-23-git-send-email-joro@8bytes.org
2018-07-20 01:11:43 +02:00
Joerg Roedel
6c0df86894
x86/mm/pae: Populate valid user PGD entries
...
Generic page-table code populates all non-leaf entries with _KERNPG_TABLE
bits set. This is fine for all paging modes except PAE.
In PAE mode only a subset of the bits is allowed to be set. Make sure to
only set allowed bits by masking out the reserved bits.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Pavel Machek <pavel@ucw.cz>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: linux-mm@kvack.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Waiman Long <llong@redhat.com>
Cc: "David H . Gutteridge" <dhgutteridge@sympatico.ca>
Cc: joro@8bytes.org
Link: https://lkml.kernel.org/r/1531906876-13451-22-git-send-email-joro@8bytes.org
2018-07-20 01:11:42 +02:00
Joerg Roedel
76e258add7
x86/pgtable: Move two more functions from pgtable_64.h to pgtable.h
...
These two functions are required for PTI on 32 bit:
* pgdp_maps_userspace()
* pgd_large()
Also re-implement pgdp_maps_userspace() so that it will work on 64 and 32
bit kernels.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Pavel Machek <pavel@ucw.cz>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: linux-mm@kvack.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Waiman Long <llong@redhat.com>
Cc: "David H . Gutteridge" <dhgutteridge@sympatico.ca>
Cc: joro@8bytes.org
Link: https://lkml.kernel.org/r/1531906876-13451-21-git-send-email-joro@8bytes.org
2018-07-20 01:11:42 +02:00
Joerg Roedel
fcbbd97757
x86/pgtable: Move pti_set_user_pgtbl() to pgtable.h
...
There it is also usable from 32 bit code.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Pavel Machek <pavel@ucw.cz>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: linux-mm@kvack.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Waiman Long <llong@redhat.com>
Cc: "David H . Gutteridge" <dhgutteridge@sympatico.ca>
Cc: joro@8bytes.org
Link: https://lkml.kernel.org/r/1531906876-13451-20-git-send-email-joro@8bytes.org
2018-07-20 01:11:42 +02:00
Joerg Roedel
8372d66865
x86/pgtable: Move pgdp kernel/user conversion functions to pgtable.h
...
Make them available on 32 bit and keep clone_pgd_range() happy.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Pavel Machek <pavel@ucw.cz>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: linux-mm@kvack.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Waiman Long <llong@redhat.com>
Cc: "David H . Gutteridge" <dhgutteridge@sympatico.ca>
Cc: joro@8bytes.org
Link: https://lkml.kernel.org/r/1531906876-13451-19-git-send-email-joro@8bytes.org
2018-07-20 01:11:41 +02:00
Joerg Roedel
7ffcf1497c
x86/pgtable/pae: Unshare kernel PMDs when PTI is enabled
...
With PTI the per-process LDT must be mapped into the kernel address-space
for each process, which requires separate kernel PMDs per PGD.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Pavel Machek <pavel@ucw.cz>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: linux-mm@kvack.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Waiman Long <llong@redhat.com>
Cc: "David H . Gutteridge" <dhgutteridge@sympatico.ca>
Cc: joro@8bytes.org
Link: https://lkml.kernel.org/r/1531906876-13451-17-git-send-email-joro@8bytes.org
2018-07-20 01:11:40 +02:00
Joerg Roedel
23b772883d
x86/pgtable: Rename pti_set_user_pgd() to pti_set_user_pgtbl()
...
The way page-table folding is implemented on 32 bit, these functions set
not only PGDs, but also PUDs and even PMDs. Give the function a more
generic name to reflect that.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Pavel Machek <pavel@ucw.cz>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: linux-mm@kvack.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Waiman Long <llong@redhat.com>
Cc: "David H . Gutteridge" <dhgutteridge@sympatico.ca>
Cc: joro@8bytes.org
Link: https://lkml.kernel.org/r/1531906876-13451-16-git-send-email-joro@8bytes.org
2018-07-20 01:11:40 +02:00
Joerg Roedel
252e1a0526
x86/entry: Rename update_sp0 to update_task_stack
...
The function does not update sp0 anymore; it makes the task-stack
visible to the entry code, either by writing it to sp1 or by doing a
hypercall. Rename the function to get rid of the misleading name.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Pavel Machek <pavel@ucw.cz>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: linux-mm@kvack.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Waiman Long <llong@redhat.com>
Cc: "David H . Gutteridge" <dhgutteridge@sympatico.ca>
Cc: joro@8bytes.org
Link: https://lkml.kernel.org/r/1531906876-13451-15-git-send-email-joro@8bytes.org
2018-07-20 01:11:40 +02:00
Joerg Roedel
45d7b25574
x86/entry/32: Enter the kernel via trampoline stack
...
Use the entry-stack as a trampoline to enter the kernel. The entry-stack is
already in the cpu_entry_area and will be mapped to userspace when PTI is
enabled.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Pavel Machek <pavel@ucw.cz>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: linux-mm@kvack.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Waiman Long <llong@redhat.com>
Cc: "David H . Gutteridge" <dhgutteridge@sympatico.ca>
Cc: joro@8bytes.org
Link: https://lkml.kernel.org/r/1531906876-13451-8-git-send-email-joro@8bytes.org
2018-07-20 01:11:37 +02:00
Pavel Tatashin
8dbe438589
x86/tsc: Make use of tsc_calibrate_cpu_early()
...
During early boot enable tsc_calibrate_cpu_early() and switch to
tsc_calibrate_cpu() only later. Do this unconditionally, because it is
unknown what methods other CPUs will use to calibrate once they are
onlined.
If the TSC frequency is still unknown by the time tsc_init() is called, use
only pit_hpet_ptimer_calibrate_cpu() to calibrate, as this function contains
the only methods which have not been called and tried earlier.
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: steven.sistare@oracle.com
Cc: daniel.m.jordan@oracle.com
Cc: linux@armlinux.org.uk
Cc: schwidefsky@de.ibm.com
Cc: heiko.carstens@de.ibm.com
Cc: john.stultz@linaro.org
Cc: sboyd@codeaurora.org
Cc: hpa@zytor.com
Cc: douly.fnst@cn.fujitsu.com
Cc: peterz@infradead.org
Cc: prarit@redhat.com
Cc: feng.tang@intel.com
Cc: pmladek@suse.com
Cc: gnomes@lxorguk.ukuu.org.uk
Cc: linux-s390@vger.kernel.org
Cc: boris.ostrovsky@oracle.com
Cc: jgross@suse.com
Cc: pbonzini@redhat.com
Link: https://lkml.kernel.org/r/20180719205545.16512-27-pasha.tatashin@oracle.com
2018-07-20 00:02:44 +02:00
Pavel Tatashin
03821f451d
x86/tsc: Split native_calibrate_cpu() into early and late parts
...
During early boot TSC and CPU frequency can be calibrated using MSR, CPUID,
and quick PIT calibration methods. The other methods PIT/HPET/PMTIMER are
available only after ACPI is initialized.
Split native_calibrate_cpu() into early and late parts so they can be
called separately during early and late tsc calibration.
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: steven.sistare@oracle.com
Cc: daniel.m.jordan@oracle.com
Cc: linux@armlinux.org.uk
Cc: schwidefsky@de.ibm.com
Cc: heiko.carstens@de.ibm.com
Cc: john.stultz@linaro.org
Cc: sboyd@codeaurora.org
Cc: hpa@zytor.com
Cc: douly.fnst@cn.fujitsu.com
Cc: peterz@infradead.org
Cc: prarit@redhat.com
Cc: feng.tang@intel.com
Cc: pmladek@suse.com
Cc: gnomes@lxorguk.ukuu.org.uk
Cc: linux-s390@vger.kernel.org
Cc: boris.ostrovsky@oracle.com
Cc: jgross@suse.com
Cc: pbonzini@redhat.com
Link: https://lkml.kernel.org/r/20180719205545.16512-26-pasha.tatashin@oracle.com
2018-07-20 00:02:44 +02:00
Pavel Tatashin
cf7a63ef4e
x86/tsc: Calibrate tsc only once
...
During boot the TSC is calibrated twice: once in tsc_early_delay_calibrate(),
and a second time in tsc_init().
Rename tsc_early_delay_calibrate() to tsc_early_init(), rework it so that
the calibration is done only early, and make tsc_init() use the values
already determined in tsc_early_init().
Sometimes it is not possible to calibrate the TSC early, because a required
subsystem is not yet initialized; in such cases try again later in
tsc_init().
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: steven.sistare@oracle.com
Cc: daniel.m.jordan@oracle.com
Cc: linux@armlinux.org.uk
Cc: schwidefsky@de.ibm.com
Cc: heiko.carstens@de.ibm.com
Cc: john.stultz@linaro.org
Cc: sboyd@codeaurora.org
Cc: hpa@zytor.com
Cc: douly.fnst@cn.fujitsu.com
Cc: peterz@infradead.org
Cc: prarit@redhat.com
Cc: feng.tang@intel.com
Cc: pmladek@suse.com
Cc: gnomes@lxorguk.ukuu.org.uk
Cc: linux-s390@vger.kernel.org
Cc: boris.ostrovsky@oracle.com
Cc: jgross@suse.com
Cc: pbonzini@redhat.com
Link: https://lkml.kernel.org/r/20180719205545.16512-20-pasha.tatashin@oracle.com
2018-07-20 00:02:42 +02:00
Pavel Tatashin
6fffacb303
x86/alternatives, jumplabel: Use text_poke_early() before mm_init()
...
It is supposed to be safe to modify static branches after jump_label_init().
But because the static-key modifying code eventually calls text_poke(), it
can end up accessing a struct page which has not been initialized yet.
Here is how to quickly reproduce the problem. Insert code like this
into init/main.c:
| +static DEFINE_STATIC_KEY_FALSE(__test);
| asmlinkage __visible void __init start_kernel(void)
| {
| char *command_line;
|@@ -587,6 +609,10 @@ asmlinkage __visible void __init start_kernel(void)
| vfs_caches_init_early();
| sort_main_extable();
| trap_init();
|+ {
|+ static_branch_enable(&__test);
|+ WARN_ON(!static_branch_likely(&__test));
|+ }
| mm_init();
The following warnings show-up:
WARNING: CPU: 0 PID: 0 at arch/x86/kernel/alternative.c:701 text_poke+0x20d/0x230
RIP: 0010:text_poke+0x20d/0x230
Call Trace:
? text_poke_bp+0x50/0xda
? arch_jump_label_transform+0x89/0xe0
? __jump_label_update+0x78/0xb0
? static_key_enable_cpuslocked+0x4d/0x80
? static_key_enable+0x11/0x20
? start_kernel+0x23e/0x4c8
? secondary_startup_64+0xa5/0xb0
---[ end trace abdc99c031b8a90a ]---
If the code above is moved after mm_init(), no warning is shown, as struct
pages are initialized during handover from memblock.
Use text_poke_early() for static branching until early boot IRQs are enabled,
and from there switch to text_poke(). Also, ensure text_poke() is never
invoked when uninitialized memory access may happen, by adding a
!after_bootmem assertion.
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Cc: steven.sistare@oracle.com
Cc: daniel.m.jordan@oracle.com
Cc: linux@armlinux.org.uk
Cc: schwidefsky@de.ibm.com
Cc: heiko.carstens@de.ibm.com
Cc: john.stultz@linaro.org
Cc: sboyd@codeaurora.org
Cc: hpa@zytor.com
Cc: douly.fnst@cn.fujitsu.com
Cc: peterz@infradead.org
Cc: prarit@redhat.com
Cc: feng.tang@intel.com
Cc: pmladek@suse.com
Cc: gnomes@lxorguk.ukuu.org.uk
Cc: linux-s390@vger.kernel.org
Cc: boris.ostrovsky@oracle.com
Cc: jgross@suse.com
Cc: pbonzini@redhat.com
Link: https://lkml.kernel.org/r/20180719205545.16512-9-pasha.tatashin@oracle.com
2018-07-20 00:02:38 +02:00
Thomas Gleixner
e499a9b6dc
x86/kvmclock: Move kvmclock vsyscall param and init to kvmclock
...
There is no point in having this in the KVM code itself and calling it from
there. It can be called from an initcall, and the parameter is cleared
when the hypervisor is not KVM.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: steven.sistare@oracle.com
Cc: daniel.m.jordan@oracle.com
Cc: linux@armlinux.org.uk
Cc: schwidefsky@de.ibm.com
Cc: heiko.carstens@de.ibm.com
Cc: john.stultz@linaro.org
Cc: sboyd@codeaurora.org
Cc: hpa@zytor.com
Cc: douly.fnst@cn.fujitsu.com
Cc: peterz@infradead.org
Cc: prarit@redhat.com
Cc: feng.tang@intel.com
Cc: pmladek@suse.com
Cc: gnomes@lxorguk.ukuu.org.uk
Cc: linux-s390@vger.kernel.org
Cc: boris.ostrovsky@oracle.com
Cc: jgross@suse.com
Link: https://lkml.kernel.org/r/20180719205545.16512-7-pasha.tatashin@oracle.com
2018-07-20 00:02:37 +02:00
Thomas Gleixner
7a5ddc8fe0
x86/kvmclock: Decrapify kvm_register_clock()
...
The return value is pointless because the wrmsr cannot fail if
KVM_FEATURE_CLOCKSOURCE or KVM_FEATURE_CLOCKSOURCE2 are set.
kvm_register_clock() is only called locally, so it should be static.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de >
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com >
Acked-by: Paolo Bonzini <pbonzini@redhat.com >
Cc: steven.sistare@oracle.com
Cc: daniel.m.jordan@oracle.com
Cc: linux@armlinux.org.uk
Cc: schwidefsky@de.ibm.com
Cc: heiko.carstens@de.ibm.com
Cc: john.stultz@linaro.org
Cc: sboyd@codeaurora.org
Cc: hpa@zytor.com
Cc: douly.fnst@cn.fujitsu.com
Cc: peterz@infradead.org
Cc: prarit@redhat.com
Cc: feng.tang@intel.com
Cc: pmladek@suse.com
Cc: gnomes@lxorguk.ukuu.org.uk
Cc: linux-s390@vger.kernel.org
Cc: boris.ostrovsky@oracle.com
Cc: jgross@suse.com
Link: https://lkml.kernel.org/r/20180719205545.16512-4-pasha.tatashin@oracle.com
2018-07-20 00:02:36 +02:00
Thomas Gleixner
73ab603f44
Merge branch 'linus' into x86/timers
...
Pick up upstream changes to avoid conflicts
2018-07-19 23:11:52 +02:00
Jiang Biao
d9f4426c73
x86/speculation: Remove SPECTRE_V2_IBRS in enum spectre_v2_mitigation
...
SPECTRE_V2_IBRS in enum spectre_v2_mitigation is never used. Remove it.
Signed-off-by: Jiang Biao <jiang.biao2@zte.com.cn >
Signed-off-by: Thomas Gleixner <tglx@linutronix.de >
Cc: hpa@zytor.com
Cc: dwmw2@amazon.co.uk
Cc: konrad.wilk@oracle.com
Cc: bp@suse.de
Cc: zhong.weidong@zte.com.cn
Link: https://lkml.kernel.org/r/1531872194-39207-1-git-send-email-jiang.biao2@zte.com.cn
2018-07-19 12:31:00 +02:00
Rik van Riel
95b0e6357d
x86/mm/tlb: Always use lazy TLB mode
...
Now that CPUs in lazy TLB mode no longer receive TLB shootdown IPIs, except
at page table freeing time, and idle CPUs will no longer get shootdown IPIs
for things like mprotect and madvise, we can always use lazy TLB mode.
Tested-by: Song Liu <songliubraving@fb.com >
Signed-off-by: Rik van Riel <riel@surriel.com >
Acked-by: Dave Hansen <dave.hansen@intel.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: efault@gmx.de
Cc: kernel-team@fb.com
Cc: luto@kernel.org
Link: http://lkml.kernel.org/r/20180716190337.26133-7-riel@surriel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2018-07-17 09:35:34 +02:00
Rik van Riel
2ff6ddf19c
x86/mm/tlb: Leave lazy TLB mode at page table free time
...
Andy discovered that speculative memory accesses while in lazy
TLB mode can crash a system, when a CPU tries to dereference a
speculative access using memory contents that used to be valid
page table memory, but have since been reused for something else
and point into la-la land.
The latter problem can be prevented in two ways. The first is to
always send a TLB shootdown IPI to CPUs in lazy TLB mode, while
the second one is to only send the TLB shootdown at page table
freeing time.
The second should result in fewer IPIs, since operations like
mprotect and madvise are very common with some workloads, but
do not involve page table freeing. Also, on munmap, batching
of page table freeing covers much larger ranges of virtual
memory than the batching of unmapped user pages.
Tested-by: Song Liu <songliubraving@fb.com >
Signed-off-by: Rik van Riel <riel@surriel.com >
Acked-by: Dave Hansen <dave.hansen@intel.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: efault@gmx.de
Cc: kernel-team@fb.com
Cc: luto@kernel.org
Link: http://lkml.kernel.org/r/20180716190337.26133-3-riel@surriel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2018-07-17 09:35:31 +02:00
Ingo Molnar
52b544bd38
Merge tag 'v4.18-rc5' into locking/core, to pick up fixes
...
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2018-07-17 09:27:43 +02:00
Ville Syrjälä
6f6060a5c9
x86/apm: Don't access __preempt_count with zeroed fs
...
APM_DO_POP_SEGS does not restore fs/gs which were zeroed by
APM_DO_ZERO_SEGS. Trying to access __preempt_count with
zeroed fs doesn't really work.
Move the ibrs call outside the APM_DO_SAVE_SEGS/APM_DO_RESTORE_SEGS
invocations so that fs is actually restored before calling
preempt_enable().
Fixes the following sort of oopses:
[ 0.313581] general protection fault: 0000 [#1 ] PREEMPT SMP
[ 0.313803] Modules linked in:
[ 0.314040] CPU: 0 PID: 268 Comm: kapmd Not tainted 4.16.0-rc1-triton-bisect-00090-gdd84441a7971 #19
[ 0.316161] EIP: __apm_bios_call_simple+0xc8/0x170
[ 0.316161] EFLAGS: 00210016 CPU: 0
[ 0.316161] EAX: 00000102 EBX: 00000000 ECX: 00000102 EDX: 00000000
[ 0.316161] ESI: 0000530e EDI: dea95f64 EBP: dea95f18 ESP: dea95ef0
[ 0.316161] DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068
[ 0.316161] CR0: 80050033 CR2: 00000000 CR3: 015d3000 CR4: 000006d0
[ 0.316161] Call Trace:
[ 0.316161] ? cpumask_weight.constprop.15+0x20/0x20
[ 0.316161] on_cpu0+0x44/0x70
[ 0.316161] apm+0x54e/0x720
[ 0.316161] ? __switch_to_asm+0x26/0x40
[ 0.316161] ? __schedule+0x17d/0x590
[ 0.316161] kthread+0xc0/0xf0
[ 0.316161] ? proc_apm_show+0x150/0x150
[ 0.316161] ? kthread_create_worker_on_cpu+0x20/0x20
[ 0.316161] ret_from_fork+0x2e/0x38
[ 0.316161] Code: da 8e c2 8e e2 8e ea 57 55 2e ff 1d e0 bb 5d b1 0f 92 c3 5d 5f 07 1f 89 47 0c 90 8d b4 26 00 00 00 00 90 8d b4 26 00 00 00 00 90 <64> ff 0d 84 16 5c b1 74 7f 8b 45 dc 8e e0 8b 45 d8 8e e8 8b 45
[ 0.316161] EIP: __apm_bios_call_simple+0xc8/0x170 SS:ESP: 0068:dea95ef0
[ 0.316161] ---[ end trace 656253db2deaa12c ]---
Fixes: dd84441a79
("x86/speculation: Use IBRS if available before calling into firmware")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com >
Signed-off-by: Thomas Gleixner <tglx@linutronix.de >
Cc: stable@vger.kernel.org
Cc: David Woodhouse <dwmw@amazon.co.uk >
Cc: "H. Peter Anvin" <hpa@zytor.com >
Cc: x86@kernel.org
Link: https://lkml.kernel.org/r/20180709133534.5963-1-ville.syrjala@linux.intel.com
2018-07-16 17:59:57 +02:00
Greg Kroah-Hartman
83cf9cd6d5
Merge 4.18-rc5 into char-misc-next
...
We want the char-misc fixes in here as well.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org >
2018-07-16 09:04:54 +02:00
Dan Williams
092b31aa20
x86/asm/memcpy_mcsafe: Fix copy_to_user_mcsafe() exception handling
...
All copy_to_user() implementations need to be prepared to handle faults
accessing userspace. The __memcpy_mcsafe() implementation handles both
mmu-faults on the user destination and machine-check-exceptions on the
source buffer. However, the memcpy_mcsafe() wrapper may silently
fall back to memcpy() depending on build options and CPU capabilities.
Force copy_to_user_mcsafe() to always use __memcpy_mcsafe() when
available, and otherwise disable all of the copy_to_user_mcsafe()
infrastructure when __memcpy_mcsafe() is not available, i.e.
CONFIG_X86_MCE=n.
This fixes crashes of the form:
run fstests generic/323 at 2018-07-02 12:46:23
BUG: unable to handle kernel paging request at 00007f0d50001000
RIP: 0010:__memcpy+0x12/0x20
[..]
Call Trace:
copyout_mcsafe+0x3a/0x50
_copy_to_iter_mcsafe+0xa1/0x4a0
? dax_alive+0x30/0x50
dax_iomap_actor+0x1f9/0x280
? dax_iomap_rw+0x100/0x100
iomap_apply+0xba/0x130
? dax_iomap_rw+0x100/0x100
dax_iomap_rw+0x95/0x100
? dax_iomap_rw+0x100/0x100
xfs_file_dax_read+0x7b/0x1d0 [xfs]
xfs_file_read_iter+0xa7/0xc0 [xfs]
aio_read+0x11c/0x1a0
Reported-by: Ross Zwisler <ross.zwisler@linux.intel.com >
Tested-by: Ross Zwisler <ross.zwisler@linux.intel.com >
Signed-off-by: Dan Williams <dan.j.williams@intel.com >
Cc: Al Viro <viro@zeniv.linux.org.uk >
Cc: Andrew Morton <akpm@linux-foundation.org >
Cc: Andy Lutomirski <luto@amacapital.net >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Tony Luck <tony.luck@intel.com >
Fixes: 8780356ef6
("x86/asm/memcpy_mcsafe: Define copy_to_iter_mcsafe()")
Link: http://lkml.kernel.org/r/153108277790.37979.1486841789275803399.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2018-07-16 00:05:05 +02:00
Jiri Kosina
d90a7a0ec8
x86/bugs, kvm: Introduce boot-time control of L1TF mitigations
...
Introduce the 'l1tf=' kernel command line option to allow for boot-time
switching of mitigation that is used on processors affected by L1TF.
The possible values are:
full
Provides all available mitigations for the L1TF vulnerability. Disables
SMT and enables all mitigations in the hypervisors. SMT control via
/sys/devices/system/cpu/smt/control is still possible after boot.
Hypervisors will issue a warning when the first VM is started in
a potentially insecure configuration, i.e. SMT enabled or L1D flush
disabled.
full,force
Same as 'full', but disables SMT control. Implies the 'nosmt=force'
command line option. sysfs control of SMT and the hypervisor flush
control is disabled.
flush
Leaves SMT enabled and enables the conditional hypervisor mitigation.
Hypervisors will issue a warning when the first VM is started in a
potentially insecure configuration, i.e. SMT enabled or L1D flush
disabled.
flush,nosmt
Disables SMT and enables the conditional hypervisor mitigation. SMT
control via /sys/devices/system/cpu/smt/control is still possible
after boot. If SMT is reenabled or flushing disabled at runtime
hypervisors will issue a warning.
flush,nowarn
Same as 'flush', but hypervisors will not warn when
a VM is started in a potentially insecure configuration.
off
Disables hypervisor mitigations and doesn't emit any warnings.
Default is 'flush'.
Let KVM adhere to these semantics, which means:
- 'l1tf=full,force' : Perform L1D flushes. No runtime control
possible.
- 'l1tf=full'
- 'l1tf=flush'
- 'l1tf=flush,nosmt' : Perform L1D flushes and warn on VM start if
SMT has been runtime enabled or L1D flushing
has been run-time enabled
- 'l1tf=flush,nowarn' : Perform L1D flushes and no warnings are emitted.
- 'l1tf=off' : L1D flushes are not performed and no warnings
are emitted.
KVM can always override the L1D flushing behavior using its 'vmentry_l1d_flush'
module parameter except when l1tf=full,force is set.
This makes KVM's private 'nosmt' option redundant, and as it is a bit
non-systematic anyway (this is something to control globally, not on
hypervisor level), remove that option.
Add the missing Documentation entry for the l1tf vulnerability sysfs file
while at it.
Signed-off-by: Jiri Kosina <jkosina@suse.cz >
Signed-off-by: Thomas Gleixner <tglx@linutronix.de >
Tested-by: Jiri Kosina <jkosina@suse.cz >
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org >
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com >
Link: https://lkml.kernel.org/r/20180713142323.202758176@linutronix.de
2018-07-13 16:29:56 +02:00
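As a usage sketch for the l1tf= option introduced above, a kernel command line fragment might look like the following. The option values come straight from the list in the commit message; where exactly the fragment lives (e.g. GRUB_CMDLINE_LINUX) depends on the bootloader.

```
# conditional hypervisor flush, SMT left enabled ('flush' is also the default)
l1tf=flush

# same, but with SMT disabled at boot (re-enabling via sysfs still possible)
l1tf=flush,nosmt
```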
Thomas Gleixner
a7b9020b06
x86/l1tf: Handle EPT disabled state proper
...
If Extended Page Tables (EPT) are disabled or not supported, no L1D
flushing is required. The setup function can just avoid setting up the L1D
flush for the EPT=n case.
Invoke it after the hardware setup has been done and enable_ept has the
correct state and expose the EPT disabled state in the mitigation status as
well.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de >
Tested-by: Jiri Kosina <jkosina@suse.cz >
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org >
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com >
Link: https://lkml.kernel.org/r/20180713142322.612160168@linutronix.de
2018-07-13 16:29:53 +02:00
Thomas Gleixner
72c6d2db64
x86/litf: Introduce vmx status variable
...
Store the effective mitigation of VMX in a status variable and use it to
report the VMX state in the l1tf sysfs file.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de >
Tested-by: Jiri Kosina <jkosina@suse.cz >
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org >
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com >
Link: https://lkml.kernel.org/r/20180713142322.433098358@linutronix.de
2018-07-13 16:29:53 +02:00
Sunil Muthuswamy
81b18bce48
Drivers: HV: Send one page worth of kmsg dump over Hyper-V during panic
...
In the VM mode on Hyper-V, currently, when the kernel panics, an error
code and a few register values are populated in an MSR and the Hypervisor
is notified. This information is collected on the host. The amount of
information currently collected has been found to be limited and not very
actionable. To gather more actionable data, such as a stack trace, the
proposal is to write one page worth of kmsg data to an allocated page
and notify the Hypervisor of the page address through the MSR.
- Sysctl option to control the behavior, with ON by default.
Cc: K. Y. Srinivasan <kys@microsoft.com >
Cc: Stephen Hemminger <sthemmin@microsoft.com >
Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com >
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com >
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org >
2018-07-08 15:54:31 +02:00
Suravee Suthikulpanit
818b7587b4
x86: irq_remapping: Move irq remapping mode enum
...
The enum is currently defined in Intel-specific DMAR header file,
but it is also used by APIC common code. Therefore, move it to
a more appropriate interrupt-remapping common header file.
This will also be used by subsequent patches.
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: Joerg Roedel <jroedel@suse.de >
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com >
Signed-off-by: Joerg Roedel <jroedel@suse.de >
2018-07-06 14:43:47 +02:00
Thomas Gleixner
8f63e9230d
Merge branch 'x86/urgent' into x86/hyperv
...
Integrate the upstream bug fix to resolve the resulting conflict in
__send_ipi_mask().
Signed-off-by: Thomas Gleixner <tglx@linutronix.de >
2018-07-06 12:35:56 +02:00
K. Y. Srinivasan
1268ed0c47
x86/hyper-v: Fix the circular dependency in IPI enlightenment
...
The IPI hypercalls depend on being able to map the Linux notion of CPU ID
to the hypervisor's notion of the CPU ID. The array hv_vp_index[] provides
this mapping. Code for populating this array depends on the IPI functionality.
Break this circular dependency.
[ tglx: Use a proper define instead of '-1' with a u32 variable as pointed
out by Vitaly ]
Fixes: 68bb7bfb79
("X86/Hyper-V: Enable IPI enlightenments")
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com >
Signed-off-by: Thomas Gleixner <tglx@linutronix.de >
Tested-by: Michael Kelley <mikelley@microsoft.com >
Cc: gregkh@linuxfoundation.org
Cc: devel@linuxdriverproject.org
Cc: olaf@aepfle.de
Cc: apw@canonical.com
Cc: jasowang@redhat.com
Cc: hpa@zytor.com
Cc: sthemmin@microsoft.com
Cc: Michael.H.Kelley@microsoft.com
Cc: vkuznets@redhat.com
Link: https://lkml.kernel.org/r/20180703230155.15160-1-kys@linuxonhyperv.com
2018-07-06 12:32:59 +02:00
Paolo Bonzini
c595ceee45
x86/KVM/VMX: Add L1D flush logic
...
Add the logic for flushing L1D on VMENTER. The flush depends on the static
key being enabled and the new l1tf_flush_l1d flag being set.
The flag is set:
- Always, if the flush module parameter is 'always'
- Conditionally at:
- Entry to vcpu_run(), i.e. after executing user space
- From the sched_in notifier, i.e. when switching to a vCPU thread.
- From vmexit handlers which are considered unsafe, i.e. where
sensitive data can be brought into L1D:
- The emulator, which could be a good target for other speculative
execution-based threats,
- The MMU, which can bring host page tables in the L1 cache.
- External interrupts
- Nested operations that require the MMU (see above). That is
vmptrld, vmptrst, vmclear, vmwrite, vmread.
- When handling invept, invvpid
[ tglx: Split out from combo patch and reduced to a single flag ]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com >
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com >
Signed-off-by: Thomas Gleixner <tglx@linutronix.de >
2018-07-04 20:49:39 +02:00
Paolo Bonzini
3fa045be4c
x86/KVM/VMX: Add L1D MSR based flush
...
336996-Speculative-Execution-Side-Channel-Mitigations.pdf defines a new MSR
(IA32_FLUSH_CMD aka 0x10B) which has similar write-only semantics to other
MSRs defined in the document.
The semantics of this MSR is to allow "finer granularity invalidation of
caching structures than existing mechanisms like WBINVD. It will writeback
and invalidate the L1 data cache, including all cachelines brought in by
preceding instructions, without invalidating all caches (eg. L2 or
LLC). Some processors may also invalidate the first level instruction
cache on a L1D_FLUSH command. The L1 data and instruction caches may be
shared across the logical processors of a core."
Use it instead of the loop based L1 flush algorithm.
A copy of this document is available at
https://bugzilla.kernel.org/show_bug.cgi?id=199511
[ tglx: Avoid allocating pages when the MSR is available ]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com >
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com >
Signed-off-by: Thomas Gleixner <tglx@linutronix.de >
2018-07-04 20:49:39 +02:00
Michael Kelley
7dc9b6b808
Drivers: hv: vmbus: Make TLFS #define names architecture neutral
...
The Hyper-V feature and hint flags in hyperv-tlfs.h are all defined
with the string "X64" in the name. Some of these flags are indeed
x86/x64 specific, but others are not. For the ones that are used
in architecture independent Hyper-V driver code, or will be used in
the upcoming support for Hyper-V for ARM64, this patch removes the
"X64" from the name.
This patch changes the flags that are currently known to be
used on multiple architectures. Hyper-V for ARM64 is still a
work-in-progress and the Top Level Functional Spec (TLFS) has not
been separated into x86/x64 and ARM64 areas. So additional flags
may need to be updated later.
This patch only changes symbol names. There are no functional
changes.
Signed-off-by: Michael Kelley <mikelley@microsoft.com >
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com >
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org >
2018-07-03 13:09:15 +02:00
Andy Shevchenko
41afb1dfad
x86/platform/intel-mid: Remove per platform code
...
After the custom TSC calibration code is gone, there is no reason anymore
to have custom platform code for each Intel MID platform.
Thus, remove it for good.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com >
Signed-off-by: Thomas Gleixner <tglx@linutronix.de >
Cc: "H. Peter Anvin" <hpa@zytor.com >
Cc: Pavel Tatashin <pasha.tatashin@oracle.com >
Link: https://lkml.kernel.org/r/20180629193113.84425-7-andriy.shevchenko@linux.intel.com
2018-07-03 13:08:21 +02:00
Andy Shevchenko
d99e5da91b
x86/platform/intel-mid: Remove custom TSC calibration
...
Since the commit
7da7c15613
("x86, tsc: Add static (MSR) TSC calibration on Intel Atom SoCs")
introduced a common way for all Intel MID chips to get their TSC frequency
via MSRs, there is no need to keep a duplication in each of Intel MID
platform code.
Thus, remove the custom calibration code for good.
Note, there is a slight difference in how the frequency is derived for
(reserved?) values in MSRs, i.e. the legacy code enforces some defaults
while the new code just uses 0 in those cases.
Suggested-by: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com >
Signed-off-by: Thomas Gleixner <tglx@linutronix.de >
Cc: "H. Peter Anvin" <hpa@zytor.com >
Cc: Pavel Tatashin <pasha.tatashin@oracle.com >
Cc: Bin Gao <bin.gao@intel.com >
Link: https://lkml.kernel.org/r/20180629193113.84425-6-andriy.shevchenko@linux.intel.com
2018-07-03 13:08:21 +02:00
Andy Shevchenko
e2ce67b2b3
x86/cpu: Introduce INTEL_CPU_FAM*() helper macros
...
These macros are often used by drivers and there exists already a lot of
duplication as ICPU() macro across the drivers.
Provide a generic x86 macro for users.
Note, as Ingo Molnar pointed out this has a hidden issue when a driver
needs to preserve const qualifier. Though, it would be addressed
separately at some point.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com >
Signed-off-by: Thomas Gleixner <tglx@linutronix.de >
Cc: "H. Peter Anvin" <hpa@zytor.com >
Cc: Pavel Tatashin <pasha.tatashin@oracle.com >
Link: https://lkml.kernel.org/r/20180629193113.84425-2-andriy.shevchenko@linux.intel.com
2018-07-03 13:08:20 +02:00
Michael Kelley
619a4c8b2b
Drivers: hv: vmbus: Remove x86 MSR refs in arch independent code
...
In architecture independent code for manipulating Hyper-V synthetic timers
and synthetic interrupts, pass in an ordinal number identifying the timer
or interrupt, rather than an actual MSR register address. Then in
x86/x64 specific code, map the ordinal number to the appropriate MSR.
This change facilitates the introduction of an ARM64 version of Hyper-V,
which uses the same synthetic timers and interrupts, but a different
mechanism for accessing them.
Signed-off-by: Michael Kelley <mikelley@microsoft.com >
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com >
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org >
2018-07-03 13:02:28 +02:00
Nick Desaulniers
d0a8d9378d
x86/paravirt: Make native_save_fl() extern inline
...
native_save_fl() is marked static inline, but by using it as
a function pointer in arch/x86/kernel/paravirt.c, it MUST be outlined.
paravirt's use of native_save_fl() also requires that no GPRs other than
%rax are clobbered.
Compilers have different heuristics which they use to emit stack guard
code, the emittance of which can break paravirt's callee saved assumption
by clobbering %rcx.
Marking a function definition extern inline means that if this version
cannot be inlined, then the out-of-line version will be preferred. By
having the out-of-line version be implemented in assembly, it cannot be
instrumented with a stack protector, which might violate custom calling
conventions that code like paravirt rely on.
The semantics of extern inline has changed since gnu89. This means that
folks using GCC versions >= 5.1 may see symbol redefinition errors at
link time for subdirs that override KBUILD_CFLAGS (making the C standard
used implicit) regardless of this patch. This has been cleaned up
earlier in the patch set, but is left as a note in the commit message
for future travelers.
Reports:
https://lkml.org/lkml/2018/5/7/534
https://github.com/ClangBuiltLinux/linux/issues/16
Discussion:
https://bugs.llvm.org/show_bug.cgi?id=37512
https://lkml.org/lkml/2018/5/24/1371
Thanks to the many folks that participated in the discussion.
Debugged-by: Alistair Strachan <astrachan@google.com >
Debugged-by: Matthias Kaehlcke <mka@chromium.org >
Suggested-by: Arnd Bergmann <arnd@arndb.de >
Suggested-by: H. Peter Anvin <hpa@zytor.com >
Suggested-by: Tom Stellar <tstellar@redhat.com >
Reported-by: Sedat Dilek <sedat.dilek@gmail.com >
Tested-by: Sedat Dilek <sedat.dilek@gmail.com >
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com >
Acked-by: Juergen Gross <jgross@suse.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: acme@redhat.com
Cc: akataria@vmware.com
Cc: akpm@linux-foundation.org
Cc: andrea.parri@amarulasolutions.com
Cc: ard.biesheuvel@linaro.org
Cc: aryabinin@virtuozzo.com
Cc: astrachan@google.com
Cc: boris.ostrovsky@oracle.com
Cc: brijesh.singh@amd.com
Cc: caoj.fnst@cn.fujitsu.com
Cc: geert@linux-m68k.org
Cc: ghackmann@google.com
Cc: gregkh@linuxfoundation.org
Cc: jan.kiszka@siemens.com
Cc: jarkko.sakkinen@linux.intel.com
Cc: joe@perches.com
Cc: jpoimboe@redhat.com
Cc: keescook@google.com
Cc: kirill.shutemov@linux.intel.com
Cc: kstewart@linuxfoundation.org
Cc: linux-efi@vger.kernel.org
Cc: linux-kbuild@vger.kernel.org
Cc: manojgupta@google.com
Cc: mawilcox@microsoft.com
Cc: michal.lkml@markovi.net
Cc: mjg59@google.com
Cc: mka@chromium.org
Cc: pombredanne@nexb.com
Cc: rientjes@google.com
Cc: rostedt@goodmis.org
Cc: thomas.lendacky@amd.com
Cc: tweek@google.com
Cc: virtualization@lists.linux-foundation.org
Cc: will.deacon@arm.com
Cc: yamada.masahiro@socionext.com
Link: http://lkml.kernel.org/r/20180621162324.36656-4-ndesaulniers@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2018-07-03 10:56:27 +02:00
H. Peter Anvin
0e2e160033
x86/asm: Add _ASM_ARG* constants for argument registers to <asm/asm.h>
...
i386 and x86-64 use different registers for arguments; make them
available so we don't have to #ifdef in the actual code.
Native size and specified size (q, l, w, b) versions are provided.
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com >
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com >
Reviewed-by: Sedat Dilek <sedat.dilek@gmail.com >
Acked-by: Juergen Gross <jgross@suse.com >
Cc: Linus Torvalds <torvalds@linux-foundation.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: acme@redhat.com
Cc: akataria@vmware.com
Cc: akpm@linux-foundation.org
Cc: andrea.parri@amarulasolutions.com
Cc: ard.biesheuvel@linaro.org
Cc: arnd@arndb.de
Cc: aryabinin@virtuozzo.com
Cc: astrachan@google.com
Cc: boris.ostrovsky@oracle.com
Cc: brijesh.singh@amd.com
Cc: caoj.fnst@cn.fujitsu.com
Cc: geert@linux-m68k.org
Cc: ghackmann@google.com
Cc: gregkh@linuxfoundation.org
Cc: jan.kiszka@siemens.com
Cc: jarkko.sakkinen@linux.intel.com
Cc: joe@perches.com
Cc: jpoimboe@redhat.com
Cc: keescook@google.com
Cc: kirill.shutemov@linux.intel.com
Cc: kstewart@linuxfoundation.org
Cc: linux-efi@vger.kernel.org
Cc: linux-kbuild@vger.kernel.org
Cc: manojgupta@google.com
Cc: mawilcox@microsoft.com
Cc: michal.lkml@markovi.net
Cc: mjg59@google.com
Cc: mka@chromium.org
Cc: pombredanne@nexb.com
Cc: rientjes@google.com
Cc: rostedt@goodmis.org
Cc: thomas.lendacky@amd.com
Cc: tstellar@redhat.com
Cc: tweek@google.com
Cc: virtualization@lists.linux-foundation.org
Cc: will.deacon@arm.com
Cc: yamada.masahiro@socionext.com
Link: http://lkml.kernel.org/r/20180621162324.36656-3-ndesaulniers@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org >
2018-07-03 10:56:27 +02:00