KVM reads the current boottime value as a struct timespec in order to
calculate the guest wallclock time, resulting in an overflow in 2038
on 32-bit systems.
The data then gets passed as an unsigned 32-bit number to the guest,
and that in turn overflows in 2106.
We cannot do much about the second overflow, which affects both 32-bit
and 64-bit hosts, but we can ensure that they both behave the same
way and don't overflow until 2106, by using getboottime64() to read
a timespec64 value.
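In essence, the fix boils down to the following (a sketch; the
surrounding wallclock code is elided):

    struct timespec64 boot;    /* 64-bit tv_sec even on 32-bit hosts */

    getboottime64(&boot);      /* replaces the 2038-limited getboottime() */
    /* The seconds value handed to the guest is still truncated to u32,
     * so both 32-bit and 64-bit hosts now overflow together in 2106. */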
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
On Intel platforms, this patch adds LMCE to the KVM-supported MCE
capabilities and handles guest accesses to the LMCE-related MSRs.
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
[Haozhong: macro KVM_MCE_CAP_SUPPORTED => variable kvm_mce_cap_supported
Only enable LMCE on Intel platform
Check MSR_IA32_FEATURE_CONTROL when handling guest
access to MSR_IA32_MCG_EXT_CTL]
Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM currently does not check the value written to the guest
MSR_IA32_FEATURE_CONTROL, so bits corresponding to disabled features
may be set. This patch makes KVM validate the individual bits written to
the guest MSR_IA32_FEATURE_CONTROL according to the enabled features.
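A sketch of the idea, with hedged names (the mask would be maintained
per vCPU as features are enabled):

    static inline bool vmx_feature_control_msr_valid(struct vcpu_vmx *vmx,
                                                     u64 data)
    {
        /* bits outside the per-vCPU valid mask must not be set */
        u64 valid_bits = vmx->msr_ia32_feature_control_valid_bits |
                         FEATURE_CONTROL_LOCKED;

        return !(data & ~valid_bits);
    }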
Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
msr_ia32_feature_control will be used by LMCE and will no longer depend
only on nested VMX, so move it from struct nested_vmx to struct vcpu_vmx.
Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When page table entries are set using xen_set_pte_init() during early
boot there is no page fault handler that could handle a fault when
performing an M2P lookup.
In 64-bit guests (usually dom0) early_ioremap() would fault in
xen_set_pte_init(), because the M2P lookup faults: the MFN is in
MMIO space and not mapped in the M2P. This lookup is done to see if
the PFN is in the range used for the initial page table pages, so that
the PTE may be set as read-only.
The M2P lookup can be avoided by moving the check (and clear of RW)
earlier when the PFN is still available.
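A sketch of the reworked PTE construction (the range check below is a
hypothetical helper standing in for the real test):

    static pte_t __init xen_make_pte_init(pteval_t pte)
    {
        unsigned long pfn = (pte & PTE_PFN_MASK) >> PAGE_SHIFT;

        /* clear RW while the PFN is still at hand; no M2P lookup needed */
        if (xen_pfn_in_initial_pagetables(pfn))    /* hypothetical helper */
            pte &= ~_PAGE_RW;

        return native_make_pte(pte);
    }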
Reported-by: Kevin Moraga <kmoragas@riseup.net>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
xen_cleanhighmap() is operating on level2_kernel_pgt only. The upper
bound of the loop setting non-kernel-image entries to zero should not
exceed the size of level2_kernel_pgt.
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Herbert wants the sha1-mb algorithm to have an async implementation:
https://lkml.org/lkml/2016/4/5/286.
Currently, sha1-mb uses an async interface for the outer algorithm
and a sync interface for the inner algorithm. This patch introduces
an async interface for the inner algorithm as well.
Signed-off-by: Megha Dey <megha.dey@linux.intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes an old bug where requests can be reordered because
some are processed by cryptd while others are processed directly
in softirq context.
The fix is to always postpone to cryptd if there are currently
requests outstanding from the same tfm.
This patch also removes the redundant use of cryptd in the async
init function as init never touches the FPU.
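The decision point then looks roughly like this (a sketch for an ahash
update path; treat the exact helper names as illustrative — the gcm fix
below applies the same test through the aead analogue,
cryptd_aead_queued()):

    /* postpone to cryptd whenever it still has requests queued for this
     * tfm, otherwise those requests would be overtaken and reordered */
    if (!irq_fpu_usable() ||
        (in_atomic() && cryptd_ahash_queued(cryptd_tfm)))
        return crypto_ahash_update(cryptd_req);    /* go through cryptd */
    /* otherwise it is safe to process directly in this context */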
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes an old bug where gcm requests can be reordered
because some are processed by cryptd while others are processed
directly in softirq context.
The fix is to always postpone to cryptd if there are currently
requests outstanding from the same tfm.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
On 16-byte requests the optimised version is actually slower than
the generic code, so we should simply use that instead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We want to use the table upgrade feature on ARM64.
Introduce a new configuration option that allows that.
Signed-off-by: Aleksey Makarov <aleksey.makarov@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The constant that defines the max phys address where the newly upgraded
ACPI table should be allocated is arch-specific. Move it to
<asm/acpi.h>.
Signed-off-by: Aleksey Makarov <aleksey.makarov@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Refer to initrd_start and initrd_end directly from drivers/acpi/tables.c.
This allows the table upgrade feature to be used on architectures
other than x86. It also simplifies the header files.
The patch renames acpi_table_initrd_init() to acpi_table_upgrade()
(which reflects the purpose of the function) and removes the unneeded
wrappers early_acpi_table_init() and early_initrd_acpi_init().
Signed-off-by: Aleksey Makarov <aleksey.makarov@linaro.org>
Acked-by: Lv Zheng <lv.zheng@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Child devices in a VMD domain that want to use MSI slow down devices
using MSI-X that share the same vectors. Move all MSI usage to a single
VMD vector so that MSI-X devices can share the rest.
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Jon Derrick <jonathan.derrick@intel.com>
Otherwise APIC code assumes VMD's IRQ domain can be managed by the APIC,
resulting in an invalid cast of irq_data during irq_force_complete_move().
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Enabling interrupts may result in an interrupt being raised and serviced
while VMD holds a lock, so the interrupt handler contends for the very
spin lock that was held when interrupts were enabled.
The solution is to disable preemption and save/restore the interrupt
state when enabling and disabling the IRQ.
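The pattern is the usual one (a sketch; the list manipulation is
elided):

    static void vmd_irq_enable(struct irq_data *data)
    {
        unsigned long flags;

        /* irqsave instead of a bare spin_lock: the saved state keeps us
         * from re-enabling interrupts while list_lock is held */
        raw_spin_lock_irqsave(&list_lock, flags);
        /* ... add the vmdirq to its interrupt list ... */
        raw_spin_unlock_irqrestore(&list_lock, flags);

        data->chip->irq_unmask(data);
    }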
Fixes lockdep:
======================================================
[ INFO: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected ]
4.6.0-2016-06-16-lockdep+ #47 Tainted: G E
------------------------------------------------------
kworker/0:1/447 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
(list_lock){+.+...}, at: [<ffffffffa04eb8fc>] vmd_irq_enable+0x3c/0x70 [vmd]
and this task is already holding:
(&irq_desc_lock_class){-.-...}, at: [<ffffffff810e1ff6>] __setup_irq+0xa6/0x610
which would create a new lock dependency:
(&irq_desc_lock_class){-.-...} -> (list_lock){+.+...}
but this new dependency connects a HARDIRQ-irq-safe lock:
(&irq_desc_lock_class){-.-...}
... which became HARDIRQ-irq-safe at:
[<ffffffff810c9f21>] __lock_acquire+0x981/0xe00
[<ffffffff810cb039>] lock_acquire+0x119/0x220
[<ffffffff8167294d>] _raw_spin_lock+0x3d/0x80
[<ffffffff810e36d4>] handle_level_irq+0x24/0x110
[<ffffffff8101f20a>] handle_irq+0x1a/0x30
[<ffffffff81675fc1>] do_IRQ+0x61/0x120
[<ffffffff8167404c>] ret_from_intr+0x0/0x20
[<ffffffff81672e30>] _raw_spin_unlock_irqrestore+0x40/0x60
[<ffffffff810e21ee>] __setup_irq+0x29e/0x610
[<ffffffff810e25a1>] setup_irq+0x41/0x90
[<ffffffff81f5777f>] setup_default_timer_irq+0x1e/0x20
[<ffffffff81f57798>] hpet_time_init+0x17/0x19
[<ffffffff81f5775a>] x86_late_time_init+0xa/0x11
[<ffffffff81f51e9b>] start_kernel+0x382/0x436
[<ffffffff81f51308>] x86_64_start_reservations+0x2a/0x2c
[<ffffffff81f51445>] x86_64_start_kernel+0x13b/0x14a
to a HARDIRQ-irq-unsafe lock:
(list_lock){+.+...}
... which became HARDIRQ-irq-unsafe at:
... [<ffffffff810c9d8e>] __lock_acquire+0x7ee/0xe00
[<ffffffff810cb039>] lock_acquire+0x119/0x220
[<ffffffff8167294d>] _raw_spin_lock+0x3d/0x80
[<ffffffffa04eba42>] vmd_msi_init+0x72/0x150 [vmd]
[<ffffffff810e8597>] msi_domain_alloc+0xb7/0x140
[<ffffffff810e6b10>] irq_domain_alloc_irqs_recursive+0x40/0xa0
[<ffffffff810e6cea>] __irq_domain_alloc_irqs+0x14a/0x330
[<ffffffff810e8a8c>] msi_domain_alloc_irqs+0x8c/0x1d0
[<ffffffff813ca4e3>] pci_msi_setup_msi_irqs+0x43/0x70
[<ffffffff813cada1>] pci_enable_msi_range+0x131/0x280
[<ffffffff813bf5e0>] pcie_port_device_register+0x320/0x4e0
[<ffffffff813bf9a4>] pcie_portdrv_probe+0x34/0x60
[<ffffffff813b0e85>] local_pci_probe+0x45/0xa0
[<ffffffff813b226b>] pci_device_probe+0xdb/0x130
[<ffffffff8149e3cc>] driver_probe_device+0x22c/0x440
[<ffffffff8149e774>] __device_attach_driver+0x94/0x110
[<ffffffff8149bfad>] bus_for_each_drv+0x5d/0x90
[<ffffffff8149e030>] __device_attach+0xc0/0x140
[<ffffffff8149e0c0>] device_attach+0x10/0x20
[<ffffffff813a77f7>] pci_bus_add_device+0x47/0x90
[<ffffffff813a7879>] pci_bus_add_devices+0x39/0x70
[<ffffffff813aaba7>] pci_rescan_bus+0x27/0x30
[<ffffffffa04ec1af>] vmd_probe+0x68f/0x76c [vmd]
[<ffffffff813b0e85>] local_pci_probe+0x45/0xa0
[<ffffffff81088064>] work_for_cpu_fn+0x14/0x20
[<ffffffff8108c244>] process_one_work+0x1f4/0x740
[<ffffffff8108c9c6>] worker_thread+0x236/0x4f0
[<ffffffff810935c2>] kthread+0xf2/0x110
[<ffffffff816738f2>] ret_from_fork+0x22/0x50
other info that might help us debug this:
Possible interrupt unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(list_lock);
                               local_irq_disable();
                               lock(&irq_desc_lock_class);
                               lock(list_lock);
  <Interrupt>
    lock(&irq_desc_lock_class);
*** DEADLOCK ***
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Keith Busch <keith.busch@intel.com>
Pull driver core fixes from Greg KH:
"Here are a small number of debugfs, ISA, and one driver core fix for
4.7-rc4.
All of these resolve reported issues. The ISA ones have spent the
least amount of time in linux-next, sorry about that, I didn't realize
they were regressions that needed to get in now (thanks to Thorsten
for the prodding!) but they do all pass the 0-day bot tests. The
others have been in linux-next for a while now.
Full details about them are in the shortlog below"
* tag 'driver-core-4.7-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core:
isa: Dummy isa_register_driver should return error code
isa: Call isa_bus_init before dependent ISA bus drivers register
watchdog: ebc-c384_wdt: Allow build for X86_64
iio: stx104: Allow build for X86_64
gpio: Allow PC/104 devices on X86_64
isa: Allow ISA-style drivers on modern systems
base: make module_create_drivers_dir race-free
debugfs: open_proxy_open(): avoid double fops release
debugfs: full_proxy_open(): free proxy on ->open() failure
kernel/kcov: unproxify debugfs file's fops
Several modern devices, such as PC/104 cards, are expected to run on
modern systems via an ISA bus interface. Since ISA is a legacy interface
for most modern architectures, ISA support should remain disabled in
general. Support for ISA-style drivers should be enabled on a per driver
basis.
To allow ISA-style drivers on modern systems, this patch introduces the
ISA_BUS_API and ISA_BUS Kconfig options. The ISA bus driver will now
build conditionally on the ISA_BUS_API Kconfig option, which defaults to
the legacy ISA Kconfig option. The ISA_BUS Kconfig option allows the
ISA_BUS_API Kconfig option to be selected on architectures which do not
enable ISA (e.g. X86_64).
The ISA_BUS Kconfig option is currently only implemented for X86
architectures. Other architectures may have their own ISA_BUS Kconfig
options added as required.
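The resulting Kconfig shape is roughly (a sketch; exact file locations
elided):

    config ISA_BUS_API
            def_bool ISA

    # on X86, selectable even when ISA itself is off:
    config ISA_BUS
            bool "ISA-style bus support on modern systems" if EXPERT
            select ISA_BUS_API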
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: William Breathitt Gray <vilhelm.gray@gmail.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pull KVM fixes from Paolo Bonzini:
- miscellaneous fixes for MIPS and s390
- one new kvm_stat for s390
- correctly disable VT-d posted interrupts with the rest of posted
interrupts
- "make randconfig" fix for x86 AMD
- off-by-one in irq route check (the "good" kind that errors out a bit
too early!)
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
kvm: vmx: check apicv is active before using VT-d posted interrupt
kvm: Fix irq route entries exceeding KVM_MAX_IRQ_ROUTES
kvm: svm: Do not support AVIC if not CONFIG_X86_LOCAL_APIC
kvm: svm: Fix implicit declaration for __default_cpu_present_to_apicid()
MIPS: KVM: Fix CACHE triggered exception emulation
MIPS: KVM: Don't unwind PC when emulating CACHE
MIPS: KVM: Include bit 31 in segment matches
MIPS: KVM: Fix modular KVM under QEMU
KVM: s390: Add stats for PEI events
KVM: s390: ignore IBC if zero
Hook the VMX preemption timer to the "hv timer" functionality added
by the previous patch. This includes checking whether the feature is
supported and whether it is broken on the CPU, the hooks to set up and
clean up the VMX preemption timer, arming the timer on vmentry,
and handling the vmexit.
A module parameter controls whether the VMX preemption timer should be
used.
Signed-off-by: Yunhong Jiang <yunhong.jiang@intel.com>
[Move hv_deadline_tsc to struct vcpu_vmx, use -1 as the "unset" value.
Put all VMX bits here. Enable it by default #yolo. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Prepare to switch from the preemption timer to an hrtimer in
vmx_pre/post_block. The current functions are only for posted
interrupts; rename them accordingly.
Signed-off-by: Yunhong Jiang <yunhong.jiang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The VMX preemption timer can be used to virtualize the TSC deadline timer.
The VMX preemption timer is armed when the vCPU is running, and a VMExit
will happen if the virtual TSC deadline timer expires.
When the vCPU thread is blocked because of HLT, KVM will switch to use
an hrtimer, and then go back to the VMX preemption timer when the vCPU
thread is unblocked.
This solution avoids the complexity of the OS's hrtimer system and
the host timer interrupt handling cost, replacing them with a little
math (for guest->host TSC and host TSC->preemption timer conversion)
and a cheaper VMexit. This benefits latency on isolated pCPUs.
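The math is small indeed; roughly (a sketch using era-appropriate KVM
helpers, details hedged):

    /* convert the guest's TSC deadline into a preemption timer count */
    u64 tscl = rdtsc();
    u64 guest_tscl = kvm_read_l1_tsc(vcpu, tscl);  /* guest view of "now" */
    u64 delta_tsc = max(guest_deadline_tsc, guest_tscl) - guest_tscl;

    /* the preemption timer ticks at the host TSC rate shifted right by a
     * CPU-specific factor, so the delta is scaled before being armed */
    vmcs_write32(VMX_PREEMPTION_TIMER_VALUE,
                 delta_tsc >> cpu_preemption_timer_multi);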
[A word about performance... Yunhong reported a 30% reduction in average
latency from cyclictest. I made a similar test with tscdeadline_latency
from kvm-unit-tests, and measured
- ~20 clock cycles loss (out of ~3200, so less than 1% but still
statistically significant) in the worst case where the test halts
just after programming the TSC deadline timer
- ~800 clock cycles gain (25% reduction in latency) in the best case
where the test busy waits.
I removed the VMX bits from Yunhong's patch, to concentrate them in the
next patch - Paolo]
Signed-off-by: Yunhong Jiang <yunhong.jiang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The function that starts the TSC deadline timer virtualization will also
be used by the pre_block hook when we use the preemption timer; split it
out into a separate function. No logic changes.
Signed-off-by: Yunhong Jiang <yunhong.jiang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Commit 8221c13700 ("svm: Manage vcpu load/unload when enable AVIC")
introduces a build error due to an implicit function declaration
when CONFIG_X86_32 is defined and CONFIG_X86_LOCAL_APIC is not
(as reported by the Kbuild test robot, i386-randconfig-x0-06121009).
So, this patch introduces a kvm_cpu_get_apicid() wrapper
around __default_cpu_present_to_apicid(), with additional
handling if CONFIG_X86_LOCAL_APIC is not defined.
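The wrapper amounts to (a sketch, hedged):

    static inline int kvm_cpu_get_apicid(int mps_cpu)
    {
    #ifdef CONFIG_X86_LOCAL_APIC
        return __default_cpu_present_to_apicid(mps_cpu);
    #else
        WARN_ON_ONCE(1);   /* should be unreachable without a local APIC */
        return BAD_APICID;
    #endif
    }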
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Fixes: commit 8221c13700 ("svm: Manage vcpu load/unload when enable AVIC")
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The new created_vcpus field makes it possible to avoid the race between
irqchip and VCPU creation in a much nicer way; just check under kvm->lock
whether a VCPU has already been created.
We can then remove KVM_APIC_ARCHITECTURE too, because at this point the
symbol is only governing the default definition of kvm_vcpu_compatible.
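In essence, irqchip creation can now do (a sketch; the error code is
illustrative):

    mutex_lock(&kvm->lock);
    if (kvm->created_vcpus) {
        r = -EEXIST;    /* illustrative: a VCPU already exists, too late */
        goto out_unlock;
    }
    /* safe: no VCPU can be created while kvm->lock is held */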
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This moves seccomp after ptrace on x86 so that seccomp can catch changes
made by ptrace. Emulation should skip the rest of processing too.
We can get rid of test_thread_flag because there's no longer any
opportunity for seccomp to mess with ptrace state before invoking
ptrace.
Suggested-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: x86@kernel.org
Cc: Andy Lutomirski <luto@kernel.org>
I added two-phase syscall entry work back when the entry slow path
was very slow. Nowadays, the entry slow path is fast and two-phase
entry work serves no purpose. Remove it.
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Currently, if arch code wants to supply seccomp_data directly to
seccomp (which is generally much faster than having seccomp do it
using the syscall_get_xyz() API), it has to use the two-phase
seccomp hooks. Add it to the easy hooks, too.
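Sketch of the resulting hook (treat the helper name as hypothetical):

    int secure_computing(const struct seccomp_data *sd);

    /* arch entry code with cheap register access can pre-fill the data;
     * passing NULL keeps the old fetch-it-via-syscall_get_xyz() path */
    struct seccomp_data sd;
    populate_seccomp_data(&sd);    /* hypothetical helper */
    if (secure_computing(&sd))
        return -1;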
Cc: linux-arch@vger.kernel.org
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
On the Intel Merrifield platform several PCI devices have a bogus
configuration, i.e. IRQ0 had been assigned to a few of them. These are the
PCI root bridge, eMMC0, HS UART common registers, PWM, and HDMI. The actual
interrupt line can be allocated to only one device exclusively, in our case
to eMMC0; the rest must cope without it, and the known drivers for them
basically do not use an interrupt line at all.
Rework the IRQ0 workaround, which was previously done to avoid a conflict
between eMMC0 and the HS UART common registers, to behave differently based
on the device in question: allocate the interrupt line to eMMC0, but
silently skip interrupt allocation for the rest, except the HS UART common
registers, which are not used anyway. With this rework the IOSF MBI driver
in particular can be used.
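Roughly, the IRQ enable path becomes (a sketch; the device ID macro is
illustrative):

    if (dev->irq == 0 && dev->device != PCI_DEVICE_ID_INTEL_MRFLD_MMC)
        return 0;    /* bogus IRQ0: silently skip interrupt allocation */
    /* eMMC0 falls through and gets the line allocated as before */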
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Fixes: 39d9b77b8d ("x86/pci/intel_mid_pci: Work around for IRQ0 assignment")
Link: http://lkml.kernel.org/r/1465842481-136852-1-git-send-email-andriy.shevchenko@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
There were at least 3 features added to the __SI_FAULT area of the
siginfo struct that did not make it to the compat siginfo:
1. The si_addr_lsb used in SIGBUS signals sent for machine checks
2. The upper/lower bounds for MPX SIGSEGV faults
3. The protection key for pkey faults
There was also some turmoil when I was attempting to add the pkey
field, because it needs to be a fixed size on 32-bit and 64-bit and
must not have any alignment constraints.
This patch adds some compile-time checks to the compat code to
make it harder to screw this up. Basically, the checks are
supposed to trip any time someone changes the siginfo structure.
That sounds bad, but it's what we want. If someone changes
siginfo, we want them to also be _forced_ to go look at the
compat code.
The details are in the comments.
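The checks are of this flavor (a sketch; the real list and constants
live in arch/x86/kernel/signal_compat.c):

    static inline void signal_compat_build_tests(void)
    {
        /* a new si_code forces a human to revisit the compat copy */
        BUILD_BUG_ON(NSIGSEGV != 4);
        /* layout changes to either struct trip the build, too */
        BUILD_BUG_ON(sizeof(compat_siginfo_t) != 128);
    }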
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Hansen <dave@sr71.net>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: linux-edac@vger.kernel.org
Link: http://lkml.kernel.org/r/20160608172534.C73DAFC3@viggo.jf.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The 32-bit siginfo is a different binary format than the 64-bit
one. So, when running 32-bit binaries on 64-bit kernels, we have
to convert the kernel's 64-bit version to a 32-bit version that
userspace can grok.
We've added a few features to siginfo over the past few years and
neglected to add them to arch/x86/kernel/signal_compat.c:
1. The si_addr_lsb used in SIGBUS signals sent for machine checks
2. The upper/lower bounds for MPX SIGSEGV faults
3. The protection key for pkey faults
I caught this with some protection keys unit tests and realized
it affected a few more features.
This was tested only with my protection keys patch that looks
for a proper value in si_pkey. I didn't actually test the machine
check or MPX code.
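The conversion gains lines of this shape in its __SI_FAULT branch (a
sketch, hedged):

    to->si_addr = ptr_to_compat(from->si_addr);
    to->si_addr_lsb = from->si_addr_lsb;            /* 1. machine checks  */
    to->si_lower = ptr_to_compat(from->si_lower);   /* 2. MPX bounds      */
    to->si_upper = ptr_to_compat(from->si_upper);
    to->si_pkey = from->si_pkey;                    /* 3. protection keys */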
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Hansen <dave@sr71.net>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: linux-edac@vger.kernel.org
Link: http://lkml.kernel.org/r/20160608172533.F8F05637@viggo.jf.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Fix kprobe_fault_handler() to clear the TF (trap flag) bit of
the flags register in the case of a fault fixup on single-stepping.
If we put a kprobe on an instruction which can cause a
page fault (e.g. the actual mov instructions in copy_user_*),
that fault happens on the single-stepping buffer. In this
case, kprobes resets the running instance so that the CPU can
retry execution at the original ip address.
However, the current code forgets to reset the TF bit. Since this
fault happens with the TF bit set to enable single-stepping,
the retry causes a debug exception which kprobes can not
handle, because it has already reset itself.
On most x86-64 platforms, this can easily be reproduced
by using the kprobe tracer. E.g.:
# cd /sys/kernel/debug/tracing
# echo p copy_user_enhanced_fast_string+5 > kprobe_events
# echo 1 > events/kprobes/enable
And you'll see a kernel panic in do_debug(), since the debug
trap is not handled by kprobes.
To fix this problem, we just need to clear the TF bit when
resetting the running kprobe.
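The core of the fix in kprobe_fault_handler() (a sketch, hedged):

    regs->ip = (unsigned long)cur->addr;     /* retry the original insn */
    regs->flags &= ~X86_EFLAGS_TF;           /* TF was set only for the step */
    regs->flags |= kcb->kprobe_old_flags;    /* restore pre-kprobe flags */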
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Reviewed-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: systemtap@sourceware.org
Cc: stable@vger.kernel.org # All the way back to ancient kernels
Link: http://lkml.kernel.org/r/20160611140648.25885.37482.stgit@devbox
[ Updated the comments. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The NMI watchdog uses either the fixed cycles counter or a generic cycles
counter. This causes a lot of conflicts with users of the PMU who want
to run a full group including the fixed cycles counter, for example
the --topdown support recently added to perf stat. The code needs to
fall back to not using groups, which can cause measurement inaccuracy
due to multiplexing errors.
This patch switches the NMI watchdog to use reference cycles
on Intel systems. This is actually more accurate than cycles,
because cycles can tick faster than the measured CPU frequency
due to Turbo mode.
Reference cycles always tick at their own frequency, or slower when
the system is idling. That means the NMI watchdog can never
expire too early, unlike with cycles.
The reference cycles tick roughly at the frequency of the TSC,
so the same period computation can be used.
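One way to express the change (a sketch; the real patch routes this
through an arch hook, attribute set hedged):

    /* the watchdog's perf event on Intel */
    static struct perf_event_attr wd_hw_attr = {
        .type     = PERF_TYPE_HARDWARE,
        .config   = PERF_COUNT_HW_REF_CPU_CYCLES, /* was ..._HW_CPU_CYCLES */
        .size     = sizeof(struct perf_event_attr),
        .pinned   = 1,
        .disabled = 1,
    };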
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: acme@kernel.org
Cc: jolsa@kernel.org
Link: http://lkml.kernel.org/r/1465478079-19993-1-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>