Commit Graph

6659 Commits

Frederic Weisbecker
5fa10b28e5 hw-breakpoints: Use struct perf_event_attr to define user breakpoints
In-kernel user breakpoints are created using functions in which
we pass breakpoint parameters as individual variables: address,
length and type.

Although it fits well for x86, this just does not scale across
architectures that may support this API later, as these may have
more or different needs. Pass in a perf_event_attr structure
instead because it is meant to evolve as much as possible into
a generic hardware breakpoint parameter structure.
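
As an illustrative sketch only (this is not the actual patch; the
register_user_breakpoint() helper and the surrounding names are
hypothetical stand-ins), describing a breakpoint through the attribute
structure looks roughly like this:

  #include <linux/perf_event.h>
  #include <linux/hw_breakpoint.h>
  #include <linux/sched.h>
  #include <linux/string.h>

  /* Hypothetical helper; the real registration function differs. */
  extern struct perf_event *register_user_breakpoint(struct perf_event_attr *attr,
                                                     void *triggered,
                                                     struct task_struct *tsk);

  static struct perf_event *watch_writes(void *addr, void *triggered,
                                         struct task_struct *tsk)
  {
          struct perf_event_attr attr;

          memset(&attr, 0, sizeof(attr));
          attr.bp_addr = (unsigned long)addr;  /* address to watch  */
          attr.bp_len  = HW_BREAKPOINT_LEN_4;  /* watch 4 bytes     */
          attr.bp_type = HW_BREAKPOINT_W;      /* trigger on writes */

          return register_user_breakpoint(&attr, triggered, tsk);
  }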

Reported-by: K.Prasad <prasad@linux.vnet.ibm.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <1259294154-5197-1-git-send-regression-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-27 06:22:58 +01:00
Jack Steiner
918bc960dc x86: SGI UV: Map low MMR ranges
Explicitly mmap the UV chipset MMR address ranges used to
access blade-local registers. Although these same MMRs are also
mapped at higher addresses, the low range is more
convenient when accessing blade-local registers.

The low range addresses always alias to the local blade
regardless of the blade id.

Signed-off-by: Jack Steiner <steiner@sgi.com>
LKML-Reference: <20091125162018.GA25445@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-26 10:52:36 +01:00
Brian Gerst
8ec6993d9f x86, 64-bit: Set data segments to null after switching to 64-bit mode
This prevents kernel threads from inheriting non-null segment
selectors, and causing optimizations in __switch_to() to be
ineffective.

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Cc: Tim Blechmann <tim@klingt.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Jan Beulich <JBeulich@novell.com>
LKML-Reference: <1259165856-3512-1-git-send-email-brgerst@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-26 10:44:30 +01:00
Hidetoshi Seto
767df1bdd8 x86, mce: Add __cpuinit to hotplug callback functions
The mce_disable_cpu() and mce_reenable_cpu() are called only
from mce_cpu_callback(), which is marked __cpuinit.
So these functions can be __cpuinit too.
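
Schematically (signatures paraphrased, not the literal patch), the change
is just the section annotation:

  /* before: always kept in the kernel image */
  static void mce_disable_cpu(void *h);
  static void mce_reenable_cpu(void *h);

  /* after: placed in the CPU-init section, like their only caller
   * mce_cpu_callback(), so they can be discarded when not needed */
  static void __cpuinit mce_disable_cpu(void *h);
  static void __cpuinit mce_reenable_cpu(void *h);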

Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Cc: Andi Kleen <ak@linux.intel.com>
LKML-Reference: <4B0E3C4E.4090809@jp.fujitsu.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-26 10:29:41 +01:00
Mike Travis
9b3660a55a x86: Limit number of per cpu TSC sync messages
Limit the number of per cpu TSC sync messages by only printing
to the console if an error occurs, otherwise print as a DEBUG
message.

The info message "Skipping synchronization ..." is only printed
after the last cpu has booted.
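
A minimal sketch of the intended logging policy (variable names are
illustrative; pr_debug()/pr_info() stand in for the exact printk calls):

  if (sync_failed)
          pr_warning("TSC synchronization failed on CPU%d\n", cpu); /* always shown */
  else
          pr_debug("TSC synchronization passed on CPU%d\n", cpu);   /* DEBUG only   */

  if (cpu == last_cpu_to_boot)
          pr_info("Skipping synchronization checks as TSC is reliable.\n");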

Signed-off-by: Mike Travis <travis@sgi.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Roland Dreier <rdreier@cisco.com>
Cc: Randy Dunlap <rdunlap@xenotime.net>
Cc: Tejun Heo <tj@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Greg Kroah-Hartman <gregkh@suse.de>
Cc: Yinghai Lu <yhlu.kernel@gmail.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Cc: Jack Steiner <steiner@sgi.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <20091118002222.181053000@alcatraz.americas.sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-26 10:17:45 +01:00
Frederic Weisbecker
2c31b7958f x86/hw-breakpoints: Don't lose GE flag while disabling a breakpoint
When we schedule out a breakpoint from the cpu, we also
incidentally remove the "Global exact breakpoint" flag from the
breakpoint control register. This makes us lose the fine-grained
precision about the origin of the instructions that may trigger
breakpoint exceptions for the other breakpoints running on this
cpu.
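
To illustrate the DR7 layout issue (a sketch of the idea, not the exact
fix): each breakpoint slot i owns only its two enable bits, while bit 9,
the shared GE flag, must survive the disable.

  unsigned long dr7;

  get_debugreg(dr7, 7);
  dr7 &= ~(0x3UL << (i * 2));  /* clear only the L<i>/G<i> enable bits   */
  set_debugreg(dr7, 7);        /* bit 9 (global exact) stays as it was   */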

Reported-by: Prasad <prasad@linux.vnet.ibm.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <1259211878-6013-1-git-send-regression-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-26 09:29:22 +01:00
Frederic Weisbecker
605bfaee90 hw-breakpoints: Simplify error handling in breakpoint creation requests
This simplifies the error handling when we create a breakpoint.
We don't need to check the NULL return value corner case anymore
since we have improved perf_event_create_kernel_counter() to
always return an error code in the failure case.
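
The resulting pattern, roughly (the wrapper name below is hypothetical;
what matters is the error-pointer convention):

  struct perf_event *bp;

  bp = create_the_breakpoint_counter();   /* wraps perf_event_create_kernel_counter() */
  if (IS_ERR(bp))                         /* failures are ERR_PTR values, never NULL  */
          return PTR_ERR(bp);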

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Prasad <prasad@linux.vnet.ibm.com>
LKML-Reference: <1259210142-5714-3-git-send-regression-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-26 09:29:21 +01:00
Ingo Molnar
67f2de0bf9 x86: dumpstack, 64-bit: Disable preemption when walking the IRQ/exception stacks
This warning:

[  847.140022] rb_producer   D 0000000000000000  5928   519      2 0x00000000
[  847.203627] BUG: using smp_processor_id() in preemptible [00000000] code: khungtaskd/517
[  847.207360] caller is show_stack_log_lvl+0x2e/0x241
[  847.210364] Pid: 517, comm: khungtaskd Not tainted 2.6.32-rc8-tip+ #13761
[  847.213395] Call Trace:
[  847.215847]  [<ffffffff81413bde>] debug_smp_processor_id+0x1f0/0x20a
[  847.216809]  [<ffffffff81015eae>] show_stack_log_lvl+0x2e/0x241
[  847.220027]  [<ffffffff81018512>] show_stack+0x1c/0x1e
[  847.223365]  [<ffffffff8107b7db>] sched_show_task+0xe4/0xe9
[  847.226694]  [<ffffffff8112f21f>] check_hung_task+0x140/0x199
[  847.230261]  [<ffffffff8112f4a8>] check_hung_uninterruptible_tasks+0x1b7/0x20f
[  847.233371]  [<ffffffff8112f500>] ? watchdog+0x0/0x50
[  847.236683]  [<ffffffff8112f54e>] watchdog+0x4e/0x50
[  847.240034]  [<ffffffff810cee56>] kthread+0x97/0x9f
[  847.243372]  [<ffffffff81012aea>] child_rip+0xa/0x20
[  847.246690]  [<ffffffff81e43494>] ? restore_args+0x0/0x30
[  847.250019]  [<ffffffff81e43083>] ? _spin_lock+0xe/0x10
[  847.253351]  [<ffffffff810cedbf>] ? kthread+0x0/0x9f
[  847.256833]  [<ffffffff81012ae0>] ? child_rip+0x0/0x20

This happens because, on preempt-RCU, khungd calls show_stack() with
preemption enabled.

Make sure we are not preemptible while walking the IRQ and exception
stacks on 64-bit. (32-bit stack dumping is preemption safe.)
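
A minimal sketch of the fix pattern (the stack-walking helper name is a
placeholder):

  preempt_disable();                  /* stay on this CPU while ...            */
  walk_irq_and_exception_stacks();    /* ... reading its per-cpu stack bounds  */
  preempt_enable();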

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-26 08:29:10 +01:00
Ingo Molnar
b803090615 x86: dumpstack: Clean up the x86_stack_ids[][] initalization and other details
Make the initialization more readable, plus tidy up a few small
visual details as well.

No change in functionality.

LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-26 08:24:33 +01:00
Tejun Heo
28b4e0d86a x86: Rename global percpu symbol dr7 to cpu_dr7
Percpu symbols now occupy the same namespace as other global
symbols and, as such, short global symbols without a subsystem
prefix tend to collide with local variables.  The dr7 percpu
variable used by x86 was hit by this; rename it to cpu_dr7.

The rename also makes it more consistent with its fellow
cpu_debugreg percpu variable.
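
Schematically (declarations paraphrased):

  /* before: a short name that easily collides with a local "dr7" */
  DEFINE_PER_CPU(unsigned long, dr7);

  /* after: prefixed, matching its sibling cpu_debugreg */
  DEFINE_PER_CPU(unsigned long, cpu_dr7);

  /* users then read it as, e.g.: */
  unsigned long val = per_cpu(cpu_dr7, cpu);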

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
LKML-Reference: <20091125115856.GA17856@elte.hu>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
2009-11-25 14:30:04 +01:00
FUJITA Tomonori
273bee27fa x86: Fix iommu=soft boot option
iommu=soft boot option forces the kernel to use swiotlb.

( This has the side-effect of enabling the swiotlb over the
  GART if this boot option is provided. This is the desired
  behavior of the swiotlb boot option and works like that
  for all other hw-IOMMU drivers. )

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: yinghai@kernel.org
LKML-Reference: <20091125084611O.fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-25 10:12:51 +01:00
Lin Ming
2263576cfc ACPICA: Add post-order callback to acpi_walk_namespace
The existing interface only has a pre-order callback. This change
adds an additional parameter for a post-order callback which will
be more useful for bus scans. ACPICA BZ 779.

Also update the external calls to acpi_walk_namespace.

http://www.acpica.org/bugzilla/show_bug.cgi?id=779
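
A sketch of a caller updated for the extended prototype (callback names
are placeholders, and the argument order is as assumed here):

  acpi_status status;

  status = acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT,
                               ACPI_UINT32_MAX,
                               my_descending_cb,   /* pre-order callback  */
                               my_ascending_cb,    /* new: post-order     */
                               NULL,               /* context             */
                               NULL);              /* return_value        */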

Signed-off-by: Lin Ming <ming.m.lin@intel.com>
Signed-off-by: Bob Moore <robert.moore@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
2009-11-24 21:31:10 -05:00
Yinghai Lu
5bf65b9ba6 x86, mtrr: Fix sorting of mtrr after subtracting
In some cases we can coalesce MTRR entries after cleanup; this may
allow us to have more entries.  As such, introduce clean_sort_range()
to sort and coalesce the MTRR entries.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
LKML-Reference: <4B0BB9A3.5020908@kernel.org>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2009-11-24 13:06:16 -08:00
Thomas Renninger
e2f74f355e [ACPI/CPUFREQ] Introduce bios_limit per cpu cpufreq sysfs interface
This interface is mainly intended (and implemented) for ACPI _PPC BIOS
frequency limitations, but other cpufreq drivers can also use it for
similar use-cases.

Why is this needed:

Currently it's not obvious why cpufreq got limited.
People see cpufreq/scaling_max_freq reduced, but this could have
happened by:
  - any userspace prog writing to scaling_max_freq
  - thermal limitations
  - hardware (_PPC in the ACPI case) limitations

Therefore export bios_limit (in kHz) to:
  - Point out to the user that it's the BIOS (broken or intended) which
    limits the frequency
  - Export it as a sysfs interface for userspace progs.
    While this was a rarely used feature on laptops, more and more server
    implementations will appear providing "Green IT" features like allowing
    the service processor to limit the frequency. People want to know about
    HW/BIOS frequency limitations.

All ACPI P-state driven cpufreq drivers are covered with this patch:
  - powernow-k8
  - powernow-k7
  - acpi-cpufreq

Tested with a patched DSDT which limits the first two cores (_PPC returns 1)
via _PPC, exposed by bios_limit:
# echo 2200000 >cpu2/cpufreq/scaling_max_freq
# cat cpu*/cpufreq/scaling_max_freq
2600000
2600000
2200000
2200000
# #scaling_max_freq shows general user/thermal/BIOS limitations

# cat cpu*/cpufreq/bios_limit
2600000
2600000
2800000
2800000
# #bios_limit only shows the HW/BIOS limitation

CC: Pallipadi Venkatesh <venkatesh.pallipadi@intel.com>
CC: Len Brown <lenb@kernel.org>
CC: davej@codemonkey.org.uk
CC: linux@dominikbrodowski.net

Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Dave Jones <davej@redhat.com>
2009-11-24 13:33:34 -05:00
Rusty Russell
1cce76c2ac [CPUFREQ] use an enum for speedstep processor identification
The "unsigned int processor" everywhere confused Rusty, leading to
breakage when he passed in smp_processor_id().
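
A sketch of the idea (enumerator names and values are illustrative; the
real ones live in speedstep-lib.h):

  /* a distinct type makes passing a cpu number a visible type mismatch */
  enum speedstep_processor {
          SPEEDSTEP_CPU_PIII_C_EARLY = 1,
          SPEEDSTEP_CPU_PIII_C,
          SPEEDSTEP_CPU_PIII_T,
          SPEEDSTEP_CPU_P4M,
          /* ... */
  };

  unsigned int speedstep_get_frequency(enum speedstep_processor processor);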

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dave Jones <davej@redhat.com>
2009-11-24 13:33:34 -05:00
Krzysztof Helt
db2820dd54 [CPUFREQ] powernow-k6: set transition latency value so ondemand governor can be used
Set the transition latency to a value smaller than CPUFREQ_ETERNAL so
governors other than "performance" work (like the "ondemand" one).

The value is found in "AMD PowerNow! Technology Platform Design Guide for
Embedded Processors" dated December 2000 (AMD doc #24267A). The answer to
one of the FAQs on page 40 states that the suggested complete transition
period is 200 us.
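
In cpufreq terms that becomes, roughly (the field is specified in
nanoseconds):

  /* 200 us documented complete transition period, expressed in ns */
  policy->cpuinfo.transition_latency = 200 * 1000;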

Tested on K6-2+ CPU with K6-3 core (model 13, stepping 4).

Signed-off-by: Krzysztof Helt <krzysztof.h1@wp.pl>
Signed-off-by: Dave Jones <davej@redhat.com>
2009-11-24 13:33:33 -05:00
Rusty Russell
b8cbe7e82e [CPUFREQ] cpumask: don't put a cpumask on the stack in x86...cpufreq/powernow-k8.c
It's still mugging the current process's cpumask, but as comment in
1ff6e97f1d says, it's not a trivial fix.

So, at least we can use a cpumask_var_t to do the Wrong Thing the Right Way :)
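
Roughly the pattern being switched to (a sketch, assuming process context
where GFP_KERNEL is fine):

  cpumask_var_t saved_mask;

  if (!alloc_cpumask_var(&saved_mask, GFP_KERNEL))   /* off-stack if CPUMASK_OFFSTACK */
          return -ENOMEM;

  cpumask_copy(saved_mask, &current->cpus_allowed);  /* remember old affinity  */
  set_cpus_allowed_ptr(current, cpumask_of(cpu));    /* the "Wrong Thing" ...  */
  /* ... per-cpu MSR work here ... */
  set_cpus_allowed_ptr(current, saved_mask);         /* restore affinity       */
  free_cpumask_var(saved_mask);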

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
To: cpufreq@vger.kernel.org
Cc: Mark Langsdorf <mark.langsdorf@amd.com>
Signed-off-by: Dave Jones <davej@redhat.com>
2009-11-24 13:33:33 -05:00
Harald Welte
d77b819745 [CPUFREQ] Enable ACPI PDC handshake for VIA/Centaur CPUs
In commit 0de51088e6, we introduced the use of acpi-cpufreq on
VIA/Centaur CPUs by removing a vendor check for VENDOR_INTEL.
However, as it turns out, at least the Nano CPUs also need the PDC
(processor driver capabilities) handshake in order to activate the
methods required for acpi-cpufreq.

Since arch_acpi_processor_init_pdc() contains another vendor check for
Intel, the PDC is not initialized on VIA CPUs.  The resulting behavior
of a current mainline kernel on such systems is that acpi-cpufreq loads
and indicates CPU frequency changes, but the CPU stays at a single
frequency.

This trivial patch ensures that init_intel_pdc() is called on Intel and
VIA/Centaur CPUs alike.
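
Conceptually the check becomes something like this (a paraphrase, not the
literal hunk; the variable names are illustrative):

  /* allow the _PDC handshake on VIA/Centaur as well as on Intel */
  if (c->x86_vendor == X86_VENDOR_INTEL ||
      c->x86_vendor == X86_VENDOR_CENTAUR)
          init_intel_pdc(pr, c);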

Signed-off-by: Harald Welte <HaraldWelte@viatech.com>
Signed-off-by: Dave Jones <davej@redhat.com>
2009-11-24 13:33:32 -05:00
Stephane Eranian
1261a02a0c perf_events, x86: Fix validate_event bug
validate_event() was failing on valid event combinations. The
function was assuming that if x86_schedule_event() returned 0, it
meant an error. But x86_schedule_event() returns the counter index, and
0 is a perfectly valid value. An error is returned if the function
returns a negative value.

Furthermore, validate_event() was also failing for event groups
because the event->pmu was not set until after
hw_perf_event_init().
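
The corrected test, schematically (not the literal diff):

  ret = x86_schedule_event(cpuc, &fake_event);

  /* before: "if (!ret)" wrongly treated counter index 0 as a failure */
  if (ret < 0)                /* after: only negative values are errors */
          goto fail;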

Signed-off-by: Stephane Eranian <eranian@google.com>
Cc: peterz@infradead.org
Cc: paulus@samba.org
Cc: perfmon2-devel@lists.sourceforge.net
Cc: eranian@gmail.com
LKML-Reference: <4b0bdf36.1818d00a.07cc.25ae@mx.google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
--
 arch/x86/kernel/cpu/perf_event.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
2009-11-24 19:23:48 +01:00
Yinghai Lu
b24c2a925a x86: Move find_smp_config() earlier and avoid bootmem usage
Move the find_smp_config() call to before bootmem is initialized.
Use reserve_early() instead of reserve_bootmem() in it.

This simplifies the code, we only need to call find_smp_config()
once and can remove the now unneeded reserve parameter from
x86_init_mpparse::find_smp_config.

We thus also reduce x86's dependency on bootmem allocations.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
LKML-Reference: <4B0BB9F2.70907@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-24 12:10:51 +01:00
H. Peter Anvin
eb41c8be89 x86, platform: Change is_untracked_pat_range() to bool; cleanup init
- Change is_untracked_pat_range() to return bool.
- Clean up the initialization of is_untracked_pat_range() -- by default,
  we simply point it at is_ISA_range() directly.
- Move is_untracked_pat_range to the end of struct x86_platform, since
  it is the newest field.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jack Steiner <steiner@sgi.com>
LKML-Reference: <20091119202341.GA4420@sgi.com>
2009-11-23 17:09:59 -08:00
Borislav Petkov
27c13ecec4 x86, cpu: mv display_cacheinfo -> cpu_detect_cache_sizes
display_cacheinfo() doesn't display anything anymore and it is used to
detect CPU cache sizes. Rename it accordingly.

Signed-off-by: Borislav Petkov <petkovbb@gmail.com>
LKML-Reference: <20091121130145.GA31357@liondog.tnic>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2009-11-23 11:59:53 -08:00
Jack Steiner
fd12a0d69a x86: UV SGI: Don't track GRU space in PAT
GRU space is always mapped as WB in the page table. There is
no need to track the mappings in the PAT. This also eliminates
the "freeing invalid memtype" messages when the GRU space is
unmapped.

Signed-off-by: Jack Steiner <steiner@sgi.com>
LKML-Reference: <20091119202341.GA4420@sgi.com>
[ v2: fix build failure ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-23 19:47:42 +01:00
Dimitri Sivanich
581f202bcd x86: UV RTC: Always enable RTC clocksource
Always enable the RTC clocksource on UV systems.

Signed-off-by: Dimitri Sivanich <sivanich@sgi.com>
LKML-Reference: <20091120214826.GA20016@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-23 19:41:30 +01:00
Jan Beulich
0444c9bd0c x86: Tighten conditionals on MCE related statistics
irq_thermal_count is only maintained when X86_THERMAL_VECTOR is
enabled, and neither X86_THERMAL_VECTOR nor X86_MCE_THRESHOLD needs
extra wrapping in X86_MCE conditionals.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Cc: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Cc: Yong Wang <yong.y.wang@intel.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Borislav Petkov <borislav.petkov@amd.com>
Cc: Arjan van de Ven <arjan@infradead.org>
LKML-Reference: <4B06AFA902000078000211F8@vpn.id2.novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-23 19:40:06 +01:00
Cliff Wickman
e38e2af1c5 x86: SGI UV: Fix BAU initialization
A memory mapped register that affects the SGI UV Broadcast
Assist Unit's interrupt handling may sometimes be uninitialized.

Remove the condition on its initialization, as that condition
can be randomly satisfied by a hardware reset.

Signed-off-by: Cliff Wickman <cpw@sgi.com>
Cc: <stable@kernel.org>
LKML-Reference: <E1NBGB9-0005nU-Dp@eag09.americas.sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-23 19:12:50 +01:00
K.Prasad
ba6909b719 hw-breakpoint: Attribute authorship of hw-breakpoint related files
Attribute authorship to developers of hw-breakpoint related
files.

Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <20091123154713.GA5593@in.ibm.com>
[ v2: moved it to latest -tip ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-23 18:18:29 +01:00
Joerg Roedel
be83129771 x86/amd-iommu: attach devices to pre-allocated domains early
For some devices the ACPI table may define unity map
requirements which must be met when the IOMMU is enabled, so
we need to attach devices to their domains as early as
possible so that these mappings are in place when needed.
This patch assigns the domains right after they are
allocated. Otherwise, I/O page faults can occur before a
driver binds to a device while the BIOS is still using it.

Cc: stable@kernel.org
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-11-23 12:54:17 +01:00
Joerg Roedel
9f800de38b x86/amd-iommu: un__init iommu_setup_msi
This function may be called on the resume path and so cannot
be dropped after booting.

Cc: stable@kernel.org
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-11-23 12:45:25 +01:00
Ingo Molnar
6e3d8330ae perf events: Do not generate function trace entries in perf code
Decreases perf overhead when function tracing is enabled,
by about 50%.

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-23 10:19:20 +01:00
Yinghai Lu
d9c2d5ac6a x86, numa: Use near(er) online node instead of roundrobin for NUMA
CPU to node mapping is set via the following sequence:

 1. numa_init_array(): set up a round-robin mapping from each cpu to an
    online node.

 2. init_cpu_to_node(): set the mapping according to apicid_to_node[]
    (derived from the SRAT), but only handle nodes that are online, and
    leave cpus on nodes without RAM (i.e. not online) with the
    round-robin mapping.

 3. Later, srat_detect_node() for Intel/AMD uses the first_online node
    or a nearby node.

The problem is that setup_per_cpu_areas() is not called between 2 and 3,
so the per_cpu area for a cpu on a node with RAM may end up on a
different node, possibly two hops away.

So try to optimize this: add find_near_online_node() and call it from
init_cpu_to_node().
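
A rough sketch of what such a helper can look like (using the generic
node_distance()/for_each_online_node() helpers; not necessarily the
committed code):

  static int find_near_online_node(int node)
  {
          int n, dist, best = INT_MAX, best_node = -1;

          for_each_online_node(n) {
                  dist = node_distance(node, n);   /* SLIT/SRAT distance */
                  if (dist < best) {
                          best = dist;
                          best_node = n;
                  }
          }
          return best_node;
  }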

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: David Rientjes <rientjes@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
LKML-Reference: <4B07A739.3030104@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-23 10:06:24 +01:00
Yinghai Lu
37ef2a3029 x86: Re-get cfg_new in case reuse/move irq_desc
When irq_desc is moved, we need to make sure to use the right cfg_new.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
LKML-Reference: <4B07A739.3030104@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-23 09:56:05 +01:00
Yinghai Lu
e670761f12 x86: apic: Remove not needed #ifdef
Suresh already added that protection to dmar_table_init().

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
LKML-Reference: <4B07A739.3030104@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-23 09:54:15 +01:00
Yinghai Lu
44280733e7 x86: Change crash kernel to reserve via reserve_early()
Use find_e820_area()/reserve_early() instead.

-v2: address Eric's request to restore the original semantics:
     fail if the provided address can not be used.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Acked-by: Eric W. Biederman <ebiederm@xmission.com>
LKML-Reference: <4B09E2F9.7040403@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-23 09:09:23 +01:00
Ingo Molnar
96200591a3 Merge branch 'tracing/hw-breakpoints' into perf/core
Conflicts:
	arch/x86/kernel/kprobes.c
	kernel/trace/Makefile

Merge reason: hw-breakpoints perf integration is looking
              good in testing and in reviews, plus conflicts
              are mounting up - so merge & resolve.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-21 14:07:23 +01:00
David S. Miller
3505d1a9fd Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6
Conflicts:
	drivers/net/sfc/sfe4001.c
	drivers/net/wireless/libertas/cmd.c
	drivers/staging/Kconfig
	drivers/staging/Makefile
	drivers/staging/rtl8187se/Kconfig
	drivers/staging/rtl8192e/Kconfig
2009-11-18 22:19:03 -08:00
Jan Beulich
350f8f5631 x86: Eliminate redundant/contradicting cache line size config options
Rather than having X86_L1_CACHE_BYTES and X86_L1_CACHE_SHIFT
(with inconsistent defaults), just having the latter suffices as
the former can be easily calculated from it.

To be consistent, also change X86_INTERNODE_CACHE_BYTES to
X86_INTERNODE_CACHE_SHIFT, and set it to 7 (128 bytes) for NUMA
to account for last level cache line size (which here matters
more than L1 cache line size).

Finally, make sure the default value for X86_L1_CACHE_SHIFT,
when X86_GENERIC is selected, is being seen before that for the
individual CPU model options (other than on x86-64, where
GENERIC_CPU is part of the choice construct, X86_GENERIC is a
separate option on ix86).

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Acked-by: Ravikiran Thirumalai <kiran@scalex86.org>
Acked-by: Nick Piggin <npiggin@suse.de>
LKML-Reference: <4AFD5710020000780001F8F0@vpn.id2.novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-19 04:58:34 +01:00
Thomas Gleixner
070e5c3f99 x86: vmiclock: Fix printk format
clockevents.mult became u32. Fix the printk format.
    
Pointed-out-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2009-11-18 12:31:06 +01:00
Thomas Gleixner
6dbfe5a57d x86: Fixup last users of irq_chip->typename
The typename member of struct irq_chip was kept for migration purposes
and has been obsolete for more than two years. Fix up the leftovers.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2009-11-18 11:45:29 +01:00
Rusty Russell
8dca15e408 [CPUFREQ] speedstep-ich: fix error caused by 394122ab14
"[CPUFREQ] cpumask: avoid playing with cpus_allowed in speedstep-ich.c"
changed the code to mistakenly pass the current cpu as the "processor"
argument of speedstep_get_frequency(), whereas it should be the type of
the processor.

Addresses http://bugzilla.kernel.org/show_bug.cgi?id=14340

Based on a patch by Dave Mueller.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Dominik Brodowski <linux@brodo.de>
Reported-by: Dave Mueller <dave.mueller@gmx.ch>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dave Jones <davej@redhat.com>
2009-11-17 23:15:04 -05:00
John Villalovos
293afe44d7 [CPUFREQ] acpi-cpufreq: blacklist Intel 0f68: Fix HT detection and put in notification message
Removing the SMT/HT check, since the Errata doesn't mention
Hyper-Threading.

Adding in a printk, so that the user knows why acpi-cpufreq refuses to
load.  Also, once the system is blacklisted, don't repeat the checks.
This also causes the message to be printed only once, rather than for
each CPU.

Signed-off-by: John L. Villalovos <john.l.villalovos@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>
2009-11-17 23:15:03 -05:00
Roel Kluin
c53614ec17 [CPUFREQ] powernow-k8: Fix test in get_transition_latency()
The logical not (!) operator turns the value into a bool before the
comparison, so the test did not check what was intended.

Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: Dave Jones <davej@redhat.com>
2009-11-17 23:15:03 -05:00
Krzysztof Helt
f7f3cad060 [CPUFREQ] longhaul: select Longhaul version 2 for capable CPUs
There is a typo in the longhaul detection code, so only Longhaul v1 or Longhaul v3
is selected. Longhaul v2 is not selected even for CPUs which are capable of it.

Tested on a PCChips Giga Pro board. Frequency changes work, and Longhaul v2
detects that the board is not capable of changing CPU voltage.

Signed-off-by: Krzysztof Helt <krzysztof.h1@wp.pl>
Signed-off-by: Dave Jones <davej@redhat.com>
2009-11-17 23:15:03 -05:00
Yinghai Lu
508d85c2c6 x86: When cleaning MTRRs, do not fold WP into UC
The current MTRR code treats WP as a form of UC.  This really isn't
desirable behaviour, except possibly in the case of severe MTRR
shortage.  Disable this, to allow legitimate uses of WP to remain
unmolested.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
2009-11-17 10:26:53 -08:00
Lin Ming
0696b711e4 timekeeping: Fix clock_gettime vsyscall time warp
Since commit 0a544198 "timekeeping: Move NTP adjusted clock multiplier
to struct timekeeper" the clock multiplier of vsyscall is updated with
the unmodified clock multiplier of the clock source and not with the
NTP adjusted multiplier of the timekeeper.

This causes user space observable time warps:
new CLOCK-warp maximum: 120 nsecs,  00000025c337c537 -> 00000025c337c4bf

Add a new argument "mult" to update_vsyscall() and hand in the
timekeeping internal NTP adjusted multiplier.
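
Schematically, with the prototype as assumed here:

  /* the NTP-adjusted multiplier is now passed in explicitly instead of
   * being re-read (unadjusted) from the clocksource */
  void update_vsyscall(struct timespec *wall_time, struct clocksource *clock,
                       u32 mult);

  /* caller in the timekeeping core, roughly: */
  update_vsyscall(&xtime, timekeeper.clock, timekeeper.mult);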

Signed-off-by: Lin Ming <ming.m.lin@intel.com>
Cc: "Zhang Yanmin" <yanmin_zhang@linux.intel.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Tony Luck <tony.luck@intel.com>
LKML-Reference: <1258436990.17765.83.camel@minggr.sh.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2009-11-17 11:52:34 +01:00
Ingo Molnar
a7b63425a4 Merge branch 'perf/core' into perf/probes
Resolved merge conflict in tools/perf/Makefile

Merge reason: we want to queue up a dependent patch.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-17 10:17:47 +01:00
Andreas Herrmann
8cc2361bd0 x86: ucode-amd: Move family check to microcde_amd.c's init function
... to avoid uselessly trying to load firmware on systems with
unsupported AMD CPUs.

Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
Cc: Dmitry Adamushko <dmitry.adamushko@gmail.com>
Cc: Mike Travis <travis@sgi.com>
Cc: Tigran Aivazian <tigran@aivazian.fsnet.co.uk>
Cc: Borislav Petkov <borislav.petkov@amd.com>
Cc: Andreas Mohr <andi@lisas.de>
Cc: Jack Steiner <steiner@sgi.com>
LKML-Reference: <20091117070638.GA27691@alberich.amd.com>
[ v2: changed BUG_ON() to WARN_ON() ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-17 10:10:40 +01:00
Eric W. Biederman
bb9074ff58 Merge commit 'v2.6.32-rc7'
Resolve the conflict between v2.6.32-rc7 where dn_def_dev_handler
gets a small bug fix and the sysctl tree where I am removing all
sysctl strategy routines.
2009-11-17 01:01:34 -08:00
Ingo Molnar
123bf0e2ed x86: gart: Clean up the code a bit
Clean up various small stylistic details in the GART code. No
functionality changed.

Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: muli@il.ibm.com
Cc: joerg.roedel@amd.com
LKML-Reference: <1258287594-8777-2-git-send-email-fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-17 07:57:00 +01:00
FUJITA Tomonori
1f7564ca83 x86: Calgary: Remove unnecessary DMA_ERROR_CODE usage
This cleans up iommu_alloc() a bit and removes unnecessary
DMA_ERROR_CODE usage.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: muli@il.ibm.com
Cc: joerg.roedel@amd.com
LKML-Reference: <1258287594-8777-4-git-send-email-fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-17 07:53:21 +01:00