Commit Graph

Linus Torvalds
cf6ace16a3 Merge branch 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  signal: align __lock_task_sighand() irq disabling and RCU
  softirq,rcu: Inform RCU of irq_exit() activity
  sched: Add irq_{enter,exit}() to scheduler_ipi()
  rcu: protect __rcu_read_unlock() against scheduler-using irq handlers
  rcu: Streamline code produced by __rcu_read_unlock()
  rcu: Fix RCU_BOOST race handling current->rcu_read_unlock_special
  rcu: decrease rcu_report_exp_rnp coupling with scheduler
2011-07-20 15:56:25 -07:00
Philip Rakity
ca8e99b32e mmc: core: Set non-default Drive Strength via platform hook
A non-default drive strength cannot be set automatically.  It is a
function of the board design, so it can be set only by a board-specific
platform handler.  The platform handler needs to take the board design
into account, so pass the necessary information to the platform code.

For example:  The card and host controller may indicate they support HIGH
and LOW drive strength.  There is no way to know what should be chosen
without specific board knowledge.  Setting HIGH may lead to reflections
and setting LOW may not suffice.  There is no mechanism (like ethernet
duplex or speed pulses) to determine what should be done automatically.

If no platform handler is defined -- use the default value.
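
A board-specific handler might look roughly like the sketch below; the
hook name and the SD_DRIVER_TYPE defines used here are illustrative of
the idea, not a statement of the exact interface:

  /* The board knows its trace lengths; pick a stronger drive only
   * when both host and card advertise support for it. */
  static int my_board_select_drive_strength(struct mmc_card *card,
                                            unsigned int max_dtr,
                                            int host_drv, int card_drv)
  {
          if ((host_drv & SD_DRIVER_TYPE_A) &&
              (card_drv & SD_DRIVER_TYPE_A))
                  return 1;       /* drive strength type A */
          return 0;               /* keep the default (type B) */
  }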

Signed-off-by: Philip Rakity <prakity@marvell.com>
Reviewed-by: Arindam Nath <arindam.nath@amd.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:21:16 -04:00
Per Forlin
aa8b683a7d mmc: core: add non-blocking mmc request function
Previously there has been only one function, mmc_wait_for_req(),
to start and wait for a request. This patch adds:

 * mmc_start_req() - starts a request without waiting.
   If there is an ongoing request, wait for it to complete,
   then start the new one and return, without waiting for
   the new command to complete.

This patch also adds two new function members to struct mmc_host_ops
that are called only from core.c:

 * pre_req - asks the host driver to prepare for the next job
 * post_req - asks the host driver to clean up after a completed job

The intention is to use pre_req() and post_req() to do cache maintenance
while a request is active. pre_req() can be called while a request is
active to minimize the latency of starting the next job. post_req() can
be used after the next job is started to clean up the completed request.
This minimizes the host driver's request-end latency. post_req() is
typically used before ending the block request and handing the buffer
over to the block layer.

Add a host-private member in mmc_data to be used by pre_req to mark the
data. The host driver will then check this mark to see if the data is
prepared or not.
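
A sketch of the intended issue loop from the caller's side;
next_block_areq() and finish_block_areq() stand in for the block-layer
glue and are not part of this patch:

  static void example_issue_loop(struct mmc_host *host)
  {
          struct mmc_async_req *prev, *next;
          int err;

          while ((next = next_block_areq()) != NULL) {
                  /* waits for the previous request (if any), starts
                   * next, and returns the completed one without
                   * waiting for next itself to complete */
                  prev = mmc_start_req(host, next, &err);
                  if (prev)
                          finish_block_areq(prev, err);
          }
  }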

Signed-off-by: Per Forlin <per.forlin@linaro.org>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Venkatraman S <svenkatr@ti.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:21:10 -04:00
James Hogan
03e8cb534e mmc: dw_mmc: fix stop when fallen back to PIO
There are several situations when dw_mci_submit_data_dma() decides to
fall back to PIO mode instead of using DMA, due to a short (to avoid
overhead) or "complex" (e.g. with unaligned buffers) transaction, even
though host->use_dma is set. However, dw_mci_stop_dma() decides whether
to stop DMA or set the EVENT_XFER_COMPLETE event based on host->use_dma,
so when falling back to PIO mode, data timeout errors get missed and the
driver locks up.

Therefore add host->using_dma to indicate whether the current
transaction is using DMA, and adjust dw_mci_stop_dma() to use that
instead.
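
In essence (simplified sketch):

  static void dw_mci_stop_dma(struct dw_mci *host)
  {
          if (host->using_dma) {  /* was: host->use_dma */
                  host->dma_ops->stop(host);
                  host->dma_ops->cleanup(host);
          } else {
                  /* transfer is PIO: signal completion instead */
                  set_bit(EVENT_XFER_COMPLETE, &host->pending_events);
          }
  }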

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Will Newton <will.newton@imgtec.com>
Tested-by: Jaehoon Chung <jh80.chung@samsung.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:21:05 -04:00
Adrian Hunter
e056a1b5b6 mmc: queue: let host controllers specify maximum discard timeout
Some host controllers will not operate without a hardware
timeout that is limited in value.  However, large discards
require large timeouts, so there needs to be a way to
specify the maximum discard size.

A host controller driver may now specify the maximum discard
timeout possible so that max_discard_sectors can be calculated.

However, for eMMC when the High Capacity Erase Group Size
is not in use, the timeout calculation depends on the clock
rate, which may change.  In that case the Preferred Erase Size
is used instead.
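
For example, a driver whose controller has a 27-bit timeout counter
might report its limit roughly like this (struct my_host and its
timeout_clk field are illustrative, not part of this patch):

  static void my_host_report_limits(struct my_host *host,
                                    struct mmc_host *mmc)
  {
          u64 max_cycles = (1ULL << 27) - 1;      /* hw counter limit */

          /* maximum realizable timeout in ms; the core derives
           * max_discard_sectors from this and the card's
           * per-erase-group timeout */
          mmc->max_discard_to = div_u64(max_cycles * 1000,
                                        host->timeout_clk);
  }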

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:21:03 -04:00
James Hogan
34b664a20e mmc: dw_mmc: handle unaligned buffers and sizes
Update functions for PIO pushing and pulling data to and from the FIFO
so that they can handle unaligned output buffers and unaligned buffer
lengths. This makes more of the tests in mmc_test pass.

Unaligned lengths in pulls are handled by reading the full FIFO item,
and storing the remaining bytes in a small internal buffer (part_buf).
The next data pull will copy data out of this buffer first before
accessing the FIFO again. Similarly, for pushes the final bytes that
don't fill a FIFO item are stored in the part_buf (or sent anyway if
it's the last transfer), and then the part_buf is included at the
beginning of the next buffer pushed.

Unaligned buffers in pulls are handled specially if the architecture
cannot do efficient unaligned accesses, by reading FIFO items into an
aligned local buffer and memcpy'ing them into the output buffer, again
storing any remaining bytes in the internal buffer. Similarly, for
pushes, the buffer is memcpy'd into an aligned local buffer and then
written to the FIFO.
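
A simplified sketch of the 32-bit pull path; pull_part_bytes() and
store_part_bytes() are illustrative names for the part_buf handling,
and the real code also covers pushes, other FIFO widths, and the
unaligned-buffer variant:

  static void example_pull_data32(struct dw_mci *host, u8 *buf, int cnt)
  {
          /* first drain bytes stashed by the previous pull */
          int done = pull_part_bytes(host, buf, cnt);

          buf += done;
          cnt -= done;

          while (cnt >= 4) {
                  u32 item = mci_readl(host, DATA);

                  memcpy(buf, &item, 4);  /* safe for unaligned buf */
                  buf += 4;
                  cnt -= 4;
          }
          if (cnt > 0) {
                  /* partial tail: read a whole FIFO item, keep the
                   * remainder in part_buf for the next call */
                  u32 item = mci_readl(host, DATA);

                  memcpy(buf, &item, cnt);
                  store_part_bytes(host, (u8 *)&item + cnt, 4 - cnt);
          }
  }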

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Will Newton <will.newton@imgtec.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:21:00 -04:00
James Hogan
b86d825323 mmc: dw_mmc: don't hard code fifo depth, fix usage
The FIFO_DEPTH hardware configuration parameter can be found from the
power-on value of RX_WMark in the FIFOTH register. This is used to
initialise the watermarks, but when calculating the number of free FIFO
spaces a preprocessor definition is used which is hard-coded to 32.

Fix reading the value out of FIFOTH (the default value of the RX_WMark
field is FIFO_DEPTH-1, not FIFO_DEPTH). Allow the fifo depth to be
overridden by platform data (since a bootloader may have changed FIFOTH,
making auto-detection unreliable). Store the fifo_depth for later use.
Also fix the calculation of the number of free bytes in the fifo to
include the fifo depth in the left shift by the data shift, since the
fifo depth is measured in FIFO items, not bytes.
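
The detection then looks roughly like this in the probe path (sketch):

  /* power-on RX_WMark (FIFOTH[27:16]) resets to FIFO_DEPTH - 1 */
  if (host->pdata->fifo_depth) {
          /* a bootloader may have rewritten FIFOTH, so let the
           * platform data override auto-detection */
          host->fifo_depth = host->pdata->fifo_depth;
  } else {
          u32 fifoth = mci_readl(host, FIFOTH);

          host->fifo_depth = 1 + ((fifoth >> 16) & 0x7ff);
  }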

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Will Newton <will.newton@imgtec.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:20:59 -04:00
James Hogan
1791b13ea4 mmc: dw_mmc: convert card tasklet to workqueue
Convert the card insert/remove tasklet to a workqueue, and call the
platform-specific setpower callback without the spinlock held. This
means neither the setpower nor the get_cd callback is called from atomic
context, which allows them to sleep.
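
The shape of the conversion, sketched (details elided):

  static void dw_mci_work_routine_card(struct work_struct *work)
  {
          struct dw_mci *host = container_of(work, struct dw_mci,
                                             card_work);
          int present = 0;

          spin_lock(&host->lock);
          /* update the slot's card-present state (elided) */
          spin_unlock(&host->lock);

          /* process context, spinlock dropped: callbacks may sleep */
          if (host->pdata->setpower)
                  host->pdata->setpower(0 /* slot id */, present);
  }

  /* in probe, replacing tasklet_init(&host->card_tasklet, ...): */
  INIT_WORK(&host->card_work, dw_mci_work_routine_card);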

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Will Newton <will.newton@imgtec.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:20:58 -04:00
Simon Horman
973ed3af1a mmc: sdhi: Add write16_hook
Some controllers require waiting for the bus to become idle
before writing to some registers. I have implemented this
by adding a hook to sd_ctrl_write16() and implementing
that hook for SDHI to wait for the bus to become idle.
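
i.e., roughly:

  static void sd_ctrl_write16(struct tmio_mmc_host *host, int addr,
                              u16 val)
  {
          /* a non-zero return from the hook means "skip the write" */
          if (host->pdata->write16_hook &&
              host->pdata->write16_hook(host, addr))
                  return;
          writew(val, host->ctl + (addr << host->bus_shift));
  }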

Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Cc: Magnus Damm <magnus.damm@gmail.com>
Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:20:57 -04:00
Simon Horman
95c7348d94 mmc: tmio: name 0xd8 as CTL_DMA_ENABLE
This reflects at least the current usage of this register
and I think it improves the readability of the code ever so slightly.
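
That is:

  #define CTL_DMA_ENABLE  0xd8    /* was a bare 0xd8 at the call sites */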

Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Cc: Magnus Damm <magnus.damm@gmail.com>
Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:20:55 -04:00
Russell King - ARM Linux
0a2d4048a2 mmc: block: allow get_card_status() to return error status
If the MMC_SEND_STATUS command is not successful, we should not return
a zero status word, but instead allow the caller to know positively
that an error occurred.

Convert the open-coded get_card_status() to use the helper function,
and provide definitions for the card state field.
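
The helper then has roughly this shape:

  /* return an error code, and the status word via *status, so the
   * caller can tell a failed command from a genuinely zero status */
  static int get_card_status(struct mmc_card *card, u32 *status,
                             int retries)
  {
          struct mmc_command cmd = {0};
          int err;

          cmd.opcode = MMC_SEND_STATUS;
          if (!mmc_host_is_spi(card->host))
                  cmd.arg = card->rca << 16;
          cmd.flags = MMC_RSP_SPI_R2 | MMC_RSP_R1 | MMC_CMD_AC;

          err = mmc_wait_for_cmd(card->host, &cmd, retries);
          if (!err)
                  *status = cmd.resp[0];
          return err;
  }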

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Pawel Moll <pawel.moll@arm.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:20:54 -04:00
Zhangfei Gao
bfed345edf mmc: sdhci-pxa: move platform data to include/linux/platform_data
As suggested by Arnd, move platform data to include/linux/platform_data
in order to improve build coverage for the driver.

Signed-off-by: Zhangfei Gao <zhangfei.gao@marvell.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:20:52 -04:00
Robert P. J. Day
100e918610 mmc: Standardize header file inclusion checks.
Standardize the checks for multiple MMC header file inclusion,
including adding comments to terminating #endif's, and fixing
one incorrect comment.
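
The standardized pattern being:

  #ifndef LINUX_MMC_CARD_H
  #define LINUX_MMC_CARD_H

  /* declarations */

  #endif /* LINUX_MMC_CARD_H */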

Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:20:48 -04:00
Shawn Guo
94cc6a8656 mmc: sdhci: merge two sdhci-pltfm.h into one
The structure sdhci_pltfm_data does not need to be in a public
header like include/linux/mmc/sdhci-pltfm.h, so the patch moves it
into drivers/mmc/host/sdhci-pltfm.h and eliminates the former.

Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
Reviewed-by: Grant Likely <grant.likely@secretlab.ca>
Reviewed-by: Wolfram Sang <w.sang@pengutronix.de>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:20:48 -04:00
Shawn Guo
85d6509dc8 mmc: sdhci: make sdhci-pltfm device drivers self registered
The patch turns the common code in sdhci-pltfm.c into functions, and
gives device drivers their own .probe and .remove which in turn call
into the common functions, so that the sdhci-pltfm device drivers
register themselves and keep all device-specific things away from the
common sdhci-pltfm file.
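
The resulting pattern in each device driver looks roughly like this
(the common-helper names are illustrative of the split, not the exact
API):

  static int my_sdhci_probe(struct platform_device *pdev)
  {
          /* common code: map resources, allocate the sdhci host,
           * call sdhci_add_host() */
          return sdhci_pltfm_register(pdev, &my_sdhci_pdata);
  }

  static int my_sdhci_remove(struct platform_device *pdev)
  {
          return sdhci_pltfm_unregister(pdev);
  }

  static struct platform_driver my_sdhci_driver = {
          .driver = { .name = "sdhci-mydev" },
          .probe  = my_sdhci_probe,
          .remove = my_sdhci_remove,
  };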

Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
Reviewed-by: Grant Likely <grant.likely@secretlab.ca>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Anton Vorontsov <cbouatmailru@gmail.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:16:06 -04:00
Jan H. Schönherr
7f70893173 rcu: Fix wrong check in list_splice_init_rcu()
If the list to be spliced is empty, then list_splice_init_rcu() has
nothing to do.  Unfortunately, list_splice_init_rcu() does not check
the list to be spliced; it instead checks the list to be spliced into.
This results in memory leaks given current usage.  This commit
therefore fixes the empty-list check.
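
In essence:

  static inline void list_splice_init_rcu(struct list_head *list,
                                          struct list_head *head,
                                          void (*sync)(void))
  {
          if (list_empty(list))   /* was: if (list_empty(head)) */
                  return;
          /* ... splice list into head and reinitialize as before ... */
  }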

Signed-off-by: Jan H. Schönherr <schnhrr@cs.tu-berlin.de>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
2011-07-20 14:10:20 -07:00
Ingo Molnar
d1e9ae47a0 Merge branch 'rcu/urgent' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-2.6-rcu into core/urgent 2011-07-20 20:59:26 +02:00
Eric Dumazet
b56efcf0a4 slab: shrink sizeof(struct kmem_cache)
Reduce high-order allocations for some setups.
(NR_CPUS=4096 -> we need 64KB per kmem_cache struct)

We now allocate exactly the needed size (using nr_cpu_ids and nr_node_ids).

This also makes the code a bit smaller on x86_64, since some field offsets
are smaller than the 127-byte limit:

Before patch:
# size mm/slab.o
   text    data     bss     dec     hex filename
  22605  361665      32  384302   5dd2e mm/slab.o

After patch:
# size mm/slab.o
   text    data     bss     dec     hex filename
  22349  353473    8224  384046   5dc2e mm/slab.o
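
The idea, sketched: the per-cpu pointer array becomes a flexible
trailing member sized at allocation time instead of a fixed
array[NR_CPUS]:

  struct kmem_cache {
          /* ... unchanged fields ... */
          struct array_cache *array[0];   /* nr_cpu_ids entries */
  };

  /* allocation size, instead of sizeof(struct kmem_cache): */
  size_t sz = offsetof(struct kmem_cache, array) +
              nr_cpu_ids * sizeof(struct array_cache *);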

CC: Andrew Morton <akpm@linux-foundation.org>
Reported-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
2011-07-20 20:27:56 +03:00
Peter Zijlstra
e3589f6c81 sched: Allow for overlapping sched_domain spans
Allow for sched_domain spans that overlap by giving such domains their
own sched_group list instead of sharing the sched_groups amongst
each other.

This is needed for machines with more than 16 nodes, because
sched_domain_node_span() will generate a node mask from the
16 nearest nodes without regard to whether those masks overlap.

Currently sched_domains have a sched_group that maps to their child
sched_domain span, and since there is no overlap we share the
sched_group between the sched_domains of the various CPUs. If however
there is overlap, we would need to link the sched_group list in
different ways for each cpu, and hence sharing isn't possible.

In order to solve this, allocate private sched_groups for each CPU's
sched_domain but have the sched_groups share a sched_group_power
structure such that we can uniquely track the power.

Reported-and-tested-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/n/tip-08bxqw9wis3qti9u5inifh3y@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2011-07-20 18:32:41 +02:00
Peter Zijlstra
9c3f75cbd1 sched: Break out cpu_power from the sched_group structure
In order to prepare for non-unique sched_groups per domain, we need to
carry the cpu_power elsewhere, so put a level of indirection in.
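
Sketched (field set abbreviated):

  /* power gets its own refcounted object so that, in the next patch,
   * overlapping domains can share it between their sched_groups */
  struct sched_group_power {
          atomic_t ref;
          unsigned int power;     /* was sched_group::cpu_power */
  };

  struct sched_group {
          struct sched_group *next;
          struct sched_group_power *sgp;
          /* ... */
  };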

Reported-and-tested-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/n/tip-qkho2byuhe4482fuknss40ad@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2011-07-20 18:32:40 +02:00
Jan Kara
d12dc25654 quota: Remove unused declaration
There is no point in declaring the quotactl() syscall prototype in a kernel
header, and 'make headers_check' complains about it. So just remove those lines.

Signed-off-by: Jan Kara <jack@suse.cz>
2011-07-20 14:31:47 +02:00
Dave Chinner
09cc9fc7a7 inode: move to per-sb LRU locks
With the inode LRUs moving to per-sb structures, there is no longer
a need for a global inode_lru_lock. The locking can be made more
fine-grained by moving to a per-sb LRU lock, isolating the LRU
operations of different filesystems completely from each other.
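
Sketch of where the state ends up (illustrative field names):

  struct super_block {
          /* ... */
          spinlock_t              s_inode_lru_lock;
          struct list_head        s_inode_lru;    /* unused inodes */
          int                     s_nr_inodes_unused;
  };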

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 01:44:36 -04:00
Dave Chinner
98b745c647 inode: Make unused inode LRU per superblock
The inode unused list is currently a global LRU. This does not match
the other global filesystem cache - the dentry cache - which uses
per-superblock LRU lists. Hence we have related filesystem object
types using different LRU reclamation schemes.

To enable a per-superblock filesystem cache shrinker, both of these
caches need to have per-sb unused object LRU lists. Hence this patch
converts the global inode LRU to per-sb LRUs.

The patch only does rudimentary per-sb proportioning in the shrinker
infrastructure, as this gets removed when the per-sb shrinker
callouts are introduced later on.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 01:44:35 -04:00
Dave Chinner
e9299f5058 vmscan: add customisable shrinker batch size
For shrinkers that have their own cond_resched* calls, having
shrink_slab break the work down into small batches is not
particularly efficient. Add a custom batch size field to struct
shrinker so that shrinkers can use a larger batch size if they
desire.

A value of zero (uninitialised) means "use the default", so
behaviour is unchanged by this patch.
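
For example (my_shrink_fn is a hypothetical callback):

  static struct shrinker my_shrinker = {
          .shrink = my_shrink_fn,  /* does its own cond_resched() */
          .seeks  = DEFAULT_SEEKS,
          .batch  = 1024,          /* 0 means "use the default" */
  };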

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 01:44:32 -04:00
Al Viro
0ee5dc676a btrfs: kill magical embedded struct superblock
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 01:44:20 -04:00
Al Viro
e0a0124936 switch vfs_path_lookup() to struct path
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 01:44:14 -04:00
Al Viro
ed75e95de5 kill lookup_create()
folded into the only caller (kern_path_create())

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 01:44:12 -04:00
Al Viro
6657719390 make sure that nsproxy_cache is initialized early enough
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 01:44:07 -04:00
Al Viro
dae6ad8f37 new helpers: kern_path_create/user_path_create
combination of kern_path_parent() and lookup_create().  Does *not*
expose struct nameidata to caller.  Syscalls converted to that...
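
Typical converted-caller shape, as a sketch (the cleanup sequence
assumes the parent's i_mutex is held on successful return):

  static long example_mknod_like(int dfd, const char __user *filename)
  {
          struct path path;
          struct dentry *dentry;

          /* parent lookup plus a new negative dentry for the last
           * component; no struct nameidata escapes to us */
          dentry = user_path_create(dfd, filename, &path, 0 /* !dir */);
          if (IS_ERR(dentry))
                  return PTR_ERR(dentry);

          /* ... vfs_mknod()/vfs_create() against path + dentry ... */

          dput(dentry);
          mutex_unlock(&path.dentry->d_inode->i_mutex);
          path_put(&path);
          return 0;
  }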

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 01:44:05 -04:00
Al Viro
49084c3bb2 kill LOOKUP_CONTINUE
LOOKUP_PARENT is equivalent to it now

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 01:44:03 -04:00
Al Viro
3d4ff43d89 nfs_open_context doesn't need struct path either
just dentry, please...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 01:43:44 -04:00
Al Viro
729cdb3a1e kill IPERM_FLAG_RCU
not used anymore

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 01:43:36 -04:00
Al Viro
eecdd358b4 ->permission() sanitizing: don't pass flags to exec_permission()
pass mask instead; kill security_inode_exec_permission() since we can use
security_inode_permission() instead.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 01:43:29 -04:00
Al Viro
e74f71eb78 ->permission() sanitizing: don't pass flags to ->inode_permission()
pass that via mask instead.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 01:43:26 -04:00
Al Viro
10556cb21a ->permission() sanitizing: don't pass flags to ->permission()
not used by the instances anymore.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 01:43:24 -04:00
Al Viro
2830ba7f34 ->permission() sanitizing: don't pass flags to generic_permission()
redundant; all callers get it duplicated in mask & MAY_NOT_BLOCK and none of
them removes that bit.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 01:43:22 -04:00
Al Viro
7e40145eb1 ->permission() sanitizing: don't pass flags to ->check_acl()
not used in the instances anymore.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 01:43:21 -04:00
Al Viro
1fc0f78ca9 ->permission() sanitizing: MAY_NOT_BLOCK
Duplicate the flags argument into mask bitmap.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 01:43:18 -04:00
Al Viro
178ea73521 kill check_acl callback of generic_permission()
its value depends only on inode and does not change; we might as
well store it in ->i_op->check_acl and be done with that.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 01:43:16 -04:00
Al Viro
07b8ce1ee8 lockless get_write_access/deny_write_access
new helpers: atomic_inc_unless_negative()/atomic_dec_unless_positive()
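
The increment side, roughly:

  /* increment *p only while it is non-negative, so a negative count
   * can mean "writes denied" without taking a lock */
  static inline int atomic_inc_unless_negative(atomic_t *p)
  {
          int v, v1;

          for (v = 0; v >= 0; v = v1) {
                  v1 = atomic_cmpxchg(p, v, v + 1);
                  if (likely(v1 == v))
                          return 1;
          }
          return 0;       /* the count went negative: access denied */
  }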

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 01:43:14 -04:00
Al Viro
3bfa784a65 kill file_permission() completely
convert the last remaining caller to inode_permission()

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 01:43:11 -04:00
Al Viro
1b5d783c94 consolidate BINPRM_FLAGS_ENFORCE_NONDUMP handling
new helper: would_dump(bprm, file).  Checks if we are allowed to
read the file and, if we are not, sets ENFORCE_NODUMP.  Exported,
used in places that previously open-coded the same logic.
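
Roughly:

  void would_dump(struct linux_binprm *bprm, struct file *file)
  {
          /* can't read the file: don't let the result be dumped */
          if (inode_permission(file->f_path.dentry->d_inode, MAY_READ))
                  bprm->interp_flags |= BINPRM_FLAGS_ENFORCE_NONDUMP;
  }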

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 01:43:10 -04:00
Al Viro
43e15cdbef new helper: iterate_supers_type()
Call the given function for all superblocks of given type.  Function
gets a superblock (with s_umount locked shared) and (void *) argument
supplied by caller of iterator.
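
The signature:

  void iterate_supers_type(struct file_system_type *type,
                           void (*f)(struct super_block *, void *),
                           void *arg);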

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 01:43:04 -04:00
Josef Bacik
44396f4b5c fs: add a DCACHE_NEED_LOOKUP flag for d_flags
Btrfs (and I'd venture most other filesystems) stores its indexes in nice
disk order for readdir, but unfortunately anything that stats the files in
the order readdir returns them (like, oh say, ls) still has to do the
normal lookup of the file, which means looking up our other index and then
looking up the inode.  What I want is a way to create dummy dentries when
we find them in readdir, so that when ls or anything else subsequently does
a stat(), we already have the location information in the dentry and can go
straight to the inode itself.  The lookup code just assumes that if it
finds a dentry it is done; it doesn't perform a lookup.  So add a
DCACHE_NEED_LOOKUP flag so that the lookup code knows it still needs to run
i_op->lookup() on the parent to get the inode for the dentry.  I have
tested this with btrfs and went from something that looks like this

http://people.redhat.com/jwhiter/ls-noreada.png

To this

http://people.redhat.com/jwhiter/ls-good.png

That's a savings of 1300 seconds, or 22 minutes.  That is a significant savings.
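
The flag check pairs with a small helper, roughly:

  static inline int d_need_lookup(struct dentry *dentry)
  {
          /* set on dummy dentries created at readdir time; tells the
           * lookup path to still call i_op->lookup() on the parent
           * to fill in the inode */
          return dentry->d_flags & DCACHE_NEED_LOOKUP;
  }
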
Thanks,

Signed-off-by: Josef Bacik <josef@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 01:43:03 -04:00
Paul E. McKenney
7765be2fec rcu: Fix RCU_BOOST race handling current->rcu_read_unlock_special
The RCU_BOOST commits for TREE_PREEMPT_RCU introduced an other-task
write to a new RCU_READ_UNLOCK_BOOSTED bit in the task_struct structure's
->rcu_read_unlock_special field, but, as noted by Steven Rostedt, without
correctly synchronizing all accesses to ->rcu_read_unlock_special.
This could result in bits in ->rcu_read_unlock_special being spuriously
set and cleared due to conflicting accesses, which in turn could result
in deadlocks between the rcu_node structure's ->lock and the scheduler's
rq and pi locks.  These deadlocks would result from RCU incorrectly
believing that the just-ended RCU read-side critical section had been
preempted and/or boosted.  If that RCU read-side critical section was
executed with either rq or pi locks held, RCU's ensuing (incorrect)
calls to the scheduler would cause the scheduler to attempt to once
again acquire the rq and pi locks, resulting in deadlock.  More complex
deadlock cycles are also possible, involving multiple rq and pi locks
as well as locks from multiple rcu_node structures.

This commit fixes the synchronization by creating a ->rcu_boosted field in
task_struct that is accessed and modified only while holding the ->lock
in the rcu_node structure on which the task is queued (on that rcu_node
structure's ->blkd_tasks list).  This results in tasks accessing only
their own current->rcu_read_unlock_special fields, making unsynchronized
access once again legal, and keeping the rcu_read_unlock() fastpath free
of atomic instructions and memory barriers.

The reason that the rcu_read_unlock() fastpath does not need to access
the new current->rcu_boosted field is that this new field cannot
be non-zero unless the RCU_READ_UNLOCK_BLOCKED bit is set in the
current->rcu_read_unlock_special field.  Therefore, rcu_read_unlock()
need only test current->rcu_read_unlock_special: if that is zero, then
current->rcu_boosted must also be zero.

This bug does not affect TINY_PREEMPT_RCU because this implementation
of RCU accesses current->rcu_read_unlock_special with irqs disabled,
thus preventing races on the !SMP systems that TINY_PREEMPT_RCU runs on.
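
So the fastpath can stay roughly this simple (sketch; no atomics or
memory barriers):

  void __rcu_read_unlock(void)
  {
          struct task_struct *t = current;

          barrier();
          --t->rcu_read_lock_nesting;
          barrier();
          if (t->rcu_read_lock_nesting == 0 &&
              unlikely(ACCESS_ONCE(t->rcu_read_unlock_special)))
                  /* slowpath may consult t->rcu_boosted, under the
                   * rcu_node structure's ->lock */
                  rcu_read_unlock_special(t);
  }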

Maybe-reported-by: Dave Jones <davej@redhat.com>
Maybe-reported-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Reported-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
2011-07-19 21:38:52 -07:00
Rafał Miłecki
6f53912fc3 bcma: allow enabling PLL
Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
2011-07-19 17:03:11 -04:00
Rafał Miłecki
7424dd0d03 bcma: allow setting FAST clockmode for a core
Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
2011-07-19 17:03:11 -04:00
Rafał Miłecki
3de1a7748f bcma: trivial: add helpers for masking/setting
Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
2011-07-19 17:03:09 -04:00
Rafał Miłecki
bb932ad980 bcma: move define of BCMA_CLKCTLST register
Recent experiments have shown that many cores share the 0x1E0 register
used for clock management.
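
That is, it becomes a common per-core register definition:

  #define BCMA_CLKCTLST   0x1E0   /* clock control/status, shared by many cores */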

Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
2011-07-19 17:03:08 -04:00
Johannes Berg
34850ab25d cfg80211: allow userspace to control supported rates in scan
Some P2P scans are not allowed to advertise
11b rates, but that is a rather special case,
so instead of hard-coding it, allow userspace
to request the rate sets (per band) that are
advertised in scan probe request frames.

Since it's needed in two places now, factor
out some common code for parsing a rate array.
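
The shared helper might look roughly like this sketch (the name and
return convention are illustrative): map an array of rates onto a
bitmap of indexes into the band's bitrate table:

  static int parse_rate_array(struct ieee80211_supported_band *sband,
                              const u8 *rates, unsigned int n_rates,
                              u32 *mask)
  {
          unsigned int i, j;

          *mask = 0;
          for (i = 0; i < n_rates; i++) {
                  /* IE rates are in 500 kbps units; bitrate table
                   * entries are in 100 kbps units */
                  int rate = (rates[i] & 0x7f) * 5;
                  bool found = false;

                  for (j = 0; j < sband->n_bitrates; j++) {
                          if (sband->bitrates[j].bitrate == rate) {
                                  *mask |= BIT(j);
                                  found = true;
                                  break;
                          }
                  }
                  if (!found)
                          return -EINVAL;
          }
          return 0;
  }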

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
2011-07-19 16:49:58 -04:00