Adding support for device sleep through the external input control
signal "SLEEP".
Changing the state of the SLEEP signal switches the device between the
SLEEP and ACTIVE states.
Also adding sleep configuration for the different resources so that they
can be kept on while the device is in the sleep state.
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The mfd/asic3 driver does not set the ds1wm_driver_data clock_rate field
before passing the structure to the DS1WM w1 busmaster driver.
This was not noticed before commit 26a6afb, because ds1wm_find_divisor()
unintentionally returned the correct divisor when a zero clock_rate was
passed in. However, after that commit DS1WM rejects a zero clock_rate:
ds1wm ds1wm: no suitable divisor for 0Hz clock
This patch sets the ds1wm_driver_data clock_rate field.
Signed-off-by: Paul Parsons <lost.distance@yahoo.com>
Acked-by: Philipp Zabel <philipp.zabel@gmail.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
This driver currently creates resources for use by a forthcoming ICH
chipset GPIO driver. It could be expanded to create the resources needed
to convert the esb2rom (mtd) and iTCO_wdt (wdt) drivers, and potentially
more, to use the mfd model.
Signed-off-by: Aaron Sierra <asierra@xes-inc.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
skb->head is currently allocated from kmalloc(). This is convenient but
has the drawback that the data cannot be converted to a page fragment
if needed.
We have three spots where it hurts:
1) GRO aggregation
When a linear skb must be appended to another skb, GRO uses the
frag_list fallback, which is very inefficient since we keep all the
struct sk_buff around. So drivers enabling GRO but delivering linear
skbs to the network stack aren't getting the full benefit of GRO.
2) splice(socket -> pipe).
We must copy the linear part to a page fragment.
This rather defeats the purpose of splice() (its zero-copy claim).
3) TCP coalescing.
Recently introduced, this permits grouping several contiguous segments
into a single skb. This shortens queue lengths, saves kernel memory,
and greatly reduces the probability of TCP collapses. This coalescing
doesn't work on linear skbs (or we would need to copy the data, which
would be too slow).
Given all these issues, the following patch introduces the possibility
of having skb->head be a fragment in itself. We use a new skb flag,
skb->head_frag to carry this information.
build_skb() is changed to accept a frag_size argument. Drivers willing
to provide a page fragment instead of kmalloc() data will set a
non-zero value, namely the fragment size.
Then, in situations where we need to convert the skb head into a frag,
we can check whether skb->head_frag is set and avoid the copies or the
various fallbacks we have.
This means drivers currently using frags could be updated to avoid the
current skb->head allocation and reduce their memory footprint (aka skb
truesize); that's 512 or 1024 bytes saved per skb. This also makes
bpf/netfilter faster, since the 'first frag' becomes part of the skb
linear part, with no need to copy data.
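For illustration, a minimal sketch (not taken from any particular
driver) of how a receive path might use the new frag_size argument:

	#include <linux/skbuff.h>

	/* Minimal sketch: the caller owns a page fragment and hands
	 * build_skb() its size, so the resulting skb gets head_frag set
	 * and its head can later be reused as a page fragment without
	 * copying.
	 */
	static struct sk_buff *example_rx_build_skb(void *frag,
						    unsigned int frag_size)
	{
		struct sk_buff *skb = build_skb(frag, frag_size);

		/* Passing frag_size == 0 keeps the old behaviour: the
		 * data is assumed to come from kmalloc() and head_frag
		 * stays 0.
		 */
		return skb;
	}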
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Maciej Żenczykowski <maze@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Tom Herbert <therbert@google.com>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Ben Hutchings <bhutchings@solarflare.com>
Cc: Matt Carlson <mcarlson@broadcom.com>
Cc: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch implements the directed yield hypercall found on other
System z hypervisors. It delegates execution time to the virtual cpu
specified in the instruction's parameter.
This is useful for avoiding long spinlock waits in the guest.
Christian Borntraeger: moved the common code into virt/kvm/
Signed-off-by: Konstantin Weitz <WEITZKON@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Pull SCSI fixes from James Bottomley:
"This is a set of SAS and SATA fixes; there are one or two longstanding
bug fixes, but most of this is regression fixes."
* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
[SCSI] libfc: update mfs boundry checking
[SCSI] Revert "[SCSI] libsas: fix sas port naming"
[SCSI] libsas: fix false positive 'device attached' conditions
[SCSI] libsas, libata: fix start of life for a sas ata_port
[SCSI] libsas: fix ata_eh clobbering ex_phys via smp_ata_check_ready
[SCSI] libsas: unify domain_device sas_rphy lifetimes
[SCSI] libsas: fix sas_get_port_device regression
[SCSI] libsas: fix sas_find_bcast_phy() in the presence of 'vacant' phys
[SCSI] libsas: introduce sas_work to fix sas_drain_work vs sas_queue_work
[SCSI] libata: Pass correct DMA device to scsi host
[SCSI] scsi_lib: use correct DMA device in __scsi_alloc_queue
Some devices do not support bulk read and write operations; for those
we fall back to a series of single write and read operations.
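As an illustration, a driver for such a device could describe the
limitation in its regmap_config roughly as follows (the register and
value widths here are arbitrary):

	#include <linux/regmap.h>

	/* Illustrative only: a device that cannot handle raw/bulk
	 * accesses sets use_single_rw, and regmap then turns bulk
	 * operations into a series of single register reads and writes.
	 */
	static const struct regmap_config example_regmap_config = {
		.reg_bits	= 8,
		.val_bits	= 8,
		.use_single_rw	= true,
	};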
Signed-off-by: Anthony Olech <Anthony.Olech@diasemi.com>
Signed-off-by: Ashish Jangam <ashish.jangam@kpitcummins.com>
[Fixed coding style, don't check use_single_rw before assign --broonie ]
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
We need a hook to release host bridge resources allocated when creating
root bus.
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Use that device for the pci_root_bus bridge pointer.
Use pci_release_bus_bridge_dev() to release the allocated pci_host_bridge
in the remove path.
Use root bus bridge pointer to get host bridge pointer instead of searching
host bridge list. That leaves the host bridge list unused, so remove it.
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
This introduces a fake module param $module.dyndbg. It's based upon
Thomas Renninger's $module.ddebug boot-time debugging patch from
https://lkml.org/lkml/2010/9/15/397
The 'fake' module parameter is provided for all modules, whether or
not they need it. It is not explicitly added to each module, but is
implemented in callbacks invoked from parse_args.
For builtin modules, dynamic_debug_init() now directly calls
parse_args(..., &ddebug_dyndbg_boot_params_cb), to process the params
undeclared in the modules, just after the ddebug tables are processed.
While it's slightly weird to reprocess the boot params, parse_args() is
already called repeatedly by do_initcall_levels(). More importantly,
the dyndbg queries (given in ddebug_query or dyndbg params) cannot be
activated until after the ddebug tables are ready, and reusing
parse_args is cleaner than doing an ad-hoc parse. This reparse would
break options like inc_verbosity, but they probably should be params,
like verbosity=3.
ddebug_dyndbg_boot_params_cb() handles both bare dyndbg (aka:
ddebug_query) and module-prefixed dyndbg params, and ignores all other
parameters. For example, the following will enable pr_debug()s in 4
builtin modules, in the order given:
dyndbg="module params +p; module aio +p" module.dyndbg=+p pci.dyndbg
For loadable modules, parse_args() in load_module() calls
ddebug_dyndbg_module_params_cb(). This handles bare dyndbg params as
passed from modprobe, and errors on other unknown params.
Note that modprobe reads /proc/cmdline, so "modprobe foo" grabs all
foo.params, strips the "foo.", and passes these to the kernel.
ddebug_dyndbg_module_params_cb() is again called for the unknown
params; it handles dyndbg, and errors on others. The "doing" arg
added previously contains the module name.
For non-CONFIG_DYNAMIC_DEBUG builds, the stub function accepts and
ignores $module.dyndbg params; other unknowns get -ENOENT.
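A rough sketch of that stub behaviour (the function name here is
illustrative, not the actual kernel symbol):

	#include <linux/errno.h>
	#include <linux/string.h>

	/* Accept and drop a dyndbg param, report anything else as an
	 * unknown parameter.
	 */
	static int example_dyndbg_param_cb(char *param, char *val,
					   const char *modname)
	{
		if (!strcmp(param, "dyndbg"))
			return 0;	/* silently accept and ignore */
		return -ENOENT;		/* genuinely unknown parameter */
	}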
If no param value is given (as in pci.dyndbg example above), "+p" is
assumed, which enables all pr_debug callsites in the module.
The dyndbg fake parameter is not shown in /sys/module/*/parameters,
thus it does not use any resources. Changes to it are made via the
control file.
Also change the pr_info in ddebug_exec_queries to vpr_info; there is
no need to see it all the time.
Signed-off-by: Jim Cromie <jim.cromie@gmail.com>
CC: Thomas Renninger <trenn@suse.de>
CC: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Jason Baron <jbaron@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Add a 3rd arg, named "doing", to unknown-options callbacks invoked
from parse_args(). The arg is passed as:
"Booting kernel" from start_kernel(),
initcall_level_names[i] from do_initcall_level(),
mod->name from load_module(), via parse_args(), parse_one()
parse_args() already has the "name" parameter, which is renamed to
"doing" to better reflect uses 1 and 2 above. parse_args() passes
it to an altered parse_one(), which now passes it down into the
unknown option handler callbacks.
The mod->name will be needed to handle dyndbg for loadable modules,
since params passed by modprobe are not qualified (they do not have a
"$modname." prefix), and by the time the unknown-param callback is
called, the module name is not otherwise available.
Minor tweaks:
Add the param name to parse_one()'s pr_debug(); the current message
doesn't identify the param being handled.
Add a pr_info to print the current level and level_name of the initcall,
and the number of registered initcalls at that level. This adds 7 lines
to dmesg output, like:
initlevel:6=device, 172 registered initcalls
Drop "parameters" from initcall_level_names[], it's unhelpful in the
pr_info() added above. This array is passed into parse_args() by
do_initcall_level().
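As a sketch, an unknown-parameter callback using the new argument might
look like this (the name and body are illustrative, not a kernel
function):

	#include <linux/kernel.h>

	static int example_unknown_param_cb(char *param, char *val,
					    const char *doing)
	{
		/* "doing" is "Booting kernel", an initcall level name,
		 * or mod->name, depending on who called parse_args().
		 */
		pr_debug("unknown parameter '%s'='%s' while %s\n",
			 param, val ? val : "", doing);
		return 0;
	}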
CC: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Jim Cromie <jim.cromie@gmail.com>
Acked-by: Jason Baron <jbaron@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This commit implements an SRCU state machine in support of call_srcu().
The state machine is preemptible, light-weight, and single-threaded,
minimizing synchronization overhead. In particular, there is no longer
any need for synchronize_srcu() to be guarded by a mutex.
Expedited processing is handled, at least in the absence of concurrent
grace-period operations on that same srcu_struct structure, by having
the synchronize_srcu_expedited() thread take on the role of the
workqueue thread for one iteration.
There is a reasonable probability that a given SRCU callback will
be invoked on the same CPU that registered it; however, there is no
guarantee. Concurrent SRCU grace-period primitives can cause callbacks
to be executed elsewhere, even in the absence of CPU-hotplug operations.
Callbacks execute in process context, but under the influence of
local_bh_disable(), so it is illegal to sleep in an SRCU callback
function.
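For illustration, a call_srcu() user would look roughly like the sketch
below (all names are illustrative); note that the callback itself must
not block:

	#include <linux/kernel.h>
	#include <linux/slab.h>
	#include <linux/srcu.h>

	struct example_obj {
		struct rcu_head rcu;
		/* ... payload ... */
	};

	/* Runs in process context but under local_bh_disable(), so it
	 * must not sleep: kfree() is fine here, mutex_lock() would not be.
	 */
	static void example_free_cb(struct rcu_head *head)
	{
		kfree(container_of(head, struct example_obj, rcu));
	}

	static void example_release(struct srcu_struct *sp,
				    struct example_obj *obj)
	{
		call_srcu(sp, &obj->rcu, example_free_cb);
	}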
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
The old srcu_barrier() macro is now unused. This commit removes it so
that the name may be reused for the SRCU flavor of rcu_barrier(), which will in
turn be needed to allow the upcoming call_srcu() to be used from within
modules.
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
This commit implements a variant of Peter's algorithm, which may be found
at https://lkml.org/lkml/2012/2/1/119.
o Make the checking lock-free to enable parallel checking.
Parallel checking is required when (1) the original checking
task is preempted for a long time, (2) synchronize_srcu_expedited()
starts during an ongoing SRCU grace period, or (3) we wish to
avoid acquiring a lock.
o Since the checking is lock-free, we avoid a mutex in the state
machine for call_srcu().
o Remove the SRCU_REF_MASK and remove the coupling with the flipping.
This might allow us to remove the preempt_disable() in future
versions, though such removal will need great care because it
rescinds the one-old-reader-per-CPU guarantee.
o Remove a smp_mb(), simplify the comments and make the smp_mb() pairs
more intuitive.
Inspired-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
The purpose of the upper bit of SRCU's per-CPU counters is to guarantee
that no reasonable series of srcu_read_lock() and srcu_read_unlock()
operations can return the value of the counter to its original value.
This guarantee is required only after the index has been switched to
the other set of counters, so at most one srcu_read_lock() can affect
a given CPU's counter. The number of srcu_read_unlock() operations
on a given counter is limited to the number of tasks in the system,
which given the Linux kernel's current structure is limited to far less
than 2^30 on 32-bit systems and far less than 2^62 on 64-bit systems.
(Something about a limited number of bytes in the kernel's address space.)
Therefore, if srcu_read_lock() increments the upper bits, then
srcu_read_unlock() need not do so. In this case, an srcu_read_lock() and
an srcu_read_unlock() will flip the lower bit of the upper field of the
counter. An unreasonably large additional number of srcu_read_unlock()
operations would be required to return the counter to its initial value,
thus preserving the guarantee.
This commit takes this approach, which further allows it to shrink
the size of the upper field to one bit, making the number of
srcu_read_unlock() operations required to return the counter to its
initial value even more unreasonable than before.
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
The current implementation of synchronize_srcu_expedited() can cause
severe OS jitter due to its use of synchronize_sched(), which in turn
invokes try_stop_cpus(), which causes each CPU to be sent an IPI.
This can result in severe performance degradation for real-time workloads
and especially for short-iteration-length HPC workloads. Furthermore,
because only one instance of try_stop_cpus() can be making forward progress
at a given time, only one instance of synchronize_srcu_expedited() can
make forward progress at a time, even if they are all operating on
distinct srcu_struct structures.
This commit, inspired by an earlier implementation by Peter Zijlstra
(https://lkml.org/lkml/2012/1/31/211) and by further offline discussions,
takes a strictly algorithmic bits-in-memory approach. This has the
disadvantage of requiring one explicit memory-barrier instruction in
each of srcu_read_lock() and srcu_read_unlock(), but on the other hand
completely dispenses with OS jitter and furthermore allows SRCU to be
used freely by CPUs that RCU believes to be idle or offline.
The update-side implementation handles the single read-side memory
barrier by rechecking the per-CPU counters after summing them and
by running through the update-side state machine twice.
This implementation has passed moderate rcutorture testing on both
x86 and Power. Also updated to use this_cpu_ptr() instead of per_cpu_ptr(),
as suggested by Peter Zijlstra.
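For reference, the read-side primitives affected are used as in this
minimal sketch (initialization via init_srcu_struct() is assumed to
happen elsewhere; names are illustrative):

	#include <linux/srcu.h>

	static struct srcu_struct example_srcu; /* init_srcu_struct() at init */

	static void example_reader(void)
	{
		int idx;

		idx = srcu_read_lock(&example_srcu);
		/* Read-side critical section: with this implementation the
		 * lock/unlock pair costs one full memory barrier on each
		 * side, but never sends IPIs or stops other CPUs.
		 */
		srcu_read_unlock(&example_srcu, idx);
	}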
Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Commit b6787242f3 ("HID: hidraw: add proper error handling to raw event
reporting") forgot to update the static inline version of
hidraw_report_event() for the case when CONFIG_HIDRAW is unset. Fix that
up.
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
This option has been deprecated for many years now, and no userspace
tools use it anymore, so it should be safe to finally remove it.
Reported-by: Kay Sievers <kay@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This option has been deprecated for many years now, and no userspace
tools use it anymore, so it should be safe to finally remove it.
Reported-by: Kay Sievers <kay@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
There is currently no user, but we might need it in the future, so it
is better to add it now than to have to convert drivers afterwards.
Signed-off-by: Michael Hennerich <michael.hennerich@analog.com>
Acked-by: Jonathan Cameron <jic23@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Currently we use two different naming schemes in the IIO API, iio_verb_object
and iio_object_verb, e.g. iio_device_register and iio_allocate_device.
This patch renames instances of the latter to the former. It also
renames allocate to alloc, as this seems to be the preferred form
throughout the kernel.
In particular the following renames are performed by the patch:
iio_put_device -> iio_device_put
iio_allocate_device -> iio_device_alloc
iio_free_device -> iio_device_free
iio_get_trigger -> iio_trigger_get
iio_put_trigger -> iio_trigger_put
iio_allocate_trigger -> iio_trigger_alloc
iio_free_trigger -> iio_trigger_free
The conversion was done with the following coccinelle patch with manual fixes to
comments and documentation.
<smpl>
@@
@@
-iio_put_device
+iio_device_put
@@
@@
-iio_allocate_device
+iio_device_alloc
@@
@@
-iio_free_device
+iio_device_free
@@
@@
-iio_get_trigger
+iio_trigger_get
@@
@@
-iio_put_trigger
+iio_trigger_put
@@
@@
-iio_allocate_trigger
+iio_trigger_alloc
@@
@@
-iio_free_trigger
+iio_trigger_free
</smpl>
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Jonathan Cameron <jic23@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Create a new BH_Verified flag to indicate that we've verified all the
data in a buffer_head for correctness. This allows us to bypass
expensive verification steps when they are not necessary without
missing them when they are.
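A sketch of the intended usage pattern; the bit position, the
BUFFER_FNS()-generated accessors, and example_verify() are shown here
as illustrative assumptions standing in for a filesystem's real
declarations and expensive metadata check:

	#include <linux/buffer_head.h>
	#include <linux/errno.h>

	/* Declare a private state bit and generate its accessors. */
	enum { BH_Verified = BH_PrivateStart };
	BUFFER_FNS(Verified, verified)

	static bool example_verify(struct buffer_head *bh)
	{
		return true;	/* real checksum/structure check goes here */
	}

	static int example_read_metadata(struct buffer_head *bh)
	{
		if (buffer_verified(bh))
			return 0;	/* already validated, skip the work */

		if (!example_verify(bh))
			return -EIO;

		set_buffer_verified(bh);	/* remember the successful check */
		return 0;
	}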
Signed-off-by: Darrick J. Wong <djwong@us.ibm.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
The actual internal pipe implementation is already really about
individual packets (called "pipe buffers"), and this simply exposes that
as a special packetized mode.
When we are in the packetized mode (marked by O_DIRECT as suggested by
Alan Cox), a write() on a pipe will not merge the new data with previous
writes, so each write will get a pipe buffer of its own. The pipe
buffer is then marked with the PIPE_BUF_FLAG_PACKET flag, which in turn
will tell the reader side to break the read at that boundary (and throw
away any partial packet contents that do not fit in the read buffer).
End result: as long as you do writes less than PIPE_BUF in size (so that
the pipe doesn't have to split them up), you can now treat the pipe as a
packet interface, where each read() system call will read one packet at
a time. You can just use a sufficiently big read buffer (PIPE_BUF is
sufficient, since bigger than that doesn't guarantee atomicity anyway),
and the return value of the read() will naturally give you the size of
the packet.
NOTE! We do not support zero-sized packets, and zero-sized reads and
writes to a pipe continue to be no-ops. Also note that big packets will
currently be split at write time, but that the size at which that
happens is not really specified (except that it's bigger than PIPE_BUF).
Currently that limit is the system page size, but we might want to
explicitly support bigger packets some day.
The main user for this is going to be the autofs packet interface,
allowing us to stop having to care so deeply about exact packet sizes
(which have had bugs with 32/64-bit compatibility modes). But user
space can create packetized pipes with "pipe2(fd, O_DIRECT)", which will
fail with an EINVAL on kernels that do not support this interface.
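A minimal user-space sketch of the interface described above:

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <limits.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		int fd[2];
		char buf[PIPE_BUF];
		ssize_t n;

		if (pipe2(fd, O_DIRECT) < 0) {	/* EINVAL on older kernels */
			perror("pipe2");
			return 1;
		}

		write(fd[1], "first", 5);
		write(fd[1], "second", 6);

		/* Each read() returns exactly one packet, even though buf
		 * could easily hold both writes.
		 */
		n = read(fd[0], buf, sizeof(buf));	/* 5 bytes: "first" */
		printf("got %zd bytes\n", n);
		n = read(fd[0], buf, sizeof(buf));	/* 6 bytes: "second" */
		printf("got %zd bytes\n", n);
		return 0;
	}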
Tested-by: Michael Tokarev <mjt@tls.msk.ru>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: David Miller <davem@davemloft.net>
Cc: Ian Kent <raven@themaw.net>
Cc: Thomas Meyer <thomas@m3y3r.de>
Cc: stable@kernel.org # needed for systemd/autofs interaction fix
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
With this optional callback the driver is notified when the first page
is entered into the pagelist and a new deferred_io call is scheduled.
A possible use-case for this is runtime-pm. In the first_io callback
pm_runtime_get()
could be called, which starts an asynchronous runtime_resume of the
device. In the deferred_io callback a call to
pm_runtime_barrier()
makes sure the device has been resumed by then, and a
pm_runtime_put()
may put the device back to sleep.
Also, some SoCs may use the runtime-pm system to determine if they
are able to enter deeper idle states. Therefore it is necessary to
keep the use-count from the first written page until the conclusion
of the screen update, to prevent the system from going to sleep before
completing the pending update.
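A sketch of that use-case; the callback names and the
example_update_screen() routine are illustrative stand-ins for a
driver's real code:

	#include <linux/fb.h>
	#include <linux/jiffies.h>
	#include <linux/pm_runtime.h>

	static void example_update_screen(struct fb_info *info,
					  struct list_head *pagelist);

	static void example_first_io(struct fb_info *info)
	{
		/* First page written: start an asynchronous resume now. */
		pm_runtime_get(info->device);
	}

	static void example_deferred_io(struct fb_info *info,
					struct list_head *pagelist)
	{
		pm_runtime_barrier(info->device);	/* resume has completed */
		example_update_screen(info, pagelist);
		pm_runtime_put(info->device);		/* device may sleep again */
	}

	static struct fb_deferred_io example_defio = {
		.delay		= HZ / 20,
		.first_io	= example_first_io,
		.deferred_io	= example_deferred_io,
	};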
Two users of defio were using kmalloc to allocate the structure.
These allocations are changed to kzalloc, to prevent uninitialised
.first_io members in those drivers.
Signed-off-by: Heiko Stübner <heiko@sntech.de>
Signed-off-by: Florian Tobias Schandinat <FlorianSchandinat@gmx.de>
Pull USB fixes from Greg Kroah-Hartman:
"Here are a number of small USB fixes for 3.4-rc5.
Nothing major, as before, some USB gadget fixes. There's a crash fix
for a number of ASUS laptops on resume that had been reported by a
number of different people. We think the fix might also pertain to
other machines, as this was a BIOS bug, and they seem to travel to
different models and manufacturers quite easily. Other than that,
some other reported problems fixed as well."
* tag 'usb-3.4-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb:
usb: gadget: udc-core: fix incompatibility with dummy-hcd
usb: gadget: udc-core: fix wrong call order
USB: cdc-wdm: fix race leading leading to memory corruption
USB: EHCI: fix crash during suspend on ASUS computers
usb gadget: uvc: uvc_request_data::length field must be signed
usb: gadget: dummy: do not call pullup() on udc_stop()
usb: musb: davinci.c: add missing unregister
usb: musb: drop __deprecated flag
USB: gadget: storage gadgets send wrong error code for unknown commands
usb: otg: gpio_vbus: Add otg transceiver events and notifiers
Now that encap_rcv() works on IPv6 UDP sockets, wire L2TP up to IPv6.
Support has been tested with and without hardware offloading. This
version fixes the L2TP over localhost issue with incorrect checksums
being reported.
Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Pull ARM SoC fixes from Olof Johansson:
"Nothing controversial, just another batch of fixes:
- Samsung/exynos fixes for more merge window fallout: build errors
and warnings mostly, but also some clock/device setup issues on
exynos4/5
- PXA bug and warning fixes related to gpio and pinmux
- IRQ domain conversion bugfixes for U300 and MSM
- A regulator setup fix for U300"
* tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
ARM: PXA2xx: MFP: fix potential direction bug
ARM: PXA2xx: MFP: fix bug with MFP_LPM_KEEP_OUTPUT
arm/sa1100: fix sa1100-rtc memory resource
ARM: pxa: fix gpio wakeup setting
ARM: SAMSUNG: add missing MMC_CAP2_BROKEN_VOLTAGE capability
ARM: EXYNOS: Fix compilation error when CONFIG_OF is not defined
ARM: EXYNOS: Fix resource on dev-dwmci.c
ARM: S3C24XX: Fix build warning for S3C2410_PM
ARM: mini2440_defconfig: Fix build error
ARM: msm: Fix gic irqdomain support
ARM: EXYNOS: Fix incorrect initialization of GIC
ARM: EXYNOS: use 'exynos4-sdhci' as device name for sdhci controllers
ARM: u300: bump all IRQ numbers by one
ARM: ux300: Fix unimplementable regulation constraints
Pull misc SPI device driver bug fixes from Grant Likely.
* tag 'spi-for-linus' of git://git.secretlab.ca/git/linux-2.6:
spi/spi-bfin5xx: Fix flush of last bit after each spi transfer
spi/spi-bfin5xx: fix reversed if condition in interrupt mode
spi/spi_bfin_sport: drop bits_per_word from client data
spi/bfin_spi: drop bits_per_word from client data
spi/spi-bfin-sport: move word length setup to transfer handler
spi/bfin5xx: rename config macro name for bfin5xx spi controller driver
spi/pl022: Allow request for higher frequency than maximum possible
spi/bcm63xx: set master driver mode_bits.
spi/bcm63xx: don't use the stopping state
spi/bcm63xx: convert to the pump message infrastructure
spi/spi-ep93xx.c: use dma_transfer_direction instead of dma_data_direction
spi: fix spi.h kernel-doc warning
spi/pl022: Fix calculate_effective_freq()
spi/pl022: Fix range checking for bits per word
This method changes x86 to add a breakpoint to the mcount locations
instead of calling stop_machine().
Now that iret can be handled by NMIs, we perform the following to
update code:
1) Add a breakpoint to all locations that will be modified
2) Sync all cores
3) Update all locations to be either a nop or call (except breakpoint
op)
4) Sync all cores
5) Replace the breakpoint with the new code.
6) Sync all cores
[
Added updates that Masami suggested:
Use unlikely(modifying_ftrace_code) in int3 trap to keep kprobes efficient.
Don't use NOTIFY_* in ftrace handler in int3 as it is not a notifier.
]
Cc: H. Peter Anvin <hpa@zytor.com>
Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
These two functions do almost the same thing and duplicate some code.
Merge their implementation into a single common function.
res_counter_charge_locked() takes one more parameter but it doesn't seem
to be used outside res_counter.c yet anyway.
There is no (intended) change in the behaviour.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Glauber Costa <glommer@parallels.com>
Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Li Zefan <lizefan@huawei.com>
Now that I'm doing secinfo automatically in the v4 code, this extra
argument isn't needed.
Signed-off-by: Bryan Schumaker <bjschuma@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
This simplifies the code for v2 and v3 and gives v4 a chance to decide
on referrals without needing to modify the generic client.
Signed-off-by: Bryan Schumaker <bjschuma@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Need this to pass into nfs_commitdata_init, in order to keep data->dreq
accurate.
Signed-off-by: Fred Isaman <iisaman@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Factors out the code that needs to change when directio
starts using these code paths.
Signed-off-by: Fred Isaman <iisaman@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
It is COMMIT that is handled the most differently between
the paged and direct paths. Create a structure that encapsulates
everything either path needs to know about the commit state.
We could use void to hide some of the layout driver stuff, but
Trond suggests pulling it out to ensure type checking, given the
huge changes being made, and the fact that it doesn't interfere
with other drivers.
Signed-off-by: Fred Isaman <iisaman@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Factors out the code that will need to change when directio
starts using these code paths. This will allow directio to use
the generic pagein and flush routines.
Signed-off-by: Fred Isaman <iisaman@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Decouple nfs_pgio_header and nfs_write_data, and have (possibly
multiple) nfs_write_datas each take a refcount on nfs_pgio_header.
For the moment keeps nfs_write_header as a way to preallocate a single
nfs_write_data with the nfs_pgio_header. The code doesn't need this,
and would be prettier without, but given the amount of churn I am
already introducing I didn't want to play with tuning new mempools.
This also fixes a bug in pnfs_ld_handle_write_error. In the case of
desc->pg_bsize < PAGE_CACHE_SIZE, the pages list was empty, causing
the replay attempt to do nothing.
Signed-off-by: Fred Isaman <iisaman@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Decouple nfs_pgio_header and nfs_read_data, and have (possibly
multiple) nfs_read_datas each take a refcount on nfs_pgio_header.
For the moment keeps nfs_read_header as a way to preallocate a single
nfs_read_data with the nfs_pgio_header. The code doesn't need this,
and would be prettier without, but given the amount of churn I am
already introducing I didn't want to play with tuning new mempools.
This also fixes a bug in pnfs_ld_handle_read_error. In the case of
desc->pg_bsize < PAGE_CACHE_SIZE, the pages list was empty, causing
the replay attempt to do nothing.
Signed-off-by: Fred Isaman <iisaman@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Both nfs_read_data and nfs_write_data contain several fields which
can be combined into a single shared struct.
Signed-off-by: Fred Isaman <iisaman@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
In order to avoid duplicating all the data in nfs_read_data whenever we
split it up into multiple RPC calls (either due to a short read result
or due to rsize < PAGE_SIZE), we split out the bits that are the same
per RPC call into a separate "header" structure.
The goal this patch moves towards is to have a single header
refcounted by several rpc_data structures. Thus, we want to always refer
from rpc_data to the header, and not the other way around. This patch comes
close to that ideal, but the directio code currently needs some
special casing, isolated in the nfs_direct_[read_write]hdr_release()
functions. This will be dealt with in a future patch.
Signed-off-by: Fred Isaman <iisaman@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Commits don't need the vectors of pages, etc. that writes do. Split out
a separate structure for the commit operation.
Signed-off-by: Fred Isaman <iisaman@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>