The current ioctl define implies that this function expects to be passed
a 64-bit number directly rather than a pointer to a 64-bit value. The
code that processes this ioctl shows that it clearly expects a pointer.
It'd be best if we could change the type to "__s64*", but that would
change the generated ioctl number thus breaking the userland ABI. So
just add a comment for intrepid developers.
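For illustration, _IOW() folds the sizeof() of its type argument into
the generated number, which is why the type cannot simply be corrected
(the magic letter and request number below are made up):

#include <linux/ioctl.h>
#include <linux/types.h>

/* the kernel side actually reads a __s64 through the pointer in arg */
#define EXAMPLE_SET_BYTES	_IOW('X', 1, __s64)

/* _IOW('X', 1, __s64 *) would encode sizeof(__s64 *) instead, which is
 * 4 for 32-bit userspace, so the generated number would change */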
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
An active monitor interface is one that is used for communication (via
injection). It is expected to ACK incoming unicast packets. This is
useful for running various 802.11 testing utilities that associate to an
AP via injection and manage the state in user space.
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
This corrects a regression introduced by "net: Use 16bits for *_headers
fields of struct skbuff" when NET_SKBUFF_DATA_USES_OFFSET is not set. In
that case skb->tail will be a pointer whereas skb->network_header is now
an offset.
This patch corrects the problem by adding a wrapper to return the skb
tail as an offset regardless of the value of NET_SKBUFF_DATA_USES_OFFSET.
It seems possible that this offset may exceed 64k, and some care has been
taken to treat such cases as an error.
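A minimal sketch of the kind of wrapper described (the name is
illustrative, and the over-64k error handling is left to the caller):

static inline unsigned int example_skb_tail_offset(const struct sk_buff *skb)
{
#ifdef NET_SKBUFF_DATA_USES_OFFSET
        return skb->tail;               /* already an offset from skb->head */
#else
        return skb->tail - skb->head;   /* convert the pointer to an offset */
#endif
}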
Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
This corrects a regression introduced by "net: Use 16bits for *_headers
fields of struct skbuff" when NET_SKBUFF_DATA_USES_OFFSET is not set. In
that case skb->tail will be a pointer whereas skb->transport_header
will be an offset from head. This is corrected by using wrappers that
ensure that comparisons and calculations are always made using pointers.
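For example, a check in this situation is done through the existing
pointer-returning helpers rather than on the raw fields (a sketch, not
the exact hunks from the patch):

static bool example_transport_header_ok(const struct sk_buff *skb)
{
        /* both helpers always yield pointers, on either configuration */
        return skb_transport_header(skb) < skb_tail_pointer(skb);
}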
Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Some drivers that are shared between architectures have HAVE_CLK selected
but don't have OF. To avoid compilation errors in drivers that register
clocks from DT with of_clk_add_provider(), we would otherwise have to
enclose these calls within #ifdef CONFIG_OF / #endif.
This patch adds stubs for the OF-related clk-provider functions that
either do nothing or return appropriate values if CONFIG_OF is not set,
so definitions of these routines are always available.
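The stub pattern looks roughly like the following (a sketch; the
authoritative declarations and exact signature live in
include/linux/clk-provider.h):

#ifdef CONFIG_OF
int of_clk_add_provider(struct device_node *np,
                        struct clk *(*clk_src_get)(struct of_phandle_args *args,
                                                   void *data),
                        void *data);
#else
static inline int of_clk_add_provider(struct device_node *np,
                        struct clk *(*clk_src_get)(struct of_phandle_args *args,
                                                   void *data),
                        void *data)
{
        return 0;       /* nothing to register without OF */
}
#endif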
Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Mike Turquette <mturquette@linaro.org>
commit 351638e7de (net: pass info struct via netdevice notifier)
breaks booting of my KVM guest because we still forget to pass
struct netdev_notifier_info in several places. This patch completes it.
Cc: Jiri Pirko <jiri@resnulli.us>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Cong Wang <amwang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
As rcu_dereference_raw() under RCU debug config options can add quite a
few checks, and tracing uses rcu_dereference_raw(), these checks happen
with the function tracer. The function tracer also happens to trace these
debug checks too. This added overhead can livelock the system.
Add a new interface to RCU for both rcu_dereference_raw_notrace() as well
as hlist_for_each_entry_rcu_notrace(), as the hlist iterator uses
rcu_dereference_raw() as well and is used a bit with the function tracer.
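A sketch of the intended use on the tracing side (the structure and its
fields are made up for illustration):

struct example_hook {
        struct hlist_node node;
        void (*func)(void);
};

static void example_call_hooks(struct hlist_head *head)
{
        struct example_hook *h;

        /* notrace variant: no RCU debug checks for the tracer to recurse into */
        hlist_for_each_entry_rcu_notrace(h, head, node)
                h->func();
}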
Link: http://lkml.kernel.org/r/20130528184209.304356745@goodmis.org
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
All Tegra GPIOs are named after the GPIO bank and GPIO number within
the bank. Define a macro to calculate the GPIO ID based on those
parameters. Make the macro available via all Tegra .dtsi files.
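A sketch of the macro (the bank defines shown are illustrative; the real
header provides one ID per bank, with eight GPIOs per bank):

#define TEGRA_GPIO_BANK_ID_A    0
#define TEGRA_GPIO_BANK_ID_B    1

#define TEGRA_GPIO(bank, offset) \
        ((TEGRA_GPIO_BANK_ID_##bank * 8) + (offset))

/* e.g. TEGRA_GPIO(B, 3) evaluates to GPIO ID 11 */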
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Create a header file to define the clock IDs used by the Tegra114 clock
binding. Remove the list of definitions from the binding documentation,
and refer the reader to the header file.
This will allow the same header to be used by both device tree files,
and drivers implementing this binding, which guarantees that the two
stay in sync. This also makes device trees more readable by using names
instead of magic numbers.
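For illustration, the header reduces to a list of preprocessor defines
that both .dts files and the clock driver can include (the names below
are plausible but the values are made up; the header itself is
authoritative):

#define TEGRA114_CLK_UARTA      6       /* value illustrative */
#define TEGRA114_CLK_SDMMC1     14      /* value illustrative */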
Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
[swarren, add header to clock/ instead of clk/ to match binding location]
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Create a header file to define the clock IDs used by the Tegra30 clock
binding. Remove the list of definitions from the binding documentation,
and refer the reader to the header file.
This will allow the same header to be used by both device tree files,
and drivers implementing this binding, which guarantees that the two
stay in sync. This also makes device trees more readable by using names
instead of magic numbers.
Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
[swarren, add header to clock/ instead of clk/ to match binding location]
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Create a header file to define the clock IDs used by the Tegra20 clock
binding. Remove the list of definitions from the binding documentation,
and refer the reader to the header file.
This will allow the same header to be used by both device tree files,
and drivers implementing this binding, which guarantees that the two
stay in sync. This also makes device trees more readable by using names
instead of magic numbers.
Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
[swarren, add header to clock/ instead of clk/ to match binding location]
Signed-off-by: Stephen Warren <swarren@nvidia.com>
All the virtualized platforms (KVM, lguest and Xen) have persistent
wallclocks that have more than one second of precision.
read_persistent_wallclock() and update_persistent_wallclock() allow
for nanosecond precision but their implementation on x86 with
x86_platform.get/set_wallclock() only allows for one second precision.
This means guests may see a wallclock time that is off by up to 1
second.
Make set_wallclock() and get_wallclock() take a struct timespec
parameter (which allows for nanosecond precision) so KVM and Xen
guests may start with a more accurate wallclock time and a Xen dom0
can maintain a more accurate wallclock for guests.
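A sketch of the resulting hook shapes, wrapped in an illustrative struct
(the real members live in x86_platform_ops):

struct example_wallclock_ops {
        void (*get_wallclock)(struct timespec *now);
        int  (*set_wallclock)(const struct timespec *now);
};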
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
It seems we made a mistake when creating dw_apb_timer_of.c:
picoxcell sched_clock had parts that were not related to
dw_apb_timer, yet we moved them to dw_apb_timer_of, and tried to
use them on socfpga.
This results in a system where user/system time is not measured
properly, as demonstrated by
time dd if=/dev/urandom of=/dev/zero bs=100000 count=100
So this patch switches sched_clock to hardware that exists on both
platforms, and adds missing of_node_put() in dw_apb_timer_init().
Signed-off-by: Pavel Machek <pavel@denx.de>
Acked-by: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
The function is currently mainly used in the networking code; if others
start using it, they must check the result, otherwise it cannot be
determined whether the timespec conversion succeeded.
Currently no user lacks this check, but make future users aware of
possible misuse.
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
We've got the macro NSEC_PER_USEC defined in the header file
include/linux/time.h. To make the code decent, this patch replaces the
literal 1000 used to convert between a time value in microseconds and
one in nanoseconds with the macro NSEC_PER_USEC.
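An illustrative before/after of the kind of conversion being touched:

static long example_ns_to_usec(s64 ns)
{
        return ns / NSEC_PER_USEC;      /* previously: ns / 1000 */
}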
Signed-off-by: Liu Ying <Ying.Liu@freescale.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: John Stultz <john.stultz@linaro.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Use new netdevice notifier infrastructure to pass along changed flags.
Signed-off-by: Timo Teräs <timo.teras@iki.fi>
Signed-off-by: Jiri Pirko <jiri@resnulli.us>
v2->v3: shortened notifier_info struct name
Signed-off-by: David S. Miller <davem@davemloft.net>
So far, only net_device * could be passed along with netdevice notifier
event. This patch provides a possibility to pass custom structure
able to provide info that event listener needs to know.
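A sketch of what a listener looks like with the new scheme, using the
accompanying helper to recover the net_device from the info pointer:

static int example_netdev_event(struct notifier_block *nb,
                                unsigned long event, void *ptr)
{
        struct net_device *dev = netdev_notifier_info_to_dev(ptr);

        if (event == NETDEV_UP)
                pr_info("%s is up\n", dev->name);
        return NOTIFY_DONE;
}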
Signed-off-by: Jiri Pirko <jiri@resnulli.us>
v2->v3: fix typo on simeth
shortened dev_getter
shortened notifier_info struct name
v1->v2: fix notifier_call parameter in call_netdevice_notifier()
Signed-off-by: David S. Miller <davem@davemloft.net>
Additionally, change the cast from long to unsigned long to match the format specifier.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
Because of the encoding of the "Multiple Message Capable" and "Multiple
Message Enable" fields, a device can only advertise that it's capable of a
power-of-two number of vectors, and the OS can only enable a power-of-two
number.
For example, a device that's limited internally to using 18 vectors would
have to advertise that it's capable of 32. The 14 extra vectors consume
vector numbers and IRQ descriptors even though the device can't actually
use them.
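For reference, the 3-bit Multiple Message Capable/Enable fields store
log2 of the vector count, which is where the power-of-two restriction
comes from:

static unsigned int example_msi_vectors_from_field(unsigned int field)
{
        return 1U << field;     /* 0 -> 1, 1 -> 2, ..., 5 -> 32 */
}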
This fix introduces a 'msi_desc::nvec_used' field to address this issue.
When non-zero, it is the actual number of MSIs the device will send, as
requested by the device driver. This value should be used by architectures
to set up and tear down only as many interrupt resources as the device will
actually use.
Note, although the existing 'msi_desc::multiple' field might seem
redundant, in fact it is not. The number of MSIs advertised need not be
the smallest power-of-two larger than the number of MSIs the device will
send. Thus, it is not always possible to derive the former from the
latter, so we need to keep them both to handle this case.
[bhelgaas: changelog, rename to "nvec_used"]
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
The current of_get_display_timings() reads multiple display timings,
allocating memory for the entries. However, most of the time when
parsing display timings from DT data is needed, there's only one display
timing, as it's not common for an LCD panel to support multiple video modes.
This patch creates a new function:
int of_get_display_timing(struct device_node *np, const char *name,
struct display_timing *dt);
which can be used to parse a single display timing entry from the given
node name.
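An illustrative call (the "panel-timing" node name is just an example):

static int example_parse_timing(struct device_node *np,
                                struct display_timing *dt)
{
        return of_get_display_timing(np, "panel-timing", dt);
}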
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
Cc: Steffen Trumtrar <s.trumtrar@pengutronix.de>
Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: Philipp Zabel <p.zabel@pengutronix.de>
Two minor changes to comments:
* Remove reference to drivers/base/sys.c, removed in 0a962657.
* CPUs are now exported by sysfs via devices/system/cpu.
Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Kernel-doc gives the following warning:
DOCPROC Documentation/DocBook/kernel-api.xml
Warning(/include/linux/kernel.h:590): No description found for parameter 'ip'
Warning(/include/linux/kernel.h:590): No description found for parameter 'ip'
This is due to the externs between the comment and the trace_puts()
macro. This is fixed by moving the externs below the macro and keeping
the macro and comment directly together.
Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Vince reported a problem found by his perf specific trinity
fuzzer.
Al noticed 2 problems with perf's mmap():
- it has issues against fork() since we use vma->vm_mm for accounting.
- it has an rb refcount leak on double mmap().
We fix the issues against fork() by using VM_DONTCOPY; I don't
think there's code out there that uses this; we didn't hear
about weird accounting problems/crashes. If we do need this to
work, the previously proposed VM_PINNED could make this work.
Aside from the rb reference leak spotted by Al, Vince's example
prog was indeed doing a double mmap() through the use of
perf_event_set_output().
This exposes another problem: since we now have 2 events with
one buffer, the accounting gets screwy because we account per
event. Fix this by making the buffer responsible for its own
accounting.
Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Link: http://lkml.kernel.org/r/20130528085548.GA12193@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This changes might_fault() so that it does not
trigger a false positive diagnostic for e.g. the following
sequence:
spin_lock_irqsave()
pagefault_disable()
copy_to_user()
pagefault_enable()
spin_unlock_irqrestore()
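In concrete (illustrative) form:

static int example_copy_under_lock(void __user *uptr, const void *buf,
                                   size_t len, spinlock_t *lock)
{
        unsigned long flags, left;

        spin_lock_irqsave(lock, flags);
        pagefault_disable();
        /* will not sleep here; returns the number of bytes not copied */
        left = copy_to_user(uptr, buf, len);
        pagefault_enable();
        spin_unlock_irqrestore(lock, flags);

        return left ? -EFAULT : 0;
}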
In particular vhost wants to do this, to call
socket ops from under a lock.
There are 3 cases to consider:
- CONFIG_PROVE_LOCKING - might_fault is non-inline, so it's easy to move
  the in_atomic test there to fix up the false positive warning.
- CONFIG_DEBUG_ATOMIC_SLEEP - might_fault is currently inline, but we are
  calling a non-inline __might_sleep anyway, so let's use the non-inline
  version of might_fault that does the right thing.
- !CONFIG_DEBUG_ATOMIC_SLEEP && !CONFIG_PROVE_LOCKING - __might_sleep is
  a nop, so might_fault is a nop. Make this explicit.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1369577426-26721-11-git-send-email-mst@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
might_fault() is called from functions like copy_to_user()
which most callers expect to be very fast, like a couple of
instructions.
So functions like memcpy_toiovec() call it many times in a loop.
But might_fault() calls might_sleep(), and with CONFIG_PREEMPT_VOLUNTARY
this results in a function call.
Let's not do this - just call __might_sleep(), which produces a
diagnostic for sleeping within an atomic section, but drop
might_preempt().
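Roughly what the inline variant reduces to after this change (the
__might_sleep() signature is assumed from this era):

static inline void might_fault(void)
{
        /* sleep-in-atomic diagnostic only, no voluntary preemption point */
        __might_sleep(__FILE__, __LINE__, 0);
}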
Here's a test sending traffic between the VM and the host,
host is built with CONFIG_PREEMPT_VOLUNTARY:
before:
incoming: 7122.77 Mb/s
outgoing: 8480.37 Mb/s
after:
incoming: 8619.24 Mb/s
outgoing: 9455.42 Mb/s
As a side effect, this fixes an issue pointed
out by Ingo: might_fault might schedule differently
depending on PROVE_LOCKING. Now there's no
preemption point in either case, so it's consistent.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1369577426-26721-10-git-send-email-mst@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This patch adds /sys/devices/xxx/perf_event_mux_interval_ms to adjust
the multiplexing interval per PMU. The unit is milliseconds. The value
has to be >= 1.
In the 4th version, we renamed the sysfs file to be more consistent
with the other /proc/sys/kernel entries for perf_events.
In the 5th version, we handle the reprogramming of the hrtimer using
hrtimer_forward_now(). That way, we sync up to new timer value quickly
(suggested by Jiri Olsa).
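A sketch of that reprogramming step (names are illustrative):

static void example_resync_mux_timer(struct hrtimer *timer, u64 interval_ms)
{
        /* advance the expiry relative to now so the new value takes hold quickly */
        hrtimer_forward_now(timer, ns_to_ktime(interval_ms * NSEC_PER_MSEC));
}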
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/r/1364991694-5876-3-git-send-email-eranian@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The current scheme of using the timer tick was fine for per-thread
events. However, it was causing bias issues in system-wide mode
(including for uncore PMUs). Event groups would not get their fair
share of runtime on the PMU. With tickless kernels, if a core is idle
there is no timer tick, and thus no event rotation (multiplexing).
However, there are events (especially uncore events) which do count
even though cores are asleep.
This patch changes the timer source for multiplexing. It introduces a
per-PMU per-cpu hrtimer. The advantage is that even when a core goes
idle, it will come back to service the hrtimer, thus multiplexing on
system-wide events works much better.
The per-PMU implementation (suggested by PeterZ) enables adjusting the
multiplexing interval per PMU. The preferred interval is stashed into
the struct pmu. If not set, it will be forced to the default interval
value.
In order to minimize the impact of the hrtimer, it is turned on and
off on demand. When the PMU on a CPU is overcommitted, the hrtimer is
activated. It is stopped when the PMU is not overcommitted.
In order for this to work properly, we had to change the order of
initialization in start_kernel() such that hrtimer_init() is run
before perf_event_init().
The default interval in milliseconds is set to a timer tick just like
with the old code. We will provide a sysctl to tune this in another
patch.
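A sketch of the per-CPU timer setup (the handler and names are
assumptions, not the actual perf symbols):

static enum hrtimer_restart example_mux_fn(struct hrtimer *t)
{
        /* rotate the event groups here; re-arm only while overcommitted */
        return HRTIMER_NORESTART;
}

static void example_mux_timer_init(struct hrtimer *timer)
{
        hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_PINNED);
        timer->function = example_mux_fn;
}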
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/r/1364991694-5876-2-git-send-email-eranian@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
netpoll_rx_disable() will always return 0; its return value is of no use
and only adds noise, so remove the unnecessary code and get rid of the
checks in __dev_open and __dev_close.
Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The on-disk block address is defined as __le32, but the in-memory block
address, block_t, is defined as u64.
Let's synchronize them to 32 bits.
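After the change the in-memory type is expected to look like this
(sketch):

typedef u32 block_t;    /* wide enough for the on-disk __le32 addresses */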
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
In the case where a non-MPLS packet is received and an MPLS stack is
added it may well be the case that the original skb is GSO but the
NIC used for transmit does not support GSO of MPLS packets.
The aim of this code is to provide GSO in software for MPLS packets
whose skbs are GSO.
SKB Usage:
When an implementation adds an MPLS stack to a non-MPLS packet it should
update the skb metadata as follows (a sketch follows the list):
* Set skb->inner_protocol to the old non-MPLS ethertype of the packet.
skb->inner_protocol is added by this patch.
* Set skb->protocol to the new MPLS ethertype of the packet.
* Set skb->network_header to correspond to the
end of the L3 header, including the MPLS label stack.
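A sketch of those updates (new_nh_offset stands in for wherever the
pushed label stack ends; it is not a variable from the patch):

static void example_mpls_push_meta(struct sk_buff *skb, int new_nh_offset)
{
        skb->inner_protocol = skb->protocol;    /* old non-MPLS ethertype */
        skb->protocol = htons(ETH_P_MPLS_UC);   /* packet is now MPLS unicast */
        skb_set_network_header(skb, new_nh_offset);
}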
I have posted a patch, "[PATCH v3.29] datapath: Add basic MPLS support to
kernel", which adds MPLS support to the kernel datapath of Open vSwitch.
That patch sets the above requirements in datapath/actions.c:push_mpls()
and was used to exercise this code. The datapath patch is against the Open
vSwitch tree but it is intended that it be added to the Open vSwitch code
present in the mainline Linux kernel at some point.
Features:
I believe that the approach that I have taken is at least partially
consistent with the handling of other protocols. Jesse, I understand that
you have some ideas here. I am more than happy to change my implementation.
This patch adds dev->mpls_features which may be used by devices
to advertise features supported for MPLS packets.
A new NETIF_F_MPLS_GSO feature is added for devices which support
hardware MPLS GSO offload. Currently no devices support this
and MPLS GSO always falls back to software.
Alternate Implementation:
One possible alternate implementation is to teach netif_skb_features()
and skb_network_protocol() about MPLS, in a similar way to their
understanding of VLANs. I believe this would avoid the need
for net/mpls/mpls_gso.c and in particular the calls to
__skb_push() and __skb_pull() in mpls_gso_segment().
I have decided on the implementation in this patch as it should
not introduce any overhead in the case where mpls_gso is not compiled
into the kernel or inserted as a module.
MPLS GSO suggested by Jesse Gross.
Based in part on "v4 GRE: Add TCP segmentation offload for GRE"
by Pravin B Shelar.
Cc: Jesse Gross <jesse@nicira.com>
Cc: Pravin B Shelar <pshelar@nicira.com>
Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
In order to mitigate the ongoing increase in the size of struct sk_buff,
use 16-bit integer offsets rather than pointers for inner_*_headers.
This appears to reduce the size of struct sk_buff from 0xd0 to 0xc0
bytes (208 to 192) on x86_64 with the following all unset:
CONFIG_XFRM
CONFIG_NF_CONNTRACK
CONFIG_NF_CONNTRACK_MODULE
NET_SKBUFF_NF_DEFRAG_NEEDED
CONFIG_BRIDGE_NETFILTER
CONFIG_NET_SCHED
CONFIG_IPV6_NDISC_NODETYPE
CONFIG_NET_DMA
CONFIG_NETWORK_SECMARK
Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
From Nicolas Ferre:
Big update converting pinctrl to use macros and header files. Increases
readability and avoids typos.
Update of AT91 defconfigs and merge of defconfigs for the similar SoCs
sam9260/9g20 and sam9261/9g10.
* tag 'at91-cleanup' of git://github.com/at91linux/linux-at91:
ARM: at91: udpate defconfigs
ARM: at91: dt: switch to standard IRQ flag defines
ARM: at91: dt: switch to pinctrl to pre-processor
ARM: at91: dt: add pinctrl pre-processor define
ARM: at91: dt: switch to standard GPIO flag defines.
ARM: at91: dt: use #include for all device trees
Signed-off-by: Olof Johansson <olof@lixom.net>
libphy currently always reports a PHY as an external transceiver from
the ethtool output. This is inaccurate, because some drivers should be
able to tell that a PHY device is an internal transceiver of an Ethernet
MAC. Add a new flag (PHY_IS_INTERNAL) which can be set by PHY drivers
just like other flags, and a corresponding helper: phy_is_internal()
which can be used by networking drivers to query if a given
PHY device is internal.
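A sketch of how a MAC driver might consume the helper when filling in
ethtool information:

static void example_fill_transceiver(struct ethtool_cmd *cmd,
                                     struct phy_device *phydev)
{
        cmd->transceiver = phy_is_internal(phydev) ?
                           XCVR_INTERNAL : XCVR_EXTERNAL;
}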
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add a comment which explains the real meaning of XCVR_INTERNAL (PHY and
Ethernet MAC in the same package/die) and XCVR_EXTERNAL (PHY and
Ethernet MAC in a different package/die). Most if not all of the drivers
setting their transceiver type already do it the way the comment
describes it.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently punch hole is disabled in file systems with the bigalloc
feature enabled. However the recent changes in the punch hole patch should
make it easier to support punching holes on bigalloc enabled file
systems.
This commit changes partial_cluster handling in ext4_remove_blocks(),
ext4_ext_rm_leaf() and ext4_ext_remove_space(). Currently
partial_cluster is of unsigned long long type and it makes sure that we
will free the partial cluster if all extents have been released from that
cluster. However it has been specifically designed only for truncate.
With punch hole we can be freeing just some extents in the cluster,
leaving the rest untouched. So we have to make sure that we will notice
a cluster which still has some extents. To do this I've changed
partial_cluster to be a signed long long type. The only scenario where
this could be a problem is when cluster_size == block size, however in
that case there would not be any partial clusters so we're safe. For
bigger clusters the signed type is enough. Now we use a negative value
in partial_cluster to mark such a cluster as used, hence we know that we
must not free it even if all other extents have been freed from that
cluster.
This scenario can be described in simple diagram:
|FFF...FF..FF.UUU|
^----------^
punch hole
. - free space
| - cluster boundary
F - freed extent
U - used extent
Also update respective tracepoints to use signed long long type for
partial_cluster.
Signed-off-by: Lukas Czerner <lczerner@redhat.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
From Linus Walleij:
This is a set of patches from Lee Jones to start converting
the ux500 to fetch DMA channels from the device tree:
- Full DT support and channel mapping in the DMA40 driver
- Dropping of platform data for migrated devices on the DT
boot path.
* tag 'ux500-dma40-for-arm-soc' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-stericsson: (36 commits)
ARM: ux500: Register Cryp and Hash platform drivers on Snowball
crypto: ux500/[cryp|hash] - Show successful start-up in the bootlog
ARM: ux500: Stop passing Cryp DMA channel config information though pdata
crypto: ux500/cryp - Set DMA configuration though dma_slave_config()
crypto: ux500/cryp - Prepare clock before enabling it
ARM: ux500: Stop passing Hash DMA channel config information though pdata
crypto: ux500/hash - Set DMA configuration though dma_slave_config()
crypto: ux500/hash - Prepare clock before enabling it
ARM: ux500: Remove unnecessary attributes from DMA channel request pdata
dmaengine: ste_dma40: Correct copy/paste error
ARM: ux500: Remove DMA address look-up table
dmaengine: ste_dma40: Remove redundant address fetching function
dmaengine: ste_dma40: Only use addresses passed as configuration information
ARM: ux500: Stop passing UART's platform data for Device Tree boots
dmaengine: ste_dma40: Don't configure runtime configurable setup during allocate
dmaengine: ste_dma40: Remove unnecessary call to d40_phy_cfg()
dmaengine: ste_dma40: Separate Logical Global Interrupt Mask (GIM) unmasking
ARM: ux500: Pass remnant platform data though to DMA40 driver
dmaengine: ste_dma40: Supply full Device Tree parsing support
dmaengine: ste_dma40: Allow driver to be probe()able when DT is enabled
...
Signed-off-by: Olof Johansson <olof@lixom.net>
Here we introduce a new interface to replace alloc_pci_dev():
struct pci_dev *pci_alloc_dev(struct pci_bus *bus)
It takes a "struct pci_bus *" argument, so we can alloc a PCI device
on a target PCI bus, and it acquires a reference on the pci_bus.
We use pci_alloc_dev(NULL) to simplify the old alloc_pci_dev(),
and keep it for a while but mark it as __deprecated.
Holding a reference to the pci_bus ensures that referencing
pci_dev->bus is valid as long as the pci_dev is valid.
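Illustrative use of the new interface:

static struct pci_dev *example_alloc_on_bus(struct pci_bus *bus)
{
        /* the returned pci_dev holds a reference on bus; NULL on failure */
        return pci_alloc_dev(bus);
}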
[bhelgaas: keep existing "return error early" structure in pci_alloc_dev()]
Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>