A machine driver can register the two ops.
When a DAI link is added or removed by a component's topology, the
ASoC core can call the ops to notify the machine driver so it can do
extra initialization or destruction.
E.g. topology can create FE DAI links from a CPU DAI component, and
the machine driver may define an add_dai_link op to set a machine-specific
.init op for the DAI link.
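As a rough illustration, a machine driver could wire this up as below;
everything prefixed my_ is an illustrative name, not part of this patch:

  static int my_fe_link_init(struct snd_soc_pcm_runtime *rtd)
  {
          /* machine-specific setup for the topology-created FE link */
          return 0;
  }

  static int my_add_dai_link(struct snd_soc_card *card,
                             struct snd_soc_dai_link *link)
  {
          /* called by the ASoC core when topology adds a DAI link */
          if (!link->init)
                  link->init = my_fe_link_init;
          return 0;
  }

  static struct snd_soc_card my_card = {
          .name         = "my-machine",
          .add_dai_link = my_add_dai_link,
          /* .remove_dai_link can undo the setup when a link goes away */
  };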
Signed-off-by: Mengdong Lin <mengdong.lin@linux.intel.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Implement a DAI link list for the soc card.
Add APIs to add/remove DAI links dynamically, e.g. by topology.
A dobj is also embedded into struct snd_soc_dai_link. Topology can
use the dobj to find the links created by it and remove them when the
topology component is unloaded.
The predefined DAI links are kept for backward compatibility, and
they will also be added to the list.
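A rough usage sketch (the dai_link_list/list/dobj field names and the
SND_SOC_DOBJ_DAI_LINK constant are assumptions based on the description
above):

  struct snd_soc_dai_link *link, *tmp;

  /* topology tags a link it creates, then adds it to the card */
  new_link->dobj.type = SND_SOC_DOBJ_DAI_LINK;
  snd_soc_add_dai_link(card, new_link);

  /* on topology unload, remove only the links topology owns */
  list_for_each_entry_safe(link, tmp, &card->dai_link_list, list)
          if (link->dobj.type == SND_SOC_DOBJ_DAI_LINK)
                  snd_soc_remove_dai_link(card, link);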
Signed-off-by: Mengdong Lin <mengdong.lin@linux.intel.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Receipt of a CM MAD with a method other than Send, for an attribute
other than ClassPortInfo, is invalid:
CM attributes other than ClassPortInfo only use the Send method.
The SRP initiator does not maintain a timeout policy for CM connect
requests and relies on the CM layer to do that. The result was that
the SRP initiator hung as the connect request never completed.
A new SRP target has been observed to respond to Send CM REQ
with GetResp of CM REQ with bad status. This is non-conformant
with the IBA spec but exposes a vulnerability in the current MAD/CM
code, which will respond to the incoming GetResp of CM REQ as if
it were a valid incoming Send of CM REQ rather than tossing
this on the floor. It also causes the MAD layer not to
retransmit the original REQ even though it has not received a REP.
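A rough sketch of the validation described above; IB_MGMT_METHOD_SEND is
the standard MAD method constant, while the ClassPortInfo attribute
constant name here is an assumption:

  static bool cm_mad_method_is_valid(const struct ib_mad_hdr *hdr)
  {
          /* ClassPortInfo legitimately uses other methods (Get/GetResp) */
          if (hdr->attr_id == CM_ATTR_ID_CLASS_PORT_INFO)
                  return true;
          /* all other CM attributes are only valid with the Send method */
          return hdr->method == IB_MGMT_METHOD_SEND;
  }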
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Hal Rosenstock <hal@mellanox.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Workqueue stalls can happen from a variety of usage bugs such as
missing WQ_MEM_RECLAIM flag or a concurrency-managed work item
indefinitely staying RUNNING. These stalls can be extremely difficult
to hunt down because the usual warning mechanisms can't detect
workqueue stalls and the internal state is pretty opaque.
To alleviate the situation, this patch implements a workqueue lockup
detector. It monitors all worker_pools periodically and,
if any pool fails to make forward progress for longer than the threshold
duration, triggers a warning and dumps workqueue state as follows.
BUG: workqueue lockup - pool cpus=0 node=0 flags=0x0 nice=0 stuck for 31s!
Showing busy workqueues and worker pools:
workqueue events: flags=0x0
pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=17/256
pending: monkey_wrench_fn, e1000_watchdog, cache_reap, vmstat_shepherd, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, cgroup_release_agent
workqueue events_power_efficient: flags=0x80
pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=2/256
pending: check_lifetime, neigh_periodic_work
workqueue cgroup_pidlist_destroy: flags=0x0
pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/1
pending: cgroup_pidlist_destroy_work_fn
...
The detection mechanism is controlled through the kernel parameter
workqueue.watchdog_thresh and can be updated at runtime through the
sysfs module parameter file.
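Roughly, a runtime-updatable module parameter like this can be wired up
as follows (simplified sketch; the real setter would also re-arm the
watchdog timer):

  /* threshold in seconds; 0 is assumed to disable the detector */
  static unsigned long wq_watchdog_thresh = 30;

  static int wq_watchdog_param_set_thresh(const char *val,
                                          const struct kernel_param *kp)
  {
          unsigned long thresh;
          int ret = kstrtoul(val, 0, &thresh);

          if (ret)
                  return ret;
          wq_watchdog_thresh = thresh;
          /* re-arm or disable the periodic check here */
          return 0;
  }

  static const struct kernel_param_ops wq_watchdog_thresh_ops = {
          .set = wq_watchdog_param_set_thresh,
          .get = param_get_ulong,
  };

  /* appears as /sys/module/workqueue/parameters/watchdog_thresh */
  module_param_cb(watchdog_thresh, &wq_watchdog_thresh_ops,
                  &wq_watchdog_thresh, 0644);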
v2: Decoupled from softlockup control knobs.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Don Zickus <dzickus@redhat.com>
Cc: Ulrich Obergfell <uobergfe@redhat.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Chris Mason <clm@fb.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
touch_softlockup_watchdog() is used to tell watchdog that scheduler
stall is expected. One group of usages is from paths where the task
may not be able to yield for a long time, such as performing slow PIO
to a finicky device or coming out of suspend. The other is to account
for scheduler and timer going idle.
For scheduler softlockup detection, there's no reason to distinguish
the two cases; however, workqueue lockup detector is planned and it
can use the same signals from the former group while the latter would
spuriously prevent detection. This patch introduces a new function
touch_softlockup_watchdog_sched() and converts the latter group to call
it instead. For now, it just calls touch_softlockup_watchdog() and
there's no functional difference.
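A minimal sketch of the split as described, where the new function is a
plain alias for now:

  /* for scheduler/timer-idle paths only; identical behavior today, but
   * a workqueue lockup detector can later be touched from
   * touch_softlockup_watchdog() proper without this variant doing so */
  void touch_softlockup_watchdog_sched(void)
  {
          touch_softlockup_watchdog();
  }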
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Ulrich Obergfell <uobergfe@redhat.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
The meat here is definitely the detailed specs for what atomic_check
and atomic_commit are supposed to do.
And another candidate for a core vfunc that should be in a helper really
(output_poll_changed this time around).
v2: Feedback from Eric on irc:
- spelling fixes.
- spec what async should do
- copy the event related paragraphs from page_flip and adjust
- make it clear that a successful async commit is not allowed to leave
the pipe dead or disabled.
v3: Use FIXME comments to annotate functions that we should move to
some helpers.
v4: Suggestions from Thierry.
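For reference, a sketch of the two hooks being specified, as they
currently appear in struct drm_mode_config_funcs:

  struct drm_mode_config_funcs {
          /* ... */
          int (*atomic_check)(struct drm_device *dev,
                              struct drm_atomic_state *state);
          int (*atomic_commit)(struct drm_device *dev,
                               struct drm_atomic_state *state,
                               bool async);
  };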
Cc: Eric Anholt <eric@anholt.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1449218769-16577-22-git-send-email-daniel.vetter@ffwll.ch
Reviewed-by: Thierry Reding <treding@nvidia.com>
And merge any docbook we have into the kerneldoc comments.
Since it's a legacy entry point with only two implementations (one each
in atomic and legacy crtc helpers) I've made the documentation for
set_config fairly sparse - no one should ever need to look at this
again, all the ABI we have is baked into code.
For ->page_flip otoh I kept all the extensive docs from the docbook
and even extended them where they were lacking: Currently we have a pile of
legacy page_flip implementations, and even for atomic drivers there's
not yet a standard implementation in the helpers. Which means every
driver needs to implement this itself, and precise specs are really
valuable.
Otherwise there's just cursor, which really just boils down to "use at
least universal planes". And gamma tables (where we have a bit a mess
with the fbdev helper gamma hooks).
v2: Spelling fixes (Eric).
v3: Suggestions from Thierry.
Cc: Eric Anholt <eric@anholt.net>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1449218769-16577-20-git-send-email-daniel.vetter@ffwll.ch
The special case here is that both ->detect and ->force are actually
functions only called by the probe helpers and hence really shouldn't
be here. But since they're used by pretty much every driver I figured
it's better to just document this for now instead of holding this doc
patch hostage until that's all fixed. For that reason also group force
right next to detect.
v2: Use FIXME comments to annotate where we should move a hook to
helpers.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1449218769-16577-18-git-send-email-daniel.vetter@ffwll.ch
Reviewed-by: Thierry Reding <treding@nvidia.com>
They're not how system suspend/resume should be done with atomic
(there are new helpers for that, developed by Thierry Reding), and for
legacy drivers this really should be a helper hook and not a core one.
But there's not even helper code to use them, and only 2 drivers
(which now have their own private hooks) set them. Ditch them.
Saves me typing some kerneldoc, too ;-)
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1449218769-16577-15-git-send-email-daniel.vetter@ffwll.ch
Reviewed-by: Thierry Reding <treding@nvidia.com>
Originally the idea behind void* was to allow different sets of
helpers. But now we have that (with probe, plane, crtc and atomic
helpers) and we still just use the same set of vtables. That's the
only way to make the individual helpers modular and allow drivers to
pick&choose and transition between them. So this flexibility isn't
really needed. Also, we meanwhile have lots of non-vtable data in core
structures too; this is not the first one at all.
Given all that, the void * is only trouble, since gcc can't warn you if you
mix them up. Let's fix that and make them typesafe.
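Sketch of the effect on a core struct (before/after, drm_crtc shown as
one example):

  /* before: opaque, easy to assign the wrong vtable */
  struct drm_crtc {
          /* ... */
          const void *helper_private;
  };

  /* after: gcc now catches a mixed-up assignment */
  struct drm_crtc {
          /* ... */
          const struct drm_crtc_helper_funcs *helper_private;
  };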
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1449218769-16577-5-git-send-email-daniel.vetter@ffwll.ch
Reviewed-by: Thierry Reding <treding@nvidia.com>
Currently we have 4 helper libraries (probe, crtc, plane & atomic)
that all use the same helper vtables. And that's by necessity since we
don't want to litter the core structs with one ops pointer per helper
library. Also, they often reuse the same hooks (like atomic does, to
facilitate conversion from existing drivers using crtc and plane
helpers).
Given all that it doesn't make sense to put the docs for these next to
specific helpers. Instead extract them into a new header file and
section in the docbook, and add references to them everywhere.
Unfortunately kernel-doc complains when an include directive doesn't
find anything (and it does so by dumping crap into the output file). We
have to remove the now-empty includes to avoid that, instead of leaving
them in for future-proofing.
v2: More OCD in ordering functions.
v3: Spelling plus collate copyright headers properly.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1449218769-16577-4-git-send-email-daniel.vetter@ffwll.ch
Reviewed-by: Thierry Reding <treding@nvidia.com>
Fix cases in dt-bindings where the preprocessor #ifndef/#define
guard variables were mismatched.
Signed-off-by: Ashley Towns <mail@ashleytowns.id.au>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Add support for legacy non-DT Dove to the PMU driver, so that we can
transition the legacy support over.
[gregory.clement@free-electrons.com: removed pm_genpd_poweroff_unused]
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
This can be parsed with vc4-gpu-tools when trying to figure out
what was going on.
v2: Use __u32-style types.
Signed-off-by: Eric Anholt <eric@anholt.net>
This is basically a remote version of the btrfs CLONE operation,
so the implementation is fairly trivial. Made even more trivial
by stealing the XDR code and general framework from Anna Schumaker's
COPY prototype.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: J. Bruce Fields <bfields@fieldses.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
The btrfs clone ioctls are now adopted by other file systems, with NFS
and CIFS already having support for them, and XFS being under active
development. To avoid growth of various slightly incompatible
implementations, add one to the VFS. Note that clones are different from
file copies in several ways:
- they are atomic vs other writers
- they support whole file clones
- they support 64-bit length clones
- they do not allow partial success (aka short writes)
- clones are expected to be a fast metadata operation
Because of that it would be rather cumbersome to try to piggyback them on
top of the recent copy_file_range infrastructure. The converse isn't
true, though, and the copy_file_range system call could try a clone as
a first attempt to copy, something that further patches will enable.
Based on earlier work from Peng Tao.
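A rough usage sketch of the new VFS entry point from a filesystem's
clone ioctl (variable names are illustrative):

  /* all-or-nothing: unlike a copy, there is no short-write case to loop on */
  ret = vfs_clone_file_range(src_file, 0, dst_file, 0, len);
  if (ret)
          return ret;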
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Pass a loff_t end for the last byte instead of the 32-bit count
parameter to allow full file clones even on 32-bit architectures.
While we're at it also simplify the read/write selection.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: J. Bruce Fields <bfields@fieldses.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
The user submission is basically a pointer to a command list and a
pointer to uniforms. We copy those in to the kernel, validate and
relocate them, and store the result in a GPU BO which we queue for
execution.
v2: Drop support for NV shader recs (not necessary for GL), simplify
vc4_use_bo(), improve bin flush/semaphore checks, use __u32 style
types.
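Roughly, the submit argument carries the two pointers plus the BOs they
reference; this is an illustrative subset, not the exact ABI:

  struct drm_vc4_submit_cl_sketch {
          __u64 bin_cl;          /* user pointer to binner command list */
          __u64 uniforms;        /* user pointer to uniform data */
          __u64 bo_handles;      /* user pointer to referenced BO handles */
          __u32 bin_cl_size;
          __u32 uniforms_size;
          __u32 bo_handle_count;
          __u32 pad;
  };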
Signed-off-by: Eric Anholt <eric@anholt.net>
Since we have no MMU, the kernel needs to validate that the submitted
shader code won't make any accesses to memory that the user doesn't
control, which involves banning some operations (general purpose DMA
writes), and tracking where we need to write out pointers for other
operations (texture sampling). Once it's validated, we return a GEM
BO containing the shader, which doesn't allow mapping for write or
exporting to other subsystems.
v2: Use __u32-style types.
Signed-off-by: Eric Anholt <eric@anholt.net>
While there exist dumb APIs for creating and mapping BOs, one of the
rules is that drivers doing 3D acceleration have to provide their own
APIs for buffer allocation (besides, the pitch/height parameters of
the dumb alloc don't really make sense for a lot of 3D allocations).
v2: Use __u32-style types, use "drm.h" instead of <drm/drm.h>.
Signed-off-by: Eric Anholt <eric@anholt.net>
The CMA helpers had no way for a driver to extend the struct with its
own fields. Since the CMA helpers are mostly "Allocate a
drm_gem_cma_object, then fill in a few fields", it's hard to write as
pure helpers without passing in a driver callback for the allocate
step.
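A rough sketch of the shape this enables; my_bo/my_gem_create_object are
illustrative names for a driver's subclassed BO and its allocate hook:

  struct my_bo {
          struct drm_gem_cma_object base;
          /* driver-private fields extend the CMA object here */
          struct list_head unref_head;
  };

  static struct drm_gem_object *my_gem_create_object(struct drm_device *dev,
                                                     size_t size)
  {
          struct my_bo *bo = kzalloc(sizeof(*bo), GFP_KERNEL);

          /* the CMA helpers fill in the common fields as before */
          return bo ? &bo->base.base : NULL;
  }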
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Although list_for_each_entry_rcu() can in theory be used anywhere
preemption is disabled, it can result in calls to lockdep, which cannot
be used in certain constrained execution environments, such as exception
handlers that do not map the entire kernel into their address spaces.
This commit therefore adds list_entry_lockless() and
list_for_each_entry_lockless(), which never invoke lockdep and can
therefore safely be used from these constrained environments, but only
as long as those environments are non-preemptible (or items are never
deleted from the list).
Use synchronize_sched(), call_rcu_sched(), or synchronize_sched_expedited()
in updates for the needed grace periods. Of course, if items are never
deleted from the list, there is no need to wait for grace periods.
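A rough usage sketch pairing the lockless reader with a sched-flavored
grace period on the update side (types and names are illustrative):

  struct foo { int data; struct list_head node; };
  struct foo *p;

  /* reader: non-preemptible section, never calls into lockdep */
  preempt_disable();
  list_for_each_entry_lockless(p, &mylist, node)
          do_something(p->data);
  preempt_enable();

  /* updater: wait out non-preemptible readers before freeing */
  list_del_rcu(&victim->node);
  synchronize_sched();
  kfree(victim);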
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
rcu_dereference_raw() indirectly calls rcu_read_lock_held(), while
rcu_dereference_raw_notrace() does not, so fix the comment about the latter.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
This commit replaces a local_irq_save()/local_irq_restore() pair with
a lockdep assertion that interrupts are already disabled. This should
remove the corresponding overhead from the interrupt entry/exit fastpaths.
This change was inspired by the fact that Iftekhar Ahmed's mutation
testing showed that removing rcu_irq_enter()'s call to local_irq_restore()
had no effect, which might indicate that interrupts were always enabled
anyway.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
The rcu_expedited, rcu_normal, and rcu_normal_after_boot kernel boot
parameters are pointless in the case of TINY_RCU because in that case
synchronous grace periods, both expedited and normal, are no-ops.
However, these three symbols contribute several hundred bytes of bloat.
This commit therefore uses CPP directives to avoid compiling this code
in TINY_RCU kernels.
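A minimal sketch of the guard (exact placement and surrounding code may
differ):

  #ifndef CONFIG_TINY_RCU
  /* these knobs only matter when synchronous grace periods do real work */
  module_param(rcu_expedited, int, 0);
  module_param(rcu_normal, int, 0);
  static int rcu_normal_after_boot;
  module_param(rcu_normal_after_boot, int, 0);
  #endif /* #ifndef CONFIG_TINY_RCU */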
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
TCP SYNACK messages might now be attached to request sockets.
XFRM needs to get back to a listener socket.
Adds new helpers that might be used elsewhere:
sk_to_full_sk() and sk_const_to_full_sk()
Note: We also need to add RCU protection for xfrm lookups,
now TCP/DCCP have lockless listener processing. This will
be addressed in separate patches.
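The non-const helper boils down to roughly this (sketch; the real one is
also guarded for CONFIG_INET):

  /* if sk is a request socket, return the full listener socket instead */
  static inline struct sock *sk_to_full_sk(struct sock *sk)
  {
          if (sk && sk->sk_state == TCP_NEW_SYN_RECV)
                  sk = inet_reqsk(sk)->rsk_listener;
          return sk;
  }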
Fixes: ca6fb06518 ("tcp: attach SYNACK messages to request sockets instead of listener")
Reported-by: Dave Jones <davej@codemonkey.org.uk>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The current implementation gets a spin_lock, and at any scale with
qib and hfi1 post send, the lock contention grows exponentially
with the number of QPs.
idr_find() is RCU compatible, so reads don't need the lock.
Change to use rcu_read_lock() and rcu_read_unlock() in
__idr_get_uobj().
kfree_rcu() is used to ensure a grace period between the
idr removal and the actual free.
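A rough sketch of the converted lookup:

  static struct ib_uobject *__idr_get_uobj(struct idr *idr, int id)
  {
          struct ib_uobject *uobj;

          rcu_read_lock();
          /* idr_find() is RCU-safe; the ref grab under rcu_read_lock()
           * is safe because removal defers the free via kfree_rcu() */
          uobj = idr_find(idr, id);
          if (uobj)
                  kref_get(&uobj->ref);
          rcu_read_unlock();

          return uobj;
  }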
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Reviewed-By: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Since no DSA driver uses the polling callback anymore, and since
the phylib handles the link detection, remove the link polling
work and timer code.
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
DAX page fault path needs to get blocks that are pre-zeroed to avoid
races when two concurrent page faults happen in the same block of a
file. Implement support for this in ext4_map_blocks().
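A rough usage sketch; the flag name is an assumption based on the
description:

  /* ask for blocks zeroed before the extent is exposed as written,
   * so a racing fault in the same block cannot see stale data */
  ret = ext4_map_blocks(handle, inode, &map,
                        EXT4_GET_BLOCKS_CREATE | EXT4_GET_BLOCKS_ZERO);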
Signed-off-by: Jan Kara <jack@suse.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
When dioread_nolock mode is enabled, we grab i_data_sem in
ext4_ext_direct_IO() and therefore we need to instruct _ext4_get_block()
not to grab i_data_sem again using EXT4_GET_BLOCKS_NO_LOCK. However
holding i_data_sem over overwrite direct IO isn't needed these days. We
have exclusion against truncate / hole punching because we increase
i_dio_count under i_mutex in ext4_ext_direct_IO() so once
ext4_file_write_iter() verifies blocks are allocated & written, they are
guaranteed to stay so during the whole direct IO even after we drop
i_mutex.
So we can just remove this locking abuse and the no longer necessary
EXT4_GET_BLOCKS_NO_LOCK flag.
Signed-off-by: Jan Kara <jack@suse.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Concurrent non-blocking slowpath ramrods can be completed
out-of-order on the completion chain. Recycling completed elements
while previously sent elements are still pending completion
can lead to overwriting active elements on the chain. Furthermore,
sending pending slowpath ramrods currently lacks the update of the
chain element physical pointer.
This patch:
* Ensures that ramrods are sent to the FW with
consecutive echo values.
* Handles out-of-order completions by freeing only first
successive completed entries.
* Updates the chain element physical pointer when copying
a pending element into a free element for sending.
Signed-off-by: Tomer Tayar <Tomer.Tayar@qlogic.com>
Signed-off-by: Manish Chopra <manish.chopra@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The number of chain next-pointer elements between the producer
and the consumer indices depends on which pages they currently
point to. The current calculation is based only on their difference,
and it can lead to a number of free elements which is higher by 1
than the actual value.
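Roughly, the corrected computation needs to discount one next-pointer
element per page boundary between the two indices (sketch; names are
assumptions):

  /* used = mod-64K distance, minus next-pointer elements in between */
  u32 used = (u32)0x10000 + prod_idx - cons_idx;

  used -= prod_idx / elems_per_page - cons_idx / elems_per_page;
  free_elems = capacity - (u16)used;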
Signed-off-by: Tomer Tayar <Tomer.Tayar@qlogic.com>
Signed-off-by: Manish Chopra <manish.chopra@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Johannes Berg says:
====================
This pull request got a bit bigger than I wanted, due to
needing to reshuffle and fix some bugs. I merged mac80211
to get the right base for some of these changes.
* new mac80211 API for upcoming driver changes: EOSP handling,
key iteration
* scan abort changes allowing an ongoing scan to be cancelled
* VHT IBSS 80+80 MHz support
* re-enable full AP client state tracking after fixes
* various small fixes (that weren't relevant for mac80211)
* various cleanups
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
In the case where a request queue is passed to the low-level lightnvm
device driver integration, the device driver might pass its admin
commands through another queue. Instead pass nvm_dev, and let the
low-level driver pick the appropriate queue.
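Sketched effect on the ops interface (illustrative, not the exact
signatures):

  struct nvm_dev_ops_sketch {
          /* was: int (*submit_io)(struct request_queue *q, ...);
           * now the driver picks the queue itself */
          int (*submit_io)(struct nvm_dev *dev, struct nvm_rq *rqd);
  };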
Reported-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Matias Bjørling <m@bjorling.me>
Signed-off-by: Jens Axboe <axboe@fb.com>
It is not obvious what NVM_IO_* and NVM_BLK_T_* are used for. Make sure
to comment them appropriately, like the other constants.
Signed-off-by: Matias Bjørling <m@bjorling.me>
Signed-off-by: Jens Axboe <axboe@fb.com>
Some controllers lock up on an ata_read_log_page() command.
Add a new ata port flag, ATA_FLAG_NO_LOG_PAGE, which can be used
to blacklist such a controller.
If this flag is set, any attempt to read a log page returns an error
without actually issuing the command.
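A rough sketch of the short-circuit in the log-page read path (the error
code choice here is an assumption):

  /* honor the blacklist flag before touching the hardware */
  if (dev->link->ap->flags & ATA_FLAG_NO_LOG_PAGE)
          return AC_ERR_DEV;      /* fail without issuing the command */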
Signed-off-by: Andreas Werner <andreas.werner@men.de>
Signed-off-by: Tejun Heo <tj@kernel.org>