Currently the standard memory allocator (snd_dma_malloc_pages*())
passes the byte size to allocate as is. Most of the backends allocate
real pages, hence the actual allocations are aligned to the page
size. However, genalloc doesn't seem to assure the size alignment,
hence it may result in accesses outside the buffer when the whole
memory pages are exposed via mmap.
To avoid such inconsistencies, this patch makes the allocation size
always aligned to the page size.
Note that, after this change, the snd_dma_buffer.bytes field contains
the aligned size, not the originally requested size. This value is
also used when the pages are released.
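As an illustration only (the helper name is hypothetical, not part of
the patch), the alignment boils down to the kernel's PAGE_ALIGN()
macro:

  #include <linux/mm.h>	/* PAGE_ALIGN(), PAGE_SIZE */

  static size_t snd_dma_aligned_size(size_t size)
  {
  	/* e.g. a 100-byte request becomes PAGE_SIZE (4096 on most arches) */
  	return PAGE_ALIGN(size);
  }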
Bug: 209931573
(cherry picked from commit 5c1733e33c888a3cb7f576564d8ad543d5ad4a9e)
Change-Id: Ib65f0e29b87d55e13006c7416793a4539d376cc8
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Link: https://lore.kernel.org/r/20201218145625.2045-2-tiwai@suse.de
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Denis Hsu <denis.hsu@mediatek.com>
The mmu_notifier_trylock definition for the CONFIG_MMU_NOTIFIER=n
configuration was not updated from the older version. Correct that
mistake.
Fixes: 6971350406 ("ANDROID: fix mmu_notifier race caused by not taking mmap_lock during SPF")
Bug: 161210518
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Change-Id: I71b8644bd2864b6ed98a7ff9c15a99fbd4c5a6c5
Szymon rightly pointed out that the previous check for the endpoint
direction in bRequestType was not looking at only the bit involved, but
rather the whole value. Normally this is ok, but for some request
types, bits other than bit 8 could be set, and the endpoint length
check would not stall correctly.
Fix that up by only checking the single bit.
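A minimal sketch of the corrected test (the helper is illustrative,
not the actual diff): mask with USB_DIR_IN (0x80) instead of comparing
the whole bRequestType byte:

  #include <linux/usb/ch9.h>	/* struct usb_ctrlrequest, USB_DIR_IN */

  /* true for IN transfers regardless of the type/recipient bits */
  static bool ep0_dir_in(const struct usb_ctrlrequest *ctrl)
  {
  	return (ctrl->bRequestType & USB_DIR_IN) != 0;
  }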
Fixes: 153a2d7e3350 ("USB: gadget: detect too-big endpoint 0 requests")
Cc: Felipe Balbi <balbi@kernel.org>
Reported-by: Szymon Heidrich <szymon.heidrich@gmail.com>
Link: https://lore.kernel.org/r/20211214184621.385828-1-gregkh@linuxfoundation.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit f08adf5add9a071160c68bb2a61d697f39ab0758
https://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb.git usb-linus)
Bug: 210292376
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I7e708b2b94433009c87f697346e0515d93454f48
When a kernel thread calls dma_buf_put() to release the last reference
to a dma-buf, fput_many() defers calling the release callback to a
workqueue. This means that if the same kernel thread later calls
dma_heap_buffer_alloc(), it has no guarantee that the memory from the
prior free is available, leading to random failures. As a short-term
workaround, call flush_delayed_fput() to ensure the free completes
synchronously.
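A caller-side sketch of the workaround (heap handle, length, and flags
are illustrative; dma_heap_buffer_alloc() is the dma-heap export in
the Android tree):

  #include <linux/file.h>	/* flush_delayed_fput() */
  #include <linux/dma-heap.h>

  /* complete fput() work deferred by an earlier dma_buf_put() so the
   * freed pages are back in the pool before we allocate again */
  flush_delayed_fput();
  dmabuf = dma_heap_buffer_alloc(heap, len, fd_flags, heap_flags);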
Leaf changes summary: 1 artifact changed
Changed leaf types summary: 0 leaf type changed
Removed/Changed/Added functions summary: 0 Removed, 0 Changed, 1 Added function
Removed/Changed/Added variables summary: 0 Removed, 0 Changed, 0 Added variable
1 Added function:
[A] 'function void flush_delayed_fput()'
Bug: 210598057
Change-Id: Id936aa0bcd410b23b12f4b922b676aa61a358b4c
Signed-off-by: Patrick Daly <quic_pdaly@quicinc.com>
To prevent ABI breakage, move mm->mmu_notifier_lock into
mm->notifier_subscriptions and allocate mm->notifier_subscriptions
during mm creation in mmu_notifier_subscriptions_init. This results
in an additional 176 bytes allocated for each mm, but prevents ABI
breakage.
An mmu_notifier_subscriptions_hdr structure is introduced at the
beginning of mmu_notifier_subscriptions to keep
mmu_notifier_subscriptions hidden and to prevent its type CRC from
changing when used in other structures.
Bug: 161210518
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Change-Id: I6f435708d642b70b22e0243c8b33108c208ce5bb
percpu_rw_semaphore changes to allow calling percpu_free_rwsem in atomic
context cause ABI breakage. Introduce percpu_free_rwsem_atomic wrapper
and change percpu_rwsem_destroy to use it in order to keep
percpu_rw_semaphore struct intact and fix ABI breakage.
Bug: 161210518
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Change-Id: I198a6381fb48059f2aaa2ec38b8c1e5e5e936bb0
When pagefaults are handled speculatively, the pair of
mmu_notifier_invalidate_range_start/mmu_notifier_invalidate_range_end
calls happen without mmap_lock being taken. This enables the following
race:
mmu_notifier_invalidate_range_start
mmap_write_lock
mmu_notifier_register
mmap_write_unlock
mmu_notifier_invalidate_range_end
In this case mmu_notifier_invalidate_range_end will see a new
subscriber not seen at the time of mmu_notifier_invalidate_range_start
and will call ops->invalidate_range_end for that subscriber without
the matching ops->invalidate_range_start, creating an imbalance.
Fix this by introducing a new mm->mmu_notifier_lock percpu_rw_semaphore
to synchronize mmu_notifier_invalidate_range_start/
mmu_notifier_invalidate_range_end with mmu_notifier_register when
handling pagefaults speculatively without holding mmap_lock.
percpu_rw_semaphore is used instead of rw_semaphore to prevent cache
line bouncing in the pagefault path.
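A sketch of the resulting synchronization (simplified; the exact call
sites and field layout are assumptions from the description): the
speculative fault path brackets the notifier pair with the new lock
held for read, while mmu_notifier_register takes it for write:

  #include <linux/percpu-rwsem.h>

  /* speculative fault path, mmap_lock not held */
  percpu_down_read(&mm->mmu_notifier_lock);
  mmu_notifier_invalidate_range_start(&range);
  /* ... install the new PTE ... */
  mmu_notifier_invalidate_range_end(&range);
  percpu_up_read(&mm->mmu_notifier_lock);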
Fixes: 86ee4a531e ("FROMLIST: x86/mm: add speculative pagefault handling")
Bug: 161210518
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Change-Id: I9c363b2348efcad19818f93b010abf956870ab55
Android OTA failed due to the SBI_NEED_FSCK flag when pinning the
file. Let's avoid setting it, since we can do in-place updates.
Bug: 210593661
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
(cherry picked from commit 70da2736a4138b86a12873d33fefbb495e22e6f8
git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs.git dev)
Signed-off-by: Huang Jianan <huangjianan@oppo.com>
Change-Id: I3fd33c984417c10b38e23de6cec017b03d588945
Slub has a static, spinlock-protected bitmap for marking which objects
are on a freelist when it wants to list them, for situations where
dynamically allocating such a map could lead to recursion or locking
issues and an on-stack bitmap would be too large.
The handlers of the debugfs files alloc_traces and free_traces also
currently use this shared bitmap, but their syscall context makes it
straightforward to allocate a private map before entering locked
sections, so switch these processing paths to use a private bitmap.
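A sketch of the private-map pattern (the object-count variable is
illustrative):

  unsigned long *obj_map;

  obj_map = bitmap_zalloc(objects_per_slab, GFP_KERNEL);
  if (!obj_map)
  	return -ENOMEM;
  /* ... take slab locks, mark free objects in obj_map, drop locks ... */
  bitmap_free(obj_map);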
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Christoph Lameter <cl@linux.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Bug: 209932470
(cherry picked from commit b3fd64e1451b5efd94aa0ebc755e02558e6f3ca1)
Change-Id: I5fbf34e0d828d1c8b5e81e3679f81b70ce1fc8bc
Signed-off-by: Yee Lee <yee.lee@mediatek.com>
After we started doing a core soft reset on USB role switch, phy init
is invoked at every switch to device mode, but its counterpart de-init
is missing. This prevents the actual phy init from being done when we
really want to re-init the phy, e.g. at system resume, because the
counter maintained by the phy core is not 0. Since phy init is
actually redundant for role switch, move it out of core soft reset and
into dwc3 core init, the only place where it is required.
Fixes: f88359e1588b ("usb: dwc3: core: Do core softreset when switch mode")
Cc: <stable@vger.kernel.org>
Tested-by: faqiang.zhu <faqiang.zhu@nxp.com>
Tested-by: John Stultz <john.stultz@linaro.org> #HiKey960
Acked-by: Felipe Balbi <balbi@kernel.org>
Signed-off-by: Li Jun <jun.li@nxp.com>
Link: https://lore.kernel.org/r/1631068099-13559-1-git-send-email-jun.li@nxp.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 194108974
(cherry picked from commit 8cfac9a6744fcb143cb3e94ce002f09fd17fadbb)
Change-Id: I47b3de1b3d56aecc235b89b1d8b9f34961068636
Signed-off-by: Jindong Yue <jindong.yue@nxp.com>
Only TDs with status TD_CLEARING_CACHE will be given back after
cache is cleared with a set TR deq command.
xhci_invalidate_cached_td() failed to set the TD_CLEARING_CACHE status
for some cancelled TDs as it assumed an endpoint only needs to clear the
TD it stopped on.
This isn't always true. For example, with streams enabled an endpoint
may have several stream rings, each stopping on a different TD.
Note that if an endpoint has several stream rings, the current code
will still only clear the cache of the stream pointed to by the last
cancelled TD in the cancel list.
This patch only focuses on making sure all cancelled TDs are given
back, avoiding a hung task after device removal.
Another fix to solve clearing the caches of all stream rings with
cancelled TDs is needed, but not as urgent.
This issue was simultaneously discovered and debugged by Tao Wang,
with a slightly different fix proposal.
Fixes: 674f8438c121 ("xhci: split handling halted endpoints into two steps")
Cc: <stable@vger.kernel.org> #5.12
Reported-by: Tao Wang <wat@codeaurora.org>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210820123503.2605901-4-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 209501020
(cherry picked from commit 94f339147fc3eb9edef7ee4ef6e39c569c073753)
Change-Id: Ie7d39365e00b54154be2fd9ca05b5600bd18850d
Signed-off-by: Wesley Cheng <quic_wcheng@quicinc.com>
If add_memory_subsection() is called with a size of
memory_block_size_bytes, it calls into add_memory(), which declares
the region as system ram, and adds it to the buddy allocator. This
is inconsistent with the behavior of add_memory_subsection() for
other sizes, for which it does not add the memory to buddy and
instead reserves it for the caller's private use.
Bug: 210008865
Fixes: 417ac617ea ("ANDROID: mm/memory_hotplug: implement {add/remove}_memory_subsection")
Change-Id: Iefb69b0b4e96af670d0e65c325a9538d14b460e3
Signed-off-by: Patrick Daly <quic_pdaly@quicinc.com>
Currently, the UVC function is activated when the corresponding v4l2
device is opened. On another open, the activation of the function
fails, since the deactivation counter in `usb_function_activate`
equals 0. However, the error is not returned to userspace, since the
open of the v4l2 device is successful.
On a close, the function is deactivated (since the deactivation
counter still equals 0) and the video is disabled in
`uvc_v4l2_release`, although the UVC application is potentially still
streaming.
Move the activation of the UVC function to the subscription on
UVC_EVENT_SETUP, because there we can guarantee that a userspace
application is utilizing UVC. Block subscription on UVC_EVENT_SETUP
while another application is already subscribed to it, indicated by
`bool func_connected` in `struct uvc_device`. Extend
`struct uvc_file_handle` with the member `bool is_uvc_app_handle` to
tag it as the handle used by the userspace UVC application.
With this, a process is able to check the capabilities of the v4l2
device without deactivating the function for the actual UVC
application.
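A sketch of the subscription gate (field names taken from the
description above; error paths trimmed):

  if (sub->type == UVC_EVENT_SETUP) {
  	if (uvc->func_connected)
  		return -EBUSY;	/* another app already owns setup */
  	uvc->func_connected = true;
  	handle->is_uvc_app_handle = true;
  	usb_function_activate(&uvc->func); /* activate here, not at open */
  }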
Reviewed-By: Michael Tretter <m.tretter@pengutronix.de>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Thomas Haemmerle <thomas.haemmerle@wolfvision.net>
Signed-off-by: Michael Tretter <m.tretter@pengutronix.de>
Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de>
Acked-by: Felipe Balbi <balbi@kernel.org>
Link: https://lore.kernel.org/r/20211003201355.24081-1-m.grzeschik@pengutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 209496225
Change-Id: I17944b520d6cc29f86dd6b64b257c0d3185cb69a
(cherry picked from commit 72ee48ee8925446eaeda8e4ef3f2eb16b4a93d2a)
Signed-off-by: Dan Vacura <w36195@motorola.com>
commit 50252e4b5e989ce64555c7aef7516bdefc2fea72 upstream.
signalfd_poll() and binder_poll() are special in that they use a
waitqueue whose lifetime is the current task, rather than the struct
file as is normally the case. This is okay for blocking polls, since a
blocking poll occurs within one task; however, non-blocking polls
require another solution. This solution is for the queue to be cleared
before it is freed, by sending a POLLFREE notification to all waiters.
Unfortunately, only eventpoll handles POLLFREE. A second type of
non-blocking poll, aio poll, was added in kernel v4.18, and it doesn't
handle POLLFREE. This allows a use-after-free to occur if a signalfd or
binder fd is polled with aio poll, and the waitqueue gets freed.
Fix this by making aio poll handle POLLFREE.
A patch by Ramji Jiyani <ramjiyani@google.com>
(https://lore.kernel.org/r/20211027011834.2497484-1-ramjiyani@google.com)
tried to do this by making aio_poll_wake() always complete the request
inline if POLLFREE is seen. However, that solution had two bugs.
First, it introduced a deadlock, as it unconditionally locked the aio
context while holding the waitqueue lock, which inverts the normal
locking order. Second, it didn't consider that POLLFREE notifications
are missed while the request has been temporarily de-queued.
The second problem was solved by my previous patch. This patch then
properly fixes the use-after-free by handling POLLFREE in a
deadlock-free way. It does this by taking advantage of the fact that
freeing of the waitqueue is RCU-delayed, similar to what eventpoll does.
Fixes: 2c14fa838c ("aio: implement IOCB_CMD_POLL")
Cc: <stable@vger.kernel.org> # v4.18+
Link: https://lore.kernel.org/r/20211209010455.42744-6-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 185125206
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I748544276cf2fe214097751507d9c0ee4e3d3475
commit 363bee27e25804d8981dd1c025b4ad49dc39c530 upstream.
Currently, aio_poll_wake() will always remove the poll request from the
waitqueue. Then, if aio_poll_complete_work() sees that none of the
polled events are ready and the request isn't cancelled, it re-adds the
request to the waitqueue. (This can easily happen when polling a file
that doesn't pass an event mask when waking up its waitqueue.)
This is fundamentally broken for two reasons:
1. If a wakeup occurs between vfs_poll() and the request being
   re-added to the waitqueue, it will be missed because the request
   wasn't on the waitqueue at the time. Therefore, IOCB_CMD_POLL
   might never complete even if the polled file is ready.
2. When the request isn't on the waitqueue, there is no way to be
   notified that the waitqueue is being freed (which happens when its
   lifetime is shorter than the struct file's). This is supposed to
   happen via the waitqueue entries being woken up with POLLFREE.
Therefore, leave the requests on the waitqueue until they are actually
completed (or cancelled). To keep track of when aio_poll_complete_work
needs to be scheduled, use new fields in struct poll_iocb. Remove the
'done' field which is now redundant.
Note that this is consistent with how sys_poll() and eventpoll work;
their wakeup functions do *not* remove the waitqueue entries.
Fixes: 2c14fa838c ("aio: implement IOCB_CMD_POLL")
Cc: <stable@vger.kernel.org> # v4.18+
Link: https://lore.kernel.org/r/20211209010455.42744-5-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 185125206
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Ic85396773d98ef3ccf48559462557e4faa3289c3
commit 9537bae0da1f8d1e2361ab6d0479e8af7824e160 upstream.
wake_up_poll() uses nr_exclusive=1, so it's not guaranteed to wake up
all exclusive waiters. Yet, POLLFREE *must* wake up all waiters. epoll
and aio poll are fortunately not affected by this, but it's very
fragile. Thus, the new function wake_up_pollfree() has been introduced.
Convert signalfd to use wake_up_pollfree().
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Fixes: d80e731eca ("epoll: introduce POLLFREE to flush ->signalfd_wqh before kfree()")
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20211209010455.42744-4-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 185125206
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I1d97ac9c9fbb28c164bd4b51deeefbbb139205e7
commit a880b28a71e39013e357fd3adccd1d8a31bc69a8 upstream.
wake_up_poll() uses nr_exclusive=1, so it's not guaranteed to wake up
all exclusive waiters. Yet, POLLFREE *must* wake up all waiters. epoll
and aio poll are fortunately not affected by this, but it's very
fragile. Thus, the new function wake_up_pollfree() has been introduced.
Convert binder to use wake_up_pollfree().
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Fixes: f5cb779ba1 ("ANDROID: binder: remove waitqueue when thread exits.")
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20211209010455.42744-3-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 185125206
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I8354c40ed73a7d88132a74a388704f0eb307a618
commit 42288cb44c4b5fff7653bc392b583a2b8bd6a8c0 upstream.
Several ->poll() implementations are special in that they use a
waitqueue whose lifetime is the current task, rather than the struct
file as is normally the case. This is okay for blocking polls, since a
blocking poll occurs within one task; however, non-blocking polls
require another solution. This solution is for the queue to be cleared
before it is freed, using 'wake_up_poll(wq, EPOLLHUP | POLLFREE);'.
However, that has a bug: wake_up_poll() calls __wake_up() with
nr_exclusive=1. Therefore, if there are multiple "exclusive" waiters,
and the wakeup function for the first one returns a positive value, only
that one will be called. That's *not* what's needed for POLLFREE;
POLLFREE is special in that it really needs to wake up everyone.
Considering the three non-blocking poll systems:
- io_uring poll doesn't handle POLLFREE at all, so it is broken anyway.
- aio poll is unaffected, since it doesn't support exclusive waits.
  However, that's fragile, as someone could add this feature later.
- epoll doesn't appear to be broken by this, since its wakeup function
  returns 0 when it sees POLLFREE. But this is fragile.
Although there is a workaround (see epoll), it's better to define a
function which always sends POLLFREE to all waiters. Add such a
function. Also make it verify that the queue really becomes empty after
all waiters have been woken up.
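A usage sketch for such task-lifetime waitqueues (as the signalfd and
binder conversions in this series do; the waitqueue field is
illustrative):

  /* on task exit, before the waitqueue memory goes away: wake ALL
   * waiters with EPOLLHUP | POLLFREE, not just one exclusive waiter */
  wake_up_pollfree(&ctx->wqh);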
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20211209010455.42744-2-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 185125206
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I4f69da5bbbad53975024d027fa1bbe22522c6efe
Under some conditions, USB gadget devices can show allocated buffer
contents to a host. Fix this up by zero-allocating them so that any
extra data will all just be zeros.
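A sketch of the fix pattern (names illustrative): zero-allocate the
request buffer so any bytes the gadget never writes read back as
zeros:

  req->buf = kzalloc(len, GFP_KERNEL);	/* was kmalloc(len, GFP_KERNEL) */
  if (!req->buf)
  	return -ENOMEM;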
Reported-by: Szymon Heidrich <szymon.heidrich@gmail.com>
Tested-by: Szymon Heidrich <szymon.heidrich@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 86ebbc11bb3f60908a51f3e41a17e3f477c2eaa3)
Bug: 210292367
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I72b4376cd4296a8b8af0ade2d702cd420146f3aa
Release resources when aborting a command. Make sure that aborted commands
are completed once by clearing the corresponding tag bit from
hba->outstanding_reqs. This patch is an improved version of commit
3ff1f6b6ba6f ("scsi: ufs: core: Improve SCSI abort handling").
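A sketch of the complete-once idea (simplified; the real code also
releases clock gating and PM references): whoever clears the tag bit
first completes the command:

  if (test_and_clear_bit(lrbp->task_tag, &hba->outstanding_reqs)) {
  	ufshcd_release(hba);	/* drop resources exactly once */
  	cmd->scsi_done(cmd);
  }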
Link: https://lore.kernel.org/r/20211203231950.193369-14-bvanassche@acm.org
Fixes: 7a3e97b0dc ("[SCSI] ufshcd: UFS Host controller driver")
Tested-by: Bean Huo <beanhuo@micron.com>
Reviewed-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Bean Huo <beanhuo@micron.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
(cherry picked from commit 1fbaa02dfd05229312404aaef8bc9317b4ff8750 git://git.kernel.org/pub/scm/linux/kernel/git/mkp/scsi.git for-next)
[Stanley: Resolved minor conflict in drivers/scsi/ufshcd.c]
Bug: 204438323
Change-Id: Ifdf7f016c0d1986fe905f13be8abbeb54af4bce5
Signed-off-by: Bart Van Assche <bvanassche@google.com>
Signed-off-by: Stanley Chu <stanley.chu@mediatek.com>
The only functional change in this patch is that scsi_done() is now called
after ufshcd_release() and ufshcd_clk_scaling_update_busy() instead of
before.
The next patch in this series will introduce a call to
ufshcd_release_scsi_cmd() in the abort handler.
Link: https://lore.kernel.org/r/20211203231950.193369-13-bvanassche@acm.org
Tested-by: Bean Huo <beanhuo@micron.com>
Reviewed-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Bean Huo <beanhuo@micron.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
(cherry picked from commit 6f8dafdee6ae836763e753a9df288d10b35e9679 git://git.kernel.org/pub/scm/linux/kernel/git/mkp/scsi.git for-next)
Bug: 204438323
Change-Id: Ie9e3ef49aa10d3dc9ce43625893809b232d87d5f
Signed-off-by: Bart Van Assche <bvanassche@google.com>
Signed-off-by: Stanley Chu <stanley.chu@mediatek.com>
hba->outstanding_tasks, which is read under host_lock spinlock, tells the
interrupt handler what task management tags are in use by the driver. The
doorbell register bits indicate which tags are in use by the hardware. A
doorbell bit that is 0 means either that the bit has yet to be set by
the driver, or that the task is complete. It is only possible to
disambiguate the two cases if reading/writing the doorbell register is
synchronized with reading/writing hba->outstanding_tasks.
For that reason, reading REG_UTP_TASK_REQ_DOOR_BELL must be done under
the host_lock spinlock.
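A sketch of the synchronized read (simplified from the interrupt
handling path):

  spin_lock_irqsave(hba->host->host_lock, flags);
  pending = ufshcd_readl(hba, REG_UTP_TASK_REQ_DOOR_BELL);
  /* tags issued by the driver but no longer pending in hardware */
  issued = hba->outstanding_tasks & ~pending;
  spin_unlock_irqrestore(hba->host->host_lock, flags);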
Bug: 210094292
(cherry picked from commit 5cb37a26355d79ab290220677b1b57d28e99a895)
Change-Id: I9a83393fe97682a271ec67834dc2d2888d3fbb60
Link: https://lore.kernel.org/r/20211108064815.569494-3-adrian.hunter@intel.com
Fixes: f5ef336fd2e4 ("scsi: ufs: core: Fix task management completion")
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Stanley Chu <stanley.chu@mediatek.com>
__ufshcd_issue_tm_cmd() clears req->end_io_data after timing out, which
races with the completion function ufshcd_tmc_handler() which expects
req->end_io_data to have a value.
Note __ufshcd_issue_tm_cmd() and ufshcd_tmc_handler() are already
synchronized using hba->tmf_rqs and hba->outstanding_tasks under the
host_lock spinlock.
It is also not necessary (nor typical) to clear req->end_io_data
because the block layer does it before handing out requests, e.g. via
blk_get_request().
So fix by not clearing it.
Bug: 210094292
(cherry picked from commit 886fe2915cce6658b0fc19e64b82879325de61ea)
Change-Id: I2c6f8b81f2aed10a85c167aa97dcbe9496677de5
[Stanley: Resolved minor conflict in drivers/scsi/ufshcd.c]
Link: https://lore.kernel.org/r/20211108064815.569494-2-adrian.hunter@intel.com
Fixes: f5ef336fd2e4 ("scsi: ufs: core: Fix task management completion")
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Stanley Chu <stanley.chu@mediatek.com>
Sometimes USB hosts can ask for buffers that are too large from endpoint
0, which should not be allowed. If this happens for OUT requests, stall
the endpoint, but for IN requests, trim the request size to the endpoint
buffer size.
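A sketch of the described policy (the buffer-size constant and stall
label are illustrative):

  if (w_length > EP0_BUFSIZ) {
  	if (ctrl->bRequestType & USB_DIR_IN)
  		w_length = EP0_BUFSIZ;	/* IN: trim the reply */
  	else
  		goto stall;		/* OUT: refuse the transfer */
  }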
Co-developed-by: Szymon Heidrich <szymon.heidrich@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 153a2d7e3350cc89d406ba2d35be8793a64c2038)
Bug: 210292367
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I9bbd6154177d7a1fb6c2e3a3dffa96634d85bb7f
(upstream commit 669bc5a188b40a4edc9c2a42e5b32f19182767d9)
As we register two USB buses for each xHC, and systems with several
hosts are more and more common, it is getting hard to follow the flow
of debug messages without knowing which bus they belong to.
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Bug: 202901721
Signed-off-by: Puma Hsu <pumahsu@google.com>
Change-Id: I55428c864c57e5e10c71ae2e539ca086db31a52d
(upstream commit 0d9b9f533bf1aa555fcd28fa459332b7731316b3)
Add more debugging messages to follow what happens to a URB internally
in special cases like URB cancel, halted endpoints and endpoint reset.
This helps track issues like a URB never being given back by the host.
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Bug: 202901721
Signed-off-by: Puma Hsu <pumahsu@google.com>
Change-Id: Ief6507db231a115f138c78f288929736a631a385
(upstream commit 2847c46c61486fd8bca9136a6e27177212e78c69)
This reverts commit 5d5323a6f3625f101dbfa94ba3ef7706cce38760.
That commit effectively disabled Intel host initiated U1/U2 lpm for devices
with periodic endpoints.
Before that commit we disabled host initiated U1/U2 lpm if the exit
latency was larger than any periodic endpoint service interval, per
the xHCI 1.1 specification, section 4.23.5.2.
After that commit we incorrectly checked that the service interval was
smaller than the U1/U2 inactivity timeout. This is not relevant, and
can't happen for Intel hosts, as they previously set the U1/U2 timeout
to 105% of the service interval.
The patch claimed it solved cases where devices can't be enumerated
because of bandwidth issues. This might be true, but it's a side
effect of accidentally turning off lpm.
Exit latency calculations have been revised since then.
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Bug: 202901721
Signed-off-by: Puma Hsu <pumahsu@google.com>
Change-Id: I5d77ab7e34805730c94da9d2a0052fb6096a0b69
(upstream commit 94f339147fc3eb9edef7ee4ef6e39c569c073753)
Only TDs with status TD_CLEARING_CACHE will be given back after
cache is cleared with a set TR deq command.
xhci_invalidate_cached_td() failed to set the TD_CLEARING_CACHE status
for some cancelled TDs as it assumed an endpoint only needs to clear the
TD it stopped on.
This isn't always true. For example, with streams enabled an endpoint
may have several stream rings, each stopping on a different TD.
Note that if an endpoint has several stream rings, the current code
will still only clear the cache of the stream pointed to by the last
cancelled TD in the cancel list.
This patch only focuses on making sure all cancelled TDs are given
back, avoiding a hung task after device removal.
Another fix to solve clearing the caches of all stream rings with
cancelled TDs is needed, but not as urgent.
This issue was simultaneously discovered and debugged by Tao Wang,
with a slightly different fix proposal.
Fixes: 674f8438c121 ("xhci: split handling halted endpoints into two steps")
Cc: <stable@vger.kernel.org> #5.12
Reported-by: Tao Wang <wat@codeaurora.org>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Bug: 202901721
Signed-off-by: Puma Hsu <pumahsu@google.com>
Change-Id: I0ceff10453a99183d27bc53e64c2c193e0ac429a
Some HID drivers are only for USB devices, yet did not depend on
CONFIG_USB_HID. This was hidden by the fact that the USB functions were
stubbed out in the past, but now that drivers are checking for USB
devices properly, build errors can occur with some random
configurations.
Reported-by: kernel test robot <lkp@intel.com>
Cc: stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Link: https://lore.kernel.org/r/20211202114819.2511954-1-gregkh@linuxfoundation.org
(cherry picked from commit f237d9028f844a86955fc9da59d7ac4a5c55d7d5)
Bug: 188677105
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Ia755dc2803f1111c33d1c4b06b02913eebdf34c0
Before commit fc0c209c14 ("clk: Allow parents to be specified without
string names") child clks couldn't find their parent until the parent
clk was added to a list in __clk_core_init(). After that commit, child
clks can reference their parent clks directly via a clk_hw pointer, or
they can lookup that clk_hw pointer via DT if the parent clk is
registered with an OF clk provider.
The common clk framework treats hw->core being non-NULL as "the clk is
registered" per the logic within clk_core_fill_parent_index():
      parent = entry->hw->core;
      /*
       * We have a direct reference but it isn't registered yet?
       * Orphan it and let clk_reparent() update the orphan status
       * when the parent is registered.
       */
      if (!parent)
Therefore we need to be extra careful to not set hw->core until the clk
is fully registered with the clk framework. Otherwise we can get into a
situation where a child finds a parent clk and we move the child clk off
the orphan list when the parent isn't actually registered, wrecking our
enable accounting and breaking critical clks.
Consider the following scenario:
  CPU0                                  CPU1
  ----                                  ----
                                        struct clk_hw clkBad;
  struct clk_hw clkA;
  clkA.init.parent_hws = { &clkBad };
  clk_hw_register(&clkA)                clk_hw_register(&clkBad)
   ...                                    __clk_register()
                                            hw->core = core
                                          ...
  __clk_register()
   __clk_core_init()
    clk_prepare_lock()
    __clk_init_parent()
     clk_core_get_parent_by_index()
      clk_core_fill_parent_index()
       if (entry->hw) {
               parent = entry->hw->core;
At this point, 'parent' points to clkBad even though clkBad hasn't been
fully registered yet. Ouch! A similar problem can happen if a clk
controller registers orphan clks that are referenced in the DT node of
another clk controller.
Let's fix all this by only setting the hw->core pointer underneath the
clk prepare lock in __clk_core_init(). This way we know that
clk_core_fill_parent_index() can't see hw->core be non-NULL until the
clk is fully registered.
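A sketch of the resulting ordering (heavily simplified from
__clk_core_init(); not the literal diff):

  static int __clk_core_init(struct clk_core *core)
  {
  	clk_prepare_lock();
  	/* ... validation that used to run before hw->core was set ... */
  	core->hw->core = core;	/* publish only now, under the lock */
  	/* ... parent lookup, orphan handling, rate init ... */
  	clk_prepare_unlock();
  	return 0;
  }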
Fixes: fc0c209c14 ("clk: Allow parents to be specified without string names")
Signed-off-by: Mike Tipton <quic_mdtipton@quicinc.com>
Link: https://lore.kernel.org/r/20211109043438.4639-1-quic_mdtipton@quicinc.com
[sboyd@kernel.org: Reword commit text, update comment]
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
Bug: 208605820
(cherry picked from commit 54baf56eaa40aa5cdcd02b3c20d593e4e1211220
https://git.kernel.org/pub/scm/linux/kernel/git/clk/linux.git clk-next)
Change-Id: Iee7ea8a1ba3a95a4985c2e689bcc4484c33153f1
Signed-off-by: Mike Tipton <quic_mdtipton@quicinc.com>
Long ago there wasn't a FOLL_LONGTERM flag so this DAX check was done by
post-processing the VMA list.
These days it is trivial to just check each VMA to see if it is DAX before
processing it inside __get_user_pages() and return failure if a DAX VMA is
encountered with FOLL_LONGTERM.
Removing the allocation of the VMA list is a significant speed up for many
call sites.
Add an IS_ENABLED to vma_is_fsdax so that code generation is unchanged
when DAX is compiled out.
Remove the dummy version of __gup_longterm_locked() as !CONFIG_CMA already
makes memalloc_nocma_save(), check_and_migrate_cma_pages(), and
memalloc_nocma_restore() into a NOP.
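A sketch of the per-VMA check described above (simplified from the
__get_user_pages() path):

  /* with FOLL_LONGTERM, refuse DAX mappings up front instead of
   * collecting a VMA list and post-processing it afterwards */
  if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
  	return -EOPNOTSUPP;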
Bug: 209719897
Link: https://lkml.kernel.org/r/0-v1-5551df3ed12e+b8-gup_dax_speedup_jgg@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Minchan Kim <minchan@google.com>
(cherry picked from commit 52650c8b466bac399aec213c61d74bfe6f7af1a4)
Change-Id: I8be099dc7b617916254c2650ff8a55a6b926a32e