IB/hfi1: Optimize kthread pointer locking when queuing CQ entries
All threads queuing CQ entries on different CQs are unnecessarily synchronized by a spin lock to check if the CQ kthread worker hasn't been destroyed before queuing a CQ entry. The lock used in 6efaf10f16 ("IB/rdmavt: Avoid queuing work into a destroyed cq kthread worker") is a device-global lock and will have poor performance at scale as completions are entered from a large number of CPUs.

Convert to use RCU, where the read side of RCU is rvt_cq_enter() determining that the worker is alive prior to triggering the completion event. Apply write-side RCU semantics in rvt_driver_cq_init() and rvt_cq_exit().

Fixes: 6efaf10f16 ("IB/rdmavt: Avoid queuing work into a destroyed cq kthread worker")
Cc: <stable@vger.kernel.org> # 4.14.x
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Sebastian Sanchez <sebastian.sanchez@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
committed by Doug Ledford
parent c872a1f9e3
commit af8aab7137
include/rdma/rdma_vt.h
@@ -402,7 +402,7 @@ struct rvt_dev_info {
 	spinlock_t pending_lock; /* protect pending mmap list */
 
 	/* CQ */
-	struct kthread_worker *worker; /* per device cq worker */
+	struct kthread_worker __rcu *worker; /* per device cq worker */
 	u32 n_cqs_allocated; /* number of CQs allocated for device */
 	spinlock_t n_cqs_lock; /* protect count of in use cqs */
 
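With the worker pointer annotated __rcu as above, the write side only needs to publish the pointer with rcu_assign_pointer() in rvt_driver_cq_init() and retract it before teardown. A sketch of the rvt_cq_exit() side, under the same assumptions as the read-side fragment above:

	struct kthread_worker *worker;

	/*
	 * Write side: clear the __rcu pointer so new readers see NULL,
	 * wait one grace period for readers already inside
	 * rcu_read_lock() to finish, then destroy the worker (which
	 * flushes any work queued before the pointer was cleared).
	 * rcu_dereference_raw() is fine here since there are no
	 * concurrent updaters at exit time.
	 */
	worker = rcu_dereference_raw(rdi->worker);
	if (!worker)
		return;
	RCU_INIT_POINTER(rdi->worker, NULL);
	synchronize_rcu();
	kthread_destroy_worker(worker);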