This lock is used for sriov_gpu_reset; only the holder of this
mutex may run sriov_gpu_reset.
We have a couple of sources that trigger gpu_reset for SRIOV:
1) a submit times out and triggers the reset voluntarily
2) an invalid instruction is detected by the engine and triggers
   the reset voluntarily
3) the hypervisor finds a world-switch hang, triggers an FLR, and
   notifies the guest to do the reset
All of these need handling, and we need a mutex to protect the
consistency of the reset routine.
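A minimal sketch of the intended serialization, assuming a
lock_reset mutex in the virt data (function and field names here
are illustrative, not the exact driver code):

    /* every trigger path funnels through the same mutex before
     * it may enter the reset routine */
    static int amdgpu_sriov_reset_locked(struct amdgpu_device *adev)
    {
            int r;

            mutex_lock(&adev->virt.lock_reset);   /* assumed field */
            r = sriov_gpu_reset(adev);            /* the reset work */
            mutex_unlock(&adev->virt.lock_reset);

            return r;
    }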
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Implement SRIOV gpu_reset for future use.
It will be called from:
1) job timeout
2) a privileged access or instruction error interrupt
3) the hypervisor detecting a VF hang
v2: agd: rebase on upstream
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The SW part is only invoked once, during sw_init.
The HW part is invoked during the first driver load and again later
on resume. That means we cannot allocate the MQD in hw_init/resume;
we only keep the MQD allocated in the sw_init routine, and the
hw_init routine only kmaps and programs it.
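A hedged sketch of the split; the helper names and ring fields
(mqd_obj, mqd_gpu_addr, mqd_ptr) are used here for illustration:

    /* sw_init: allocate the MQD BO exactly once */
    static int compute_mqd_sw_init(struct amdgpu_device *adev,
                                   struct amdgpu_ring *ring)
    {
            return amdgpu_bo_create_kernel(adev, sizeof(struct vi_mqd),
                                           PAGE_SIZE,
                                           AMDGPU_GEM_DOMAIN_GTT,
                                           &ring->mqd_obj,
                                           &ring->mqd_gpu_addr,
                                           &ring->mqd_ptr);
    }

    /* hw_init/resume: no allocation, only map and program the MQD */
    static int compute_mqd_hw_init(struct amdgpu_ring *ring)
    {
            struct vi_mqd *mqd = ring->mqd_ptr;

            /* program mqd->cp_hqd_* fields and commit to the HQD */
            mqd->cp_hqd_active = 1;
            return 0;
    }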
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
We ultimately want to re-use this for bare metal, so there is no
need for VF checks in the KIQ code itself, since KIQ is currently
only used in the VF case.
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This is an SRIOV fix:
MQD soft init/fini will be invoked by sw_init to allocate the BO
for the compute MQD resource, instead of the original scheme where
hw_init allocates the MQD. If hw_init allocates the MQD, then
resume will allocate it again, which leads to a memory leak after
the driver recovers from a hang.
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Introduce a new mqd member in the ring structure for later use.
We need to keep a clean copy of the MQD for the purpose of
recovering compute rings from a hang.
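Illustratively (mqd_backup is an assumed name here), the clean copy
would be used like this:

    /* after first-time init, stash a clean copy of the MQD... */
    memcpy(ring->mqd_backup, ring->mqd_ptr, sizeof(struct vi_mqd));

    /* ...and on recovery from a hang, restore it before re-init */
    memcpy(ring->mqd_ptr, ring->mqd_backup, sizeof(struct vi_mqd));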
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The CG & PG functions change engine clock/power gating, which is
not appropriate for a VF device, because a single VF doesn't see
the whole picture of the engine's overall workload.
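The guard is presumably of this common form (a sketch, not the
exact hunks):

    static int gfx_v8_0_set_powergating_state(void *handle,
                                              enum amd_powergating_state state)
    {
            struct amdgpu_device *adev = (struct amdgpu_device *)handle;

            /* a VF can't judge the engine's overall workload, so
             * leave gating decisions to the host */
            if (amdgpu_sriov_vf(adev))
                    return 0;

            /* bare-metal PG programming follows */
            return 0;
    }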
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The CPU is not efficient at clearing the framebuffer, especially
under virtualization; driver loading then takes so long that the
mailbox handshake times out.
Signed-off-by: Pixel Ding <Pixel.Ding@amd.com>
Reviewed-by: Edward O'Callaghan <funfunctor@folklore1984.net>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
ib_pool init should happen prior to fbdev_init, otherwise there
will be an error from amdgpu_sa_bo_new (amdgpu_sa.c:323):
fbdev_init calls ttm_validate, which in turn calls
amdgpu_sa_bo_new.
v2:
move fbdev_init behind the IB test.
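In amdgpu_device_init() terms, the v2 ordering is roughly (a
sketch, with error handling abbreviated):

    r = amdgpu_ib_pool_init(adev);
    if (r)
            return r;

    r = amdgpu_ib_ring_tests(adev);
    if (r)
            DRM_ERROR("ib ring test failed (%d).\n", r);

    /* fbdev_init may call ttm_validate -> amdgpu_sa_bo_new, so it
     * must come after the IB pool (and here, the IB test) is ready */
    amdgpu_fbdev_init(adev);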
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The VF uses KIQ to access registers. When a VM fault occurs, the
driver can't get back the fence of the KIQ submission and runs into
a CPU soft lockup.
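The shape of the fix is presumably to bound the wait rather than
block forever; a sketch under that assumption (timeout value and
error handling are illustrative):

    /* wait with a timeout so a lost KIQ fence (e.g. after a VM
     * fault) can't soft-lock the CPU */
    r = dma_fence_wait_timeout(f, false, msecs_to_jiffies(100));
    if (r < 1)
            DRM_ERROR("KIQ register access timed out or failed\n");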
Signed-off-by: Pixel Ding <Pixel.Ding@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
When multiple VFs try to enter exclusive mode at the same time, the
looping mechanism doesn't ensure that each one gets it, because it
only loops over active VFs; the last one then has to wait for a
long interval.
Signed-off-by: Pixel Ding <Pixel.Ding@amd.com>
Reviewed-by: Xiangliang.Yu <Xiangliang.Yu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Based on commit 4cd0945901 ("drm/msm: submit support for out-fences").
We increment the minor driver version so userspace can detect explicit
fence support.
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
---
v3: Changed to work with fence returned from GPU submit.
The next patch will need the complete dma_fence, instead of just the seqno,
to create the sync_file in etnaviv_ioctl_gem_submit, in case an
out_fence_fd is requested.
The submit needs to hold a reference to the dma_fence, to avoid racing
with the GPU completing the fence.
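A minimal sketch of the refcounting pattern (out_fence is an
assumed field name):

    /* take a reference before publishing the fence, so the GPU
     * signalling and dropping it can't race with our later use */
    submit->out_fence = dma_fence_get(gpu_fence);

    /* ...and drop it once the submit is done with the fence */
    dma_fence_put(submit->out_fence);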
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Tested-by: Philipp Zabel <p.zabel@pengutronix.de>
---
New patch in v3.
Loosely based on commit f0a42bb542 ("drm/msm: submit support for
in-fences"). Unfortunately, struct drm_etnaviv_gem_submit doesn't have
a flags field yet, so we have to extend the structure and trust that
drm_ioctl will clear the flags for us if an older userspace only submits
part of the struct.
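Roughly how the new flag would be consumed in the submit ioctl
(the mask and flag names follow the pattern described above and
are assumptions here):

    /* reject flags we don't know about, then resolve the in-fence */
    if (args->flags & ~ETNA_SUBMIT_FLAGS)
            return -EINVAL;

    if (args->flags & ETNA_SUBMIT_FENCE_FD_IN) {
            in_fence = sync_file_get_fence(args->fence_fd);
            if (!in_fence)
                    return -EINVAL;
    }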
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Gustavo Padovan <gustavo.padovan@collabora.com>
Reviewed-by: Sumit Semwal <sumit.semwal@linaro.org>
Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Each Vivante GPU contains a clock divider which can divide the GPU clock
by 2^n, which can lower the power dissipation from the GPU. It has been
suggested that the GC600 on Dove is responsible for 20-30% of the power
dissipation from the SoC, so lowering the GPU clock rate provides a way
to throttle the power dissipation, and reduce the temperature when the
SoC gets hot.
This patch hooks the Etnaviv driver into the kernel's thermal
management to allow the GPUs to be throttled when necessary,
allowing a reduction in GPU clock rate from /1 to /64 in
power-of-2 steps.
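The hookup presumably follows the standard cooling-device pattern;
a sketch under that assumption (field names are illustrative):

    #include <linux/thermal.h>

    static int etnaviv_get_max_state(struct thermal_cooling_device *cdev,
                                     unsigned long *state)
    {
            *state = 6;     /* /1 .. /64 in power-of-2 steps */
            return 0;
    }

    static int etnaviv_get_cur_state(struct thermal_cooling_device *cdev,
                                     unsigned long *state)
    {
            struct etnaviv_gpu *gpu = cdev->devdata;

            *state = gpu->freq_scale;       /* assumed field */
            return 0;
    }

    static int etnaviv_set_cur_state(struct thermal_cooling_device *cdev,
                                     unsigned long state)
    {
            /* program the 2^state clock divider here */
            return 0;
    }

    static const struct thermal_cooling_device_ops cooling_ops = {
            .get_max_state = etnaviv_get_max_state,
            .get_cur_state = etnaviv_get_cur_state,
            .set_cur_state = etnaviv_set_cur_state,
    };

Registration would then go through thermal_of_cooling_device_register()
so thermal zones described in the device tree can pick the GPU up.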
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
The fence allocation needs to be protected by the GPU mutex, otherwise
the fence seqnos of concurrent submits might not match the insertion order
of the jobs in the kernel ring. This breaks the assumption that jobs
complete with monotonically increasing fence seqnos.
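The invariant is easiest to see as a sketch (the helper names are
assumptions):

    mutex_lock(&gpu->lock);
    /* allocating the fence and inserting the job under the same
     * lock keeps seqnos monotonic in ring-insertion order */
    fence = etnaviv_gpu_fence_alloc(gpu);
    etnaviv_buffer_queue(gpu, fence, submit);
    mutex_unlock(&gpu->lock);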
Fixes: d985349017 (drm/etnaviv: take GPU lock later in the submit process)
CC: stable@vger.kernel.org #4.9+
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
This reverts commit 6c943de668 ("drm/i915: Skip execlists_dequeue()
early if the list is empty").
The validity of using READ_ONCE there depends upon having a mb to
coordinate the assignment of engine->execlist_first inside
submit_request() and checking prior to taking the spinlock in
execlists_dequeue(). We wrote "the update to TASKLET_SCHED incurs a
memory barrier making this cross-cpu checking safe", but failed to
notice that this mb was *conditional* on the execlists being ready, i.e.
there wasn't the required mb when it was most necessary!
We could install an unconditional memory barrier to fixup the
READ_ONCE():
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 7dd732cb9f57..1ed164b16d44 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -616,6 +616,7 @@ static void execlists_submit_request(struct drm_i915_gem_request *request)
 	if (insert_request(&request->priotree, &engine->execlist_queue)) {
 		engine->execlist_first = &request->priotree.node;
+		smp_wmb();
 		if (execlists_elsp_ready(engine))
But we have opted to remove the race as it should be rarely effective,
and saves us having to explain the necessary memory barriers which we
quite clearly failed at.
Reported-and-tested-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Fixes: 6c943de668 ("drm/i915: Skip execlists_dequeue() early if the list is empty")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170329100052.29505-1-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Unlocking is dangerous. In this case we combine an early update to the
out-of-queue request, because we know that it will be inserted into the
correct FIFO priority-ordered slot when it becomes ready in the future.
However, given sufficient enthusiasm, it may become ready as we are
continuing to reschedule, and so may gazump the FIFO if we have since
dropped its spinlock. The result is that it may be executed too early,
before its dependencies.
v2: Move all work into the second phase over the topological sort. This
removes the shortcut on the out-of-rbtree request to ensure that we only
adjust its priority after adjusting all of its dependencies.
Fixes: 20311bd350 ("drm/i915/scheduler: Execute requests in order of priorities")
Testcase: igt/gem_exec_whisper
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: <stable@vger.kernel.org> # v4.10+
Link: http://patchwork.freedesktop.org/patch/msgid/20170327202143.7972-1-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
intel_shadow_wa_ctx is a field of intel_vgpu_workload. container_of()
can be used to refine the relationship between intel_shadow_wa_ctx
and intel_vgpu_workload. This patch removes the useless dereference.
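The pattern in question (assuming the embedding field is named
wa_ctx):

    struct intel_vgpu_workload *workload =
            container_of(wa_ctx, struct intel_vgpu_workload, wa_ctx);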
v2. add "drm/i915/gvt" prefix. (Zhenyu)
Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Turn on KBL WS platform support in gvt-g. More platforms will be
enabled after validation.
Signed-off-by: Xu Han <xu.han@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>