Replace the global device seqno with one for each engine, and account
for in-flight seqnos on each separately. This is consistent with
dma-fence, as each timeline has a separate fence-context for each engine
and a seqno is only ordered within a fence-context (i.e. seqnos do not
need to be ordered with respect to other engines, just ordered within a
single engine). This is required to enable request rewinding for
preemption on individual engines (we have to rewind the global seqno to
avoid overflow, and we do not have to rewind all engines just to preempt
one).
v2: Rename active_seqno to inflight_seqnos to more clearly indicate that
it is a counter and not equivalent to the existing seqno. Update
functions that operated on active_seqno similarly.
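As an illustration of the per-engine timeline model (a minimal sketch
with assumed names, not the driver's actual code), each engine gets its
own fence context, so seqnos are only ever compared within one engine's
timeline:

  #include <linux/dma-fence.h>

  struct engine_timeline {
          u64 fence_context;      /* allocated once per engine */
          u32 seqno;              /* last seqno emitted on this engine */
          u32 inflight_seqnos;    /* issued but not yet retired */
  };

  static void engine_timeline_init(struct engine_timeline *tl)
  {
          tl->fence_context = dma_fence_context_alloc(1);
          tl->seqno = 0;
          tl->inflight_seqnos = 0;
  }

  /* Ordering is only meaningful when both fences share a context. */
  static bool request_is_later(const struct dma_fence *a,
                               const struct dma_fence *b)
  {
          if (a->context != b->context)
                  return false;   /* different engines: unordered */

          return (s32)(a->seqno - b->seqno) > 0;
  }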
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170223074422.4125-3-chris@chris-wilson.co.uk
When dma_fence_signal() is called, it sets a flag to indicate the fence
is complete. Before the dma_fence is signaled, the seqno check will
first have passed. During an unlocked check (such as inside a waiter),
it is possible for the fence to have been signaled even though the seqno
has since been reset (by engine wraparound). In this case the waiter
will be kicked, but for an extra layer of protection we can check the
persistent signaled bit on the fence.
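As an illustration (assumed helper names, not the driver's exact code),
an unlocked completion check can fall back to the fence's sticky
signaled bit when the engine seqno has since been reset:

  #include <linux/dma-fence.h>

  static bool request_completed(const struct dma_fence *fence,
                                u32 engine_seqno, u32 request_seqno)
  {
          /* Fast path: the engine has advanced past our seqno. */
          if ((s32)(engine_seqno - request_seqno) >= 0)
                  return true;

          /*
           * The seqno may have been reset (engine wraparound) after the
           * fence was signaled; DMA_FENCE_FLAG_SIGNALED_BIT is persistent,
           * so it still tells us the request really did complete.
           */
          return test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags);
  }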
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170223074422.4125-2-chris@chris-wilson.co.uk
Previously, vGPU type creation tried to determine the vGPU type name,
e.g. _1, _2, based on the number of mdev devices that can be created,
but different types might have very different resource sizes depending
on the physical device. We need to split the type name from the actual
mdev resources and create fixed vGPU types with determined sizes for
consistency.
With this we fix the vGPU types _1, _2, _4 and _8 for now; each type
has a fixed, defined resource size. The number of mdev instances that
can be created is determined by the physical resources, and the user
should query for that before creating them.
Cc: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The guest VM may initialize the whole GTT table during boot up, so the
warning message in emulate_gtt_mmio_write() is not necessary; it is
expected behavior, and it may confuse the user if an error message is
printed out. Remove the message from emulate_gtt_mmio_write().
Signed-off-by: Zhao, Xinda <xinda.zhao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
In GVT-g we always emulate pcode reads/writes as successful and ready
for access at any time, since we don't touch the real physical registers
here.
Add 'SKL_PCODE_CDCLK_CONTROL' write emulation; without it,
skl_set_cdclk() fails in the guest.
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Releasing shadow PPGTT pages is not enough on vGPU reset; the guest
page table tracking data should have the same life-cycle as all the
shadow PPGTT pages. Otherwise there is no chance to re-shadow the PPGTT
pages without freeing the guest page table tracking data.
This patch cleans up the PPGTT reset logic and puts vGPU reset in
working order.
v2: refactor some logic to avoid code duplication.
v3: remove a useless macro and add comments, per Christophe.
v4: keep the reset logic in the reset function.
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
When an untracked MMIO register is accessed, too much log output is
printed, which may confuse the user; most of the time it is not an
urgent case, so use gvt_dbg_mmio() instead.
Signed-off-by: Zhao, Xinda <xinda.zhao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
For a handled MMIO register, the default value is read from hardware,
while for an unhandled MMIO register, the default value would be random
if not explicitly set to 0.
Signed-off-by: Zhao Yan <yan.y.zhao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The Linux guest uses this MMIO register in an LRI command. Add the
cmd_access flag for this MMIO register in GVT to avoid an error log.
v2: change the MMIO address to its macro name
Signed-off-by: Pei Zhang <pei.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Add a whitelist to check the content of force-nonpriv registers.
v3:
per Min He's comment, modify in_whitelist()'s return type to bool, and
return a negative value from force_nonpriv_write() on failure.
v2:
1. split a big patch into two smaller ones per Zhenyu's comment.
this patch is the MMIO handling part for force-nonpriv registers
2. per Zhenyu's comment, combine all non-priv registers into a single
MMIO_DFH entry
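As an illustration, the general shape of such a check (placeholder
offsets and simplified signatures, not the actual GVT whitelist):

  static const u32 force_nonpriv_whitelist[] = {
          0x2244, /* placeholder offsets for illustration only */
          0x2248,
  };

  static bool in_whitelist(u32 reg)
  {
          int i;

          for (i = 0; i < ARRAY_SIZE(force_nonpriv_whitelist); i++)
                  if (force_nonpriv_whitelist[i] == reg)
                          return true;

          return false;
  }

  static int force_nonpriv_write(u32 offset, u32 value)
  {
          if (!in_whitelist(value))
                  return -EINVAL; /* written value names a non-whitelisted reg */

          /* ... perform the MMIO write ... */
          return 0;
  }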
Signed-off-by: Zhao Yan <yan.y.zhao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The values of those registers should be applied to the hardware on
context restore.
Signed-off-by: Zhao Yan <yan.y.zhao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
send_display_send_uevent() sends two environment variables, and the
first one, GVT_DISPLAY_READY, is set including a newline at the end of
the string; that is obviously superfluous and wrong -- at least, it
*looks* so when you only read the code.
However, it doesn't appear in the actual output due to a (supposedly
unexpected) trick. The code uses snprintf() and truncates the string
to 20 bytes. This makes the string GVT_DISPLAY_READY=0 or
...=1 including the trailing NUL byte. That is, the '\n' found in
the format string is always cut off as a result.
Although the code gives the correct result, it is confusing. This
patch addresses it by just removing the superfluous '\n' from the
format string, to avoid further confusion. If the argument "ready"
were not a bool, the size 20 would have to be corrected as well. But
it's a bool, so we can leave the magic number 20 as is for now.
FWIW, the bug was spotted by a new GCC7 warning:
drivers/gpu/drm/i915/gvt/handlers.c: In function 'pvinfo_mmio_write':
drivers/gpu/drm/i915/gvt/handlers.c:1042:34: error: 'snprintf' output truncated before the last format character [-Werror=format-truncation=]
snprintf(display_ready_str, 20, "GVT_DISPLAY_READY=%d\n", ready);
^~~~~~~~~~~~~~~~~~~~~~~~
drivers/gpu/drm/i915/gvt/handlers.c:1042:2: note: 'snprintf' output 21 bytes into a destination of size 20
snprintf(display_ready_str, 20, "GVT_DISPLAY_READY=%d\n", ready);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
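For illustration, a stand-alone user-space example of the same
truncation (not the driver code): the format expands to 21 bytes
including the terminating NUL, so snprintf() into a 20-byte buffer drops
the '\n':

  #include <stdio.h>

  int main(void)
  {
          char display_ready_str[20];
          int ready = 0;

          snprintf(display_ready_str, sizeof(display_ready_str),
                   "GVT_DISPLAY_READY=%d\n", ready);

          /* Prints GVT_DISPLAY_READY=0 with no trailing newline. */
          printf("'%s'\n", display_ready_str);
          return 0;
  }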
Fixes: 04d348ae3f ("drm/i915/gvt: vGPU display virtualization")
Bugzilla: https://bugzilla.suse.com/show_bug.cgi?id=1025903
Reported-by: Richard Biener <rguenther@suse.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Some registers were missing or treated as BDW-only. This patch fixes
that to avoid unhandled MMIO warnings.
v2: update commit message according to zhenyu's comment
Signed-off-by: Zhao Yan <yan.y.zhao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Due to request replay, the context switch interrupt may come after
GVT has freed the workload, which can cause a kernel NULL pointer
panic. This patch adds a simple check to avoid this for the short
term.
In the long term, the GVT workload lifecycle doesn't match the i915
request lifecycle, and we need to find a proper way to manage this.
v4: simplify the NULL pointer check.
v5: add unlikely() to optimize.
Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
A Windows guest will notify GVT-g through the g2v interface to request
more resources when its resources are not enough.
This patch handles this case and lets the vGPU enter failsafe mode to
avoid too many error messages.
Signed-off-by: Min He <min.he@intel.com>
Signed-off-by: Pei Zhang <pei.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Testing with concurrent GGTT accesses no longer shows the coherency
problems from yonder, commit 5bab6f60cb ("drm/i915: Serialise updates
to GGTT with access through GGTT on Braswell"). My presumption is that
the root cause was more likely fixed by commit 3b5724d702 ("drm/i915:
Wait for writes through the GTT to land before reading back"), along
with the use of WC updates to the global GTT in commit 8448661d65
("drm/i915: Convert clflushed pagetables over to WC maps"). Given
that the original symptoms can no longer be reproduced, it is time to
remove the workaround.
Testcase: igt/gem_concurrent_blit
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20170220124718.14796-1-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Right now this is just leaving a lot of spam in dmesg that makes real
issues more difficult to debug. As well (as noted by the comment right
above the DRM_DEBUG_KMS() call) this is normal behavior when there's
nothing connected to the DisplayPort connector.
Signed-off-by: Lyude <lyude@redhat.com>
Flushing the cachelines for an object is slow; it can take as much as
100ms for a large framebuffer. We currently do this under the
struct_mutex BKL on execution or on pageflip. But now, with the ability
to add fences to obj->resv for both flips and execbuf (and we naturally
wait on the fence before CPU access), we can move the clflush operation
to a workqueue and signal a fence for completion, thereby doing the work
asynchronously and not blocking the driver or its clients.
v2: Introduce i915_gem_clflush.h and use a new name, split out some
extras into separate patches.
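A minimal sketch of the offload-and-signal pattern described above
(illustrative only, with assumed names; the patch's actual
implementation differs and hooks the fence into obj->resv):

  #include <linux/dma-fence.h>
  #include <linux/slab.h>
  #include <linux/workqueue.h>
  #include <drm/drm_cache.h>

  struct async_clflush {
          struct dma_fence dma;
          spinlock_t lock;
          struct work_struct work;
          struct page **pages;
          unsigned long num_pages;
  };

  static const char *clflush_driver_name(struct dma_fence *fence)
  {
          return "sketch";
  }

  static const char *clflush_timeline_name(struct dma_fence *fence)
  {
          return "clflush";
  }

  static bool clflush_enable_signaling(struct dma_fence *fence)
  {
          return true;
  }

  static const struct dma_fence_ops clflush_ops = {
          .get_driver_name = clflush_driver_name,
          .get_timeline_name = clflush_timeline_name,
          .enable_signaling = clflush_enable_signaling,
          .wait = dma_fence_default_wait,
  };

  static void clflush_work(struct work_struct *work)
  {
          struct async_clflush *clflush =
                  container_of(work, struct async_clflush, work);

          drm_clflush_pages(clflush->pages, clflush->num_pages);
          dma_fence_signal(&clflush->dma);
          dma_fence_put(&clflush->dma);   /* drop the worker's reference */
  }

  /* Queue the flush; the returned fence signals when the flush is done. */
  static struct dma_fence *clflush_async(struct page **pages,
                                         unsigned long num_pages)
  {
          struct async_clflush *clflush;

          clflush = kmalloc(sizeof(*clflush), GFP_KERNEL);
          if (!clflush)
                  return NULL;

          spin_lock_init(&clflush->lock);
          dma_fence_init(&clflush->dma, &clflush_ops, &clflush->lock,
                         dma_fence_context_alloc(1), 0);
          clflush->pages = pages;
          clflush->num_pages = num_pages;

          INIT_WORK(&clflush->work, clflush_work);
          dma_fence_get(&clflush->dma);   /* reference held by the worker */
          schedule_work(&clflush->work);

          return &clflush->dma;           /* caller's reference */
  }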
Suggested-by: Akash Goel <akash.goel@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170222114049.28456-5-chris@chris-wilson.co.uk
Two new tracepoints placed at the call sites where requests are
actually passed to the GPU enable userspace to track engine
utilisation.
These tracepoints are only enabled when the
DRM_I915_LOW_LEVEL_TRACEPOINTS Kconfig option is enabled.
v2: Fix compilation with !CONFIG_DRM_I915_LOW_LEVEL_TRACEPOINTS.
v3: Name global seqno consistently across tracepoints.
v4: Remove port info from request out tracepoint. (Chris Wilson)
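As an illustration, the general shape of such a low-level tracepoint
(field names are placeholders, not the exact definitions added by the
patch), compiled only when the Kconfig option is enabled:

  #if defined(CONFIG_DRM_I915_LOW_LEVEL_TRACEPOINTS)
  TRACE_EVENT(engine_request_in,
          TP_PROTO(u32 engine_id, u32 global_seqno, u32 port),
          TP_ARGS(engine_id, global_seqno, port),

          TP_STRUCT__entry(
                  __field(u32, engine_id)
                  __field(u32, global_seqno)
                  __field(u32, port)
          ),

          TP_fast_assign(
                  __entry->engine_id = engine_id;
                  __entry->global_seqno = global_seqno;
                  __entry->port = port;
          ),

          TP_printk("engine=%u, global_seqno=%u, port=%u",
                    __entry->engine_id, __entry->global_seqno,
                    __entry->port)
  );
  #endif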
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
i915_gem_ring_notify is more appropriate since we do not have
the request information at this point, but it is simply a
signal from the engine that some request has been completed.
v2:
* Always trace and log if there were any waiters.
* Rename to intel_engine_notify. (Chris Wilson)
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
These new tracepoints are emitted once the request is ready to
be submitted to the GPU and once the request is about to
be submitted to the GPU, respectively.
The former triggers as soon as all the fences and
dependencies have been resolved, and the latter once the
backend is about to submit it to the GPU.
The new tracepoints are enabled via the new
DRM_I915_LOW_LEVEL_TRACEPOINTS Kconfig option, which is disabled
by default to alleviate performance impact concerns.
v2: Move execute tracepoint to __i915_gem_request_submit.
(Chris Wilson)
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Provide the same information as the other request event classes.
v2: Pass in flags so we can properly report the blocking status.
(Chris Wilson)
v3: Log hex with 0x prefix for clarity.
v4: Derive blocking status from flags. (Chris Wilson)
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Rename it to i915_gem_request_queue and make the logged info
equivalent to the i915_gem_request event class. Also move it
a bit further apart from the i915_gem_request_add tracepoint,
since they otherwise provide similar information too close in
time.
v2: Remove sw fence signalling. We will rely on the soon-to-come
GuC scheduling backend to enable that. (Chris Wilson)
v3: Log hex with 0x prefix for clarity.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
At the moment only the global seqno is logged, which is not set
until the request is ready for submission.
Add the per-context seqno and the context hardware id, which are
both interesting data points.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Pull locking updates from Ingo Molnar:
"The main changes in this cycle were:
- Implement wraparound-safe refcount_t and kref_t types based on
generic atomic primitives (Peter Zijlstra)
- Improve and fix the ww_mutex code (Nicolai Hähnle)
- Add self-tests to the ww_mutex code (Chris Wilson)
- Optimize percpu-rwsems with the 'rcuwait' mechanism (Davidlohr
Bueso)
- Micro-optimize the current-task logic all around the core kernel
(Davidlohr Bueso)
- Tidy up after recent optimizations: remove stale code and APIs,
clean up the code (Waiman Long)
- ... plus misc fixes, updates and cleanups"
* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (50 commits)
fork: Fix task_struct alignment
locking/spinlock/debug: Remove spinlock lockup detection code
lockdep: Fix incorrect condition to print bug msgs for MAX_LOCKDEP_CHAIN_HLOCKS
lkdtm: Convert to refcount_t testing
kref: Implement 'struct kref' using refcount_t
refcount_t: Introduce a special purpose refcount type
sched/wake_q: Clarify queue reinit comment
sched/wait, rcuwait: Fix typo in comment
locking/mutex: Fix lockdep_assert_held() fail
locking/rtmutex: Flip unlikely() branch to likely() in __rt_mutex_slowlock()
locking/rwsem: Reinit wake_q after use
locking/rwsem: Remove unnecessary atomic_long_t casts
jump_labels: Move header guard #endif down where it belongs
locking/atomic, kref: Implement kref_put_lock()
locking/ww_mutex: Turn off __must_check for now
locking/atomic, kref: Avoid more abuse
locking/atomic, kref: Use kref_get_unless_zero() more
locking/atomic, kref: Kill kref_sub()
locking/atomic, kref: Add kref_read()
locking/atomic, kref: Add KREF_INIT()
...