Broadwell and newer actually compress up to 2560 lines instead of 2048
(as documented in the FBC_CTL page). If we don't take this into
consideration we end up reserving too little stolen memory for the
CFB, so we may allocate something else (such as a ring) right after
what we reserved, and the hardware will overwrite it with the contents
of the CFB when FBC is active, causing GPU hangs. Another possibility
is that the CFB may be allocated at the very end of the available
space, so the CFB will overlap the reserved stolen area, leading to
FIFO underruns.
This bug has always been a problem on BDW (the only affected platform
where FBC is enabled by default), but it's much easier to reproduce
since the following commit:
commit c58b735fc7
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Thu Aug 18 17:16:57 2016 +0100
drm/i915: Allocate rings from stolen
Of course, you can only reproduce the bug if your screen is taller
than 2048 lines.
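For illustration, a minimal standalone sketch of the size calculation
implied here (names and structure are made up, not the driver code):

#include <stdint.h>

/* Toy model: the CFB worst case must be computed with the real
 * per-platform limit on compressed lines (2560 on gen8+, 2048 before),
 * otherwise too little stolen memory gets reserved and whatever is
 * allocated right after it can be overwritten by the hardware. */
static uint64_t cfb_size(int gen, uint32_t stride, uint32_t height)
{
    uint32_t max_lines = gen >= 8 ? 2560 : 2048;

    if (height > max_lines)
        height = max_lines;

    return (uint64_t)stride * height;
}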
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=98213
Fixes: a98ee79317 ("drm/i915/fbc: enable FBC by default on HSW and BDW")
Cc: <stable@vger.kernel.org> # v4.6+
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1477065346-13736-1-git-send-email-paulo.r.zanoni@intel.com
At the moment, we depend on RPM itself acting as a barrier in both
i915_gem_release_all_mmaps() and i915_gem_restore_fences().
i915_gem_restore_fences() is also called along non-runtime-pm paths, but
we can move the marking of lost fences alongside releasing the mmaps
into a common i915_gem_runtime_suspend(). This has the advantage of
gathering all the tricky barrier dependencies in one location.
v2: Just mark the fence as invalid (fence->dirty) so that upon waking we
will be sure to clear the fence after use, or restore it to the correct
value before use. This makes sure that if the fence is left intact
across the sleep, we do not leave it pointing to a region of GTT for the
next unsuspecting user.
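A rough standalone model of the v2 approach, with made-up names (the
real logic lives in the runtime suspend and fence restore paths):

#include <stdbool.h>
#include <stddef.h>

struct fence_reg {
    void *vma;      /* GTT region this fence currently points at */
    bool dirty;     /* register contents can no longer be trusted */
};

/* On runtime suspend, simply mark every fence invalid... */
static void suspend_fences(struct fence_reg *fences, size_t n)
{
    for (size_t i = 0; i < n; i++)
        fences[i].dirty = true;
}

/* ...so the next user either rewrites the register with the correct
 * value or clears it, and never inherits a stale GTT mapping. */
static void prepare_fence(struct fence_reg *fence, void *vma)
{
    if (fence->dirty || fence->vma != vma) {
        fence->vma = vma;
        fence->dirty = false;
        /* the hardware fence register would be written here */
    }
}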
Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Imre Deak <imre.deak@linux.intel.com>
Reviewed-by: Imre Deak <imre.deak@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161024124218.18252-5-chris@chris-wilson.co.uk
Now that we have reduced access to the list to either (a) under the
struct_mutex whilst holding the RPM wakeref (so that concurrent writers
to the list are serialised by struct_mutex) or (b) under the atomic
runtime suspend (which cannot run concurrently with any other accessor
due to the atomic nature of the runtime suspend), we can remove the
extra locking around the list itself.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161024124218.18252-3-chris@chris-wilson.co.uk
We can remove the false coupling between RPM and struct_mutex by
observing that the RPM wakeref can serve as the barrier around user mmap
access. That is, since we tear down the user's PTEs atomically from
within rpm suspend, and faulting in new PTEs requires the rpm wakeref,
no user access is possible through those PTEs without RPM being awake.
Having made that observation, we can drop the presumption of having to
take rpm outside of struct_mutex, and so allow fine-grained acquisition
of a wakeref around hw access rather than having to remember to acquire
the wakeref early on.
v2: Rejig placement of the new intel_runtime_pm_get() to be as tight
as possible around the GTT pread/pwrite.
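A sketch of the v2 shape, with hypothetical helper names; only the
scope of the wakeref matters:

struct device_ctx;
struct pread_args;

void runtime_pm_get(struct device_ctx *dev);
void runtime_pm_put(struct device_ctx *dev);
int do_gtt_read(struct device_ctx *dev, struct pread_args *args);

/* The wakeref is taken only around the actual GTT access instead of
 * at the top of the ioctl, now that rpm no longer has to be acquired
 * outside struct_mutex. */
static int gtt_pread(struct device_ctx *dev, struct pread_args *args)
{
    int ret;

    runtime_pm_get(dev);            /* hw must be awake for the access */
    ret = do_gtt_read(dev, args);   /* touches the aperture PTEs */
    runtime_pm_put(dev);

    return ret;
}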
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Daniel Vetter <daniel@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20161024124218.18252-2-chris@chris-wilson.co.uk
We want to decouple RPM and struct_mutex, but currently RPM has to walk
the list of bound objects and remove the userspace mmappings before we
suspend (otherwise userspace may continue to access the GTT whilst it is
powered down). This currently requires the struct_mutex to walk the
bound_list, but if we move that to a separate list and lock we can take
the first step towards removing the struct_mutex.
v2: Split runtime suspend unmapping vs regular unmapping, to make the
locking (and barriers) clearer. Add the object to the userfault_list
prior to inserting the first PTE; the race between add/revoke is
serialised by struct_mutex for regular unmappings and by rpm for
runtime suspend.
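A standalone model of the intended ordering, with made-up names and no
real locking shown:

#include <stddef.h>

struct obj {
    struct obj *next;   /* link on the userfault list */
    int mmapped;        /* userspace PTEs currently inserted */
};

static struct obj *userfault_list;

/* Fault path: publish the object on the list *before* inserting the
 * first PTE, so a concurrent runtime suspend cannot miss it. */
static void fault_insert(struct obj *o)
{
    o->next = userfault_list;
    userfault_list = o;
    o->mmapped = 1;         /* stands in for inserting the PTEs */
}

/* Runtime suspend: walk only the objects with live user mmaps and
 * revoke them, without needing the big struct_mutex. */
static void runtime_suspend_revoke(void)
{
    struct obj *o, *next;

    for (o = userfault_list; o; o = next) {
        next = o->next;
        o->mmapped = 0;     /* stands in for zapping the PTEs */
        o->next = NULL;
    }
    userfault_list = NULL;
}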
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> #v1
Link: http://patchwork.freedesktop.org/patch/msgid/20161024124218.18252-1-chris@chris-wilson.co.uk
When invalidating the RCS TLB the device can enter the RC6 state,
interrupting the process, hence the need to hold render forcewake for
the whole procedure.
This WA is needed for all production SKL SKUs.
v2: reworked putting and getting forcewake with the help of Mika Kuoppala
v3: use I915_READ_FW and I915_WRITE_FW as we are handling forcewake
ourselves in this code path
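A standalone sketch of the workaround's shape, with placeholder names
and a placeholder register offset (not the real WA code):

struct gpu;

void forcewake_get_render(struct gpu *gpu);
void forcewake_put_render(struct gpu *gpu);
unsigned int raw_read(struct gpu *gpu, unsigned int reg);
void raw_write(struct gpu *gpu, unsigned int reg, unsigned int val);

#define RCS_TLB_INV 0x4090  /* placeholder offset, not the real register */

/* Hold render forcewake across the whole invalidation so the device
 * cannot drop into RC6 mid-sequence, and use raw (_FW-style) accessors
 * since forcewake is managed by hand here. */
static void invalidate_rcs_tlb(struct gpu *gpu)
{
    forcewake_get_render(gpu);

    raw_write(gpu, RCS_TLB_INV, 1);
    while (raw_read(gpu, RCS_TLB_INV) & 1)
        ;   /* spin until the invalidation completes */

    forcewake_put_render(gpu);
}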
References: HSD#2136899, HSD#1404391274
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Currently it's entirely possible to go through the link training step
without first determining the lane_count, which is silly since we end up
doing a bunch of aux transfers of size = 0, as highlighted by
WARN_ON(!msg->buffer != !msg->size), and can only ever result in a
'failed to update link training' message. This can be observed during
intel_dp_long_pulse, where we can reach the link training step before
we have had a chance to set the link params. To avoid this we add an
extra check for the lane_count in intel_dp_check_link_status, which
should prevent us from doing the link training step prematurely.
v2: add WARN_ON_ONCE and FIXME comment (Ville)
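A standalone sketch of the added guard, with made-up names:

#include <stdio.h>

struct dp_link {
    int lane_count;     /* 0 until the link parameters have been read */
};

/* Bail out of the link-status check before attempting link training
 * if the lane count has not been determined yet. */
static int check_link_status(struct dp_link *link)
{
    if (link->lane_count == 0) {
        /* the driver uses WARN_ON_ONCE() here */
        fprintf(stderr, "link training attempted without lane_count\n");
        return -1;
    }

    /* ... read DPCD status and retrain the link if needed ... */
    return 0;
}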
References: https://bugs.freedesktop.org/show_bug.cgi?id=97344
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Mika Kahola <mika.kahola@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1476912593-10019-1-git-send-email-matthew.auld@intel.com
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
The VBT provides the platform a way to mix and match the DDI ports vs.
AUX channels. Currently we only trust the VBT for DDI E, which has no
corresponding AUX channel of its own. However it is possible that some
board might use some non-standard DDI vs. AUX port routing even for
the other ports, perhaps for signal routing reasons or the like.
So let's generalize this and trust the VBT for all ports.
For now we'll limit this to DDI platforms, as we trust the VBT a bit
more there anyway when it comes to the DDI ports. I've structured
the code in a way that would allow us to easily expand this to
other platforms as well, by simply filling in the ddi_port_info.
v2: Drop whitespace changes, keep MISSING_CASE() for unknown
aux ch assignment, include a commit message, include debug
message during init
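A standalone model of the resulting lookup, with made-up enums and
names:

enum port { PORT_A, PORT_B, PORT_C, PORT_D, PORT_E, NUM_PORTS };
enum aux_ch { AUX_CH_A, AUX_CH_B, AUX_CH_C, AUX_CH_D, AUX_CH_NONE };

struct ddi_port_info {
    enum aux_ch aux_ch;     /* AUX_CH_NONE if the VBT says nothing */
};

/* Prefer the VBT-provided AUX channel for every DDI port, and only
 * fall back to the standard port == channel mapping when the VBT has
 * no opinion. */
static enum aux_ch pick_aux_ch(const struct ddi_port_info *info,
                               enum port port)
{
    if (info[port].aux_ch != AUX_CH_NONE)
        return info[port].aux_ch;

    switch (port) {
    case PORT_A: return AUX_CH_A;
    case PORT_B: return AUX_CH_B;
    case PORT_C: return AUX_CH_C;
    case PORT_D: return AUX_CH_D;
    default:     return AUX_CH_A;   /* e.g. DDI E has no AUX of its own */
    }
}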
Cc: stable@vger.kernel.org
Cc: Maarten Maathuis <madman2003@gmail.com>
Tested-by: Maarten Maathuis <madman2003@gmail.com>
References: https://bugs.freedesktop.org/show_bug.cgi?id=97877
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1476208368-5710-2-git-send-email-ville.syrjala@linux.intel.com
Reviewed-by: Jim Bride <jim.bride@linux.intel.com>
gvt-next-fix-2016-10-20
This contains fixes for the first pull request.
- clean up header mess between i915 core and gvt
- new MAINTAINERS item
- new kernel-doc section
- fix compiling warnings
- gvt gem fix series from Chris
- fix for i915 intel_engine_cs change
- some sparse fixes from Changbin
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
The function return type should be int if the function returns an
integer value.
Signed-off-by: Du, Changbin <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Switch to the new for_each_engine() helper to properly access the
enabled intel_engine_cs, as the i915 core has changed them to be
dynamically managed. At GVT-g init time we still depend on the ring
mask to determine the engine list, since init happens earlier.
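A standalone model of what such a helper has to do, with made-up names:

#include <stddef.h>

#define MAX_ENGINES 5

struct engine { int id; };

struct gpu {
    struct engine *engine[MAX_ENGINES];     /* NULL when not allocated */
};

/* Engines are allocated dynamically now, so iteration has to skip the
 * NULL slots instead of assuming a densely packed array. */
#define for_each_present_engine(e, g, i) \
    for ((i) = 0; (i) < MAX_ENGINES; (i)++) \
        if (!((e) = (g)->engine[i])) {} else

static int count_engines(struct gpu *g)
{
    struct engine *e;
    int i, n = 0;

    for_each_present_engine(e, g, i)
        n++;
    return n;
}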
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This code was removed from i915_cmd_parser.c, but an obsolete version
still wound up being duplicated into gvt/cmd_parser.c. Good riddance.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
We have the ability to map an object, so use it rather than open-coding
it badly. Note that the object remains permanently pinned; this is poor
practice.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
For whatever reason, the gvt scheduler runs synchronously. At the very
least, let's run synchronously without holding the struct_mutex.
v2: cut'n'paste mutex_lock instead of unlock.
Replace long hold of struct_mutex with a mutex to serialise the worker
threads.
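A standalone model of the locking change, using a plain pthread mutex
purely for illustration:

#include <pthread.h>

struct workload;

void dispatch_workload(struct workload *w);
void wait_workload(struct workload *w);

/* The worker no longer holds the big driver lock (struct_mutex) for the
 * whole synchronous dispatch; a dedicated scheduler mutex serialises
 * the worker threads instead. */
static pthread_mutex_t sched_lock = PTHREAD_MUTEX_INITIALIZER;

static void worker_run(struct workload *w)
{
    pthread_mutex_lock(&sched_lock);    /* serialise the workers only */
    dispatch_workload(w);
    wait_workload(w);                   /* still synchronous... */
    pthread_mutex_unlock(&sched_lock);  /* ...but without struct_mutex */
}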
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The workload took a pointer to the request, and even waited upon it,
without holding a reference on the request. Take that reference
explicitly and fix up the error path following request allocation that
missed flushing the request.
v2: [zhenyuw]
- drop request put in error path for dispatch, as main thread
caller will handle it identically to a real request.
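A standalone model of the reference handling, with made-up names and a
plain integer refcount:

#include <stdlib.h>

struct request {
    int refcount;
};

void wait_request(struct request *rq);  /* placeholder for the real wait */

static struct request *request_get(struct request *rq)
{
    rq->refcount++;
    return rq;
}

static void request_put(struct request *rq)
{
    if (--rq->refcount == 0)
        free(rq);
}

struct workload {
    struct request *req;
};

/* Hold a reference for as long as the workload keeps (and waits on)
 * the request, so it cannot be freed underneath the waiter. */
static void workload_attach(struct workload *w, struct request *rq)
{
    w->req = request_get(rq);
}

static void workload_complete(struct workload *w)
{
    wait_request(w->req);
    request_put(w->req);
    w->req = NULL;
}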
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Unpinning the pages prior to the object being released from the GPU may
allow the GPU to read and write into system pages (i.e. use after free
by the hw).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The purpose of returning the just-pinned VMA is so that we can use the
information within, like its address. Also it should be tracked and used
as the cookie to unpin...
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Manipulating the fence_list requires the runtime wakelock, as does
writing to the fence registers. Acquire a wakelock for the former, and
assert that the device is awake for the latter.
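A standalone sketch of the split, with made-up names:

struct device_ctx;
struct fence_reg;

void runtime_pm_get(struct device_ctx *dev);
void runtime_pm_put(struct device_ctx *dev);
int device_is_awake(struct device_ctx *dev);
void assert_failed(const char *what);
void lru_move_tail(struct device_ctx *dev, struct fence_reg *fence);
void hw_write_fence(struct device_ctx *dev, struct fence_reg *fence);

/* List manipulation takes its own wakeref; the register write path only
 * asserts that the caller already woke the device up. */
static void touch_fence_list(struct device_ctx *dev, struct fence_reg *fence)
{
    runtime_pm_get(dev);
    lru_move_tail(dev, fence);
    runtime_pm_put(dev);
}

static void update_fence(struct device_ctx *dev, struct fence_reg *fence)
{
    if (!device_is_awake(dev))
        assert_failed("fence write while suspended");

    hw_write_fence(dev, fence);
}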
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Update with a brief overview and references to the more detailed
arch design documents.
Add new section for Intel GVT-g host support.
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The i915 core should only call functions and use structures exposed
through intel_gvt.h. Remove the internal gvt.h and i915_pvinfo.h
includes. Turn the internal intel_gvt structure into a private handle,
so that gvt internals need not be exposed to the i915 core.
v2: Fix per Chris's comment
- carefully handle dev_priv->gvt assignment
- add necessary bracket for macro helper
- forward declaration of struct intel_gvt
- keep free operation within same file handling alloc
v3: fix use after free and remove intel_gvt.initialized
v4: change to_gvt() to an inline
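A standalone sketch of the resulting layering, with made-up names for
the i915 side ("drm_private" stands in for the driver's real private
structure):

/* The i915 core only sees a forward declaration and an opaque pointer;
 * the full definition stays in gvt-internal headers, and to_gvt() is a
 * plain inline accessor. */
struct intel_gvt;                       /* forward declaration only */

struct drm_private {
    struct intel_gvt *gvt;              /* NULL when GVT-g is disabled */
};

static inline struct intel_gvt *to_gvt(struct drm_private *priv)
{
    return priv->gvt;
}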
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>