Like we do for encoders, let's make plane->get_hw_state() return
the pipe to which the plane is currently attached. We don't currently
allow planes to move between the pipes, but perhaps one day we will.
In either case this makes the code more uniform and perhaps makes
intel_plane_mapping_ok() slightly clearer.
Note that on i965 and g4x, planes A and B still have pipe select bits,
but they're hardwired to pipes A and B respectively. This means we can
safely interpret those bits just like on gen2/3. This allows the
same readout code to work for plane C (which can still be assigned
to either pipe on i965) should we ever expose it.
g4x no longer allows moving the cursor planes between the pipes,
but the pipe select bits can still be set in the register. Thus
we have to ignore those bits. OTOH i965 still allows the cursors
to move between pipes thus we have to trust the bits there.
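To illustrate the shape of the readout (a sketch only; register and field
names approximate the i9xx plane registers, and details such as power domain
handling are omitted):

  static bool i9xx_plane_get_hw_state(struct intel_plane *plane, enum pipe *pipe)
  {
      struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
      u32 val = I915_READ(DSPCNTR(plane->i9xx_plane));

      /*
       * Valid on gen2/3, and on i965/g4x primary planes A/B because their
       * pipe select bits are hardwired as described above. g4x cursors
       * must ignore these bits instead.
       */
      *pipe = (val & DISPPLANE_SEL_PIPE_MASK) >> DISPPLANE_SEL_PIPE_SHIFT;

      return val & DISPLAY_PLANE_ENABLE;
  }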
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180130203807.13721-3-ville.syrjala@linux.intel.com
Reviewed-by: Mika Kahola <mika.kahola@intel.com>
During testing we encounter a conflict between SUSPEND_TEST_DEVICES and
disabling reset (gem_eio/suspend). This results in the device continuing
on without being reset, but since it has gone through HW sanitization to
account for the suspend/resume cycle, we have to assume the device has
been reset to its defaults. A simple way around this is to skip the
sanitize phase for SUSPEND_TEST_DEVICES by moving it to suspend-late.
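Roughly, only the placement of the GEM sanitize/suspend step changes (a
sketch showing just the relevant dev_pm_ops entries; pm_test=devices
exercises .suspend/.resume but skips the late/early phases, so the device
state is left untouched during that test cycle):

  static const struct dev_pm_ops i915_pm_ops = {
      .suspend = i915_pm_suspend,
      .suspend_late = i915_pm_suspend_late,  /* GEM sanitize now runs here */
      .resume_early = i915_pm_resume_early,
      .resume = i915_pm_resume,
      /* remaining callbacks elided */
  };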
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180531082246.9763-4-chris@chris-wilson.co.uk
During suspend we want to flush out all active contexts and their
rendering. To do so we queue a request from the kernel's context; once
we know that request is done, we know the GPU is completely idle. To
speed up that switch, bump the GPU clocks.
Switching to the kernel context prior to idling is also used to enforce
a barrier before changing OA properties, and when evicting active
rendering from the global GTT. All of these are cases where we do want
to race to idle.
v2: Limit the boosting to only the switch before suspend.
v3: Limit it to the wait-for-idle on suspend.
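In outline, the suspend path becomes something like the following (a sketch;
the clock-boosting helper name is illustrative, not the actual i915
interface):

  /* in i915_gem_suspend(), sketched: */
  err = i915_gem_switch_to_kernel_context(i915);   /* queue the final barrier request */
  if (err)
      return err;

  /* Race-to-idle: waiting at elevated GPU clocks retires that last request
   * sooner, so the suspend sequence as a whole completes faster. */
  boost_gpu_clocks_for_idle(i915);                 /* illustrative helper */
  err = i915_gem_wait_for_idle(i915,
                               I915_WAIT_INTERRUPTIBLE | I915_WAIT_LOCKED);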
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: David Weinehall <david.weinehall@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Tested-by: David Weinehall <david.weinehall@linux.intel.com> #v1
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180531082246.9763-2-chris@chris-wilson.co.uk
Since we use i915_gem_find_active_request() from inside
intel_engine_dump() and may call that at any time, we do not guarantee
that the engine is paused nor that the signal kthreads and irq handler
are suspended, so we cannot assert that the breadcrumb doesn't advance
and that the irq hasn't happened on another CPU signaling the request we
believe to be idle.
The second assert removed (that request->engine == engine) remains
valid, but is now more rigorously checked during retirement.
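For illustration (a reconstruction of the flavour of the checks, not the
literal diff):

  /* Unsafe from intel_engine_dump(): the breadcrumb may advance and the
   * irq may fire on another CPU while we are inspecting the request. */
  GEM_BUG_ON(!i915_request_completed(request) &&
             intel_engine_get_seqno(engine) >= request->global_seqno);

  /* Still valid, but now checked more rigorously at retirement instead. */
  GEM_BUG_ON(request->engine != engine);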
Fixes: f636edb214 ("drm/i915: Make i915_engine_info pretty printer to standalone")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180529132922.6831-1-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
(cherry picked from commit cc7cc53435)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
DPCD 2009h "Synchronization latency in sink" has bits that tell us the
maximum number of frames the sink can take to resynchronize to the source
timing when exiting PSR. More importantly, as per eDP 1.4b, this is the
"Minimum number of frames following PSR exit that the Source device needs
to wait for PSR entry."
We currently use this value only to set up the number of frames to wait
before PSR2 selective update. But, based on the above description, it makes
more sense to use this to configure idle frames for both PSR1 and PSR2. This
will ensure we wait the required number of frames before
activation whether it is PSR1 or PSR2.
The minimum number of idle frames remains 6, while allowing sink
synchronization latency and VBT to increase this value.
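In code, the selection then looks roughly like this (field names follow the
i915 VBT/PSR state of the time):

  int idle_frames = max(6, dev_priv->vbt.psr.idle_frames);

  /* eDP 1.4b: wait at least sink_sync_latency + 1 frames after PSR exit
   * before re-entering PSR; applies to both PSR1 and PSR2. */
  idle_frames = max(idle_frames, dev_priv->psr.sink_sync_latency + 1);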
This also solves the flip-flop between sink and source frames that I
noticed on my Thinkpad X260 during PSR exit. This specific panel has a
value of 8h, which according to the spec means the "Source device must
wait for more than eight active frames after PSR exit before initiating PSR
entry. (In this case, should be provided by the panel supplier.)" VBT
however has a value of 0.
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Jose Roberto de Souza <jose.souza@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180525033047.7596-1-dhinakaran.pandiyan@intel.com
Let's suppress the underruns around every modeset sequence instead
of trying to avoid them. Planes are disabled at this point anyway so
we don't really gain anything from keeping the underrun reporting
enabled. Also, for PCH ports we already suppress all underruns here
anyway, so trying to avoid them for the CPU eDP doesn't seem all that
important.
Maybe this gets rid of some lingering spurious underruns?
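The suppression bracket looks roughly like this (simplified; the PCH half
only applies where a PCH transcoder is involved):

  intel_set_cpu_fifo_underrun_reporting(dev_priv, crtc->pipe, false);
  intel_set_pch_fifo_underrun_reporting(dev_priv, crtc->pipe, false);

  /* full crtc disable/enable sequence runs with reporting suppressed */

  intel_set_cpu_fifo_underrun_reporting(dev_priv, crtc->pipe, true);
  intel_set_pch_fifo_underrun_reporting(dev_priv, crtc->pipe, true);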
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180524190406.2973-2-ville.syrjala@linux.intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
VBT seems to have some bits to tell us whether the internal LVDS port
has something hooked up. In theory one might expect the VBT to not have
a child device for the LVDS port if there's no panel hooked up, but
in practice many VBTs still add the child device. The "LVDS config" bits
seem more reliable though, so let's check those.
So far we've used the "LVDS config" bits to check for eDP support on
ILK+, and disable the internal LVDS when the value is 3. That value
is actually documented as "Both internal LVDS and SDVO LVDS", but in
practice it appears to mean "eDP" on all the ILK+ VBTs I've seen. So let's
keep that interpretation, but for pre-ILK we will consider the value
3 to also indicate the presence of the internal LVDS.
Currently we have 25 DMI matches for the "no internal LVDS" quirk. In an
effort to reduce that let's toss in a WARN when the DMI match and VBT
both tell us that the internal LVDS is not present. The hope is that
people will report a bug, and then we can just nuke the corresponding
entry from the DMI quirk list. Credits to Jani for this idea.
v2: Split the basic int_lvds_support thing to a separate patch (Jani)
v3: Rebase
v4: Limit this to VBT version >= 134
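The resulting pre-ILK check is roughly the following (a sketch using the raw
values described above; the actual code uses the corresponding BDB
driver-feature defines, and the int_lvds_support flag comes from the
companion patch):

  if (bdb->version >= 134 &&
      driver->lvds_config != 1 /* internal LVDS */ &&
      driver->lvds_config != 3 /* "both internal LVDS and SDVO LVDS" */)
          dev_priv->vbt.int_lvds_support = 0;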
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Ondrej Zary <linux@rainbow-software.org>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180518150138.18361-1-ville.syrjala@linux.intel.com
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
In order to prepare the GPU for sleeping, we may want to submit commands
to it. This is a complicated process that may even require some swapping
in from shmemfs, if the GPU was in the wrong state. As such, we need to
do this preparation step synchronously before the rest of the system has
started to turn off (e.g. swapin fails if scsi is suspended).
Fortunately, we are provided with such a hook, pm_ops.prepare().
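The shape of the hook is roughly the following (a sketch; error handling and
the drm-device lookup details are omitted), wired up via
.prepare = i915_pm_prepare in i915_pm_ops:

  static int i915_pm_prepare(struct device *kdev)
  {
      struct drm_i915_private *i915 = kdev_to_i915(kdev);

      /*
       * Runs before any device has been suspended, so we can still swap
       * context/ring state in from shmemfs and submit the final requests
       * needed to park the GPU.
       */
      return i915_gem_suspend(i915);
  }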
v2: Compile cleanup
v3: Fewer asserts, fewer problems?
v4: Ville pointed out that in some circumstances (such as switching off
the overlay) the display code may issue a GPU request. This is
unexpected, and will result in us going to sleep while believing the
GPU is still awake (though all user work has been saved). Add a comment
to remind our future selves of what trouble brews.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=106640
Testcase: igt/drv_suspend after igt/gem_tiled_swapping
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180525092629.1456-1-chris@chris-wilson.co.uk
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
We were not checking very carefully whether an older request on the
engine was an earlier switch-to-kernel-context before deciding to emit a
new switch. The end result would be that we could get into a permanent
loop of trying to emit a new request to perform the switch simply to
flush the existing switch.
What we need is a means of tracking the completion of each timeline
versus the kernel context, that is, to detect whether a more recent request
has been submitted that would result in a switch away from the kernel
context. To realise this, we need only look in our syncmap on the
kernel context and check that we have synchronized against all active
rings.
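Schematically, the decision becomes per-engine, something like this sketch
(the helper names are illustrative, not the actual i915 functions):

  static int switch_all_to_kernel_context(struct drm_i915_private *i915)
  {
      struct intel_engine_cs *engine;
      enum intel_engine_id id;
      int err;

      for_each_engine(engine, i915, id) {
          /* Consult the kernel context's syncmap: have we already
           * synchronized against everything submitted on this engine? */
          if (engine_synced_with_kernel_context(engine))   /* illustrative */
              continue;   /* previous switch still covers all work here */

          err = queue_switch_to_kernel_context(engine);    /* illustrative */
          if (err)
              return err;
      }

      return 0;
  }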
v2: Since all ringbuffer clients currently share the same timeline, we do
have to use the gem_context to distinguish clients.
As a bonus, include all the tracing used to debug the death inside
suspend.
v3: Test, test, test. Construct a selftest to exercise and assert the
expected behaviour that multiple switch-to-contexts do not emit
redundant requests.
Reported-by: Mika Kuoppala <mika.kuoppala@intel.com>
Fixes: a89d1f921c ("drm/i915: Split i915_gem_timeline into individual timelines")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180524081135.15278-1-chris@chris-wilson.co.uk
To no surprise (since we've flip-flopped over the use of PIN_HIGH a few
times), doing a search by address over a pathologically fragmented
address space is exceedingly slow. To protect ourselves from nearly
unbounded latency (think searching a million holes while under
struct_mutex), limit the search for the highest available hole and
fall back to best-fit if it fails.
In the pathologically fragmented case, such as igt/gem_ctx_thrash, the
effect is dramatic, bringing the runtime down from hours to seconds
(depending on how many other slow searches you hit, e.g. alloc_iova()
and alloc_vmap_area() both degrade to a slow rbtree walk after their
small cache is exhausted). For the real world, the number of search
steps is unlikely to be significant as we should only need to search
once per new context.
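The fallback looks roughly like this (simplified; the real patch also limits
the top-down search so that only the highest hole is inspected):

  err = drm_mm_insert_node_in_range(&vm->mm, node,
                                    size, alignment, color,
                                    start, end, DRM_MM_INSERT_HIGH);
  if (err == -ENOSPC)  /* fragmented: fall back to whatever fits best */
      err = drm_mm_insert_node_in_range(&vm->mm, node,
                                        size, alignment, color,
                                        start, end, DRM_MM_INSERT_BEST);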
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180521082131.13744-3-chris@chris-wilson.co.uk
PSR hardware, and hence the driver code, for VLV and CHV deviates a lot from
the DDI counterparts. While the feature has been disabled for a long time
now, retaining support for these platforms is a maintenance burden. There
have been multiple refactoring commits to just keep the existing code for
these platforms in line with the rest. There are known issues that need to
be fixed to enable PSR on these platforms, and there is no PSR capable
platform in CI to ensure the code does not break again if we get around to
fixing the existing issues. On account of all these reasons, let's nuke
this code for now and bring it back if a need arises in the future.
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Acked-by: Jani Nikula <jani.nikula@intel.com>
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180511230059.19387-1-dhinakaran.pandiyan@intel.com