So now pre_update will be responsible for unconditionally deactivating
FBC and updating the state cache, while post_update will be
responsible for checking if it can be enabled, then enabling it.
This is one more step into proper locking.
Notice that intel_fbc_flush now calls post_update directly. The FBC
flush can only happen for drawing operations - we explicitly ignore
the flips - so the FBC state is not expected to have changed at this
point. With this we can just run post_update, which makes sure we
won't deactivate and then reactivate FBC, as would happen if we
called pre_update + post_update.
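Roughly, the split looks like the sketch below. This is only an
illustration: the helper names (intel_fbc_update_state_cache,
intel_fbc_can_activate, intel_fbc_activate/deactivate) and the locking
details are simplified and may not match the actual intel_fbc.c code.

  /* Sketch only: simplified, illustrative version of the split. */
  void intel_fbc_pre_update(struct intel_crtc *crtc)
  {
          struct drm_i915_private *dev_priv = crtc->base.dev->dev_private;
          struct intel_fbc *fbc = &dev_priv->fbc;

          mutex_lock(&fbc->lock);
          intel_fbc_update_state_cache(crtc);  /* cache CRTC/plane/FB state */
          intel_fbc_deactivate(dev_priv);      /* unconditional deactivation */
          mutex_unlock(&fbc->lock);
  }

  void intel_fbc_post_update(struct intel_crtc *crtc)
  {
          struct drm_i915_private *dev_priv = crtc->base.dev->dev_private;
          struct intel_fbc *fbc = &dev_priv->fbc;

          mutex_lock(&fbc->lock);
          if (intel_fbc_can_activate(crtc))    /* re-check against the cache */
                  intel_fbc_activate(crtc);
          mutex_unlock(&fbc->lock);
  }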
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1453210558-7875-11-git-send-email-paulo.r.zanoni@intel.com
Per the new atomic locking rules, we need to cache the CRTC, plane and
FB state structures we use so we can access them later without needing
more locks. So do this.
Notice that there are some pieces of the FBC code that look at things
that are only computed during the modeset, so we just can't precompute
whether FBC can be activated during the update_state_cache stage. We
may be able to do this later.
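For illustration, the cached state could look something like the sketch
below; the field names are assumptions and the exact structure in
intel_fbc.c may differ.

  /* Illustrative sketch of a per-FBC state cache. */
  struct intel_fbc_state_cache {
          struct {
                  unsigned int mode_flags;
                  uint32_t hsw_bdw_pixel_rate;
          } crtc;

          struct {
                  unsigned int rotation;
                  int src_w, src_h;
                  bool visible;
          } plane;

          struct {
                  u64 ilk_ggtt_offset;
                  uint32_t pixel_format;
                  unsigned int stride;
                  int fence_reg;
                  unsigned int tiling_mode;
          } fb;
  };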
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1453210558-7875-10-git-send-email-paulo.r.zanoni@intel.com
We unconditionally disable/update FBC even during the page flip
IOCTLs, so an unconditional disable/update at every atomic commit
touching the primary plane shouldn't impact PC state residency
noticeably. Besides, the code that checks for rotation is a good hint
that we may be forgetting something else, so let's leave all the
decisions to intel_fbc.c, making the code much safer.
Once we have the code to properly make FBC enable/update decisions
based on atomic states, with proper locking, then we'll be able to
evaluate whether it will be worth trying to optimize the cases where a
disable isn't needed.
v2: Upstream moved and now our patch needs to remove dev_priv.
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1453406837-10511-1-git-send-email-paulo.r.zanoni@intel.com
Before this patch, page flips would call intel_frontbuffer_flip() and
intel_frontbuffer_flip_complete(), which would call intel_fbc_flush(),
which would call intel_fbc_update(). The problem is that drawing
operations also trigger intel_fbc_flush() calls, so it's not
guaranteed that we have the CRTC and FB locks grabbed when
intel_fbc_flush() happens, since the call trace may come from the
rendering path.
We're trying to make the FBC code grab the appropriate CRTC/FB locks,
so split the drawing and the flipping logic in order to achieve that
in later patches. So now the frontbuffer tracking code is just going
to be used for frontbuffer drawing, and intel_fbc_update() is going to
be used directly for actual page flips.
As a note, we don't need to call intel_fbc_flip() at the two places
where we call intel_frontbuffer_flip(), since in one of them we
already have an intel_fbc_update() call, and in the other we have the
planes disabled.
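The resulting split, in rough form (the origin check shown here is an
assumption about how flips get filtered out of the flush path; the
real code may filter them differently):

  /* Drawing path: frontbuffer tracking still ends up in the flush hook. */
  void intel_fbc_flush(struct drm_i915_private *dev_priv,
                       unsigned int frontbuffer_bits,
                       enum fb_op_origin origin)
  {
          if (origin == ORIGIN_FLIP)  /* flips are handled elsewhere now */
                  return;
          /* ... handle drawing operations ... */
  }

  /* Flip path: the flip/modeset code calls intel_fbc_update(crtc) directly. */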
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1453210558-7875-7-git-send-email-paulo.r.zanoni@intel.com
We say "dev_priv->fbc.something" way too many times in our code while
we could be saying just "fbc->something" with a previous declaration
of fbc. This has been bothering me for a while but I didn't want to
patch it since I wanted to fix the real problems first. But as I add
more code I keep thinking about it, especially since it makes the
code easier to read and helps us fit within 80 columns, so let's just
do the change now.
While at it, also rename from i915_fbc to intel_fbc because the whole
FBC code uses intel_fbc.
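The kind of change involved is, roughly (the field and function names
here are just for illustration):

  /* Before: */
  if (dev_priv->fbc.enabled)
          intel_fbc_deactivate(dev_priv);

  /* After: */
  struct intel_fbc *fbc = &dev_priv->fbc;

  if (fbc->enabled)
          intel_fbc_deactivate(dev_priv);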
v2: Rebase after the work_fn changes.
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1453406763-10400-1-git-send-email-paulo.r.zanoni@intel.com
The early return inside __intel_fbc_update does not completely check
all the parameters that affect the FBC register values. For example,
we currently don't look at crtc->adjusted_y (for the fence Y offset)
or at the parameters that affect the CFB size (for i8xx).
Instead of just adding the missing parameters to the check and hoping
that any changes to the fbc_activate functions also come with a
matching change to the __intel_fbc_update check, introduce a new
structure where we store these parameters and use that structure in
the fbc_activate functions. Of course, it's still possible to access
everything from dev_priv in those functions, but IMHO the new code
will be harder to break.
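As an illustration, such a parameter structure could look roughly like
this (field names are assumptions, not necessarily the ones used in
the patch):

  /* Illustrative sketch of the register parameter structure. */
  struct intel_fbc_reg_params {
          struct {
                  enum pipe pipe;
                  enum plane plane;
                  unsigned int fence_y_offset;
          } crtc;

          struct {
                  u64 ggtt_offset;
                  uint32_t pixel_format;
                  unsigned int stride;
                  int fence_reg;
          } fb;

          int cfb_size;
  };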
v2: Rebase.
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1453210558-7875-5-git-send-email-paulo.r.zanoni@intel.com
Instead of waiting for 50ms, just wait until the next vblank, since
it's the minimum requirement. The whole infrastructure of FBC is based
on vblanks, so waiting for X vblanks instead of X milliseconds sounds
like the correct way to go. Besides, 50ms may be less than a vblank on
super slow modes that may or may not exist.
There are some small improvements in PC state residency (due to the
fact that we're now using 16ms for the common modes instead of 50ms),
but the biggest advantage is still the correctness of being
vblank-based instead of time-based.
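Conceptually, the mechanism is something like the sketch below, using
the standard DRM vblank helpers; the work->scheduled_vblank field is
an assumed name and the actual plumbing in the FBC work function
differs in the details.

  /* When scheduling the work, remember the current vblank count... */
  work->scheduled_vblank = drm_crtc_vblank_count(&crtc->base);

  /* ...and in the work function, only activate after a new vblank. */
  if (drm_crtc_vblank_get(&crtc->base) == 0) {
          if (drm_crtc_vblank_count(&crtc->base) == work->scheduled_vblank)
                  drm_crtc_wait_one_vblank(&crtc->base);
          drm_crtc_vblank_put(&crtc->base);
  }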
v2:
- Rebase after changing the patch order.
- Update the commit message.
v3:
- Fix bogus vblank_get() instead of vblank_count() (Ville).
- Don't forget to call drm_crtc_vblank_{get,put} (Chris, Ville)
- Adjust the performance details on the commit message.
v4:
- Don't grab the FBC mutex just to grab the vblank (Maarten)
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1453406585-10233-1-git-send-email-paulo.r.zanoni@intel.com
In GuC mode the LRC pinning lifetime depends exclusively on the
request lifetime. Since that is terminated by the seqno update, this
opens up a race between the GPU finishing writing out the context
image and the driver unpinning the LRC.
To extend the LRC lifetime we will employ a similar approach
to what legacy ringbuffer submission does.
We will start tracking the last submitted context per engine
and keep it pinned until it is replaced by another one.
Note that the driver unload path is a bit fragile and could
benefit greatly from efforts to unify the legacy and exec
list submission code paths.
At the moment i915_gem_context_fini has special casing for the two
which is potentially not needed, and it also depends on
i915_gem_cleanup_ringbuffer running before it.
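The per-engine tracking boils down to something like the sketch below.
This is simplified: error handling and the default-context exclusion
(see v3 below) are omitted, and names like engine->last_context are
illustrative rather than exact.

  /* Simplified sketch of the per-engine last-context tracking. */
  if (engine->last_context != request->ctx) {
          if (engine->last_context)
                  intel_lr_context_unpin(engine->last_context, engine);
          intel_lr_context_pin(request->ctx, engine);
          engine->last_context = request->ctx;
  }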
v2:
* Move pinning into engine->emit_request and actually fix
the reference/unreference logic. (Chris Wilson)
* ring->dev can be NULL on driver unload so use a different
route towards it.
v3:
* Rebase.
* Handle the reset path. (Chris Wilson)
* Exclude default context from the pinning - it is impossible
to get it right before default context special casing in
general is eliminated.
v4:
* Rebased & moved context tracking to
intel_logical_ring_advance_and_submit.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Issue: VIZ-4277
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Nick Hoath <nicholas.hoath@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1453976997-25424-1-git-send-email-tvrtko.ursulin@linux.intel.com
This will enable a cleaner implementation of a following fix and
easier code unification in the future.
Idea and code by Chris Wilson.
v2: Do not return before last_contexts on engines are unpinned.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Previously intel_lr_context_(un)pin were operating on requests,
which is in conflict with their names.
If we make them take a context and an engine, it makes the names
make more sense and it also makes future fixes possible.
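In terms of prototypes, the change is roughly the following
(illustrative; only the unpin variant is shown):

  /* Before: operating on a request. */
  void intel_lr_context_unpin(struct drm_i915_gem_request *req);

  /* After: operating on a context and an engine. */
  void intel_lr_context_unpin(struct intel_context *ctx,
                              struct intel_engine_cs *engine);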
v2: Rebase for default_context/kernel_context change.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Nick Hoath <nicholas.hoath@intel.com>
The only device specific dependency of the stolen memory setup is the
MMIO mapping and the stolen memory size. Both are already available in
i915_gtt_init(), so move the stolen initialization there. The
clean-up code for i915_gtt_init() is in i915_global_gtt_cleanup(), so
move the stolen memory clean-up code there too.
This will be needed by an upcoming patch that needs the details of the
memory we reserve, but the change is also part of our generic goal to
move the initialization of resources with no or little dependencies on
other device specific resources towards the beginning of the init
sequence.
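Schematically, the intended placement is the following (a simplified
sketch; error handling and the surrounding GTT code are omitted):

  int i915_gtt_init(struct drm_device *dev)
  {
          /* ... MMIO mapping and GTT/stolen size detection ... */
          return i915_gem_init_stolen(dev);   /* stolen setup moved here */
  }

  void i915_global_gtt_cleanup(struct drm_device *dev)
  {
          i915_gem_cleanup_stolen(dev);       /* matching clean-up */
          /* ... remaining global GTT clean-up ... */
  }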
Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: David Weinehall <david.weinehall@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1453209992-25995-8-git-send-email-imre.deak@intel.com
Move the MCHBAR setup right after the MMIO setup, since the two things
are logically related and the MCHBAR setup code doesn't depend on any
other device specific resource. We'll also need MCHBAR to be ready
earlier in an upcoming patch, so this is also a preparation for that.
Factor out the init/clean-up code to separate functions to make things
clearer in the i915_driver_load()/unload() functions.
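The resulting ordering, schematically (the i915_mmio_setup/cleanup
helper names and the error label are illustrative assumptions; the
factored-out functions in the patch may be named differently):

  /* In i915_driver_load(), sketch only: */
  ret = i915_mmio_setup(dev);     /* MMIO mapping first */
  if (ret)
          goto out_free;

  intel_setup_mchbar(dev);        /* MCHBAR right after MMIO */

  /* ... the rest of the device-specific init ... */

  /* In i915_driver_unload(), the reverse order: */
  intel_teardown_mchbar(dev);
  i915_mmio_cleanup(dev);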
Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: David Weinehall <david.weinehall@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1453209992-25995-7-git-send-email-imre.deak@intel.com
Swap the order of context & engine cleanup, so that contexts are cleaned
up first, and *then* engines. This is a more sensible order anyway, but
in particular has become necessary since the 'intel_ring_initialized()
must be simple and inline' patch, which now uses ring->dev as an
'initialised' flag, so it can now be NULL after engine teardown. This
in turn can cause a problem in the context code, which (used to) check
the ring->dev->struct_mutex -- causing a fault if ring->dev was NULL.
Also rename the cleanup function to reflect what it actually does
(cleanup engines, not a ringbuffer), and fix an annoying whitespace issue.
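In i915_driver_unload() (and similarly in the i915_load_modeset_init
error path) the change amounts to something like this, sketched with
simplified call sites:

  /* Before: engines were torn down before the contexts using them. */
  i915_gem_cleanup_ringbuffer(dev);
  i915_gem_context_fini(dev);

  /* After: contexts first, then the engines (helper renamed as well). */
  i915_gem_context_fini(dev);
  i915_gem_cleanup_engines(dev);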
v2: Also make the fix in i915_load_modeset_init, not just in
i915_driver_unload (Chris Wilson)
v3: Had extra stuff in it.
v4: Reverted extra stuff (so we're back to v2).
Rebased and updated commentary above (Dave Gordon).
Signed-off-by: Nick Hoath <nicholas.hoath@intel.com>
Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> (v2)
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/1453405067-32890-3-git-send-email-david.s.gordon@intel.com
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Required for:
WaDisableObjectLevelPreemptionForTrifanOrPolygon:bxt
WaDisableObjectLevelPreemptionForInstancedDraw:bxt
WaDisableObjectLevelPreemtionForInstanceId:bxt
According to the WA database these are only applicable to BXT:A0, but
since A0 and A1 share the same GT they are extended to A1 as well.
They are also required for SKL until B0, but we are not adding them
there because those are pre-production steppings.
This register is added to the HW whitelist to support WAs required for
future enabling of pre-emptive command execution. The WA implementation
will be in userspace, and it cannot program this register if it is not
on the HW whitelist.
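The change itself is a small hunk in the BXT workaround init, along the
lines of the sketch below; the register being whitelisted is not named
in this message, so GEN8_CS_CHICKEN1 here is only an assumption for
illustration.

  /* Sketch only; the register name is an assumption. */
  /* WaDisableObjectLevelPreemptionForTrifanOrPolygon:bxt */
  /* WaDisableObjectLevelPreemptionForInstancedDraw:bxt */
  /* WaDisableObjectLevelPreemtionForInstanceId:bxt */
  if (IS_BXT_REVID(dev, 0, BXT_REVID_A1)) {
          ret = wa_ring_whitelist_reg(ring, GEN8_CS_CHICKEN1);
          if (ret)
                  return ret;
  }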
v2: use lower case in register defines (Nick)
v3: explain purpose of changes (Chris)
Reviewed-by: Nick Hoath <nicholas.hoath@intel.com>
Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1453412634-29238-5-git-send-email-arun.siluvery@linux.intel.com
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Some of the HW registers are privileged and cannot be written to from
non-privileged batch buffers coming from userspace unless they are added to
the HW whitelist. This whitelist is maintained by the HW and is
different from the SW whitelist. Userspace needs write access to these
registers to implement preemption related WAs.
The reason for using this approach is that the register bits controlling
preemption granularity at the HW level are not context saved/restored, so
even if we always set these bits in the kernel they will change once the
context is switched out. We could consider making them non-privileged by
default, but these registers also contain other chicken bits which should
not be allowed to be modified.
In later revisions the controlling bits are saved/restored at the context
level, but in the existing revisions they are exposed via other debug
registers and need to be on the whitelist. This patch provides the HW with
a list of registers to be whitelisted; the HW checks this list during
execution and grants access accordingly.
The HW imposes a per-engine limit on the number of registers on the
whitelist. At this point we are only enabling the whitelist for RCS and we
don't foresee any requirement for the other engines.
The registers to be whitelisted are added using the generic workaround
list mechanism, even though these are only enablers for userspace
workarounds. By sharing this mechanism we get some test assets without
additional cost (Mika).
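A sketch of how the whitelisting helper might look, following the
description above (details such as the exact macro used to append the
register write to the WA list may differ from the actual patch):

  static int wa_ring_whitelist_reg(struct intel_engine_cs *ring,
                                   i915_reg_t reg)
  {
          struct drm_i915_private *dev_priv = ring->dev->dev_private;
          struct i915_workarounds *wa = &dev_priv->workarounds;
          const uint32_t index = wa->hw_whitelist_count[ring->id];

          if (WARN_ON(index >= RING_MAX_NONPRIV_SLOTS))
                  return -EINVAL;

          /* Program the next RING_FORCE_TO_NONPRIV slot with the register
           * offset; the HW will then allow writes to it from userspace
           * batches on this engine.
           */
          WA_WRITE(RING_FORCE_TO_NONPRIV(ring->mmio_base, index),
                   i915_mmio_reg_offset(reg));

          wa->hw_whitelist_count[ring->id]++;

          return 0;
  }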
v2: rebase
v3: parameterize RING_FORCE_TO_NONPRIV() as _MMIO() should be limited to
i915_reg.h (Ville), drop inline for wa_ring_whitelist_reg (Mika).
v4: improvements suggested by Chris Wilson.
Clarify that this is the HW whitelist and different from the one maintained
in the driver. This list is engine specific, but it gets initialized along
with the other WAs, which is an RCS specific thing, so make it clear that we
are not doing any cross-engine setup during initialization.
Make the HW whitelist count of each engine available in debugfs.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1453412634-29238-2-git-send-email-arun.siluvery@linux.intel.com
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>