This was introduced more than a decade ago when sg chaining was
added, but we never really caught anything with it. The scatterlist
entry size can be critical, since drivers allocate it, and the extra
debug member has recently been triggering allocation stalls and
failures in NVMe, so remove the magic member.
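For context, a minimal sketch of the old layout under CONFIG_DEBUG_SG
(field order follows the historical scatterlist definition, trimmed and
stubbed so it compiles stand-alone):

    /* dma_addr_t stubbed for illustration */
    typedef unsigned long long dma_addr_t;

    #define CONFIG_DEBUG_SG 1

    struct scatterlist {
    #ifdef CONFIG_DEBUG_SG
            unsigned long   sg_magic;       /* the member being removed */
    #endif
            unsigned long   page_link;
            unsigned int    offset;
            unsigned int    length;
            dma_addr_t      dma_address;
    };

Dropping sg_magic shaves a word from every entry a driver has to
allocate on CONFIG_DEBUG_SG kernels.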
Tested-by: Jordan Glover <Golden_Miller83@protonmail.ch>
Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
The limit_id_fallback array uses enum drm_ipp_size_id to index its
content. The content itself is of type enum drm_exynos_ipp_limit_type,
so declare the array with the type of its content rather than the type
of its index.
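A stand-alone illustration of this class of mismatch (names are
hypothetical, not the exynos declarations):

    #include <stdio.h>

    enum size_id    { SZ_MIN, SZ_MAX, SZ_COUNT };   /* index type */
    enum limit_type { LIMIT_BUFFER, LIMIT_AREA };   /* content type */

    /* The declaration must use the type of the stored values, not
     * the type used to index them; the wrong enum compiles silently
     * but misdescribes every element. */
    static const enum limit_type fallback[SZ_COUNT] = {
            [SZ_MIN] = LIMIT_BUFFER,
            [SZ_MAX] = LIMIT_AREA,
    };

    int main(void)
    {
            printf("%d\n", fallback[SZ_MIN]);
            return 0;
    }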
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
The only bit that should be preserved in decon_win_set_fmt() is
WINCONx_ENWIN_F. All other bits depend on the selected pixel format and
are set by that function.
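A minimal sketch of the intended read-modify-write (the bit position is
hypothetical):

    #include <stdint.h>

    #define WINCONx_ENWIN_F (1u << 0)       /* illustrative position */

    /* Keep only the window-enable bit; every other bit is derived
     * from the selected pixel format and rewritten from scratch. */
    static uint32_t win_set_fmt(uint32_t wincon, uint32_t fmt_bits)
    {
            wincon &= WINCONx_ENWIN_F;
            return wincon | fmt_bits;
    }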
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Set the per-plane global alpha to the maximum value to get proper
blending of XRGB and ARGB planes. This fixes the strange ordering of
overlapping planes.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
DMA hardware should respect buffer pitch, so use the width calculated from
the buffer pitch instead of the virtual one.
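For illustration, the relationship relied on here (assuming a packed
format with cpp bytes per pixel):

    #include <stdint.h>

    /* The DMA engine walks the buffer by its pitch, so the width
     * handed to the hardware must be the one implied by the pitch,
     * not the virtual (requested) width. */
    static uint32_t real_width(uint32_t pitch_bytes, uint32_t cpp)
    {
            return pitch_bytes / cpp;
    }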
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Fix the following issues related to planar YUV pixel format
configuration (see the sketch after the list):
- NV16/61 modes were incorrectly programmed as NV12/21,
- YVU420 was programmed as YUV420 on source,
- YVU420 and YUV422 were programmed as YUV420 on output.
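A stand-alone sketch of the distinctions that were conflated (the
hardware mode values are made up; the fourcc strings mirror the
drm_fourcc.h names):

    enum hw_fmt { HW_YUV420_2P, HW_YUV422_2P, HW_YUV420_3P };

    struct fmt_map {
            const char *fourcc;
            enum hw_fmt mode;
            int         swap_chroma;  /* Cr before Cb instead of Cb/Cr */
    };

    static const struct fmt_map map[] = {
            { "NV12",   HW_YUV420_2P, 0 },  /* 4:2:0, two planes */
            { "NV16",   HW_YUV422_2P, 0 },  /* 4:2:2 -- not NV12! */
            { "YUV420", HW_YUV420_3P, 0 },  /* three planes, Cb then Cr */
            { "YVU420", HW_YUV420_3P, 1 },  /* three planes, Cr then Cb */
    };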
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Horizontal (DRM_MODE_REFLECT_X) and vertical (DRM_MODE_REFLECT_Y) flip
were swapped in the GScaler driver. Fix this by swapping the code that
interprets them.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Investigation revealed that the GScaler hardware requires the real
buffer width (pitch) to be aligned to 16 pixels.
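The constraint, as a compilable sketch:

    /* Round a width in pixels up to the 16-pixel granule the GScaler
     * DMA requires; the usual power-of-two alignment idiom. */
    #define ALIGN_UP(x, a)  (((x) + (a) - 1) & ~((a) - 1))

    static unsigned int gsc_pitch(unsigned int width)
    {
            return ALIGN_UP(width, 16u);
    }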
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
DMA hardware should respect buffer pitch, so use the width calculated from
the buffer pitch instead of the virtual one.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Fix the Cb/Cr component order in two-plane YUV420, YUV422 and YUV444
modes.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Ensure that the Scaler hardware is properly reset and its interrupts
are cleared before processing the next image.
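A sketch of such a sequence, with made-up register offsets and a faked
register file so it stands alone:

    #include <stdint.h>

    #define SCALER_CFG        0x00          /* offsets hypothetical */
    #define SCALER_CFG_RESET  (1u << 31)
    #define SCALER_INT_STATUS 0x04

    static uint32_t regs[64];               /* stand-in for mmio */
    static uint32_t rd(uint32_t off)         { return regs[off / 4]; }
    static void wr(uint32_t off, uint32_t v) { regs[off / 4] = v; }

    static void scaler_prepare(void)
    {
            int i;

            /* assert soft-reset, then poll (bounded) for completion */
            wr(SCALER_CFG, SCALER_CFG_RESET);
            for (i = 0; i < 1000; i++)
                    if (!(rd(SCALER_CFG) & SCALER_CFG_RESET))
                            break;
            /* ack anything left pending from the previous image */
            wr(SCALER_INT_STATUS, rd(SCALER_INT_STATUS));
    }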
Signed-off-by: Andrzej Pietrasiewicz <andrzej.p@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Horizontal (DRM_MODE_REFLECT_X) and vertical (DRM_MODE_REFLECT_Y) flip
were swapped in the Rotator driver. Fix this by swapping the code that
interprets them.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Prepare a common function for the size and scale checks and call it
for both the source and destination buffers. Then move the stateless
checks from exynos_drm_ipp_task_setup_buffer() into it, so the format
information is already available during limits processing. Finally,
perform the IPP_LIMIT_BUFFER check on the real width of the buffer (the
width calculated from the provided buffer pitch).
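The shape of such a helper, sketched with a hypothetical limit struct:

    #include <stdbool.h>
    #include <stdint.h>

    struct limit { uint32_t min_w, max_w, min_h, max_h; };

    /* One checker shared by source and destination; the width tested
     * is the real one implied by the pitch, not the requested one. */
    static bool check_size(const struct limit *l, uint32_t pitch_bytes,
                           uint32_t cpp, uint32_t height)
    {
            uint32_t real_w = pitch_bytes / cpp;

            return real_w >= l->min_w && real_w <= l->max_w &&
                   height >= l->min_h && height <= l->max_h;
    }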
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Immutable branch between fbdev and drm for the v4.19 merge window
(contains the deferred console takeover feature)
Signed-off-by: Gustavo Padovan <gustavo.padovan@collabora.com>
Link: https://patchwork.freedesktop.org/patch/msgid/2462549.rLSfW9kX99@amdc3058
Back in commit 27af5eea54 ("drm/i915: Move execlists irq handler to a
bottom half"), we came to the conclusion that running our CSB processing
and ELSP submission from inside the irq handler was a bad idea. A really
bad idea as we could impose nearly 1s latency on other users of the
system, on average! Deferring our work to a tasklet allowed us to do the
processing with irqs enabled, reducing the impact to an average of about
50us.
We have since eradicated the use of forcewaked mmio from inside the CSB
processing and ELSP submission, bringing the impact down to around 5us
(on Kabylake); an order of magnitude better than our measurements 2
years ago on Broadwell and only about 2x worse on average than the
gem_syslatency on an unladen system.
In this iteration of the tasklet-vs-direct submission debate, we seek a
compromise whereby we submit new requests immediately to the HW but
defer processing the CS interrupt onto a tasklet. We gain the advantage
of low latency and ksoftirqd avoidance when waking up the HW, while
avoiding system-wide starvation from our CS irq storms.
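As a rough stand-alone sketch of the compromise (all names are
illustrative, not the i915 functions):

    #include <stdbool.h>

    struct engine {
            bool tasklet_scheduled;
            /* CSB pointers, ELSP ports, engine lock, ... */
    };

    static void process_csb(struct engine *e)  { /* drain completions */ }
    static void submit_ports(struct engine *e) { /* write ELSP */ }

    /* New request: low-latency path, straight to the HW, with no
     * ksoftirqd round-trip. */
    static void submit_request(struct engine *e)
    {
            process_csb(e);
            submit_ports(e);
    }

    /* CS interrupt: only kick the tasklet, so an irq storm cannot
     * starve the rest of the system. */
    static void cs_irq_handler(struct engine *e)
    {
            e->tasklet_scheduled = true;
    }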
Comparing the impact on the maximum latency observed (that is the time
stolen from an RT process) over a 120s interval, repeated several times
(using gem_syslatency, similar to RT's cyclictest) while the system is
fully laden with i915 nops, we see that direct submission can actually
improve the worst case.
Maximum latency in microseconds of a third party RT thread
(gem_syslatency -t 120 -f 2)
x Always using tasklets (a couple of >1000us outliers removed)
+ Only using tasklets from CS irq, direct submission of requests
+------------------------------------------------------------------------+
| + |
| + |
| + |
| + + |
| + + + |
| + + + + x x x |
| +++ + + + x x x x x x |
| +++ + ++ + + *x x x x x x |
| +++ + ++ + * *x x * x x x |
| + +++ + ++ * * +*xxx * x x xx |
| * +++ + ++++* *x+**xx+ * x x xxxx x |
| **x++++*++**+*x*x****x+ * +x xx xxxx x x |
|x* ******+***************++*+***xxxxxx* xx*x xxx + x+|
| |__________MA___________| |
| |______M__A________| |
+------------------------------------------------------------------------+
N Min Max Median Avg Stddev
x 118 91 186 124 125.28814 16.279137
+ 120 92 187 109 112.00833 13.458617
Difference at 95.0% confidence
-13.2798 +/- 3.79219
-10.5994% +/- 3.02677%
(Student's t, pooled s = 14.9237)
However the mean latency is adversely affected:
Mean latency in microseconds of a third party RT thread
(gem_syslatency -t 120 -f 1)
x Always using tasklets
+ Only using tasklets from CS irq, direct submission of requests
+------------------------------------------------------------------------+
| xxxxxx + ++ |
| xxxxxx + ++ |
| xxxxxx + +++ ++ |
| xxxxxxx +++++ ++ |
| xxxxxxx +++++ ++ |
| xxxxxxx +++++ +++ |
| xxxxxxx + ++++++++++ |
| xxxxxxxx ++ ++++++++++ |
| xxxxxxxx ++ ++++++++++ |
| xxxxxxxxxx +++++++++++++++ |
| xxxxxxxxxxx x +++++++++++++++ |
|x xxxxxxxxxxxxx x + + ++++++++++++++++++ +|
| |__A__| |
| |____A___| |
+------------------------------------------------------------------------+
N Min Max Median Avg Stddev
x 120 3.506 3.727 3.631 3.6321417 0.02773109
+ 120 3.834 4.149 4.039 4.0375167 0.041221676
Difference at 95.0% confidence
0.405375 +/- 0.00888913
11.1608% +/- 0.244735%
(Student's t, pooled s = 0.03513)
However, since the mean latency corresponds to the amount of irqsoff
processing we have to do for a CS interrupt, we only need to speed that
up to benefit not just system latency but our own throughput.
v2: Remember to defer submissions when under reset.
v4: Only use direct submission for new requests
v5: Be aware that when mixing direct tasklet evaluation and deferred
tasklets, we may end up idling before running the deferred tasklet.
v6: Remove the redundant likely() from tasklet_is_enabled(), restrict the
annotation to reset_in_progress().
v7: Take the full timeline.lock when enabling perf_pmu stats as the
tasklet is no longer a valid guard. A consequence is that the stats are
now only valid for engines also using the timeline.lock to process
state.
Testcase: igt/gem_exec_latency/*rthog*
References: 27af5eea54 ("drm/i915: Move execlists irq handler to a bottom half")
Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-9-chris@chris-wilson.co.uk
Now that we use the CSB stored in the CPU-friendly HWSP, we do not need
to track interrupts to know when the mmio CSB registers are valid, and
can just check how far we last read in the cached HWSP. This means we
can forgo the atomic bit tracking from interrupt, and in the next patch
it means we can check the CSB at any time.
v2: Change the splitting inside reset_prepare: we only want to lose
the interrupt testing in this patch; the next patch requires the change
in locking
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-8-chris@chris-wilson.co.uk
As we now never read back our current head position from the CSB
pointers register, and the HW itself doesn't use it to prevent
overwriting unread CSB entries, we do not need to keep updating the
register. As it turns out, this register is not listed as being shadowed,
and so requires forcewake -- but we haven't been taking forcewake around
it, so the writes have probably been regularly dropped. Fortuitously, we
only read the value after a reset where it did not matter, and zero was
the right answer (well, close enough).
Mika pointed out that this was how we used to do it (accidentally!)
before he fixed it in commit cc53699b25 ("drm/i915: Use masked write
for Context Status Buffer Pointer").
References: cc53699b25 ("drm/i915: Use masked write for Context Status Buffer Pointer")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-7-chris@chris-wilson.co.uk
On HW reset, the HW clears the write pointer (to 0). But since it also
writes its first CSB entry to slot 0, we need to reset the write pointer
back to the element before (so the first entry we read is 0).
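In ring-buffer terms (the entry count matches the gen8 CSB; the usage
is illustrative):

    #define GEN8_CSB_ENTRIES 6

    /* The HW writes its first post-reset event into slot 0, so park
     * the pointer on the final slot; the first entry consumed is then
     * slot 0, as intended. */
    static unsigned int csb_head = GEN8_CSB_ENTRIES - 1;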
This is required for the next patch, where we trust the CSB completely!
v2: Use _MASKED_FIELD
v3: Store the reset value, so that we differentiate between mmio/hwsp
transparently and without pretense.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-6-chris@chris-wilson.co.uk
Following the removal of the last workarounds, the only CSB mmio access
is for the old vGPU interface. The mmio registers presented by vGPU do
not require forcewake and can be treated as ordinary volatile memory,
i.e. they behave just like the HWSP access, only at a different location.
We can reduce the CSB access to a set of read/write/buffer pointers and
treat the various paths identically, without worrying about forcewake.
(Forcewake is a nightmare for worst-case latency, and we want to process
this all with irqsoff -- no latency allowed!)
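Conceptually the access reduces to something like this sketch (not the
actual i915 structure):

    #include <stdint.h>

    struct csb {
            const uint32_t *buf;  /* entries: HWSP or plain vGPU mmio */
            uint8_t read;         /* index of the last entry consumed */
            uint8_t write;        /* index of the last entry from HW */
    };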
v2: Comments, comments, comments. Well, 2 bonus comments.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-5-chris@chris-wilson.co.uk
In the next patch, we will process the CSB events directly from the
submission path, rather than only after a CS interrupt. Hence, we will
no longer have the need for a loop until the has-interrupt bit is clear,
and in the meantime can remove that small optimisation.
v2: Tvrtko pointed out it was safer to unconditionally kick the tasklet
after each irq, when assuming that the tasklet is called for each irq.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-4-chris@chris-wilson.co.uk
In the following patch, we will process the CSB events under the
timeline.lock and not serialised by the tasklet. This also means that we
will need to protect access to common variables such as
execlists->csb_head with the timeline.lock during reset.
v2: Move sync_irq to avoid deadlocks when taking timeline.lock from
our interrupt handler.
v3: Kill off the synchronize_hardirq as it raised more questions than
it answered; now that we use the timeline.lock entirely for CSB
serialisation between the irq and elsewhere, we don't need to be so
heavy-handed with flushing
v4: Treat request cancellation (wedging after failed reset) similarly
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-3-chris@chris-wilson.co.uk
In the next patch, we will begin processing the CSB from inside the
submission path (underneath an irqsoff section, and even from inside
interrupt handlers). This means that updating the execlists->port[] will
no longer be serialised by the tasklet but needs to be locked by the
engine->timeline.lock instead. Pull dequeue and submit under the same
lock for protection. (An alternate future plan is to keep the in/out
arrays separate for concurrent processing and reduced lock coverage.)
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-2-chris@chris-wilson.co.uk
The other day I was testing one of the HP laptops at my office with an
i915/amdgpu hybrid setup and noticed that hotplugging was non-functional
on almost all of the display outputs. I eventually discovered that all
of the external outputs were connected to the amdgpu device instead of
i915, and that the hotplugs weren't being detected so long as the GPU
was in runtime suspend. After talking with folks at AMD, I learned
that amdgpu is actually supposed to support hotplug detection in runtime
suspend so long as the OEM has implemented it properly in the firmware.
On this HP ZBook 15 G4 (the machine in question), amdgpu wasn't managing
to find the ATIF handle at all, despite the fact that I could see ACPI
events being sent in response to any hotplugging. After going through
dumps of the firmware, I discovered that this machine did in fact
support ATIF, but that its ATIF method lived in an entirely different
namespace than this device's handle (the device handle was
\_SB_.PCI0.PEG0.PEGP, but ATIF lives in ATPX's handle at
\_SB_.PCI0.GFX0).
So, fix this by probing ATPX's ACPI parent's namespace if we can't find
ATIF elsewhere, storing a pointer to the proper handle to use for ATIF,
and using that instead of the device's handle.
This fixes HPD detection while in runtime suspend for this ZBook!
v2: Update the comment to reflect how the namespaces are arranged
based on the system configuration. (Alex)
Signed-off-by: Lyude Paul <lyude@redhat.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Since it seems that some vendors are storing the ATIF ACPI methods under
the same handle that ATPX lives under instead of the device's own
handle, we're going to need to be able to retrieve this handle later so
we can probe for ATIF there.
Signed-off-by: Lyude Paul <lyude@redhat.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Currently, there is nothing in amdgpu that actually uses these structs
other than amdgpu_acpi.c. Additionally, since we're about to start
saving the correct ACPI handle to use for calling ATIF in this struct,
keeping the definition private saves us from having to ensure that
acpi_handle (and by proxy, the type definition for acpi_handle and all
of the other ACPI headers) is visible within the amdgpu_drv struct
itself. This follows the example set by amdgpu_atpx_handler.c.
Signed-off-by: Lyude Paul <lyude@redhat.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
At the moment, gem_exec_gttfill fails with a sporadic EBUSY due to us
wanting to unbind a pinned batch. Let's dump who first bound that vma to
see if that helps us identify who still unexpectedly has it pinned.
v2: We cannot allocate inside the printer (as it may be on an fs-reclaim
path), so hope for the best and build the string on the stack
v3: A stack depth of 16 routinely overflows a 512-character string;
limit it to 12 to avoid unsightly truncation.
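A stand-alone sketch of the v2/v3 approach (helper names are
illustrative): format a bounded number of entries into a fixed on-stack
buffer, since allocating here may recurse into fs-reclaim:

    #include <stdio.h>

    static void print_binder(const unsigned long *entries, unsigned int n)
    {
            char buf[512];
            int len = 0;
            unsigned int i;

            if (n > 12)     /* 16 deep routinely overflowed 512 bytes */
                    n = 12;
            for (i = 0; i < n && len < (int)sizeof(buf); i++)
                    len += snprintf(buf + len, sizeof(buf) - len,
                                    " %#lx", entries[i]);
            printf("first bound by:%s\n", buf);
    }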
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628132206.8329-1-chris@chris-wilson.co.uk
The vbios firmware structure changed between v3_1 and v3_2, so the
code to set up the bootup values needs different paths based on the
header version.
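The shape of the dispatch, sketched with hypothetical helpers around
the real atomfirmware table header:

    #include <stdint.h>

    struct atom_common_table_header {
            uint16_t structuresize;
            uint8_t  format_revision;   /* 3 for the v3_x layouts */
            uint8_t  content_revision;  /* 1 vs 2 picks the path */
    };

    static int setup_bootup_values_v3_1(void) { return 0; }  /* stub */
    static int setup_bootup_values_v3_2(void) { return 0; }  /* stub */

    static int setup_bootup_values(const struct atom_common_table_header *h)
    {
            switch (h->content_revision) {
            case 1:  return setup_bootup_values_v3_1();
            case 2:  return setup_bootup_values_v3_2();
            default: return -1;         /* unknown layout */
            }
    }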
Signed-off-by: Evan Quan <evan.quan@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>