The PIPEDSL counter freezes on PSR entry, so if PSR hasn't fully exited,
the intel_pipe_update_start call schedules itself out to check back
later.
On the ChromeOS-4.4 kernel, which is fairly up-to-date w.r.t. drm/i915
but lags w.r.t. core kernel code, hot-plugging an external display
triggers tons of "potential atomic update errors" in dmesg, on *pipe A*.
Closer analysis reveals that we try to read the scanline 3 times and
eventually time out, because PSR hasn't fully exited, leaving the
PIPEDSL stuck at 1599. This issue is not seen on upstream kernels,
because for *some* reason we loop inside intel_pipe_update_start for
~2+ msec, which in this case is more than enough to fully exit PSR,
hence an *unstuck* PIPEDSL counter and hence no error. The ChromeOS
kernel, on the other hand, spends only ~1.1 msec looping inside
intel_pipe_update_start and hence errors out because the source is
still in PSR.
Regardless, we should wait for PSR to exit (if PSR is disabled, we
incur only a ~1-2 usec penalty) before reading the PIPEDSL, because if
we haven't fully exited PSR, checking for vblank evasion isn't actually
applicable.
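In sketch form (modeled on the i915 code; the idle-wait helper comes
from the companion patch, so treat the exact names as assumptions),
the update path becomes:

/*
 * In intel_pipe_update_start(), after enabling vblank interrupts,
 * wait for PSR to idle before sampling PIPEDSL.  CAN_PSR() skips
 * platforms without PSR support; when PSR is disabled the wait
 * returns in ~1-2 usec.
 */
if (CAN_PSR(dev_priv) && intel_psr_wait_for_idle(dev_priv))
	DRM_ERROR("PSR idle timed out, atomic update may fail\n");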
v4: Comment explaining psr_wait after enabling VBL interrupts (DK)
v5: CAN_PSR() to handle platforms that don't support PSR.
v6: Handle local_irq_disable on early return (Chris)
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: Tarun Vyas <tarun.vyas@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180627200250.1515-2-tarun.vyas@intel.com
This is a lockless version of the existing psr_wait_for_idle().
We want to wait for PSR to idle out inside intel_pipe_update_start.
At the time of a pipe update, we should never race with any PSR
enable or disable code, which is part of crtc enable/disable.
The follow-up patch will use this lockless wait inside
intel_pipe_update_start to wait for PSR to idle out before checking
for vblank evasion. We need to keep the wait in intel_pipe_update_start
as short as possible, so we can live and flourish without taking any
PSR locks at all.
Even if PSR is never enabled, psr2_enabled will be false and this
function will wait for PSR1 to idle out, which should return
immediately, so the wait is very short (~1-2 usec) for cases where
PSR is disabled.
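A sketch of the lockless wait (names modeled on i915's register-wait
helpers and the PSR state tracking; treat the details as assumptions):

static int intel_psr_wait_for_idle(struct drm_i915_private *dev_priv)
{
	i915_reg_t reg;
	u32 mask;

	/* No lock taken: pipe updates cannot race enable/disable. */
	if (!dev_priv->psr.enabled)
		return 0;

	if (dev_priv->psr.psr2_enabled) {
		reg = EDP_PSR2_STATUS;
		mask = EDP_PSR2_STATUS_STATE_MASK;
	} else {
		reg = EDP_PSR_STATUS;
		mask = EDP_PSR_STATUS_STATE_MASK;
	}

	/* Timeout raised (v7) to cover refresh rates below 60 Hz. */
	return intel_wait_for_register(dev_priv, reg, mask, 0, 50);
}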
v2: Add comment to explain the 25msec timeout (DK)
v3: Rename psr_wait_for_idle to __psr_wait_for_idle_locked to avoid
naming conflicts and propagate err (if any) to the caller (Chris)
v5: Form a series with the next patch
v7: Better explain the need for lockless wait and increase the max
timeout to handle refresh rates < 60 Hz (Daniel Vetter)
v8: Rebase
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: Tarun Vyas <tarun.vyas@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180627200250.1515-1-tarun.vyas@intel.com
Change the GVT display transcoder DDI mode from DP_SST to DVI to
address the call trace below, seen during guest boot and caused by a
zero initial dotclock value in DP_SST mode. The DVI transcoder mode
emulation also aligns with native behavior for a DP connection.
[drm:drm_calc_timestamping_constants]
ERROR crtc 41: Can't calculate constants, dotclock = 0!
WARNING: at drivers/gpu/drm/drm_vblank.c:620
drm_calc_vbltimestamp_from_scanoutpos
Call Trace:
? drm_calc_timestamping_constants+0x144/0x150 [drm]
drm_get_last_vbltimestamp+0x54/0x90 [drm]
drm_reset_vblank_timestamp+0x59/0xd0 [drm]
drm_crtc_vblank_on+0x7b/0xd0 [drm]
intel_modeset_setup_hw_state+0xb67/0xfd0 [i915]
? gen2_read32+0x110/0x110 [i915]
? drm_modeset_lock+0x30/0xa0 [drm]
intel_modeset_init+0x794/0x19d0 [i915]
? intel_setup_gmbus+0x232/0x2e0 [i915]
i915_driver_load+0xb4a/0xf40 [i915]
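A sketch of the emulation change (register macros as in i915_reg.h;
the exact GVT code differs, so treat this as illustrative):

/* Emulate the transcoder in DVI mode rather than DP_SST, so the
 * guest derives a non-zero dotclock from the initial DDI state. */
vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_A)) &=
	~TRANS_DDI_MODE_SELECT_MASK;
vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_A)) |=
	TRANS_DDI_MODE_SELECT_DVI;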
Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
When the guest writes GGTT entries, it may write 8 bytes at a time if
gtt_entry_size is 8. But QEMU may split that 8-byte write into 2
consecutive 4-byte writes.
If each 4-byte partial write triggered a host GGTT write, it would be
quite possible for a wrong combination to be written to the host GGTT,
e.g. the higher 4 bytes holding the old value while the lower 4 bytes
hold the new value. This 8-byte combination is wrong, but would be
written to the GGTT, causing bugs.
To handle this condition, we just record the first 4-byte write, then
wait until the second 4-byte write arrives and write the combined
64-bit data to the host GGTT table.
To save memory and to spot partial writes as early as possible, we
don't keep this information for every GGTT index. Instead, we just
record the last GGTT write position, and assume the two 4-byte writes
arrive consecutively for each vGPU.
This assumption holds because of the nature of a GGTT entry, which
stores a memory address. When gtt_entry_size is 8, the guest physical
memory address is 64 bits, so any sane guest driver should write
8 bytes of data at a time; hence 2 consecutive 4-byte writes at the
same GGTT index should be trapped in GVT.
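The bookkeeping, in sketch form (struct and helper names here are
illustrative, not the exact GVT code):

/* One pending 4-byte half-write tracked per vGPU. */
struct partial_entry {
	u64 offset;	/* byte offset of the pending half */
	u32 data;	/* the 4 bytes written so far */
	bool valid;
};

/* Returns true when both halves of an 8-byte entry are available,
 * with *entry holding the combined value to commit to the host. */
static bool ggtt_combine_partial(struct partial_entry *p,
				 u64 off, u32 data, u64 *entry)
{
	/* XOR == 4 means: same 8-byte entry, other 4-byte half. */
	if (p->valid && (p->offset ^ off) == 4) {
		u64 lo = (off & 4) ? p->data : data;
		u64 hi = (off & 4) ? data : p->data;

		*entry = lo | (hi << 32);
		p->valid = false;
		return true;
	}

	/* First (or non-consecutive) half: just record it. */
	p->offset = off;
	p->data = data;
	p->valid = true;
	return false;
}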
v2:
Add error-handling logic for when an incomplete GGTT entry write is
detected, e.g.
1. the guest writes only 4 bytes at a GGTT offset and never writes the
remaining 4 bytes.
2. the guest writes 4 bytes at a GGTT offset, then writes at other GGTT
offsets, then comes back to write the remaining 4 bytes of the first
GGTT offset.
In these cases, remap the host entry to the scratch page and mark the
guest virtual GGTT entry as not present. (zhenyu wang)
Signed-off-by: Zhao Yan <yan.y.zhao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This patch unifies the naming of DRM functions for reference counting
of struct drm_device. The resulting code is more aligned with the rest
of the Linux kernel interfaces.
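For illustration, the pattern (the same get/put convention applies in
the drm_gem_object and drm_framebuffer patches that follow):

/* Before: DRM-specific naming */
drm_dev_ref(ddev);
drm_dev_unref(ddev);

/* After: matches the kref_get()/kref_put() naming used across
 * the kernel */
drm_dev_get(ddev);
drm_dev_put(ddev);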
Signed-off-by: Thomas Zimmermann <tdz@users.sourceforge.net>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
This patch unifies the naming of DRM functions for reference counting
of struct drm_gem_object. The resulting code is more aligned with the
rest of the Linux kernel interfaces.
Signed-off-by: Thomas Zimmermann <tdz@users.sourceforge.net>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
This patch unifies the naming of DRM functions for reference counting
of struct drm_framebuffer. The resulting code is more aligned with the
rest of the Linux kernel interfaces.
Signed-off-by: Thomas Zimmermann <tdz@users.sourceforge.net>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
The Nomadik has a variant of the PL110 known as the "Color LCD
Controller" (LCDC). This variant has the same bit ordering as the DRM
subsystem (unlike the other variants) and adds a few control-register
bits for selecting 5551, 565 etc. Notably, it also adds a packed
RGB888 24BPP mode.
We add support by detecting this variant and also adding a
small plug-in that will mux the LCDC out if the ASIC happens
to be muxed to the other graphics controller (they are
mutually exclusive).
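In sketch form (field names modeled on the driver's variant struct;
treat them as assumptions), the variant data singles out the LCDC:

static const struct pl111_variant_data pl110_nomadik = {
	.name = "LCDC (PL110 Nomadik variant)",
	.is_lcdc = true,	/* DRM bit order, packed RGB888 */
	.st_bitmux_control = true,	/* extra 5551/565 control bits */
	.formats = pl110_nomadik_pixel_formats,
	.nformats = ARRAY_SIZE(pl110_nomadik_pixel_formats),
	.fb_bpp = 16,
};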
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20180621184450.25377-1-linus.walleij@linaro.org
Pull block fixes from Jens Axboe:
"Small set of fixes for this series. Mostly just minor fixes, the only
oddball in here is the sg change.
The sg change came out of the stall fix for NVMe, where we added a
mempool and limited us to a single page allocation. CONFIG_DEBUG_SG
sort-of ruins that, since we'd then need to account for the larger
entries. That's actually a generic problem, since lots of drivers need
to allocate SG lists. So this just removes the debug magic member,
which I added back in 2007 and which, to my knowledge, never caught
anything.
Anyway, outside of that, this pull contains:
- clone of request with special payload fix (Bart)
- drbd discard handling fix (Bart)
- SATA blk-mq stall fix (me)
- chunk size fix (Keith)
- double free nvme rdma fix (Sagi)"
* tag 'for-linus-20180629' of git://git.kernel.dk/linux-block:
sg: remove ->sg_magic member
drbd: Fix drbd_request_prepare() discard handling
blk-mq: don't queue more if we get a busy return
block: Fix cloning of requests with a special payload
nvme-rdma: fix possible double free of controller async event buffer
block: Fix transfer when chunk sectors exceeds max
This allows panels or bridges that need to send DSI commands during
pre_enable() to successfully send them. We delay DISP0 (aka the
actual display) enabling until after pre_enable so that pixels aren't
streaming before then.
v2: Just clear out the encoder->bridge value to disable the midlayer
calls (idea by Andrzej Hajda), as sketched below.
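In sketch form (illustrating the v2 approach rather than the exact
diff):

ret = drm_bridge_attach(dsi->encoder, dsi->bridge, NULL);
if (ret)
	return ret;
/*
 * Disable the midlayer's bridge calls: the DSI driver invokes the
 * bridge pre_enable()/enable() hooks itself, so DISP0 isn't
 * streaming pixels while pre_enable() DSI commands are sent.
 */
dsi->encoder->bridge = NULL;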
Signed-off-by: Eric Anholt <eric@anholt.net>
Cc: Andrzej Hajda <a.hajda@samsung.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180621231759.29604-1-eric@anholt.net
Reviewed-by: Andrzej Hajda <a.hajda@samsung.com>
This was introduced more than a decade ago when sg chaining was
added, but we never really caught anything with it. The scatterlist
entry size can be critical, since drivers allocate it, so remove
the magic member. Recently it's been triggering allocation stalls
and failures in NVMe.
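For reference, the member being removed sat at the head of every
entry (sketch of the pre-removal layout):

struct scatterlist {
#ifdef CONFIG_DEBUG_SG
	unsigned long	sg_magic;	/* removed by this patch */
#endif
	unsigned long	page_link;
	unsigned int	offset;
	unsigned int	length;
	dma_addr_t	dma_address;
#ifdef CONFIG_NEED_SG_DMA_LENGTH
	unsigned int	dma_length;
#endif
};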
Tested-by: Jordan Glover <Golden_Miller83@protonmail.ch>
Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
The limit_id_fallback array is indexed by enum drm_ipp_size_id, but
its content is of type enum drm_exynos_ipp_limit_type, so declare it
with the latter.
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
The only bit that should be preserved in decon_win_set_fmt() is
WINCONx_ENWIN_F. All other bits depend on the selected pixel format
and are set by the mentioned function.
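In sketch form (register names as in the exynos5433 DECON driver; the
exact accessor is an assumption):

u32 val = readl(ctx->addr + DECON_WINCONx(win));

/* Keep only the window-enable bit; everything else is derived
 * from the selected pixel format. */
val &= WINCONx_ENWIN_F;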
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Set the per-plane global alpha to its maximum value to get proper
blending of XRGB and ARGB planes. This fixes the odd ordering of
overlapping planes.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
DMA hardware should respect buffer pitch, so use the width calculated from
the buffer pitch instead of the virtual one.
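In sketch form, using the generic framebuffer fields:

/* Width as the DMA engine should see it, derived from the pitch
 * rather than from the virtual fb width. */
unsigned int pitch_width = fb->pitches[0] / fb->format->cpp[0];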
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Fix following issues related to planar YUV pixel format configuration:
- NV16/61 modes were incorrectly programmed as NV12/21,
- YVU420 was programmed as YUV420 on source,
- YVU420 and YUV422 were programmed as YUV420 on output.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Horizontal (DRM_MODE_REFLECT_Y) and vertical (DRM_MODE_REFLECT_X)
flips were swapped in the GScaler driver. Fix this by swapping the
code that interprets them.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Investigation revealed that GScaler hardware requires the real buffer width
(pitch) to be aligned to 16 pixels.
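A sketch of the resulting constraint (using the kernel's ALIGN()
helper; whether the driver rejects or adjusts unaligned buffers is a
detail of the actual patch):

unsigned int real_width = fb->pitches[0] / fb->format->cpp[0];

/* GScaler needs the pitch-derived width on a 16-pixel boundary. */
if (real_width != ALIGN(real_width, 16))
	return -EINVAL;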
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
DMA hardware should respect buffer pitch, so use the width calculated from
the buffer pitch instead of the virtual one.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Fix the Cb/Cr component order in two-planar YUV420, YUV422 and YUV444 modes.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Ensure that the Scaler hardware is properly reset and its interrupts
are cleared before processing the next image.
Signed-off-by: Andrzej Pietrasiewicz <andrzej.p@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Horizontal (DRM_MODE_REFLECT_Y) and vertical (DRM_MODE_REFLECT_X)
flips were swapped in the Rotator driver. Fix this by swapping the
code that interprets them.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Prepare a common function for size and scale checks and call it for
source and destination buffers. Then move the stateless checks from
exynos_drm_ipp_task_setup_buffer into it, so the format information is
already available during limits processing. Finally, perform the
IPP_LIMIT_BUFFER check on the real width of the buffer (the width
calculated from the provided buffer pitch).
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Immutable branch between fbdev and drm for the v4.19 merge window
(contains the deferred console takeover feature)
Signed-off-by: Gustavo Padovan <gustavo.padovan@collabora.com>
# gpg: Signature made Thu 28 Jun 2018 10:24:50 AM -03
# gpg: using RSA key 7E33B63FA047C20B
# gpg: Can't check signature: public key not found
# Conflicts:
# drivers/gpu/drm/i915/i915_gem.c
# drivers/gpu/drm/i915/intel_crt.c
# drivers/gpu/drm/i915/intel_display.c
# drivers/gpu/drm/i915/intel_lrc.c
Link: https://patchwork.freedesktop.org/patch/msgid/2462549.rLSfW9kX99@amdc3058
Back in commit 27af5eea54 ("drm/i915: Move execlists irq handler to a
bottom half"), we came to the conclusion that running our CSB processing
and ELSP submission from inside the irq handler was a bad idea. A really
bad idea as we could impose nearly 1s latency on other users of the
system, on average! Deferring our work to a tasklet allowed us to do the
processing with irqs enabled, reducing the impact to an average of about
50us.
We have since eradicated the use of forcewaked mmio from inside the CSB
processing and ELSP submission, bringing the impact down to around 5us
(on Kabylake); an order of magnitude better than our measurements 2
years ago on Broadwell and only about 2x worse on average than the
gem_syslatency on an unladen system.
In this iteration of the tasklet-vs-direct submission debate, we seek a
compromise whereby we submit new requests immediately to the HW but
defer processing the CS interrupt onto a tasklet. We gain the advantage
of low latency and ksoftirqd avoidance when waking up the HW, while
avoiding the system-wide starvation of our CS irq-storms.
Comparing the impact on the maximum latency observed (that is, the time
stolen from an RT process) over a 120s interval, repeated several times
(using gem_syslatency, similar to RT's cyclictest) while the system is
fully laden with i915 nops, we see that direct submission can actually
improve the worst case.
Maximum latency in microseconds of a third party RT thread
(gem_syslatency -t 120 -f 2)
x Always using tasklets (a couple of >1000us outliers removed)
+ Only using tasklets from CS irq, direct submission of requests
+------------------------------------------------------------------------+
| + |
| + |
| + |
| + + |
| + + + |
| + + + + x x x |
| +++ + + + x x x x x x |
| +++ + ++ + + *x x x x x x |
| +++ + ++ + * *x x * x x x |
| + +++ + ++ * * +*xxx * x x xx |
| * +++ + ++++* *x+**xx+ * x x xxxx x |
| **x++++*++**+*x*x****x+ * +x xx xxxx x x |
|x* ******+***************++*+***xxxxxx* xx*x xxx + x+|
| |__________MA___________| |
| |______M__A________| |
+------------------------------------------------------------------------+
N Min Max Median Avg Stddev
x 118 91 186 124 125.28814 16.279137
+ 120 92 187 109 112.00833 13.458617
Difference at 95.0% confidence
-13.2798 +/- 3.79219
-10.5994% +/- 3.02677%
(Student's t, pooled s = 14.9237)
However the mean latency is adversely affected:
Mean latency in microseconds of a third party RT thread
(gem_syslatency -t 120 -f 1)
x Always using tasklets
+ Only using tasklets from CS irq, direct submission of requests
+------------------------------------------------------------------------+
| xxxxxx + ++ |
| xxxxxx + ++ |
| xxxxxx + +++ ++ |
| xxxxxxx +++++ ++ |
| xxxxxxx +++++ ++ |
| xxxxxxx +++++ +++ |
| xxxxxxx + ++++++++++ |
| xxxxxxxx ++ ++++++++++ |
| xxxxxxxx ++ ++++++++++ |
| xxxxxxxxxx +++++++++++++++ |
| xxxxxxxxxxx x +++++++++++++++ |
|x xxxxxxxxxxxxx x + + ++++++++++++++++++ +|
| |__A__| |
| |____A___| |
+------------------------------------------------------------------------+
N Min Max Median Avg Stddev
x 120 3.506 3.727 3.631 3.6321417 0.02773109
+ 120 3.834 4.149 4.039 4.0375167 0.041221676
Difference at 95.0% confidence
0.405375 +/- 0.00888913
11.1608% +/- 0.244735%
(Student's t, pooled s = 0.03513)
However, since the mean latency corresponds to the amount of irqsoff
processing we have to do for a CS interrupt, we only need to speed that
up to benefit not just system latency but our own throughput.
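The shape of the compromise, in sketch form (helper names as in the
i915 execlists code; treat the details as illustrative):

static void __submit_queue_imm(struct intel_engine_cs *engine)
{
	struct intel_engine_execlists * const execlists = &engine->execlists;

	if (reset_in_progress(execlists))
		return; /* queue is processed after the reset completes */

	/* Submit directly when we own the submission tasklet... */
	if (execlists->tasklet.func == execlists_submission_tasklet)
		__execlists_submission_tasklet(engine);
	else
		/* ...otherwise fall back to the deferred path. */
		tasklet_hi_schedule(&execlists->tasklet);
}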
v2: Remember to defer submissions when under reset.
v4: Only use direct submission for new requests
v5: Be aware that with mixing direct tasklet evaluation and deferred
tasklets, we may end up idling before running the deferred tasklet.
v6: Remove the redundant likely() from tasklet_is_enabled(), restrict the
annotation to reset_in_progress().
v7: Take the full timeline.lock when enabling perf_pmu stats as the
tasklet is no longer a valid guard. A consequence is that the stats are
now only valid for engines also using the timeline.lock to process
state.
Testcase: igt/gem_exec_latency/*rthog*
References: 27af5eea54 ("drm/i915: Move execlists irq handler to a bottom half")
Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-9-chris@chris-wilson.co.uk