The eDP spec states that the sink device will issue a short pulse on the
HPD line when there is a PSR/PSR2 error that needs to be handled by the
source. This handles the first and simplest error:
DP_PSR_SINK_INTERNAL_ERROR.
Here we take the safest approach and disable PSR (at least until the
next modeset), to avoid multiple rendering issues due to bad panels.
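As a rough illustration (not the exact driver code), the short-pulse check
boils down to reading the sink's PSR status over AUX and looking at its
state field; the helper name below is made up, and the PSR locking plus the
intel_psr_disable_locked() call are left to the caller:

#include <drm/drm_dp_helper.h>

/*
 * Illustrative sketch: read DP_PSR_STATUS from the sink and report whether
 * its state field signals DP_PSR_SINK_INTERNAL_ERROR. On a positive result
 * the caller, holding the PSR lock, would call intel_psr_disable_locked().
 */
static bool psr_sink_in_internal_error(struct drm_dp_aux *aux)
{
        u8 status;

        if (drm_dp_dpcd_readb(aux, DP_PSR_STATUS, &status) != 1)
                return false;   /* AUX read failed, nothing to act on */

        return (status & DP_PSR_SINK_STATE_MASK) == DP_PSR_SINK_INTERNAL_ERROR;
}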
v5:
added lockdep_assert in psr_disable and renamed psr_disable()
to intel_psr_disable_locked()
v4:
Using CAN_PSR instead of HAS_PSR in intel_psr_short_pulse
v3:
disabling PSR instead of exiting on error
Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180626201644.21932-2-jose.souza@intel.com
Commit 5422b37c90 ("drm/i915/psr: Kill delays when activating psr
back.") switched from delayed work to the plain variant and while doing so
removed the check for work_busy() before scheduling a PSR activation.
This appears to cause consecutive executions of psr_activate() in this
scenario - after a worker picks up the PSR work item for execution and
before the work function can acquire the PSR mutex, a psr_flush() can
get hold of the mutex and schedule another PSR work. Without a psr_exit()
between the two psr_activate() calls, warning messages get printed.
Further, since we drop the mutex in the midst of psr_work() to wait for
PSR to idle, another work item can also get scheduled. Fix this by
returning if PSR was already active.
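A minimal sketch of the pattern (struct and field names are illustrative,
not the driver's): the work function re-checks the active flag under the
mutex and bails out, so two queued work items cannot call psr_activate()
back to back without an intervening psr_exit().

#include <linux/mutex.h>
#include <linux/workqueue.h>

/* Illustrative stand-in for the driver's PSR state; field names assumed. */
struct psr_state {
        struct work_struct work;
        struct mutex lock;
        bool enabled;
        bool active;
};

static void psr_work(struct work_struct *work)
{
        struct psr_state *psr = container_of(work, struct psr_state, work);

        mutex_lock(&psr->lock);

        if (!psr->enabled)
                goto unlock;

        /* A previous work item may already have activated PSR. */
        if (psr->active)
                goto unlock;

        /* ... wait for PSR to idle and call psr_activate() here ... */

unlock:
        mutex_unlock(&psr->lock);
}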
Fixes: 5422b37c90 ("drm/i915/psr: Kill delays when activating psr back.")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=106948
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180625054741.3919-1-dhinakaran.pandiyan@intel.com
So far we got an AUX power domain reference only for the duration of DP
AUX transfers. However, the following suggests that we also need these
for main link functionality:
- The specification doesn't state whether it's needed or not for main
link functionality, but suggests that these power wells need to be
enabled already during display core initialization (Sequences to
Initialize Display).
- For PSR we need to keep the AUX power well enabled.
- On ICL combo PHY ports (non-TC) the AUX power well is needed for
link training too: while the port is enabled with a DP link training
test pattern, trying to toggle the AUX power well will time out.
- On ICL MG PHY ports (TC) the AUX power well is needed also for main
link functionality (both in DP and HDMI modes).
- Windows enables these power wells both for main and AUX lane
functionality.
Based on the above take an AUX power reference for main link
functionality too. This makes a difference only on GEN10+ (GLK+)
platforms, where we have separate port specific AUX power wells.
For PSR we still need to distinguish between port A and the other
ports, since on port A DC states must stay enabled for main link
functionality, but DC states must be disabled for driver initiated
AUX transfers. So re-use the corresponding helper from intel_psr.c.
Since we now take a reference for main link functionality on all DP
ports we can forgo taking the separate power ref for PSR functionality.
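Conceptually (the names below are illustrative, not the driver's exact
helpers), the encoder's get_power_domains hook just grows one more bit:
whenever the pipe drives a DP main link, the port's AUX domain is OR-ed
into the returned mask, with port A picking the AUX IO domain that leaves
DC states enabled.

#include <linux/types.h>

/*
 * Illustrative sketch only: choose the AUX domain used for main link
 * functionality and fold it into the encoder's power domain mask.
 */
static u64 main_link_aux_domain_bit(bool is_port_a, u64 aux_io_a_bit,
                                    u64 aux_port_bit)
{
        /* Port A: DC states may stay enabled for main link functionality. */
        return is_port_a ? aux_io_a_bit : aux_port_bit;
}

static u64 ddi_power_domains_example(u64 base_domains, bool has_dp_encoder,
                                     u64 aux_domain_bit)
{
        /* The main link needs the AUX power well, not just AUX transfers. */
        return has_dp_encoder ? base_domains | aux_domain_bit : base_domains;
}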
v2:
- Make sure DC states stay enabled when taking the ref on port A.
(Ville)
v3: (Ville)
- Fix comment about logic for encoders without a crtc state and
add FIXME note for a simplification to avoid calling get_power_domains
in such cases.
- Use intel_crtc_has_dp_encoder() instead !intel_crtc_has_type(HDMI).
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
[Clarified code comments in intel_ddi_main_link_aux_domain() and
intel_ddi_get_power_domains() (Imre)]
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180621184449.26634-1-imre.deak@intel.com
If we avoid cleaning up the old state immediately in
intel_atomic_commit_tail() and defer it to a second task, we can avoid
taking heavily contended locks when the caller is ready to proceed.
Subsequent modesets will wait for the cleanup operation (either directly
via the ordered modeset wq or indirectly through the atomic helper)
which keeps the number of inflight cleanup tasks in check.
As an example, during reset an immediate modeset is performed to disable
the displays before the HW is reset, which must avoid struct_mutex to
avoid recursion. Moving the cleanup to a separate task defers acquiring
the struct_mutex until after the GPU is running again, allowing it to
complete. Even in a few patches' time (optimist!), when we no longer
require struct_mutex to unpin the framebuffers, it will still be good
practice to minimise the number of contention points along reset. The
mutex dependency still exists (as one modeset flushes the other), but in
the short term it resolves the deadlock for simple reset cases.
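A sketch of the pattern using the generic atomic helpers (the reuse of
state->commit_work and the exact cleanup calls are assumptions for
illustration; i915's commit_tail does more than this):

#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
#include <linux/workqueue.h>

/* Deferred cleanup: runs from a worker after the commit has completed. */
static void cleanup_work(struct work_struct *work)
{
        struct drm_atomic_state *state =
                container_of(work, struct drm_atomic_state, commit_work);

        drm_atomic_helper_cleanup_planes(state->dev, state);
        drm_atomic_helper_commit_cleanup_done(state);
        drm_atomic_state_put(state);
}

static void commit_tail_defer_cleanup(struct drm_atomic_state *state)
{
        /* ... commit the new state to the hardware here ... */

        /* Defer the contended cleanup (e.g. unpinning under struct_mutex). */
        INIT_WORK(&state->commit_work, cleanup_work);
        queue_work(system_highpri_wq, &state->commit_work);
}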
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=101600
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180623103951.23889-1-chris@chris-wilson.co.uk
Acked-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Rather than doing drm_file allocation/destruction right in the fops, let's
provide separate helpers. This decouples drm_file management from the
still-mandatory drm-fops. It prepares for use of drm_file without the
fops, both by possible separate fops implementations and APIs (not that I
am aware of any such plans), and more importantly for in-kernel use where
no real file is available.
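A rough usage sketch from a hypothetical in-kernel caller, assuming the
helpers are drm_file_alloc()/drm_file_free() (error handling kept minimal,
wrapper names made up):

#include <linux/err.h>
#include <drm/drm_file.h>

/* Hypothetical in-kernel user: create a drm_file without any fops involved. */
static struct drm_file *example_open_client(struct drm_minor *minor)
{
        struct drm_file *file = drm_file_alloc(minor);

        if (IS_ERR(file))
                return file;    /* propagate the allocation error */

        /* ... use the drm_file for in-kernel submissions ... */
        return file;
}

static void example_close_client(struct drm_file *file)
{
        drm_file_free(file);
}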
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20180618141739.48151-2-noralf@tronnes.org
This patch updates the definition of the connection from RDMA1 to DPI0
and changes the term MOUT to SOUT. Our HW datasheet uses the term SOUT
for this RDMA function, so renaming MOUT to SOUT keeps the driver
consistent with the datasheet.
Signed-off-by: Stu Hsieh <stu.hsieh@mediatek.com>
Signed-off-by: CK Hu <ck.hu@mediatek.com>
This patch adds support for more than 32 modules: for module indices
greater than 31, the corresponding bit in DISP_REG_MUTEX_MOD2 is used.
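A rough sketch of the idea (register offsets and names below are
placeholders, not the actual mediatek defines):

#include <linux/bits.h>
#include <linux/io.h>
#include <linux/types.h>

#define EXAMPLE_DISP_REG_MUTEX_MOD      0x02c   /* placeholder offset */
#define EXAMPLE_DISP_REG_MUTEX_MOD2     0x030   /* placeholder offset */

/* Modules 0..31 live in MUTEX_MOD; modules 32+ use bit (idx - 32) in MOD2. */
static void example_mutex_mod_set(void __iomem *regs, unsigned int idx)
{
        unsigned int offset = idx < 32 ? EXAMPLE_DISP_REG_MUTEX_MOD
                                       : EXAMPLE_DISP_REG_MUTEX_MOD2;
        u32 val = readl_relaxed(regs + offset);

        val |= idx < 32 ? BIT(idx) : BIT(idx - 32);
        writel_relaxed(val, regs + offset);
}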
Signed-off-by: Stu Hsieh <stu.hsieh@mediatek.com>
Signed-off-by: CK Hu <ck.hu@mediatek.com>
This fixes a regression I accidentally introduced that was picked up by
KASAN, where we were checking the CRTC atomic states after DRM's helpers
had already freed them. Example:
==================================================================
BUG: KASAN: use-after-free in amdgpu_dm_atomic_commit_tail.cold.50+0x13d/0x15a [amdgpu]
Read of size 1 at addr ffff8803a697b071 by task kworker/u16:0/7
CPU: 7 PID: 7 Comm: kworker/u16:0 Tainted: G O 4.18.0-rc1Lyude-Upstream+ #1
Hardware name: HP HP ZBook 15 G4/8275, BIOS P70 Ver. 01.21 05/02/2018
Workqueue: events_unbound commit_work [drm_kms_helper]
Call Trace:
dump_stack+0xc1/0x169
? dump_stack_print_info.cold.1+0x42/0x42
? kmsg_dump_rewind_nolock+0xd9/0xd9
? printk+0x9f/0xc5
? amdgpu_dm_atomic_commit_tail.cold.50+0x13d/0x15a [amdgpu]
print_address_description+0x6c/0x23c
? amdgpu_dm_atomic_commit_tail.cold.50+0x13d/0x15a [amdgpu]
kasan_report.cold.6+0x241/0x2fd
amdgpu_dm_atomic_commit_tail.cold.50+0x13d/0x15a [amdgpu]
? commit_planes_to_stream.constprop.45+0x13b0/0x13b0 [amdgpu]
? cpu_load_update_active+0x290/0x290
? finish_task_switch+0x2bd/0x840
? __switch_to_asm+0x34/0x70
? read_word_at_a_time+0xe/0x20
? strscpy+0x14b/0x460
? drm_atomic_helper_wait_for_dependencies+0x47d/0x7e0 [drm_kms_helper]
commit_tail+0x96/0xe0 [drm_kms_helper]
process_one_work+0x88a/0x1360
? create_worker+0x540/0x540
? __sched_text_start+0x8/0x8
? move_queued_task+0x760/0x760
? call_rcu_sched+0x20/0x20
? vsnprintf+0xcda/0x1350
? wait_woken+0x1c0/0x1c0
? mutex_unlock+0x1d/0x40
? init_timer_key+0x190/0x230
? schedule+0xea/0x390
? __schedule+0x1ea0/0x1ea0
? need_to_create_worker+0xe4/0x210
? init_worker_pool+0x700/0x700
? try_to_del_timer_sync+0xbf/0x110
? del_timer+0x120/0x120
? __mutex_lock_slowpath+0x10/0x10
worker_thread+0x196/0x11f0
? flush_rcu_work+0x50/0x50
? __switch_to_asm+0x34/0x70
? __switch_to_asm+0x34/0x70
? __switch_to_asm+0x40/0x70
? __switch_to_asm+0x34/0x70
? __switch_to_asm+0x40/0x70
? __switch_to_asm+0x34/0x70
? __switch_to_asm+0x40/0x70
? __schedule+0x7d6/0x1ea0
? migrate_swap_stop+0x850/0x880
? __sched_text_start+0x8/0x8
? save_stack+0x8c/0xb0
? kasan_kmalloc+0xbf/0xe0
? kmem_cache_alloc_trace+0xe4/0x190
? kthread+0x98/0x390
? ret_from_fork+0x35/0x40
? ret_from_fork+0x35/0x40
? deactivate_slab.isra.67+0x3c4/0x5c0
? kthread+0x98/0x390
? kthread+0x98/0x390
? set_track+0x76/0x120
? schedule+0xea/0x390
? __schedule+0x1ea0/0x1ea0
? wait_woken+0x1c0/0x1c0
? kasan_unpoison_shadow+0x30/0x40
? parse_args.cold.15+0x17a/0x17a
? flush_rcu_work+0x50/0x50
kthread+0x2d4/0x390
? kthread_create_worker_on_cpu+0xc0/0xc0
ret_from_fork+0x35/0x40
Allocated by task 1124:
kasan_kmalloc+0xbf/0xe0
kmem_cache_alloc_trace+0xe4/0x190
dm_crtc_duplicate_state+0x78/0x130 [amdgpu]
drm_atomic_get_crtc_state+0x147/0x410 [drm]
page_flip_common+0x57/0x230 [drm_kms_helper]
drm_atomic_helper_page_flip+0xa6/0x110 [drm_kms_helper]
drm_mode_page_flip_ioctl+0xc4b/0x10a0 [drm]
drm_ioctl_kernel+0x1d4/0x260 [drm]
drm_ioctl+0x433/0x920 [drm]
amdgpu_drm_ioctl+0x11d/0x290 [amdgpu]
do_vfs_ioctl+0x1a1/0x13d0
ksys_ioctl+0x60/0x90
__x64_sys_ioctl+0x6f/0xb0
do_syscall_64+0x147/0x440
entry_SYSCALL_64_after_hwframe+0x44/0xa9
Freed by task 1124:
__kasan_slab_free+0x12e/0x180
kfree+0x92/0x1a0
drm_atomic_state_default_clear+0x315/0xc40 [drm]
__drm_atomic_state_free+0x35/0xd0 [drm]
drm_atomic_helper_update_plane+0xac/0x350 [drm_kms_helper]
__setplane_internal+0x2d6/0x840 [drm]
drm_mode_cursor_universal+0x41e/0xbe0 [drm]
drm_mode_cursor_common+0x49f/0x880 [drm]
drm_mode_cursor_ioctl+0xd8/0x130 [drm]
drm_ioctl_kernel+0x1d4/0x260 [drm]
drm_ioctl+0x433/0x920 [drm]
amdgpu_drm_ioctl+0x11d/0x290 [amdgpu]
do_vfs_ioctl+0x1a1/0x13d0
ksys_ioctl+0x60/0x90
__x64_sys_ioctl+0x6f/0xb0
do_syscall_64+0x147/0x440
entry_SYSCALL_64_after_hwframe+0x44/0xa9
The buggy address belongs to the object at ffff8803a697b068
which belongs to the cache kmalloc-1024 of size 1024
The buggy address is located 9 bytes inside of
1024-byte region [ffff8803a697b068, ffff8803a697b468)
The buggy address belongs to the page:
page:ffffea000e9a5e00 count:1 mapcount:0 mapping:ffff88041e00efc0 index:0x0 compound_mapcount: 0
flags: 0x8000000000008100(slab|head)
raw: 8000000000008100 ffffea000ecbc208 ffff88041e000c70 ffff88041e00efc0
raw: 0000000000000000 0000000000170017 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
Memory state around the buggy address:
ffff8803a697af00: fb fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
ffff8803a697af80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>ffff8803a697b000: fc fc fc fc fc fc fc fc fc fc fc fc fc fb fb fb
^
ffff8803a697b080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff8803a697b100: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
So, we fix this by counting the number of CRTCs this atomic commit disables
early in the function, before their atomic states have been freed, and then
using that count at the end of the function to do the appropriate number of
RPM puts.
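In sketch form (variable names illustrative), the ordering looks like this:
count while the old/new CRTC states are still valid, and only drop the
runtime PM references at the very end.

#include <drm/drm_atomic.h>
#include <linux/device.h>
#include <linux/pm_runtime.h>

/* Count CRTCs being turned off before the helpers free the old states. */
static void commit_tail_rpm_example(struct drm_atomic_state *state,
                                    struct device *dev)
{
        struct drm_crtc *crtc;
        struct drm_crtc_state *old_crtc_state, *new_crtc_state;
        unsigned int crtc_disable_count = 0;
        int i;

        for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state,
                                      new_crtc_state, i)
                if (old_crtc_state->active && !new_crtc_state->active)
                        crtc_disable_count++;

        /* ... program the hardware, wait for flips, clean up the state ... */

        while (crtc_disable_count--)
                pm_runtime_put_autosuspend(dev);
}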
Acked-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Harry Wentland <harry.wentland@amd.com>
Cc: stable@vger.kernel.org
Fixes: 97028037a3 ("drm/amdgpu: Grab/put runtime PM references in atomic_commit_tail()")
Signed-off-by: Lyude Paul <lyude@redhat.com>
Cc: Michel Dänzer <michel@daenzer.net>
Reported-by: Michel Dänzer <michel@daenzer.net>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>