The prior assumption was that we did not need to reset the CSB on
wedging when cancelling the outstanding requests as it would be cleaned
up in the subsequent reset prior to restarting the GPU. However, what
was not accounted for was that in preparing for the reset, we would try
to process the outstanding CSB entries. If the GPU happened to complete
a CS event just as we were performing the cancellation of requests, that
event would be kept in the CSB until the reset -- but our bookkeeping
was cleared, causing confusion when trying to complete the CS event.
v2: Use a sanitize on unwedge to avoid interfering with eio suspend
(where we intentionally disable GPU reset).
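A rough sketch of the intent, with an illustrative structure and field names rather than the actual execlists code:

  /* Illustrative only: clear the CSB bookkeeping when sanitizing so that a
   * CS event completed during request cancellation cannot be interpreted
   * against stale state before the reset runs. */
  struct example_execlists {
          unsigned int csb_head;
          unsigned int csb_entries;
          void *port[2];          /* contexts believed to be in flight */
  };

  static void cancel_requests_sanitize_csb(struct example_execlists *el)
  {
          el->csb_head = el->csb_entries - 1;     /* mirror the post-reset value */
          memset(el->port, 0, sizeof(el->port));  /* forget in-flight contexts */
  }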
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107925
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180914080017.30308-3-chris@chris-wilson.co.uk
If an asynchronous wait on a foreign fence times out, we print a warning
indicating which fence was not signaled. As i915_sw_fences become more
common, include the debug hint (the symbol-name of the target) to help
identify the waiter. E.g.
[ 31.968144] Asynchronous wait on fence sw_sync:gem_eio:1 timed out (hint:submit_notify [i915])
We also want to downgrade from a warning to a notice (normal but
significant condition) as the timeout is imposed and controlled by the
caller (i.e. it is deliberate) and can be provoked by userspace.
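A rough sketch of the resulting report (types simplified, not the exact i915_sw_fence code); %pS resolves the hint to a symbol name as in the example above:

  /* Illustrative only: downgrade to pr_notice() and print the debug hint so
   * the waiter (e.g. submit_notify [i915]) can be identified. */
  static void report_fence_timeout(struct dma_fence *dma, void *hint)
  {
          pr_notice("Asynchronous wait on fence %s:%s:%x timed out (hint:%pS)\n",
                    dma->ops->get_driver_name(dma),
                    dma->ops->get_timeline_name(dma),
                    dma->seqno, hint);
  }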
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180914124007.18790-1-chris@chris-wilson.co.uk
That we use a WB mapping for updating the RING_TAIL register inside the
context image even on !llc machines has been a source of consternation
for every reader. It appears to work on bsw+, but it may just have been
that we have been incredibly bad at detecting the errors.
v2: With extra enthusiasm.
v3: Drop force of map type for pinned default_state as by the time we
pin it, the map type is always WB and doesn't conflict with the earlier
use by ce->state.
v4: Transfer engine->default_state from MAP_WC to MAP_WB on creation so
we do not need the MAP_FORCE littered around the backends.
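A simplified sketch of the v4 approach (object names, sizes and surrounding error handling are placeholders; not the exact patch):

  /* Illustrative only: build engine->default_state as a WB-mapped copy at
   * engine init, so later users (including the context image that holds
   * RING_TAIL) can always pin with I915_MAP_WB and never need MAP_FORCE. */
  void *src, *dst;

  src = i915_gem_object_pin_map(ctx_obj, I915_MAP_WC);
  if (IS_ERR(src))
          return PTR_ERR(src);

  dst = i915_gem_object_pin_map(default_obj, I915_MAP_WB);
  if (IS_ERR(dst)) {
          i915_gem_object_unpin_map(ctx_obj);
          return PTR_ERR(dst);
  }

  memcpy(dst, src, ctx_size);

  i915_gem_object_unpin_map(default_obj);
  i915_gem_object_unpin_map(ctx_obj);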
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180914123504.2062-3-chris@chris-wilson.co.uk
If we try and fail to allocate an i915_request, we apply some
backpressure on the clients to throttle the memory allocations coming
from i915.ko. Currently, we wait until completely idle, but this is far
too heavy and leads to some situations where the only escape is to
declare a client hung and reset the GPU. The intent is to only ratelimit
the allocation requests and to allow ourselves to recycle requests and
memory from any long queues built up by a client hog.
Although system memory is inherently a global resource, we don't want
an unlucky client to be overly penalized by paying the price of reaping
a hog. To reduce the influence of one client on another, instead of
waiting for the entire GPU to idle we can impose a barrier on the local
client alone.
(One end goal for request allocation is for scalability to many
concurrent allocators; simultaneous execbufs.)
To prevent ourselves from getting caught out by long-running requests
(requests that may never finish without userspace intervention and on
which we would be blocking) we need to impose a finite timeout, ideally
shorter than
hangcheck. A long time ago Paul McKenney suggested that RCU users should
ratelimit themselves using judicious use of cond_synchronize_rcu(). This
gives us the opportunity to reduce our indefinite wait for the GPU to
idle to a wait for the RCU grace period of the previous allocation along
this timeline to expire, satisfying both the local and finite properties
we desire for our ratelimiting.
There are still a few global steps (reclaim not least amongst those!)
when we exhaust the immediate slab pool, but at least now the wait is
itself decoupled from struct_mutex for our glorious highly parallel future!
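A sketch of the ratelimiting idea using the RCU helpers named above (function and field names illustrative):

  /* Illustrative only: remember the RCU grace-period cookie of the previous
   * allocation on this timeline; on allocation failure, wait for just that
   * grace period to expire (a local, finite wait) before retrying harder,
   * instead of waiting for the whole GPU to idle. */
  static struct i915_request *request_alloc_throttled(struct kmem_cache *slab,
                                                      unsigned long *rcustate)
  {
          struct i915_request *rq;

          rq = kmem_cache_alloc(slab, GFP_KERNEL | __GFP_NOWARN);
          if (unlikely(!rq)) {
                  cond_synchronize_rcu(*rcustate);        /* ratelimit the client */
                  rq = kmem_cache_alloc(slab, GFP_KERNEL);
                  if (!rq)
                          return NULL;
          }

          *rcustate = get_state_synchronize_rcu();
          return rq;
  }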
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=106680
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180914080017.30308-1-chris@chris-wilson.co.uk
Use the newly exposed VSP1 interface to enable interlaced frame support
through the VSP1 LIF pipelines.
The DSMR register is updated to set the ODEV flag on interlaced
pipelines, thus defining an interlaced stream as having the ODD field
located in the second half (BOTTOM) of the frame buffer.
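A minimal sketch of the DSMR update described above (surrounding code simplified):

  /* Illustrative only: mark the output as interlaced so the ODD field is
   * fetched from the second (BOTTOM) half of the frame buffer. */
  if (mode->flags & DRM_MODE_FLAG_INTERLACE)
          dsmr |= DSMR_ODEV;

  rcar_du_crtc_write(rcrtc, DSMR, dsmr);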
Signed-off-by: Kieran Bingham <kieran.bingham+renesas@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
drm-misc-next for 4.20:
UAPI Changes:
- Add host endian variants for the most common formats (Gerd)
- Fail ADDFB2 for big-endian drivers that don't advertise BE quirk (Gerd)
- clear smem_start in fbdev for drm drivers to avoid leaking fb addr (Daniel)
Cross-subsystem Changes:
Core Changes:
- fix drm_mode_addfb() on big endian machines (Gerd)
- add timeline point to syncobj find+replace (Chunming)
- more drmP.h removal effort (Daniel)
- split uapi portions of drm_atomic.c into drm_atomic_uapi.c (Daniel)
Driver Changes:
- bochs: Convert open-coded portions to use helpers (Peter)
- vkms: Add cursor support (Haneen)
- udmabuf: Lots of fixups (mostly cosmetic afaict) (Gerd)
- qxl: Convert to use fbdev helper (Peter)
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Chunming Zhou <david1.zhou@amd.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Peter Wu <peter@lekensteyn.nl>
Cc: Haneen Mohammed <hamohammed.sa@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Sean Paul <sean@poorly.run>
Link: https://patchwork.freedesktop.org/patch/msgid/20180913130254.GA156437@art_vandelay
This patch adds support to decode system memory bandwidth and other
parameters for Skylake and Gen9+ platforms, which will be used for the
arbitrated display memory bandwidth calculation on Gen9-based platforms
and for the WM latency level-0 workaround calculation on Gen9+.
Changes Since V1:
- s/memdev_info/dram_info
- create a struct to hold channel info
Changes Since V2:
- rewrite code to adhere to the i915 coding style
- not valid for GLK
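A rough sketch of the kind of per-channel bookkeeping this introduces (field names illustrative, not the exact i915 structures):

  /* Illustrative only: DIMM geometry decoded from the memory controller,
   * later fed into the bandwidth and WM level-0 workaround calculations. */
  struct dram_dimm_info {
          u8 size;        /* GB */
          u8 width;       /* x8, x16, ... */
          u8 ranks;
  };

  struct dram_channel_info {
          struct dram_dimm_info l_info, s_info;   /* L and S slots */
          u8 ranks;
          bool is_16gb_dimm;
  };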
Signed-off-by: Mahesh Kumar <mahesh1.kumar@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180824093225.12598-3-mahesh1.kumar@intel.com
Instead of the doubly linked list. Gets the size of amdgpu_vm_pt down to
64 bytes again.
We could even reduce it down to 32 bytes, but that would require some
rather extreme hacks.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
While cutting the lists we sometimes accidentally added a list_head from
the stack to the LRUs, effectively corrupting the list.
Remove the list cutting and use explicit list manipulation instead.
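A sketch of the pattern change (entry type illustrative):

  /* Illustrative only: move each entry back onto the LRU explicitly rather
   * than cutting a sublist through a list_head on the stack, which is what
   * allowed a stack address to end up linked into the LRU. */
  struct lru_entry {
          struct list_head lru;
  };

  static void return_entries_to_lru(struct list_head *removed,
                                    struct list_head *lru)
  {
          struct lru_entry *e, *tmp;

          list_for_each_entry_safe(e, tmp, removed, lru)
                  list_move_tail(&e->lru, lru);
  }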
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-and-Tested-by: Huang Rui <ray.huang@amd.com>
Tested-by: Mike Lothian <mike@fireburn.co.uk>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Problem:
After a GPU reset the pflip completion IRQ is disabled, and hence any
subsequent mode set or plane update leads to a hang.
Fix:
Initialize acrtc->otg_inst to -1 during display block initialization;
otherwise, during resume from GPU reset,
amdgpu_irq_gpu_reset_resume_helper will override the CRTC 0 pflip IRQ
value with whatever value was on every other unused CRTC, because
dm_irq_state computes irq_source = dal_irq_type + acrtc->otg_inst,
where acrtc->otg_inst is 0 for every unused CRTC.
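A minimal sketch of the fix (surrounding code simplified; names follow the description above):

  /* Illustrative only: unused CRTCs keep otg_inst == -1 from init, so the
   * reset/resume helper no longer folds them all onto the CRTC 0 pflip IRQ. */
  static void init_crtc(struct amdgpu_crtc *acrtc)
  {
          acrtc->otg_inst = -1;   /* not driving an OTG yet */
  }

  static int crtc_irq_source(struct amdgpu_crtc *acrtc, int dal_irq_type)
  {
          if (acrtc->otg_inst == -1)
                  return -1;      /* unused CRTC, nothing to map */

          return dal_irq_type + acrtc->otg_inst;
  }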
Reviewed-by: Harry Wentland <harry.wentland@amd.com>
Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This optimizes the generation of PTEs by walking the hierarchy only once
per range and making changes as necessary.
It allows both huge (2MB) and giant (1GB) pages to be used on
Vega and Raven.
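A simplified sketch of the single-walk idea (not the amdgpu code; the level description and write_pte() are illustrative):

  /* Illustrative only: descend the page-table hierarchy once for [start, end)
   * and write a leaf entry at the highest level whose block size matches the
   * range's alignment and extent, so 1GB (giant) and 2MB (huge) entries are
   * used directly where possible. */
  struct pt_level {
          u64 block_size;         /* bytes covered by one entry: 1G, 2M or 4K */
          bool can_map;           /* hardware allows leaf entries at this level */
          struct pt_level *child;
  };

  static void walk_and_update(struct pt_level *lvl, u64 start, u64 end, u64 dst)
  {
          while (start < end) {
                  u64 block = lvl->block_size;
                  u64 next = min(end, ALIGN(start + 1, block));

                  if (lvl->can_map && !(start & (block - 1)) &&
                      end - start >= block)
                          write_pte(lvl, start, dst);     /* one huge/giant entry */
                  else
                          walk_and_update(lvl->child, start, next, dst);

                  dst += next - start;
                  start = next;
          }
  }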
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Acked-by: Junwei Zhang <Jerry.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
If we have framebuffers that are >= 4GiB in size we will overflow
the fb size check in intel_fill_fb_info().
Currently that is only possible with NV12 and CCS as offsets[1]
may be anything between 0 and 0xffffffff. offsets[0] is currently
required to be 0 so we can't hit the overflow with any single
plane format (thanks to max fb size of 8kx8k and max stride of
32 KiB).
In the future we may allow almost any framebuffer to exceed 4GiB
in size so we really should fix the overflow. Not that the overflow
is particularly dangerous. It's mostly just a sanity check against
insane userspace. The display engine can't write to memory anyway
so I suppose in the worst case we might anger the hw by attempting
scanout past the end of the ggtt, or we might scan out some data
that we're not supposed to see from other parts of the ggtt.
Note that triggering this overflow depends on the driver
aligning the fb height to the next tile boundary to push the
calculated size above 4GiB. With linear buffers the effective
tile height is one so that never happens, and the core already
has a check for 32bit overflow of offsets[]+pitches[]*height.
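A sketch of the overflow-safe form of the check (not the exact intel_fill_fb_info() code):

  /* Illustrative only: do the offsets[] + pitches[] * height arithmetic in
   * 64 bits before range checking, so an offsets[1] near 4GiB cannot wrap. */
  u64 end = (u64)fb->offsets[i] + (u64)fb->pitches[i] * aligned_height;

  if (end > U32_MAX)      /* would overflow the 32bit bookkeeping */
          return -ERANGE;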
v2: Drop the unnecessary cast (Chris)
Testcase: igt/kms_big_fb/x-tiled-addfb-size-offset-overflow
Testcase: igt/kms_big_fb/y-tiled-addfb-size-offset-overflow
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180912180443.28649-1-ville.syrjala@linux.intel.com
The recent change of vga_switcheroo allowed the runtime PM for
HD-audio on AMD GPUs, but this also resulted in a regression. When
the HD-audio controller driver gets runtime-suspended, the HD-audio link
is turned off and the hotplug notification is ignored. This leads to
an inconsistent audio state (the connection isn't notified and ELD is
ignored).
The best fix would be to implement the proper ELD notification via the
audio component, but it's still not ready. As a quick workaround,
this patch adds a check in runtime_idle and allows runtime suspend
only when vga_switcheroo is bound with a discrete GPU.
That is, a system with only a single GPU or an APU would again be
without runtime PM, so that the HD-audio link stays up for the hotplug
notification and ELD read-out.
Also, the codec->auto_runtime_pm flag is set only for the discrete GPU
at the time the GPU gets bound via vga_switcheroo (i.e. only the dGPU is
forcibly runtime-PM enabled), so that an APU can still get the ELD
notification.
For identifying which GPU is bound, a new vga_switcheroo client
callback, gpu_bound, is implemented. The vga_switcheroo simply calls
this when a GPU is bound, and tells whether it's a dGPU or an APU.
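A rough sketch of how the pieces fit together (types and field names illustrative; the actual hook signature may differ):

  /* Illustrative only: vga_switcheroo reports which kind of GPU was bound;
   * runtime_idle then refuses to suspend unless it is a discrete GPU, keeping
   * the HD-audio link up for hotplug/ELD on APU and single-GPU systems. */
  static void hda_gpu_bound(struct pci_dev *pdev,
                            enum vga_switcheroo_client_id client_id)
  {
          struct hda_priv *hda = pci_get_drvdata(pdev);   /* illustrative type */

          hda->bound_to_dgpu = (client_id == VGA_SWITCHEROO_DIS);
  }

  static int hda_runtime_idle(struct device *dev)
  {
          struct hda_priv *hda = dev_get_drvdata(dev);

          return hda->bound_to_dgpu ? 0 : -EBUSY;         /* -EBUSY blocks suspend */
  }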
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=200945
Fixes: 07f4f97d7b ("vga_switcheroo: Use device link for HDA controller")
Reported-by: Jian-Hong Pan <jian-hong@endlessm.com>
Tested-by: Jian-Hong Pan <jian-hong@endlessm.com>
Acked-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Takashi Iwai <tiwai@suse.de>