drm-misc-next for 4.20:
UAPI Changes:
- None
Cross-subsystem Changes:
- None
Core Changes:
- Allow drivers to disable features with per-device granularity (Ville)
- Use EOPNOTSUPP when iface/feature is unsupported instead of
EINVAL/errno soup (Chris)
- Simplify M/N DP quirk by using constant N to limit size of M/N (Shawn)
- Add quirk for LG LP140WF6-SPM1 eDP panel (Shawn)
Driver Changes:
- i915/amdgpu: Disable DRIVER_ATOMIC for older/unsupported devices (Ville)
- sun4i: add support for R40 HDMI PHY (Icenowy)
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Icenowy Zheng <icenowy@aosc.io>
Cc: Lee, Shawn C <shawn.c.lee@intel.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Sean Paul <sean@poorly.run>
Link: https://patchwork.freedesktop.org/patch/msgid/20180919200218.GA186644@art_vandelay
SDVO encoders can have multiple different types of outputs hanging off
them. Currently the code tries to muck around with various is_foo
flags in the encoder to figure out which type it's driving. That doesn't
work with atomic and other stuff, so let's nuke those flags and just
look at which type of connector we're actually dealing with.
We'll still need is_hdmi as that's not discoverable via the output flags,
but we'll just move it under the connector.
We'll also move the sdvo fixed mode handling out from the .get_modes()
hook into the sdvo lvds init function so that we can bail out properly
if there is no fixed mode to be found.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180917151504.8754-1-ville.syrjala@linux.intel.com
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Clean up some cases where we're dealing with GTT pages rather than
system pages by using I915_GTT_PAGE_SIZE instead of PAGE_SHIFT, i.e.
just replace the shifts with mul/div as appropriate. These
are the easy ones; the rest probably need some actual thought.
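For illustration, the conversions are of this shape (made-up variable
names, not a hunk from the actual diff):

-	offset = idx << PAGE_SHIFT;
+	offset = idx * I915_GTT_PAGE_SIZE;

and similarly size >> PAGE_SHIFT becomes size / I915_GTT_PAGE_SIZE.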
No real changes in the generated asm. Only gen8_ppgtt_insert_4lvl()
was affected as gcc decided to do the following change:
- be9: 89 d9 mov %ebx,%ecx
- beb: c1 e1 0c shl $0xc,%ecx
- bee: 48 63 c9 movslq %ecx,%rcx
+ be9: 48 63 cb movslq %ebx,%rcx
+ bec: 48 c1 e1 0c shl $0xc,%rcx
and that then shifted a bunch of the following offsets by one byte. I presume
the sign extensions in the asm are due to integer promotions from
u16 etc. Hopefully someone has confirmed that those don't end up
doing the wrong thing for us.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180917171414.19220-1-ville.syrjala@linux.intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
When one vgpu is destroyed, its ggtt entries are not cleared.
This patch clears ggtt entries to avoid information leak.
v2: add 'Fixes' tag (Zhenyu)
Fixes: 2707e44466 ("drm/i915/gvt: vGPU graphics memory virtualization")
Signed-off-by: Zhipeng Gong <zhipeng.gong@intel.com>
Reviewed-by: Hang Yuan <hang.yuan@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The host prints lots of untracked MMIO warnings at 0x4653c when creating a Linux guest:
"gvt: vgpu 2: untracked MMIO 0004653c len 4"
GEN9_CLKGATE_DIS_4 (0x4653c) is accessed by i915 for gmbus clockgating.
However, vGPU doesn't support any clockgating/powergating operations via
the related MMIO access trap, so the register needs to be added to the
default handler.
GEN9_CLKGATE_DIS_4 is accessed in bxt_gmbus_clock_gating(), which only
applies to GEN9_LP, so the warning doesn't show up on other platforms.
The solution is to add it to the default handlers in init_bxt_mmio_info().
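For reference, the change boils down to a one-liner of this shape in
init_bxt_mmio_info(), assuming the existing MMIO_D() table macro (the
exact flags used in the final patch may differ):

	MMIO_D(GEN9_CLKGATE_DIS_4, D_BXT);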
Reviewed-by: He, Min <min.he@intel.com>
Signed-off-by: Colin Xu <colin.xu@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
A recent patch fixed the call trace
"ERROR Port B enabled but PHY powered down? (PHY_CTL 00000000)",
but introduced another, similar call trace:
"ERROR Port C enabled but PHY powered down? (PHY_CTL 00000200)".
The call trace appears when the host and guest enable different ports,
i.e. the host uses PORT C or has no PORT enabled at all, while the guest
always uses PORT B as simulated by gvt. The issue was actually masked
before that commit and only reveals itself now that the commit does the
right thing.
On BXT, some PHY registers are initialized by the vbios before i915 is
loaded. Later i915 will re-program some of them, or skip some, depending
on the implementation. The initial mmio state for guest i915 is provided
by gvt, based on a snapshot taken from the host. If the host and guest
have different PORTs enabled, some of the DPIO PHY mmios that gvt
initialized for guest i915 will not match the simulated monitor for the
guest, which leads guest i915 to print the call trace when it tries to
enable the PHY and PORT.
The solution is to init these DPIO PHY registers to default values, so
that guest i915 will program them to reasonable values based on the
default power well table and the enabled PORT. Together with the old
patch, all similar call traces in the guest kernel on BXT are resolved.
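A sketch of the idea in intel_vgpu_reset_mmio() (register names from
i915_reg.h; the concrete default values here are illustrative, not the
ones from the actual patch):

	/* Put the DPIO PHY registers back to power-on defaults so that
	 * guest i915 reprograms them from its own power well table.
	 */
	vgpu_vreg_t(vgpu, BXT_PHY_CTL_FAMILY(DPIO_PHY0)) = 0x0;
	vgpu_vreg_t(vgpu, BXT_PHY_CTL_FAMILY(DPIO_PHY1)) = 0x0;
	vgpu_vreg_t(vgpu, BXT_PHY_CTL(PORT_A)) = 0x0;
	vgpu_vreg_t(vgpu, BXT_PHY_CTL(PORT_B)) = 0x0;
	vgpu_vreg_t(vgpu, BXT_PHY_CTL(PORT_C)) = 0x0;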
v2: Move PHY register init to intel_vgpu_reset_mmio (Min)
v3: Do not delete empty line in issue fix patch. (zhenyu)
Fixes: c8ab5ac30c ("drm/i915/gvt: Make correct handling to vreg BXT_PHY_CTL_FAMILY")
Reviewed-by: He, Min <min.he@intel.com>
Signed-off-by: Colin Xu <colin.xu@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The prior assumption was that we did not need to reset the CSB on
wedging when cancelling the outstanding requests as it would be cleaned
up in the subsequent reset prior to restarting the GPU. However, what
was not accounted for was that in preparing for the reset, we would try
to process the outstanding CSB entries. If the GPU happened to complete
a CS event just as we were performing the cancellation of requests, that
event would be kept in the CSB until the reset -- but our bookkeeping
was cleared, causing confusion when trying to complete the CS event.
v2: Use a sanitize on unwedge to avoid interfering with eio suspend
(where we intentionally disable GPU reset).
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107925
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180914080017.30308-3-chris@chris-wilson.co.uk
If an asynchronous wait on a foreign fence times out, we print a warning
indicating which fence was not signaled. As i915_sw_fences become more
common, include the debug hint (the symbol-name of the target) to help
identify the waiter. E.g.
[ 31.968144] Asynchronous wait on fence sw_sync:gem_eio:1 timed out (hint:submit_notify [i915])
We also want to downgrade from a warning to a notice (normal but
significant condition) as the timeout is imposed and controlled by the
caller (i.e. it is deliberate) and can be provoked by userspace.
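A sketch of the resulting print (format string and variable names are
illustrative; the point is the notice level and the added hint):

	pr_notice("Asynchronous wait on fence %s:%s:%x timed out (hint:%ps)\n",
		  fence->ops->get_driver_name(fence),
		  fence->ops->get_timeline_name(fence),
		  fence->seqno, hint);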
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180914124007.18790-1-chris@chris-wilson.co.uk
That we use a WB mapping for updating the RING_TAIL register inside the
context image even on !llc machines has been a source of consternation
for every reader. It appears to work on bsw+, but it may just have been
that we have been incredibly bad at detecting the errors.
v2: With extra enthusiasm.
v3: Drop force of map type for pinned default_state as by the time we
pin it, the map type is always WB and doesn't conflict with the earlier
use by ce->state.
v4: Transfer engine->default_state from MAP_WC to MAP_WB on creation so
we do not need the MAP_FORCE littered around the backends
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180914123504.2062-3-chris@chris-wilson.co.uk
If we try and fail to allocate an i915_request, we apply some
backpressure on the clients to throttle the memory allocations coming
from i915.ko. Currently, we wait until completely idle, but this is far
too heavy and leads to some situations where the only escape is to
declare a client hung and reset the GPU. The intent is to only ratelimit
the allocation requests and to allow ourselves to recycle requests and
memory from any long queues built up by a client hog.
Although system memory is inherently a global resource, we don't
want to overly penalize an unlucky client by making it pay the price of
reaping a hog. To reduce the influence of one client on another, instead
of waiting for the entire GPU to idle we can impose a barrier on the
local client.
(One end goal for request allocation is for scalability to many
concurrent allocators; simultaneous execbufs.)
To prevent ourselves from getting caught out by long running requests
(requests that may never finish without userspace intervention, whom we
are blocking) we need to impose a finite timeout, ideally shorter than
hangcheck. A long time ago Paul McKenney suggested that RCU users should
ratelimit themselves using judicious use of cond_synchronize_rcu(). This
gives us the opportunity to reduce our indefinite wait for the GPU to
idle to a wait for the RCU grace period of the previous allocation along
this timeline to expire, satisfying both the local and finite properties
we desire for our ratelimiting.
There are still a few global steps (reclaim not least amongst those!)
when we exhaust the immediate slab pool, but at least now the wait is
itself decoupled from struct_mutex for our glorious highly parallel
future!
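A minimal sketch of the ratelimiting pattern; the request/timeline
structure here is made up, only get_state_synchronize_rcu() and
cond_synchronize_rcu() are the real RCU API:

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct demo_request {
	u64 fence; /* payload omitted */
};

struct demo_timeline {
	unsigned long rcu_cookie; /* grace period of the previous allocation */
};

static struct demo_request *demo_request_alloc(struct demo_timeline *tl)
{
	struct demo_request *rq;

	/* Cheap attempt first, without entering direct reclaim. */
	rq = kmalloc(sizeof(*rq), GFP_NOWAIT | __GFP_NOWARN);
	if (!rq) {
		/*
		 * Ratelimit: wait only until the RCU grace period of the
		 * previous allocation on this timeline has expired, rather
		 * than for the whole GPU to idle.
		 */
		cond_synchronize_rcu(tl->rcu_cookie);
		rq = kmalloc(sizeof(*rq), GFP_KERNEL);
		if (!rq)
			return NULL;
	}

	tl->rcu_cookie = get_state_synchronize_rcu();
	return rq;
}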
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=106680
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180914080017.30308-1-chris@chris-wilson.co.uk
drm-misc-next for 4.20:
UAPI Changes:
- Add host endian variants for the most common formats (Gerd)
- Fail ADDFB2 for big-endian drivers that don't advertise BE quirk (Gerd)
- clear smem_start in fbdev for drm drivers to avoid leaking fb addr (Daniel)
Cross-subsystem Changes:
Core Changes:
- fix drm_mode_addfb() on big endian machines (Gerd)
- add timeline point to syncobj find+replace (Chunming)
- more drmP.h removal effort (Daniel)
- split uapi portions of drm_atomic.c into drm_atomic_uapi.c (Daniel)
Driver Changes:
- bochs: Convert open-coded portions to use helpers (Peter)
- vkms: Add cursor support (Haneen)
- udmabuf: Lots of fixups (mostly cosmetic afaict) (Gerd)
- qxl: Convert to use fbdev helper (Peter)
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Chunming Zhou <david1.zhou@amd.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Peter Wu <peter@lekensteyn.nl>
Cc: Haneen Mohammed <hamohammed.sa@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Sean Paul <sean@poorly.run>
Link: https://patchwork.freedesktop.org/patch/msgid/20180913130254.GA156437@art_vandelay
This patch adds support to decode system memory bandwidth and other
parameters for Skylake and GEN9+ platforms, which will be used for the
arbitrated display memory bandwidth calculation on GEN9 based
platforms and the WM latency level-0 workaround calculation on GEN9+.
Changes Since V1:
- s/memdev_info/dram_info
- create a struct to hold channel info
Changes Since V2:
- rewrite code to adhere i915 coding style
- not valid for GLK
Signed-off-by: Mahesh Kumar <mahesh1.kumar@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180824093225.12598-3-mahesh1.kumar@intel.com
If we have framebuffers that are >= 4GiB in size we will overflow
the fb size check in intel_fill_fb_info().
Currently that is only possible with NV12 and CCS as offsets[1]
may be anything between 0 and 0xffffffff. offsets[0] is currently
required to be 0 so we can't hit the overflow with any single
plane format (thanks to max fb size of 8kx8k and max stride of
32 KiB).
In the future we may allow almost any framebuffer to exceed 4GiB
in size so we really should fix the overflow. Not that the overflow
is particularly dangerous. It's mostly just a sanity check against
insane userspace. The display engine can't write to memory anyway
so I suppose in the worst case we might anger the hw by attempting
scanout past the end of the ggtt, or we might scan out some data
that we're not supposed to see from other parts of the ggtt.
Note that triggering this overflow depends on the driver
aligning the fb height to the next tile boundary to push the
calculated size above 4GiB. With linear buffers the effective
tile height is one so that never happens, and the core already
has a check for 32bit overflow of offsets[]+pitches[]*height.
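For illustration, the fix amounts to doing the accumulation in a wider
type before range checking it, i.e. something of this shape (variable
names made up, not the actual code):

	u64 end = (u64)offsets[i] + (u64)pitches[i] * aligned_height;

	if (end > U32_MAX)
		return -ERANGE;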
v2: Drop the unnecessary cast (Chris)
Testcase: igt/kms_big_fb/x-tiled-addfb-size-offset-overflow
Testcase: igt/kms_big_fb/y-tiled-addfb-size-offset-overflow
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180912180443.28649-1-ville.syrjala@linux.intel.com
We can remove the update-via-batch-buffer code path, which is effectively
a duplicate of the update-via-context-image path, once we notice that
after we have idled the GPU we can update the context image of even the
kernel context directly. (The update-via-batch-buffer path existed only to
solve the problem of how to update the kernel context image.)
The only additional thing needed is to activate the edited configuration by
sending one empty request down the pipe. This accomplishes a context restore
of the updated kernel context and so the OA configuration gets written out
to its control registers.
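A sketch of the "empty request" activation step, assuming the
i915_request API of this period (error handling trimmed):

	struct i915_request *rq;

	rq = i915_request_alloc(engine, dev_priv->kernel_context);
	if (IS_ERR(rq))
		return PTR_ERR(rq);

	/* Nothing to emit: the context restore done on submission writes
	 * the edited OA configuration into the control registers.
	 */
	i915_request_add(rq);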
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180912152930.28237-1-tvrtko.ursulin@linux.intel.com
Split up intel_check_primary_plane() and intel_check_sprite_plane()
into per-platform variants. This way we get unified behaviour
between the SKL universal planes, and we stop checking non-SKL
specific scaling limits for the "sprite" planes. And we now get
a natural place to add more platform-specific checks.
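Conceptually the result is per-platform .check_plane() hooks assigned at
plane init, roughly like this (function names illustrative):

	if (INTEL_GEN(dev_priv) >= 9)
		plane->check_plane = skl_plane_check;
	else
		plane->check_plane = i9xx_plane_check;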
v2: Split the .check_plane() calling convention change out (José)
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180907152413.15761-10-ville.syrjala@linux.intel.com
Let's store the final plane stride in the plane state. This avoids
having to pick between the normal vs. rotated stride during hardware
programming. And once we get GTT remapping the plane stride will
no longer match the fb stride so we'll need a place to store it
anyway.
v2: Keep checking fb->pitches[0] for cursor as later on we won't
populate plane_state->color_plane[0].stride for invisible planes
and we have been checking the cursor fb stride even for invisible
planes
v3: s/betwen/between in commit msg (José)
v4: Check color_plane[0].stride instead of fb->pitches[0] in
the skl_check_main_surface() X-tiling kludge
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180911150139.23922-1-ville.syrjala@linux.intel.com
If the caller supplies more than 4G of objects and then one that has to
be in the low 4G, it is possible for the low 4G to be full before we
attempt to find room for the last object that must be there. As we don't
reorder the two types, every pass hits the same problem and we fail with
ENOSPC. However, if we impose a little bit of ordering between the two
classes of objects, on the second pass we will be able to fit the
special object as we do it first. For setups that only use !48b objects,
we now reverse the order between passes, hopefully making the subsequent
passes more likely to succeed given that we are trying a different
order (rather than repeating the previous pass!)
v2: Quick one line explanation for the relative priorities given to
reservations.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180912101133.31377-1-chris@chris-wilson.co.uk
Userspace should be free to race against itself and shoot itself in
the foot if it so desires by adjusting a parameter at the same time as
submitting a batch to that context. As such, the struct_mutex in context
setparam is only being used to serialise userspace against itself and
not to protect any internal structs, and so is superfluous.
v2: Separate user_flags from internal flags to reduce chance of
interference; and use locked bit ops for user updates.
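A sketch of the user flag update with locked bit ops (the flag name is
illustrative):

	if (args->value)
		set_bit(UCONTEXT_NO_ZEROMAP, &ctx->user_flags);
	else
		clear_bit(UCONTEXT_NO_ZEROMAP, &ctx->user_flags);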
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180911132206.23032-1-chris@chris-wilson.co.uk