Commit cd7e61b9 ("init mmio by lri command in vgpu inhibit context")
initializes the registers saved/restored in the context with their vreg
values through LRI commands in the ring buffer. This relies on vreg being
updated on every guest access. A case was found where a Linux guest uses
an LRI command in an inhibit context to update the register. This patch
adds the vreg update for that case.
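A sketch of the idea only, with assumed helper names (is_inhibit_context(),
mmio_is_in_ctx() and the masked-write helper from the v3 note stand in for
whatever the real command parser and mmio code provide):

	u32 offset = lri_reg_offset;	/* register offset decoded from the LRI dword */
	u32 value  = lri_reg_value;	/* payload dword that follows it */

	if (is_inhibit_context(workload) && mmio_is_in_ctx(vgpu->gvt, offset))
		intel_vgpu_mask_mmio_write(vgpu, offset, &value, sizeof(value));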
v2: move mmio_attribute functions to gvt.h (Zhenyu)
v3: use mask_mmio_write in vreg update
v4: refine codes and add more comments (Zhenyu)
Fixes: cd7e61b9 ("drm/i915/gvt: init mmio by lri command in vgpu inhibit context")
Signed-off-by: Hang Yuan <hang.yuan@linux.intel.com>
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The DisplayLink hardware has a peculiarity: it doesn't render a command
until the next command is received. This produces occasional corruption;
for example, when setting a 22x11 font on the console, only the first
line of the cursor will be blinking if the cursor is located at certain
columns.
When we end up with a repeating pixel, the driver has a bug where it
leaves one uninitialized byte after the command (and this byte is enough
to flush the command and render it, which incidentally hides the screen
corruption). However, when we end up with a non-repeating pixel, no byte
is appended and this results in temporary screen corruption.
This patch fixes the screen corruption by always appending a 0xAF byte at
the end of the URB. It also removes the uninitialized byte.
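A minimal sketch of the idea, assuming the encoded commands sit in
urb->transfer_buffer and that 0xAF is the byte that begins a new command,
so its presence alone makes the device render the previous one ("len",
the number of bytes already written, is an assumed local variable):

	char *end = (char *)urb->transfer_buffer + len;

	if (end < (char *)urb->transfer_buffer + urb->transfer_buffer_length) {
		*end++ = 0xAF;	/* start-of-command marker; nothing follows it */
		len++;
	}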
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Dave Airlie <airlied@redhat.com>
Currently, the wc-stash used for providing flushed WC pages ready for
constructing the page directories is assumed to be protected by the
struct_mutex. However, we want to remove this global lock and so must
install a replacement global lock for accessing the global wc-stash (the
per-vm stash continues to be guarded by the vm).
We need to push ahead on this patch due to an oversight in hastily
removing the struct_mutex guard around the igt_ppgtt_alloc selftest. No
matter, it will prove very useful (i.e. will be required) in the near
future.
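A minimal sketch of such a stash guarded by its own lock (names and the
direct pagevec field access are assumptions, not the actual i915 code;
wc_stash is expected to be pagevec_init()ed at setup):

	static struct pagevec wc_stash;
	static DEFINE_SPINLOCK(wc_stash_lock);

	static struct page *wc_stash_pop(void)
	{
		struct page *page = NULL;

		spin_lock(&wc_stash_lock);
		if (pagevec_count(&wc_stash))
			page = wc_stash.pages[--wc_stash.nr];
		spin_unlock(&wc_stash_lock);

		return page;
	}

	static bool wc_stash_push(struct page *page)
	{
		bool added;

		spin_lock(&wc_stash_lock);
		added = pagevec_space(&wc_stash) != 0;
		if (added)
			pagevec_add(&wc_stash, page);
		spin_unlock(&wc_stash_lock);

		return added;
	}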
v2: Restore the onstack stash so that we can drop the vm->mutex in
future across the allocation.
v3: Restore the lost pagevec_init of the onstack allocation, and repaint
function names.
v4: Reorder init so that we don't try to use i915_address_space before
it is initialised.
Fixes: 1f6f00238a ("drm/i915/selftests: Drop struct_mutex around lowlevel pggtt allocation")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180704185518.4193-1-chris@chris-wilson.co.uk
Adjust the EIR clearing to cope with the edge-triggered IIR
on i965/g4x. To guarantee an edge in the ISR master error bit
we temporarily mask everything in EMR. As some of the EIR bits
can't even be directly cleared, we also borrow a trick from
i915_clear_error_registers() and permanently mask any bit that
remains high. No real thought has been given to how we might unmask
them again once the cause of the error has been cleared. I suppose
on pre-g4x a GPU reset will reinitialize EMR from scratch.
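A sketch of that sequence (not necessarily the exact i915 code; dev_priv
is assumed to be in scope for the I915_READ/I915_WRITE macros):

	u32 eir, eir_stuck, emr;

	eir = I915_READ(EIR);
	I915_WRITE(EIR, eir);			/* clear whatever can be cleared */

	eir_stuck = I915_READ(EIR);		/* bits that refuse to go away */
	if (eir_stuck) {
		/*
		 * Toggle EMR through all-ones so the ISR master error bit
		 * sees an edge, then leave the stuck bits masked for good.
		 */
		emr = I915_READ(EMR);
		I915_WRITE(EMR, 0xffffffff);
		I915_WRITE(EMR, emr | eir_stuck);
	}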
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180611200258.27121-3-ville.syrjala@linux.intel.com
Reviewed-by: Imre Deak <imre.deak@intel.com>
Just like with PIPESTAT, the edge-triggered IIR on i965/g4x
also causes problems for hotplug interrupts. To make sure
we don't get the IIR port interrupt bit stuck low with the
ISR bit high, we must force an edge in ISR. Unfortunately
we can't borrow the PIPESTAT trick and toggle the enable
bits in PORT_HOTPLUG_EN, as that act itself generates hotplug
interrupts. Instead we just have to loop until we've cleared
PORT_HOTPLUG_STAT, or we give up and WARN.
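A sketch of that loop (the retry limit is an arbitrary choice here, and
dev_priv is assumed to be in scope):

	u32 hotplug_status = 0;
	int i;

	for (i = 0; i < 10; i++) {
		u32 tmp = I915_READ(PORT_HOTPLUG_STAT);

		if (tmp == 0)
			break;

		hotplug_status |= tmp;	/* what the caller goes on to process */
		I915_WRITE(PORT_HOTPLUG_STAT, tmp);	/* write-to-clear */
	}

	WARN_ONCE(i == 10, "PORT_HOTPLUG_STAT did not clear\n");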
v2: Don't frob with PORT_HOTPLUG_EN
Cc: stable@vger.kernel.org
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180614175625.1615-1-ville.syrjala@linux.intel.com
Reviewed-by: Imre Deak <imre.deak@intel.com>
The current implementation does not guarantee that packed pixel modes
work with every dongle. Some dongles require selecting the output mode
explicitly.
Write the proper values to the registers in packed pixel mode, based on
how it is done in the vendor's code. Select the output color space: RGB
(no packed pixel) or YCBCR422 (packed pixel).
This reverts commit e8b92efa62
("drm/bridge/sii8620: fix display of packed pixel modes in MHL2").
Signed-off-by: Maciej Purski <m.purski@samsung.com>
Signed-off-by: Andrzej Hajda <a.hajda@samsung.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1530204243-6370-3-git-send-email-m.purski@samsung.com
Currently the AVI infoframe is sent only in MHL3 mode. However, some MHL2
dongles need an AVI infoframe to work correctly in either packed pixel
mode or non-packed pixel mode.
Send the AVI infoframe in set_infoframes() in every case. Create the
infoframe using drm_hdmi_infoframe_from_display_mode() instead of
manually filling each field of the infoframe structure.
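A sketch of that construction (the drm_hdmi_infoframe_from_display_mode()
signature varies between kernel versions; the trailing bool is the
HDMI2-sink flag used around this time, and "mode" is assumed to be the
adjusted display mode in scope):

	union hdmi_infoframe frm;
	u8 buf[HDMI_INFOFRAME_SIZE(AVI)];

	if (!drm_hdmi_infoframe_from_display_mode(&frm.avi, mode, false) &&
	    hdmi_infoframe_pack(&frm, buf, sizeof(buf)) > 0) {
		/* write buf[] (header + payload) to the bridge's infoframe registers */
	}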
Signed-off-by: Maciej Purski <m.purski@samsung.com>
Signed-off-by: Andrzej Hajda <a.hajda@samsung.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1530204243-6370-2-git-send-email-m.purski@samsung.com
This change updates the device headers to the latest device version.
Where renaming affects the existing code, it's updated accordingly.
Signed-off-by: Deepak Rawat <drawat@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
The buffer object backing the user fence is reserved using the non-user
fence, i.e., as soon as the non-user fence is signaled, the user fence
buffer object can be moved or even destroyed.
Therefore, emit the user fence first.
Both fences have the same cache invalidation behavior, so this should
have no user-visible effect.
Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Two requests have come in for a backmerge,
and I've got some pull reqs on rc2, so this
just makes sense.
Signed-off-by: Dave Airlie <airlied@redhat.com>
We've long ago given up on enforcing the device format is always little
endian, so remove a leftover conversion.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Deepak Rawat <drawat@vmware.com>
Make the host message module function declarations similar to the other
declarations in vmwgfx_drv.h and include the header in vmwgfx_msg.c
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Deepak Rawat <drawat@vmware.com>
Reorganize the fence wait loop somewhat to make it look more like the
examples in set_current_state() kerneldoc, and add some code comments.
Also if we're about to time out, make sure we check again whether the fence
is actually signaled.
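For reference, the loop now roughly follows the canonical pattern from the
set_current_state() kerneldoc (fence_signaled() is a placeholder predicate,
and "timeout"/"interruptible" are assumed parameters):

	long ret = timeout;

	for (;;) {
		set_current_state(interruptible ? TASK_INTERRUPTIBLE
						: TASK_UNINTERRUPTIBLE);

		if (fence_signaled(fence))	/* check after setting the state */
			break;

		if (interruptible && signal_pending(current)) {
			ret = -ERESTARTSYS;
			break;
		}

		if (ret == 0) {
			/* about to time out: look at the fence one last time */
			if (fence_signaled(fence))
				ret = 1;
			break;
		}

		ret = schedule_timeout(ret);
	}
	__set_current_state(TASK_RUNNING);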
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Deepak Rawat <drawat@vmware.com>
Make sure the error messages are a bit more descriptive, so that
a log reader may understand what's gone wrong.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Since the gui_x/y positioning in the display unit is protected by
requested_layout_mutex, add a copy of it to vmw_connector_state. The
modeset commit will then refer to the state copy so that it stays in
sync with the modeset_check state.
v2: Tested with CONFIG_PROVE_LOCKING enabled.
Signed-off-by: Deepak Rawat <drawat@vmware.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
To avoid a race condition between the update_layout ioctl and the
modeset ioctl when accessing the gui_x/y positioning, add a new mutex,
requested_layout_mutex.
Also use drm_for_each_connector_iter to iterate over the connector list.
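For reference, the iteration pattern referred to above (the
vmw_connector_to_du() conversion and "dev" are assumed here):

	struct drm_connector_list_iter conn_iter;
	struct drm_connector *con;

	drm_connector_list_iter_begin(dev, &conn_iter);
	drm_for_each_connector_iter(con, &conn_iter) {
		struct vmw_display_unit *du = vmw_connector_to_du(con);

		/* update du->gui_x / du->gui_y under requested_layout_mutex */
	}
	drm_connector_list_iter_end(&conn_iter);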
Signed-off-by: Deepak Rawat <drawat@vmware.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
This validation is not required because user space will send the
create_fb request once the memory is allocated. This check should be
performed during mode-setting.
Signed-off-by: Deepak Rawat <drawat@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
For cases when a full modeset is not requested, like a page flip, skip
memory validation as the topology has not changed.
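A sketch of the check using the generic drm helper (not vmwgfx-specific
code; "state" is assumed to be the drm_atomic_state being checked):

	struct drm_crtc *crtc;
	struct drm_crtc_state *new_crtc_state;
	int i;

	for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) {
		if (!drm_atomic_crtc_needs_modeset(new_crtc_state))
			continue;	/* plain page flip: topology untouched */

		/* ... accumulate this crtc's primary (display) memory usage ... */
	}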
Signed-off-by: Deepak Rawat <drawat@vmware.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Call the same display memory validation function that is used by
modeset_check. This ensures consistency: the kernel changes the
preferred mode/topology only if it is supported.
Also change the internal function to use drm_rect instead of
drm_vmw_rect.
Signed-off-by: Deepak Rawat <drawat@vmware.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
This patch adds display (primary) memory validation during the modeset
check. Display memory validation is applicable to both SOU and STDU,
so allow both display units to undergo this check.
Also add a check for the SVGA_CAP_NO_BB_RESTRICTION capability, which
lifts the bounding box restriction for STDU.
Signed-off-by: Deepak Rawat <drawat@vmware.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
vmw_kms_atomic_check_modeset() is currently checking the config using
the legacy state, which is updated after a commit has happened.
This means vmw_kms_atomic_check_modeset() will reject an invalid config
on the next update rather than the current one.
Fix this by using the new states for config checking.
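A sketch of the change in spirit ("state" is the drm_atomic_state under
check): consult the new per-crtc states rather than crtc->state, which
still reflects the previous commit.

	struct drm_crtc *crtc;
	struct drm_crtc_state *new_crtc_state;
	int i, enabled = 0;

	for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) {
		/* crtc->state here would still describe the old configuration */
		if (new_crtc_state->enable)
			enabled++;
	}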
Signed-off-by: Sinclair Yeh <syeh@vmware.com>
Reviewed-by: Deepak Rawat <drawat@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Previously when evicting resources we were unconditionally calling
ttm_eu_reserve_buffers with a NULL ww acquire context. That meant all
buffer object reserves were done using trylock semantics.
That makes sense when evicting during resource validation, because then
there already are a number of buffers reserved and using waiting locks
would cause lockdep errors.
That's not the case when unconditionally evicting all resources as part
of driver takedown or hibernation, so in that code path, make sure
we have a ww acquire context to get waiting lock buffer object reserve
semantics.
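A sketch of the takedown path with a real acquire ctx ("val_list" is an
assumed, already-populated validation list, and the four-argument
ttm_eu_reserve_buffers() signature from this era is assumed):

	struct ww_acquire_ctx ticket;
	int ret;

	ret = ttm_eu_reserve_buffers(&ticket, &val_list, true /* interruptible */, NULL);
	if (ret)
		return ret;

	/* ... evict / unbind the resources backed by the reserved buffers ... */

	ttm_eu_backoff_reservation(&ticket, &val_list);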
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Only try to unmap cached maps when the buffer is moved into or out from
vram. Otherwise the underlying pages stay the same.
Also when unbinding resources from MOBs about to move, make sure we're
really moving out of MOB memory.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Reviewed-by: Deepak Rawat <drawat@vmware.com>
It makes more sense to have all the buffer object related code in
a single file rather than splitting it up between the resource code
and buffer object pinning utilities.
Place all buffer object related code in vmwgfx_bo.c. Fix up headers
and export resource functionality when needed in the buffer object
code.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Reviewed-by: Deepak Rawat <drawat@vmware.com>
Initially vmware buffer objects were only used as DMA buffers, so the name
DMA buffer was a natural one. However, currently they are used also as
dumb buffers and MOBs backing guest backed objects so renaming them to
buffer objects is logical. Particularly since there is a dmabuf subsystem
in the kernel where a dma buffer means something completely different.
This also renames user-space API structures and IOCTL names
correspondingly, but the old names remain defined for now and the ABI
hasn't changed.
There are a couple of minor style changes to make checkpatch happy.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Reviewed-by: Deepak Rawat <drawat@vmware.com>
The current Wound-Wait mutex algorithm is actually not Wound-Wait but
Wait-Die. Also implement Wound-Wait as a per-ww-class choice. Wound-Wait
is, contrary to Wait-Die, a preemptive algorithm and is known to generate
fewer backoffs. Testing reveals that this is true if the
number of simultaneous contending transactions is small.
As the number of simultaneous contending threads increases, Wound-Wait
becomes inferior to Wait-Die in terms of elapsed time, possibly due to
the larger number of locks held by sleeping transactions.
Update documentation and callers.
Timings using git://people.freedesktop.org/~thomash/ww_mutex_test
tag patch-18-06-15
Each thread runs 100000 batches of lock / unlock of 800 ww mutexes
randomly chosen out of 100000. Four-core Intel x86_64:
Algorithm    #threads   Rollbacks   time
Wound-Wait   4          ~100        ~17s
Wait-Die     4          ~150000     ~19s
Wound-Wait   16         ~360000     ~109s
Wait-Die     16         ~450000     ~82s
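For reference, each rollback above corresponds to one -EDEADLK-driven
backoff in the standard ww_mutex acquire pattern; a minimal two-lock
sketch with illustrative names, following Documentation/locking/ww-mutex-design:

	static DEFINE_WW_CLASS(demo_ww_class);

	/* Take two ww mutexes in either order; every -EDEADLK is one rollback. */
	static void lock_pair(struct ww_mutex *a, struct ww_mutex *b)
	{
		struct ww_acquire_ctx ctx;
		struct ww_mutex *contended = NULL;
		int err;

		ww_acquire_init(&ctx, &demo_ww_class);
	retry:
		if (contended) {
			/* We hold nothing, so we may block on the contended lock. */
			ww_mutex_lock_slow(contended, &ctx);
			if (contended == b)
				swap(a, b);	/* keep "a" as the lock we now hold */
			contended = NULL;
		} else {
			err = ww_mutex_lock(a, &ctx);
			if (err == -EDEADLK) {
				contended = a;
				goto retry;
			}
		}

		err = ww_mutex_lock(b, &ctx);
		if (err == -EDEADLK) {
			ww_mutex_unlock(a);
			contended = b;
			goto retry;
		}
		ww_acquire_done(&ctx);

		/* ... touch the data protected by a and b ... */

		ww_mutex_unlock(a);
		ww_mutex_unlock(b);
		ww_acquire_fini(&ctx);
	}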
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Gustavo Padovan <gustavo@padovan.org>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Sean Paul <seanpaul@chromium.org>
Cc: David Airlie <airlied@linux.ie>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Kate Stewart <kstewart@linuxfoundation.org>
Cc: Philippe Ombredanne <pombredanne@nexb.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: linux-doc@vger.kernel.org
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
Co-authored-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Ingo Molnar <mingo@kernel.org>
In commits:
34a2ab5e06 ("drm: Add acquire ctx parameter to ->update_plane")
1931529448 ("drm: Add acquire ctx parameter to ->plane_disable")
a pointer to a drm_modeset_acquire_ctx structure was added as an
argument to the method prototypes. The transitional helpers are
supposed to be directly plugged in as implementations of these
methods, but doing so generates a warning. Add the missing
argument.
A number of buggy users of drm_plane_helper_disable() were added which
need to be fixed up for this change; we do so by passing a NULL ctx
argument.
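For example, a fixed-up caller in a driver teardown path simply passes a
NULL context (a sketch, assuming the post-patch two-argument prototype):

	int ret = drm_plane_helper_disable(plane, NULL);	/* no acquire ctx here */

	if (ret)
		DRM_DEBUG_KMS("failed to disable plane\n");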
Fixes: 1931529448 ("drm: Add acquire ctx parameter to ->plane_disable")
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/E1fa1Zr-0005gT-VF@rmk-PC.armlinux.org.uk
new_active_crtcs is a bitmask; new_active_crtc_count is the actual
count.
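A small illustration of the distinction (values made up):

	u32 new_active_crtcs = 0x5;	/* bitmask: crtc 0 and crtc 2 active */
	u32 new_active_crtc_count = hweight32(new_active_crtcs);	/* count: 2 */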
Reviewed-by: Rex Zhu <Rex.Zhu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>