This patch introduces two functions for activating/de-activating a vGPU in
the mdev ops.
A race condition was found between virtual vblank emulation and the KVMGT
mdev release path. Vblank emulation emulates and injects a vblank
interrupt for every active vGPU while holding gvt->lock, whereas the mdev
release path releases the hypervisor handle directly without changing the
vGPU status or taking gvt->lock, so a kernel oops is encountered when
vblank emulation injects an interrupt with an invalid hypervisor
handle. (Reported by Terrence)
To solve this problem, we factor out vGPU activation/de-activation from the
vGPU creation/destruction path and let the KVMGT mdev release ops de-activate
the vGPU before releasing the hypervisor handle. Once a vGPU is de-activated,
GVT-g will not emulate vblank for it or touch the hypervisor handle.
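As an illustration of the intended ordering (the type, field, and helper
names below are hypothetical, not the exact GVT-g API), the release path
now does roughly the following:

    /* Mark the vGPU inactive under gvt->lock before dropping the
     * hypervisor handle, so vblank emulation never sees a live-looking
     * vGPU with a stale handle. */
    static void example_mdev_release(struct example_vgpu *vgpu)
    {
            mutex_lock(&vgpu->gvt->lock);
            vgpu->active = false;   /* vblank emulation only touches active vGPUs */
            mutex_unlock(&vgpu->gvt->lock);

            release_hypervisor_handle(vgpu);
    }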
Fixes: 659643f ("drm/i915/gvt/kvmgt: add vfio/mdev support to KVMGT")
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The timeslice usage determines whether a vGPU gets a chance to be
scheduled at every vGPU switch checkpoint.
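As a minimal sketch (the structure and field names are hypothetical), the
check at a switch checkpoint amounts to:

    /* A vGPU is only eligible at a switch checkpoint while it still has
     * timeslice left in the current accounting period. */
    static bool vgpu_has_timeslice(const struct example_vgpu_sched *v)
    {
            return v->left_ts > 0;
    }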
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
vGPU resources are allocated by the scheduler. To account for non-allocated
free cycles, we create an idle vGPU as a placeholder, similar to the idle
task concept, which is useful for handling some corner cases in the
scheduling policy.
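A minimal sketch of the idea, with hypothetical structure and helper names:
when no real vGPU is runnable, the idle placeholder absorbs the remaining
cycles.

    /* Fall back to the idle vGPU placeholder when nothing else is runnable,
     * so free cycles are still accounted against some vGPU. */
    static struct example_vgpu *pick_next_vgpu(struct example_sched *s)
    {
            struct example_vgpu *v = find_runnable_vgpu(s);

            return v ? v : s->idle_vgpu;
    }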
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This method tries to guarantee precision at the second level, with the
adjustment conducted every 100ms. At the end of each vGPU switch,
calculate the sched time and subtract it from the allocated time slice;
the time slice allocated for every 100ms, together with the remaining
timeslice, is then used to decide how much timeslice to allocate to this
vGPU in the next 100ms slice, with the end goal of guaranteeing the
weight ratio at the second level.
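A rough sketch of the accounting, assuming hypothetical structure and field
names (left_ts in ns; weight is the per-vGPU share described in a later
patch of this series):

    #define EXAMPLE_PERIOD_NS       (100 * NSEC_PER_MSEC)   /* 100ms adjustment period */

    /* Charged at every vGPU switch: subtract the time actually used. */
    static void charge_timeslice(struct example_vgpu_sched *v, s64 sched_time_ns)
    {
            v->left_ts -= sched_time_ns;
    }

    /* Run every 100ms: refill by the weighted share of the period, carrying
     * over what was left (or overdrawn), so the ratio holds per second. */
    static void refill_timeslice(struct example_vgpu_sched *v, s64 total_weight)
    {
            v->left_ts += EXAMPLE_PERIOD_NS * v->weight / total_weight;
    }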
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The weight defines proportional control of the physical GPU resource
shared between vGPUs. So far the weight is tied to a specific vGPU
type, i.e. when creating multiple vGPUs with different types, they
will inherit different weights.
e.g. The weight of type GVTg_V5_2 is 8 and the weight of type GVTg_V5_4
is 4, so a vGPU of type GVTg_V5_2 gets twice the vGPU resources of a
vGPU of type GVTg_V5_4.
TODO: allow the user to control the weight setting in the future.
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Factor out the scheduler into a clearer structure; the basic
logic is to find the next vGPU first and then schedule it.
vGPUs are ordered in an LRU list; the scheduler scans from the LRU
list head and chooses the first vGPU that has a pending workload.
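A minimal sketch of the pick, using the kernel list API but hypothetical
vGPU structure and helper names:

    /* Scan from the LRU head and take the first vGPU with pending work;
     * the chosen vGPU is moved to the list tail once it has run. */
    static struct example_vgpu *find_busy_vgpu(struct list_head *lru_head)
    {
            struct example_vgpu *v;

            list_for_each_entry(v, lru_head, lru_list)
                    if (vgpu_has_pending_workload(v))
                            return v;

            return NULL;    /* nothing runnable */
    }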
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Add some statistics routines to collect the time when a vGPU is
scheduled in/out and the time of the last ctx submission.
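For illustration only (the stat structure and field names are hypothetical;
ktime_get() is the real kernel API), the bookkeeping at a switch looks
roughly like:

    static void record_switch(struct example_vgpu_stat *prev,
                              struct example_vgpu_stat *next)
    {
            ktime_t now = ktime_get();

            if (prev) {
                    prev->sched_out_time = now;
                    prev->sched_time += ktime_to_ns(ktime_sub(now, prev->sched_in_time));
            }
            next->sched_in_time = now;
    }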
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Currently the scheduler is triggered by delayed_work, which doesn't
provide precision at the microsecond level. Move to an hrtimer instead
for more accurate control.
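A minimal self-arming hrtimer sketch using the standard kernel API (the
period value and callback body are illustrative):

    #define EXAMPLE_PERIOD_NS       (1 * NSEC_PER_MSEC)

    static enum hrtimer_restart sched_tick(struct hrtimer *timer)
    {
            /* kick the scheduler here, e.g. wake the schedule thread */
            hrtimer_forward_now(timer, ns_to_ktime(EXAMPLE_PERIOD_NS));
            return HRTIMER_RESTART;
    }

    static void sched_timer_start(struct hrtimer *timer)
    {
            hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
            timer->function = sched_tick;
            hrtimer_start(timer, ns_to_ktime(EXAMPLE_PERIOD_NS), HRTIMER_MODE_REL);
    }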
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The variable info is never NULL because the caller already checks it. This
patch removes the redundant NULL check.
Fixes: 695fbc08d8 ("drm/i915/gvt: replace the gvt_err with gvt_vgpu_err")
Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Since commit d1a513be1f ("drm/i915/gvt: add resolution definition for vGPU
type"), the small type has been restricted to small resolutions, so it no
longer requires a larger high GM size. Change it to a smaller 384MB to allow
creating more VMs with vGPU enabled that can still run reasonable workloads.
Fixes: d1a513be1f ("drm/i915/gvt: add resolution definition for vGPU type")
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
On gfx9 hardware the wptr value is not wrapped and is a 64-bit value, so
we reduce it modulo the ring size.
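Illustrative only (the real ring structure differs): with buf_mask set to
the ring size in dwords minus one, the reduction is a simple mask.

    /* Reduce the free-running 64-bit gfx9 wptr to an offset inside the ring. */
    static u64 ring_wptr_offset(u64 wptr, u64 buf_mask)
    {
            return wptr & buf_mask;
    }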
Signed-off-by: Tom St Denis <tom.stdenis@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
(v2) use buf_mask instead of computing on the fly
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Fix the start/end address calculation for address ranges that span
multiple page directories in amdgpu_vm_alloc_levels.
Add error messages if page tables aren't found. Otherwise the page
table update would just fail silently.
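Conceptually (shift/entries are per-level values and these names are
hypothetical, not the exact amdgpu code), the per-directory indices for one
level of the walk are:

    /* Offsets of the range start/end within the current page directory;
     * a range that spans several directories must be handled per
     * directory rather than with the raw global indices. */
    static void level_range(u64 saddr, u64 eaddr, unsigned shift,
                            unsigned entries, unsigned *from, unsigned *to)
    {
            *from = (saddr >> shift) % entries;
            *to   = (eaddr >> shift) % entries;
    }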
v2:
* Change WARN_ON to WARN_ON_ONCE
* Move masking of high address bits to caller
* Add range-check for "from" and "to"
v3:
* Replace WARN_ON_ONCE in get_pt with pr_err in caller
Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
adev->family is not initialized yet when amdgpu_get_block_size is
called. Use adev->asic_type instead.
Minimum VM size is 512GB, not 256GB, for a single page table entry
in the root page table.
gmc_v9_0_vm_init is called after adev->vm_manager.max_pfn is
initialized. Move the minimum VM-size enforcement ahead of the max_pfn
initialization. Cast to 64-bit before the left-shift.
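As an illustration of the overflow point (a standalone sketch, not the exact
driver code): the VM size is given in GB and max_pfn counts 4KB pages, so
the conversion is a left shift by 18; without the cast the shift happens in
32-bit int and overflows for large VM sizes.

    #include <stdint.h>

    uint64_t vm_size_to_max_pfn(int vm_size_gb)
    {
            /* GB -> bytes is << 30, bytes -> 4KB pages is >> 12, net << 18 */
            return (uint64_t)vm_size_gb << 18;
    }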
Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
1. Security firmware loading has moved to sw init, so this code
is useless.
2. It seems the driver could not call request_firmware on kernel 2.6
during S3 resume, because request_firmware depends on userspace,
and userspace is frozen at that time.
Signed-off-by: Jim Qu <Jim.Qu@amd.com>
Acked-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The RB harvest registers are not necessary; the driver already
exposes this info via the info ioctl. GB_BACKEND_MAP has
been deprecated since SI and is not relevant to the RB mapping.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Required for SR-IOV and saves MMIO transactions.
v2: drop cached RB harvest registers
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
We check the mem config register to make sure it's been
programmed by the vbios to determine whether we need to post,
so we check for a non-zero value. However, when the asic
comes out of reset, we may see all ones here, so check
for that too.
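A minimal sketch of the resulting check (register value passed in; the
helper name is hypothetical):

    /* The card needs posting if the memory config register was never
     * programmed by the vbios (0) or reads back as all ones after reset. */
    static bool example_card_needs_post(uint32_t mem_cfg)
    {
            return mem_cfg == 0 || mem_cfg == 0xffffffff;
    }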
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
1) Adapt to Vulkan:
Now use a double SWITCH_BUFFER to replace the 128-NOP workaround,
because since Vulkan was introduced the UMD can insert 7 ~ 16 IBs
per submit, so a 256 DW size cannot hold the whole DMA frame
(if we still insert those 128 NOPs). The CP team suggests using
double SWITCH_BUFFERs instead of the tricky 128-NOP workaround.
2) Fix the CE VM fault issue introduced with MCBP:
We need one more COND_EXEC wrapping the IB part (the original one is
for the VM switch part); see the sketch after this list.
This change fixes the VM fault caused by the following scenario
without this change:
> CE passes the original COND_EXEC (no MCBP issued at this moment)
and proceeds as normal.
> DE catches up to this COND_EXEC, but this time MCBP is issued,
so DE treats all following packets as NOPs. The following
VM switch packets now look like NOPs to DE, so DE
doesn't do the VM flush at all.
> Now CE proceeds to the first IBc and triggers a VM fault,
because DE didn't do the VM flush for this DMA frame.
3) Change the estimated alloc size for gfx9.
With the new DMA frame scheme we need to modify emit_frame_size
for gfx9.
4) No need to insert 128 NOPs after the gfx8 VM flush anymore,
because a double SWITCH_BUFFER is appended to the VM flush;
gfx7 already uses a double SWITCH_BUFFER after vm_flush,
so no change is needed there.
5) Change emit_frame_size for gfx8.
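Rough per-frame emit order under the new scheme, for illustration only
(the helper names are hypothetical, not the real per-ASIC ring callbacks):

    /*
     *   offs1 = emit_cond_exec(ring);    // preamble CP may skip under MCBP
     *   emit_vm_flush(ring, vmid);       // VM switch part
     *   emit_switch_buffer(ring);        // double SWITCH_BUFFER replaces
     *   emit_switch_buffer(ring);        //   the old 128-NOP workaround
     *   offs2 = emit_cond_exec(ring);    // new: also wrap the IB part
     *   emit_ibs(ring, job);
     *   emit_fence(ring, ...);
     *   patch_cond_exec(ring, offs1);
     *   patch_cond_exec(ring, offs2);
     */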
v2: squash in BUG removal from Monk
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
We must set minor_update.enable before writing a smaller value
to wptr/doorbell, so for SR-IOV we need to set that register
bit during the hw_init period.
This fixes the SDMA ring test failure after guest reboot.
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
ring->buf_mask needs to be set prior to invoking ring_clear_ring.
Also fix ring_clear_ring to use buf_mask
instead of ptr_mask.
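A minimal sketch of the fixed helper, assuming a simplified ring structure
(names other than buf_mask and funcs->nop are hypothetical):

    /* Fill only the allocated buffer (buf_mask), not the whole wrap range
     * described by ptr_mask; buf_mask must already be set at this point. */
    static void example_ring_clear_ring(struct example_ring *ring)
    {
            unsigned int i;

            for (i = 0; i <= ring->buf_mask; i++)
                    ring->ring[i] = ring->funcs->nop;
    }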
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Set bit 21 of the IB control field to actually enable
MCBP for SR-IOV.
v2:
Add a flag for the preemption enable bit on soc15 and use
this flag instead of hardcoding the value.
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
When MCBP is enabled for gfx8, cond_exec must also
be implemented; otherwise there is a chance of hitting a
cross-engine (CE and ME) deadlock when a world switch
happens.
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Also, add the fence of the clear operations to the BO to ensure that
the underlying memory can only be re-used after all PTEs pointing to
it have been cleared.
This avoids the following sequence of events that could be triggered
by user space:
1. Submit a CS that accesses some BO _without_ adding that BO to the
buffer list.
2. Free that BO.
3. Some other task re-uses the memory underlying the BO.
4. The CS is submitted to the hardware and accesses memory that is
now already in use by somebody else.
By clearing the page tables immediately in step 2, a GPU VM fault will
be triggered in step 4 instead of wild memory accesses.
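For illustration (only amdgpu_bo_fence is named by this commit; the clear
helper and variables are hypothetical), the flow when freeing a mapping is
roughly:

    /* Clear the PTEs of the freed mapping, then attach the clear fence to
     * the BO so its memory cannot be reused before the clear completes. */
    r = example_clear_freed_ptes(adev, vm, &fence);
    if (!r && fence)
            amdgpu_bo_fence(bo, fence, true);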
v2: use amdgpu_bo_fence directly
Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>