Merge tag 'drm-intel-next-2018-09-06-2' of git://anongit.freedesktop.org/drm/drm-intel into drm-next

Merge tag 'gvt-next-2018-09-04'
drm-intel-next-2018-09-06-1:
UAPI Changes:
- GGTT coherency GETPARAM: GGTT has turned out to be non-coherent on some
  platforms, which we had failed to communicate to userspace so far. SNA was
  modified to do extra flushing on non-coherent GGTT access, while Mesa will
  mitigate by always requiring WC mapping (which is non-coherent anyway).
  A userspace usage sketch follows this list.
- Neuter Resource Streamer uAPI: The feature never really had users, so
  neuter it while keeping the interface bits for compatibility. This is a
  long-overdue cleanup.

Cross-subsystem Changes:
- Backmerge of branch drm-next-4.19 for DP_DPCD_REV_14 changes

Core Changes:
- None

Driver Changes:

- A load of Icelake (ICL) enabling patches (Paulo, Manasi)
- Enabled full PPGTT for IVB, VLV and HSW (Chris)
- Bugzilla #107113: Distribute DDB based on display resolutions (Mahesh)
- Bugzillas #100023, #107476, #94921: Support limited range DP displays (Jani)
- Bugzilla #107503: Increase LSPCON timeout (Fredrik)
- Avoid boosting GPU due to an occasional stall in interactive workloads (Chris)
- Apply GGTT coherency W/A only for affected systems instead of all (Chris)
- Fix for infinite link training loop for faulty USB-C MST hubs (Nathan)
- Keep KMS functional on Gen4 and earlier when GPU is wedged (Chris)
- Stop holding ppGTT reference from closed VMAs (Chris)
- Clear error registers after error capture (Lionel)
- Various Icelake fixes (Anusha, Jyoti, Ville, Tvrtko)
- Add missing Coffeelake (CFL) PCI IDs (Rodrigo)
- Flush execlists tasklet directly from reset-finish (Chris)
- Fix LPE audio runtime PM (Chris)
- Fix detection of out of range surface positions (GLK/CNL) (Ville)
- Remove wait-for-idle for PSR2 (Dhinakaran)
- Power down existing display hardware resources when display is disabled (Chris)
- Don't allow runtime power management if RC6 doesn't exist (Chris)
- Add debugging checks for runtime power management paths (Imre)
- Increase symmetry in display power init/fini paths (Imre)
- Isolate GVT specific macros from i915_reg.h (Lucas)
- Increase symmetry in power management enable/disable paths (Chris)
- Increase IP disable timeout to 100 ms to avoid DRM_ERROR (Imre)
- Fix memory leak from HDMI HDCP write function (Brian, Rodrigo)
- Reject Y/Yf tiling on interlaced modes (Ville)
- Use a cached mapping for the physical HWS on older gens (Chris)
- Force slow path of writing relocations to buffer if unable to write to userspace (Chris)
- Do a full device reset after being wedged (Chris)
- Keep forcewake counts over reset (in case of debugfs user) (Imre, Chris)
- Avoid false-positive errors from power wells during init (Imre)
- Forcibly reset engines instead of declaring the whole device wedged (Mika)
- Reduce context HW ID lifetime in preparation for Icelake (Chris)
- Attempt to recover from module load failures (Chris)
- Keep select interrupts over a reset to avoid missing/losing them (Chris)
- GuC submission backend improvements (Jakub)
- Terminate context images with BB_END (Chris, Lionel)
- Make GCC evaluate GGTT view struct size assertions again (Ville)
- Add selftest to exercise suspend/hibernate code-paths for GEM (Chris)
- Use a full emulation of a user ppgtt context in selftests (Chris)
- Exercise resetting in the middle of a wait-on-fence in selftests (Chris)
- Fix coherency issues on selftests for Baytrail (Chris)
- Various other GEM fixes / self-test updates (Chris, Matt)
- GuC doorbell self-tests (Daniele)
- PSR mode control through debugfs for IGTs (Maarten)
- Degrade expected WM latency errors to DRM_DEBUG_KMS (Chris)
- Cope with errors better in MST link training (Dhinakaran)
- Fix WARN on KBL external displays (Azhar)
- Power well code cleanups (Imre)
- Fixes to PSR debugging (Dhinakaran)
- Make forcewake errors louder for easier catching in CI (WARNs) (Chris)
- Fortify tiling code against programmer errors (Chris)
- Bunch of fixes for CI exposed corner cases (multiple authors, mostly Chris)

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180907105446.GA22860@jlahtine-desk.ger.corp.intel.com
This commit is contained in:
Dave Airlie
2018-09-11 11:52:54 +10:00
87 changed files with 3859 additions and 1808 deletions


@@ -56,6 +56,10 @@ static const u8 pci_cfg_space_rw_bmp[PCI_INTERRUPT_LINE + 4] = {
 /**
  * vgpu_pci_cfg_mem_write - write virtual cfg space memory
+ * @vgpu: target vgpu
+ * @off: offset
+ * @src: src ptr to write
+ * @bytes: number of bytes
  *
  * Use this function to write virtual cfg space memory.
  * For standard cfg space, only RW bits can be changed,
@@ -91,6 +95,10 @@ static void vgpu_pci_cfg_mem_write(struct intel_vgpu *vgpu, unsigned int off,
 /**
  * intel_vgpu_emulate_cfg_read - emulate vGPU configuration space read
+ * @vgpu: target vgpu
+ * @offset: offset
+ * @p_data: return data ptr
+ * @bytes: number of bytes to read
  *
  * Returns:
  * Zero on success, negative error code if failed.
@@ -278,6 +286,10 @@ static int emulate_pci_bar_write(struct intel_vgpu *vgpu, unsigned int offset,
 /**
  * intel_vgpu_emulate_cfg_write - emulate vGPU configuration space write
+ * @vgpu: target vgpu
+ * @offset: offset
+ * @p_data: write data ptr
+ * @bytes: number of bytes to write
  *
  * Returns:
  * Zero on success, negative error code if failed.
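
All of the hunks above are the same kind of kernel-doc fix: every function
parameter gains an @name: line between the summary and the body, which is what
silences the scripts/kernel-doc "missing parameter description" warnings. For
reference, the target shape on a hypothetical function (not from this diff):

/**
 * foo_mmio_write - write a virtual register (hypothetical example only)
 * @vgpu: target vgpu
 * @offset: register offset
 * @p_data: source data buffer
 * @bytes: number of bytes to write
 *
 * Returns:
 * Zero on success, negative error code if failed.
 */
static int foo_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
                          void *p_data, unsigned int bytes);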


@@ -1840,6 +1840,8 @@ static int cmd_handler_mi_batch_buffer_start(struct parser_exec_state *s)
 	return ret;
 }
 
+static int mi_noop_index;
+
 static struct cmd_info cmd_info[] = {
 	{"MI_NOOP", OP_MI_NOOP, F_LEN_CONST, R_ALL, D_ALL, 0, 1, NULL},
@@ -2525,7 +2527,12 @@ static int cmd_parser_exec(struct parser_exec_state *s)
 	cmd = cmd_val(s, 0);
 
-	info = get_cmd_info(s->vgpu->gvt, cmd, s->ring_id);
+	/* fastpath for MI_NOOP */
+	if (cmd == MI_NOOP)
+		info = &cmd_info[mi_noop_index];
+	else
+		info = get_cmd_info(s->vgpu->gvt, cmd, s->ring_id);
+
 	if (info == NULL) {
 		gvt_vgpu_err("unknown cmd 0x%x, opcode=0x%x, addr_type=%s, ring %d, workload=%p\n",
 			cmd, get_opcode(cmd, s->ring_id),
@@ -2928,6 +2935,8 @@ static int init_cmd_table(struct intel_gvt *gvt)
 			kfree(e);
 			return -EEXIST;
 		}
 
+		if (cmd_info[i].opcode == OP_MI_NOOP)
+			mi_noop_index = i;
+
 		INIT_HLIST_NODE(&e->hlist);
 		add_cmd_entry(gvt, e);
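
The change above caches the cmd_info[] index of MI_NOOP once at table-init
time so the per-command hot path can skip the hash lookup for the
overwhelmingly common opcode. The same pattern in a self-contained sketch
(hypothetical names, not the driver's code):

#include <stddef.h>

struct cmd_desc {
        const char *name;
        unsigned int opcode;
};

/* stand-in for cmd_info[]; opcode 0x00 plays the role of MI_NOOP */
static struct cmd_desc cmds[] = {
        { "NOOP",  0x00 },
        { "FLUSH", 0x26 },
};
static size_t noop_index;       /* like mi_noop_index: filled once at init */

static void init_cmds(void)
{
        for (size_t i = 0; i < sizeof(cmds) / sizeof(cmds[0]); i++)
                if (cmds[i].opcode == 0x00)
                        noop_index = i;     /* remember the hot entry */
}

static const struct cmd_desc *lookup(unsigned int opcode)
{
        if (opcode == 0x00)                 /* fastpath: NOOPs dominate streams */
                return &cmds[noop_index];
        for (size_t i = 0; i < sizeof(cmds) / sizeof(cmds[0]); i++)
                if (cmds[i].opcode == opcode)   /* slow path: full search */
                        return &cmds[i];
        return NULL;
}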


@@ -462,6 +462,7 @@ void intel_vgpu_clean_display(struct intel_vgpu *vgpu)
 /**
  * intel_vgpu_init_display- initialize vGPU virtual display emulation
  * @vgpu: a vGPU
+ * @resolution: resolution index for intel_vgpu_edid
  *
  * This function is used to initialize vGPU virtual display emulation stuffs
  *


@@ -340,6 +340,9 @@ static int gmbus2_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
 /**
  * intel_gvt_i2c_handle_gmbus_read - emulate gmbus register mmio read
  * @vgpu: a vGPU
+ * @offset: reg offset
+ * @p_data: data return buffer
+ * @bytes: access data length
  *
  * This function is used to emulate gmbus register mmio read
  *
@@ -365,6 +368,9 @@ int intel_gvt_i2c_handle_gmbus_read(struct intel_vgpu *vgpu,
 /**
  * intel_gvt_i2c_handle_gmbus_write - emulate gmbus register mmio write
  * @vgpu: a vGPU
+ * @offset: reg offset
+ * @p_data: data return buffer
+ * @bytes: access data length
  *
  * This function is used to emulate gmbus register mmio write
  *
@@ -437,6 +443,9 @@ static inline int get_aux_ch_reg(unsigned int offset)
 /**
  * intel_gvt_i2c_handle_aux_ch_write - emulate AUX channel register write
  * @vgpu: a vGPU
+ * @port_idx: port index
+ * @offset: reg offset
+ * @p_data: write ptr
  *
  * This function is used to emulate AUX channel register write
  *


@@ -1113,6 +1113,10 @@ static inline void ppgtt_generate_shadow_entry(struct intel_gvt_gtt_entry *se,
 }
 
 /**
  * Check if can do 2M page
+ * @vgpu: target vgpu
+ * @entry: target pfn's gtt entry
+ *
  * Return 1 if 2MB huge gtt shadowing is possible, 0 if the conditions
  * are not met, negative if an error is found.
  */
@@ -1945,7 +1949,7 @@ void intel_vgpu_unpin_mm(struct intel_vgpu_mm *mm)
 /**
  * intel_vgpu_pin_mm - increase the pin count of a vGPU mm object
- * @vgpu: a vGPU
+ * @mm: target vgpu mm
  *
  * This function is called when user wants to use a vGPU mm object. If this
  * mm object hasn't been shadowed yet, the shadow will be populated at this
@@ -2521,8 +2525,7 @@ fail:
 /**
  * intel_vgpu_find_ppgtt_mm - find a PPGTT mm object
  * @vgpu: a vGPU
- * @page_table_level: PPGTT page table level
- * @root_entry: PPGTT page table root pointers
+ * @pdps: pdp root array
  *
  * This function is used to find a PPGTT mm object from mm object pool
 *


@@ -189,7 +188,6 @@ static const struct intel_gvt_ops intel_gvt_ops = {
 /**
  * intel_gvt_init_host - Load MPT modules and detect if we're running in host
- * @gvt: intel gvt device
  *
  * This function is called at the driver loading stage. If failed to find a
  * loadable MPT module or detect currently we're running in a VM, then GVT-g
@@ -303,7 +302,7 @@ static int init_service_thread(struct intel_gvt *gvt)
 /**
  * intel_gvt_clean_device - clean a GVT device
- * @gvt: intel gvt device
+ * @dev_priv: i915 private
  *
  * This function is called at the driver unloading stage, to free the
  * resources owned by a GVT device.


@@ -1287,12 +1287,13 @@ static int power_well_ctl_mmio_write(struct intel_vgpu *vgpu,
 {
 	write_vreg(vgpu, offset, p_data, bytes);
 
-	if (vgpu_vreg(vgpu, offset) & HSW_PWR_WELL_CTL_REQ(HSW_DISP_PW_GLOBAL))
+	if (vgpu_vreg(vgpu, offset) &
+	    HSW_PWR_WELL_CTL_REQ(HSW_PW_CTL_IDX_GLOBAL))
 		vgpu_vreg(vgpu, offset) |=
-			HSW_PWR_WELL_CTL_STATE(HSW_DISP_PW_GLOBAL);
+			HSW_PWR_WELL_CTL_STATE(HSW_PW_CTL_IDX_GLOBAL);
 	else
 		vgpu_vreg(vgpu, offset) &=
-			~HSW_PWR_WELL_CTL_STATE(HSW_DISP_PW_GLOBAL);
+			~HSW_PWR_WELL_CTL_STATE(HSW_PW_CTL_IDX_GLOBAL);
 
 	return 0;
 }
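
The handler above is a pure software model: whatever the guest writes is
latched into the vreg, and the STATE bit is made to mirror the REQ bit so the
guest immediately sees its power-well request as complete. Reduced to its
essentials (hypothetical bit positions, not the real register layout):

#include <stdint.h>

#define PW_REQ          (1u << 31)      /* hypothetical request bit */
#define PW_STATE        (1u << 30)      /* hypothetical status bit */

static uint32_t emulate_power_well_write(uint32_t val)
{
        uint32_t vreg = val;            /* latch the guest's write */

        if (vreg & PW_REQ)
                vreg |= PW_STATE;       /* report the well as powered up */
        else
                vreg &= ~PW_STATE;      /* report the well as powered down */
        return vreg;
}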
@@ -2118,7 +2119,7 @@ static int init_generic_mmio_info(struct intel_gvt *gvt)
 	MMIO_F(PCH_GMBUS0, 4 * 4, 0, 0, 0, D_ALL, gmbus_mmio_read,
 		gmbus_mmio_write);
-	MMIO_F(PCH_GPIOA, 6 * 4, F_UNALIGN, 0, 0, D_ALL, NULL, NULL);
+	MMIO_F(PCH_GPIO_BASE, 6 * 4, F_UNALIGN, 0, 0, D_ALL, NULL, NULL);
 	MMIO_F(_MMIO(0xe4f00), 0x28, 0, 0, 0, D_ALL, NULL, NULL);
 	MMIO_F(_MMIO(_PCH_DPB_AUX_CH_CTL), 6 * 4, 0, 0, 0, D_PRE_SKL, NULL,
@@ -2443,17 +2444,10 @@ static int init_generic_mmio_info(struct intel_gvt *gvt)
 	MMIO_D(GEN6_RC6p_THRESHOLD, D_ALL);
 	MMIO_D(GEN6_RC6pp_THRESHOLD, D_ALL);
 	MMIO_D(GEN6_PMINTRMSK, D_ALL);
-	/*
-	 * Use an arbitrary power well controlled by the PWR_WELL_CTL
-	 * register.
-	 */
-	MMIO_DH(HSW_PWR_WELL_CTL_BIOS(HSW_DISP_PW_GLOBAL), D_BDW, NULL,
-		power_well_ctl_mmio_write);
-	MMIO_DH(HSW_PWR_WELL_CTL_DRIVER(HSW_DISP_PW_GLOBAL), D_BDW, NULL,
-		power_well_ctl_mmio_write);
-	MMIO_DH(HSW_PWR_WELL_CTL_KVMR, D_BDW, NULL, power_well_ctl_mmio_write);
-	MMIO_DH(HSW_PWR_WELL_CTL_DEBUG(HSW_DISP_PW_GLOBAL), D_BDW, NULL,
-		power_well_ctl_mmio_write);
+	MMIO_DH(HSW_PWR_WELL_CTL1, D_BDW, NULL, power_well_ctl_mmio_write);
+	MMIO_DH(HSW_PWR_WELL_CTL2, D_BDW, NULL, power_well_ctl_mmio_write);
+	MMIO_DH(HSW_PWR_WELL_CTL3, D_BDW, NULL, power_well_ctl_mmio_write);
+	MMIO_DH(HSW_PWR_WELL_CTL4, D_BDW, NULL, power_well_ctl_mmio_write);
 	MMIO_DH(HSW_PWR_WELL_CTL5, D_BDW, NULL, power_well_ctl_mmio_write);
 	MMIO_DH(HSW_PWR_WELL_CTL6, D_BDW, NULL, power_well_ctl_mmio_write);
@@ -2804,13 +2798,8 @@ static int init_skl_mmio_info(struct intel_gvt *gvt)
 	MMIO_F(_MMIO(_DPD_AUX_CH_CTL), 6 * 4, 0, 0, 0, D_SKL_PLUS, NULL,
 		dp_aux_ch_ctl_mmio_write);
 
-	/*
-	 * Use an arbitrary power well controlled by the PWR_WELL_CTL
-	 * register.
-	 */
-	MMIO_D(HSW_PWR_WELL_CTL_BIOS(SKL_DISP_PW_MISC_IO), D_SKL_PLUS);
-	MMIO_DH(HSW_PWR_WELL_CTL_DRIVER(SKL_DISP_PW_MISC_IO), D_SKL_PLUS, NULL,
-		skl_power_well_ctl_write);
+	MMIO_D(HSW_PWR_WELL_CTL1, D_SKL_PLUS);
+	MMIO_DH(HSW_PWR_WELL_CTL2, D_SKL_PLUS, NULL, skl_power_well_ctl_write);
 
 	MMIO_D(_MMIO(0xa210), D_SKL_PLUS);
 	MMIO_D(GEN9_MEDIA_PG_IDLE_HYSTERESIS, D_SKL_PLUS);
@@ -3434,6 +3423,7 @@ bool intel_gvt_in_force_nonpriv_whitelist(struct intel_gvt *gvt,
  * @offset: register offset
  * @pdata: data buffer
  * @bytes: data length
+ * @is_read: read or write
  *
  * Returns:
 * Zero on success, negative error code if failed.


@@ -1712,7 +1712,7 @@ static unsigned long kvmgt_gfn_to_pfn(unsigned long handle, unsigned long gfn)
 	return pfn;
 }
 
-int kvmgt_dma_map_guest_page(unsigned long handle, unsigned long gfn,
+static int kvmgt_dma_map_guest_page(unsigned long handle, unsigned long gfn,
 		unsigned long size, dma_addr_t *dma_addr)
 {
 	struct kvmgt_guest_info *info;
@@ -1761,7 +1761,7 @@ static void __gvt_dma_release(struct kref *ref)
 	__gvt_cache_remove_entry(entry->vgpu, entry);
 }
 
-void kvmgt_dma_unmap_guest_page(unsigned long handle, dma_addr_t dma_addr)
+static void kvmgt_dma_unmap_guest_page(unsigned long handle, dma_addr_t dma_addr)
 {
 	struct kvmgt_guest_info *info;
 	struct gvt_dma *entry;


@@ -39,6 +39,7 @@
 /**
  * intel_vgpu_gpa_to_mmio_offset - translate a GPA to MMIO offset
  * @vgpu: a vGPU
+ * @gpa: guest physical address
  *
  * Returns:
  * Zero on success, negative error code if failed
@@ -228,7 +229,7 @@ out:
 /**
  * intel_vgpu_reset_mmio - reset virtual MMIO space
  * @vgpu: a vGPU
- *
+ * @dmlr: whether this is device model level reset
  */
 void intel_vgpu_reset_mmio(struct intel_vgpu *vgpu, bool dmlr)
 {


@@ -37,19 +37,6 @@
 #include "gvt.h"
 #include "trace.h"
 
-/**
- * Defined in Intel Open Source PRM.
- * Ref: https://01.org/linuxgraphics/documentation/hardware-specification-prms
- */
-#define TRVATTL3PTRDW(i) _MMIO(0x4de0 + (i)*4)
-#define TRNULLDETCT _MMIO(0x4de8)
-#define TRINVTILEDETCT _MMIO(0x4dec)
-#define TRVADR _MMIO(0x4df0)
-#define TRTTE _MMIO(0x4df4)
-#define RING_EXCC(base) _MMIO((base) + 0x28)
-#define RING_GFX_MODE(base) _MMIO((base) + 0x29c)
-#define VF_GUARDBAND _MMIO(0x83a4)
-
 #define GEN9_MOCS_SIZE 64
 
 /* Raw offset is appended to each line for convenience. */


@@ -53,5 +53,8 @@ bool is_inhibit_context(struct intel_context *ce);
 int intel_vgpu_restore_inhibit_context(struct intel_vgpu *vgpu,
 		struct i915_request *req);
 
+#define IS_RESTORE_INHIBIT(a) \
+	(_MASKED_BIT_ENABLE(CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT) == \
+	((a) & _MASKED_BIT_ENABLE(CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT)))
 
 #endif
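
IS_RESTORE_INHIBIT builds on i915's masked-register convention, where
_MASKED_BIT_ENABLE(bit) expands to (bit << 16 | bit): the high half carries
the write-enable mask and the low half the value. The comparison is therefore
true only when the context both selected and set the inhibit bit. A standalone
sketch of the same test (local macros and an illustrative bit position, not
the driver's headers):

#include <stdbool.h>
#include <stdint.h>

#define MASKED_BIT_ENABLE(b)    (((b) << 16) | (b))     /* mask half | value half */
#define RESTORE_INHIBIT_BIT     (1u << 0)               /* illustrative position */

static bool is_restore_inhibit(uint32_t ctx_ctrl)
{
        /* true only if both the mask bit and the value bit were written */
        return (ctx_ctrl & MASKED_BIT_ENABLE(RESTORE_INHIBIT_BIT)) ==
                MASKED_BIT_ENABLE(RESTORE_INHIBIT_BIT);
}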


@@ -216,7 +216,6 @@ static void virt_vbt_generation(struct vbt *v)
 /**
  * intel_vgpu_init_opregion - initialize the stuff used to emulate opregion
  * @vgpu: a vGPU
- * @gpa: guest physical address of opregion
  *
  * Returns:
  * Zero on success, negative error code if failed.


@@ -41,6 +41,8 @@ struct intel_vgpu_page_track *intel_vgpu_find_page_track(
  * intel_vgpu_register_page_track - register a guest page to be tracked
  * @vgpu: a vGPU
  * @gfn: the gfn of guest page
+ * @handler: page track handler
+ * @priv: tracker private
  *
  * Returns:
  * zero on success, negative error code if failed.


@@ -77,4 +77,22 @@
 #define _RING_CTL_BUF_SIZE(ctl) (((ctl) & RB_TAIL_SIZE_MASK) + \
 		I915_GTT_PAGE_SIZE)
 
+#define PCH_GPIO_BASE _MMIO(0xc5010)
+
+#define PCH_GMBUS0 _MMIO(0xc5100)
+#define PCH_GMBUS1 _MMIO(0xc5104)
+#define PCH_GMBUS2 _MMIO(0xc5108)
+#define PCH_GMBUS3 _MMIO(0xc510c)
+#define PCH_GMBUS4 _MMIO(0xc5110)
+#define PCH_GMBUS5 _MMIO(0xc5120)
+
+#define TRVATTL3PTRDW(i) _MMIO(0x4de0 + (i) * 4)
+#define TRNULLDETCT _MMIO(0x4de8)
+#define TRINVTILEDETCT _MMIO(0x4dec)
+#define TRVADR _MMIO(0x4df0)
+#define TRTTE _MMIO(0x4df4)
+#define RING_EXCC(base) _MMIO((base) + 0x28)
+#define RING_GFX_MODE(base) _MMIO((base) + 0x29c)
+#define VF_GUARDBAND _MMIO(0x83a4)
+
 #endif


@@ -132,35 +132,6 @@ static int populate_shadow_context(struct intel_vgpu_workload *workload)
 	unsigned long context_gpa, context_page_num;
 	int i;
 
-	gvt_dbg_sched("ring id %d workload lrca %x", ring_id,
-			workload->ctx_desc.lrca);
-
-	context_page_num = gvt->dev_priv->engine[ring_id]->context_size;
-
-	context_page_num = context_page_num >> PAGE_SHIFT;
-
-	if (IS_BROADWELL(gvt->dev_priv) && ring_id == RCS)
-		context_page_num = 19;
-
-	i = 2;
-
-	while (i < context_page_num) {
-		context_gpa = intel_vgpu_gma_to_gpa(vgpu->gtt.ggtt_mm,
-				(u32)((workload->ctx_desc.lrca + i) <<
-				I915_GTT_PAGE_SHIFT));
-		if (context_gpa == INTEL_GVT_INVALID_ADDR) {
-			gvt_vgpu_err("Invalid guest context descriptor\n");
-			return -EFAULT;
-		}
-
-		page = i915_gem_object_get_page(ctx_obj, LRC_HEADER_PAGES + i);
-		dst = kmap(page);
-		intel_gvt_hypervisor_read_gpa(vgpu, context_gpa, dst,
-				I915_GTT_PAGE_SIZE);
-		kunmap(page);
-		i++;
-	}
-
 	page = i915_gem_object_get_page(ctx_obj, LRC_STATE_PN);
 	shadow_ring_context = kmap(page);
@@ -195,6 +166,37 @@ static int populate_shadow_context(struct intel_vgpu_workload *workload)
 	sr_oa_regs(workload, (u32 *)shadow_ring_context, false);
 	kunmap(page);
 
+	if (IS_RESTORE_INHIBIT(shadow_ring_context->ctx_ctrl.val))
+		return 0;
+
+	gvt_dbg_sched("ring id %d workload lrca %x", ring_id,
+			workload->ctx_desc.lrca);
+
+	context_page_num = gvt->dev_priv->engine[ring_id]->context_size;
+
+	context_page_num = context_page_num >> PAGE_SHIFT;
+
+	if (IS_BROADWELL(gvt->dev_priv) && ring_id == RCS)
+		context_page_num = 19;
+
+	i = 2;
+	while (i < context_page_num) {
+		context_gpa = intel_vgpu_gma_to_gpa(vgpu->gtt.ggtt_mm,
+				(u32)((workload->ctx_desc.lrca + i) <<
+				I915_GTT_PAGE_SHIFT));
+		if (context_gpa == INTEL_GVT_INVALID_ADDR) {
+			gvt_vgpu_err("Invalid guest context descriptor\n");
+			return -EFAULT;
+		}
+
+		page = i915_gem_object_get_page(ctx_obj, LRC_HEADER_PAGES + i);
+		dst = kmap(page);
+		intel_gvt_hypervisor_read_gpa(vgpu, context_gpa, dst,
+				I915_GTT_PAGE_SIZE);
+		kunmap(page);
+		i++;
+	}
+
 	return 0;
 }
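
The reordering above runs the cheap part first (copying the single
ring-context page, which contains ctx_ctrl), then bails out before the
expensive per-page hypervisor reads when the context is restore-inhibited and
the hardware would not load the copied image. Condensed into a runnable
sketch (hypothetical helpers standing in for the inlined code):

#include <stdbool.h>

static void copy_ring_context_page(void)    { /* one page, always needed */ }
static void copy_engine_context_pages(void) { /* many guest-page reads */ }

static int populate_shadow_context_sketch(bool restore_inhibit)
{
        copy_ring_context_page();       /* also yields ctx_ctrl for the test */
        if (restore_inhibit)
                return 0;               /* HW won't restore: skip the bulk copy */
        copy_engine_context_pages();
        return 0;
}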
@@ -1138,6 +1140,7 @@ out_shadow_ctx:
 /**
  * intel_vgpu_select_submission_ops - select virtual submission interface
  * @vgpu: a vGPU
+ * @engine_mask: either ALL_ENGINES or target engine mask
  * @interface: expected vGPU virtual submission interface
  *
  * This function is called when guest configures submission interface.
@@ -1190,7 +1193,7 @@ int intel_vgpu_select_submission_ops(struct intel_vgpu *vgpu,
 /**
  * intel_vgpu_destroy_workload - destroy a vGPU workload
- * @vgpu: a vGPU
+ * @workload: workload to destroy
  *
  * This function is called when destroying a vGPU workload.
 *
@@ -1282,6 +1285,7 @@ static int prepare_mm(struct intel_vgpu_workload *workload)
 /**
  * intel_vgpu_create_workload - create a vGPU workload
  * @vgpu: a vGPU
+ * @ring_id: ring index
  * @desc: a guest context descriptor
 *
 * This function is called when creating a vGPU workload.