Since Michal introduced new user-controllable errors other than -EIO
during i915_gem_init(), we need to actually unwind on the error path as
we have to abort the module load (and we expect to do so cleanly!).
As we now tear down key state and then mark the driver as wedged (on
EIO), we have to be careful to not allow ourselves to resume and
unwedge, thus attempting to use the uninitialised driver.
v2: Try not to free driver state for the suppressed EIO
v3: Use load-fault-injection to test both error/recovery paths.
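For illustration only (a sketch of the -EIO special case, not the exact
unwind; the error labels are omitted):

	err = i915_gem_init_hw(dev_priv);
	if (err == -EIO) {
		/*
		 * Allow engine initialisation to fail by marking the
		 * GPU as wedged, but keep the core driver state so the
		 * module load completes; never resume/unwedge into the
		 * uninitialised driver.
		 */
		i915_gem_set_wedged(dev_priv);
		err = 0;
	}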
References: 8620eb1dbb ("drm/i915/uc: Don't use -EIO to report missing firmware")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Sagar Arun Kamble <sagar.a.kamble@intel.com>
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171213134347.4608-1-chris@chris-wilson.co.uk
It seems that the DMC likes to transition between the DC states a lot when
there are no connected displays (no active power domains) during command
submission.
This activity on DC states has a negative impact on the performance of the
chip with huge latencies observed in the interrupt handlers and elsewhere.
Simple tests like igt/gem_latency -n 0 are slowed down by a factor of
eight.
Work around it by introducing a new power domain named
POWER_DOMAIN_GT_IRQ, associated with the "DC off" power well, which is
held for the duration of command submission activity.
CNL has the same problem which will be addressed as a follow-up. Doing
that requires a fix for a DC6 context corruption problem in the CNL DMC
firmware which is yet to be released.
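As a rough sketch of the mechanism (the actual hooks sit in the GT
park/unpark and runtime pm paths), the power domain is simply grabbed
and dropped around submission activity:

	/* First request after idling: pin the "DC off" power well */
	intel_display_power_get(dev_priv, POWER_DOMAIN_GT_IRQ);

	/* GT has idled again: allow DC5/DC6 transitions once more */
	intel_display_power_put(dev_priv, POWER_DOMAIN_GT_IRQ);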
v2:
* Add commit text as comment in i915_gem_mark_busy. (Chris Wilson)
* Protect macro body with braces. (Jani Nikula)
v3:
* Add dedicated power domain for clarity. (Chris, Imre)
* Commit message and comment text updates.
* Apply to all big-core GEN9 parts apart from Skylake, which is pending DMC
firmware release.
v4:
* Power domain should be inner to device runtime pm. (Chris)
* Simplify NEEDS_CSR_GT_PERF_WA macro. (Chris)
* Handle async DMC loading by moving the GT_IRQ power domain logic into
intel_runtime_pm. (Daniel, Chris)
* Include small core GEN9 as well. (Imre)
v5:
* Special handling for async DMC load is not needed since on failure the
power domain reference is kept permanently taken. (Imre)
v6:
* Drop the NEEDS_CSR_GT_PERF_WA macro since all firmwares have now been
deployed. (Imre, Chris)
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100572
Testcase: igt/gem_exec_nop/headless
Cc: Imre Deak <imre.deak@intel.com>
Acked-by: Chris Wilson <chris@chris-wilson.co.uk> (v2)
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> (v5)
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
[Imre: Add note about applying the WA on CNL as a follow-up]
Signed-off-by: Imre Deak <imre.deak@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171205132854.26380-1-tvrtko.ursulin@linux.intel.com
In quite a few places, we have a list iteration over the vma on an
object that only wants to inspect GGTT vma. By construction, these are
placed at the start of the list, so we have copied that knowledge into
many callsites. Pull that knowledge back to i915_vma.h and provide a
for_each_ggtt_vma() to tidy up the code.
v2: Add a backreference from vma_create() to remind ourselves why we put
ggtt vma at the head of the obj->vma_list (and ppgtt vma at the tail).
v3: Fixup s/vma/V/
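For reference, the iterator's shape is roughly the following, leaning on
the head-of-list invariant; for_each_until() here is an assumed helper
that terminates the walk once its condition holds:

	#define for_each_until(cond) if (cond) break; else

	#define for_each_ggtt_vma(V, OBJ) \
		list_for_each_entry(V, &(OBJ)->vma_list, obj_link) \
			for_each_until(!i915_vma_is_ggtt(V))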
Suggested-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20171207211407.31549-1-chris@chris-wilson.co.uk
From the shrinker paths, we want to relinquish the GPU and GGTT access to
the object, releasing the backing storage back to the system for
swapout. As a part of that process we would unpin the pages, marking
them for access by the CPU (for the swapout/swapin). However, if that
process was interrupted after unbinding the vma, we missed a flush of the
inflight GGTT writes before we made that GTT space available again for
reuse, with the prospect that we would redirect them to another page.
The bug dates back to the introduction of multiple GGTT vma, but the
code itself dates to commit 02bef8f98d ("drm/i915: Unbind closed vma
for i915_gem_object_unbind()").
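Illustratively (a sketch of the idea, not the exact diff), the missing
step before releasing the GTT space is:

	/*
	 * Push any in-flight GGTT writes out of the write-combining
	 * buffers and chipset caches before the PTEs are reused, so
	 * they cannot land in somebody else's page.
	 */
	wmb();
	if (obj->base.write_domain == I915_GEM_DOMAIN_GTT)
		i915_gem_chipset_flush(to_i915(obj->base.dev));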
Fixes: 02bef8f98d ("drm/i915: Unbind closed vma for i915_gem_object_unbind()")
Fixes: c5ad54cf7d ("drm/i915: Use partial view in mmap fault handler")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: stable@vger.kernel.org
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171204132513.7303-1-chris@chris-wilson.co.uk
If only a subset of events is enabled we can afford to suspend
the sampling timer when the GPU is idle and so save some cycles
and power.
v2: Rebase and limit timer even more.
v3: Rebase.
v4: Rebase.
v5: Skip action if perf PMU failed to register.
v6: Checkpatch cleanup.
v7:
* Add a common helper to start the timer if needed. (Chris Wilson)
* Add comment explaining bitwise logic in pmu_needs_timer.
v8: Fix some comments styles. (Chris Wilson)
v9: Rebase.
v10: Move function declarations to i915_pmu.h.
v11: Rename functions to i915_pmu_gt_(un)parked. (Chris Wilson)
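The bitwise logic referred to in v7 is, in sketch form (the mask helper
names are assumptions here):

	static bool pmu_needs_timer(struct drm_i915_private *i915,
				    bool gpu_active)
	{
		/* Start from all currently enabled events... */
		u64 enable = i915->pmu.enable;

		/* ...keep only those that need periodic sampling... */
		enable &= config_enabled_mask(I915_PMU_ACTUAL_FREQUENCY) |
			  config_enabled_mask(I915_PMU_REQUESTED_FREQUENCY) |
			  ENGINE_SAMPLE_MASK;

		/* ...and idle engines need no sampling at all. */
		if (!gpu_active)
			enable &= ~ENGINE_SAMPLE_MASK;

		/* If any bit survives, the timer must keep running. */
		return enable;
	}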
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20171121181852.16128-3-tvrtko.ursulin@linux.intel.com
I should have admitted defeat long ago as there has been a rare but
persistent error on Sandybridge where semaphore signaling did not
propagate to the waiter, leading to a GPU hang.
With the work on fence signaling for v4.9, the impact of using CPU driven
signaling was greatly reduced with respect to the latency of GPU semaphores,
though without logical rings support, the benefit of reordering work to
avoid bubbles is not realised (i.e. as it stands fence signaling is just
a slower, more costly version of HW semaphores; but works more
consistently). As a rough indicator of the difference,
with semaphores:
Sequential (3 engines, 1 processes): average 5.470us per cycle [expected 4.988us]
w/o semaphores:
Sequential (3 engines, 1 processes): average 15.771us per cycle [expected 4.923us]
In comparison, v3.4:
with semaphores:
Sequential (3 engines, 1 processes): average 16.066us per cycle [expected 11.842us]
w/o semaphores:
Sequential (3 engines, 1 processes): average 23.460us per cycle [expected 11.839us]
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=54226 #and 100+ dupes
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171120205504.21892-3-chris@chris-wilson.co.uk
Since removing the module parameter to force selection of ringbuffer
emission for gen8, the code is defunct. Remove it.
To put the difference into perspective, a couple of microbenchmarks
(bdw i7-5557u, 20170324):
                                         ring     execlists
exec continuous nops on all rings:    1.491us       2.223us
exec sequential nops on each ring:   12.508us      53.682us
single nop + sync:                    9.272us      30.291us
vblank_mode=0 glxgears:             ~11000fps      ~9000fps
Since the earlier submission, gen8 ringbuffer submission has fallen
further and further behind in features. So while ringbuffer may hold the
throughput crown, in terms of interactive latency, execlists is much
better. Alas, we have no convenient metric for that, other than
demonstrating things we can do with execlists but cannot do using
legacy ringbuffer submission.
We have made a few improvements to low-level execlists throughput,
and ringbuffer currently panics on boot! (bdw i7-5557u, 20171026):
                                         ring     execlists
exec continuous nops on all rings:        n/a       1.921us
exec sequential nops on each ring:        n/a      44.621us
single nop + sync:                        n/a      21.953us
vblank_mode=0 glxgears:                   n/a     ~18500fps
References: https://bugs.freedesktop.org/show_bug.cgi?id=87725
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Once-upon-a-time-Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171120205504.21892-2-chris@chris-wilson.co.uk
GEM proxy is a kind of GEM whose backing physical memory is pinned
and produced by a guest VM and is used by the host as read-only. With
GEM proxy, the host is able to access guest physical memory through the
GEM object interface. As GEM proxy is such a special kind of GEM, a new
flag, I915_GEM_OBJECT_IS_PROXY, is introduced to ban the host from
changing the backing storage of a GEM proxy.
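In sketch form, the ban is a flag test in each affected ioctl (helper
shape assumed):

	static inline bool
	i915_gem_object_is_proxy(const struct drm_i915_gem_object *obj)
	{
		return obj->ops->flags & I915_GEM_OBJECT_IS_PROXY;
	}

	/* e.g. in i915_gem_set_tiling_ioctl: */
	if (i915_gem_object_is_proxy(obj)) {
		err = -ENXIO;
		goto err;
	}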
v3:
- update "Reviewed-by". (Joonas)
v2:
- return -ENXIO when pin and map pages of GEM proxy to kernel space.
(Chris)
Here is the history of this patch in the "Dma-buf support for Gvt-g"
patch set:
v14:
- return -ENXIO when gem proxy object is banned by ioctl.
(Chris) (Daniel)
v13:
- add comments to GEM proxy. (Chris)
- don't ban GEM proxy in i915_gem_sw_finish_ioctl. (Chris)
- check the GEM proxy barrier after finishing i915_gem_object_wait. (Chris)
- remove the GEM proxy barrier in i915_gem_madvise_ioctl.
v6:
- add gem proxy barrier in the following ioctls. (Chris)
i915_gem_set_caching_ioctl
i915_gem_set_domain_ioctl
i915_gem_sw_finish_ioctl
i915_gem_set_tiling_ioctl
i915_gem_madvise_ioctl
Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> #v1
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/1510555798-21079-2-git-send-email-tina.zhang@intel.com
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20171114102513.22269-2-chris@chris-wilson.co.uk
Now that we always execute a context switch upon module load, there is
no need to queue a delayed task for doing so. The purpose of the delayed
task is to enable GT powersaving, for which we need the HW state to be
valid (i.e. having loaded a context and initialised basic state). We
used to defer this operation as historically it was slow (due to slow
register polling, fixed with commit 1758b90e38 ("drm/i915: Use a hybrid
scheme for fast register waits")) but now we have a requirement to save
the default HW state.
v2: Load the kernel context (to provide the power context) upon resume.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171112112738.1463-3-chris@chris-wilson.co.uk
Take a copy of the HW state after a reset upon module loading by
executing a context switch from a blank context to the kernel context,
thus saving the default hw state over the blank context image.
We can then use the default hw state to initialise any future context,
ensuring that each starts with the default view of hw state.
v2: Unmap our default state from the GTT after stealing it from the
context. This should stop us from accidentally overwriting it via the
GTT (and frees up some precious GTT space).
Testcase: igt/gem_ctx_isolation
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171110142634.10551-7-chris@chris-wilson.co.uk
In the next few patches, we will have a hard requirement that we emit a
context-switch to the perma-pinned i915->kernel_context (so that we can
save the HW state using that context-switch). As the first context
itself may be classed as a kernel context, we want to be explicit in our
comparison. For an extra layer of finesse, we can check the last
unretired context on the engine; as well as the last retired context
when idle.
v2: verbose verbosity
v3: Always force the switch, even when the engine is idle, and update
the assert that this happens before suspend.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> #v1
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171110142634.10551-2-chris@chris-wilson.co.uk
KASAN spotted
[IGT] gem_tiled_pread_pwrite: exiting, ret=0
==================================================================
BUG: KASAN: use-after-free in __i915_gem_object_reset_page_iter+0x15c/0x170 [i915]
Read of size 8 at addr ffff8801359da310 by task kworker/3:2/182
CPU: 3 PID: 182 Comm: kworker/3:2 Tainted: G U 4.14.0-rc6-CI-Custom_3340+ #1
Hardware name: Intel Corp. Geminilake/GLK RVP1 DDR4 (05), BIOS GELKRVPA.X64.0062.B30.1708222146 08/22/2017
Workqueue: events __i915_gem_free_work [i915]
Call Trace:
dump_stack+0x68/0xa0
print_address_description+0x78/0x290
? __i915_gem_object_reset_page_iter+0x15c/0x170 [i915]
kasan_report+0x23d/0x350
__asan_report_load8_noabort+0x19/0x20
__i915_gem_object_reset_page_iter+0x15c/0x170 [i915]
? i915_gem_object_truncate+0x100/0x100 [i915]
? lock_acquire+0x380/0x380
__i915_gem_object_put_pages+0x30d/0x530 [i915]
__i915_gem_free_objects+0x551/0xbd0 [i915]
? lock_acquire+0x13e/0x380
__i915_gem_free_work+0x4e/0x70 [i915]
process_one_work+0x6f6/0x1590
? pwq_dec_nr_in_flight+0x2b0/0x2b0
worker_thread+0xe6/0xe90
? pci_mmcfg_check_reserved+0x110/0x110
kthread+0x309/0x410
? process_one_work+0x1590/0x1590
? kthread_create_on_node+0xb0/0xb0
ret_from_fork+0x27/0x40
Allocated by task 1801:
save_stack_trace+0x1b/0x20
kasan_kmalloc+0xee/0x190
kasan_slab_alloc+0x12/0x20
kmem_cache_alloc+0xdc/0x2e0
radix_tree_node_alloc.constprop.12+0x48/0x330
__radix_tree_create+0x274/0x480
__radix_tree_insert+0xa2/0x610
i915_gem_object_get_sg+0x224/0x670 [i915]
i915_gem_object_get_page+0xb5/0x1c0 [i915]
i915_gem_pread_ioctl+0x822/0xf60 [i915]
drm_ioctl_kernel+0x13f/0x1c0
drm_ioctl+0x6cf/0x980
do_vfs_ioctl+0x184/0xf30
SyS_ioctl+0x41/0x70
entry_SYSCALL_64_fastpath+0x1c/0xb1
Freed by task 37:
save_stack_trace+0x1b/0x20
kasan_slab_free+0xaf/0x190
kmem_cache_free+0xbf/0x340
radix_tree_node_rcu_free+0x79/0x90
rcu_process_callbacks+0x46d/0xf40
__do_softirq+0x21c/0x8d3
The buggy address belongs to the object at ffff8801359da0f0
which belongs to the cache radix_tree_node of size 576
The buggy address is located 544 bytes inside of
576-byte region [ffff8801359da0f0, ffff8801359da330)
The buggy address belongs to the page:
page:ffffea0004d67600 count:1 mapcount:0 mapping: (null) index:0x0 compound_mapcount: 0
flags: 0x8000000000008100(slab|head)
raw: 8000000000008100 0000000000000000 0000000000000000 0000000100110011
raw: ffffea0004b52920 ffffea0004b38020 ffff88015b416a80 0000000000000000
page dumped because: kasan: bad access detected
Memory state around the buggy address:
ffff8801359da200: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff8801359da280: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff8801359da300: fb fb fb fb fb fb fc fc fc fc fc fc fc fc fc fc
^
ffff8801359da380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
ffff8801359da400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
==================================================================
Disabling lock debugging due to kernel taint
It looks like the slab containing the radix tree iter was freed as we
traversed the tree. Taking the rcu read lock across the loop should
prevent that (deferring all the frees until the end of the walk).
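In sketch form, the fix wraps the radix tree walk in
__i915_gem_object_reset_page_iter() with the RCU read lock:

	rcu_read_lock();
	radix_tree_for_each_slot(slot, &obj->mm.get_page.radix, &iter, 0)
		radix_tree_delete(&obj->mm.get_page.radix, iter.index);
	rcu_read_unlock();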
Reported-by: Tomi Sarvela <tomi.p.sarvela@intel.com>
Fixes: 96d7763452 ("drm/i915: Use a radixtree for random access to the object's backing storage")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171026130032.10677-1-chris@chris-wilson.co.uk
Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
Pretty similar to what we have on execlists.
We're reusing most of the GEM code, however, due to GuC quirks we need a
couple of extra bits.
Preemption is implemented as a GuC action, and actions can be pretty slow.
Because of that, we're using a mutex to serialize them. Since we're
requesting preemption from the tasklet, the task of creating a workitem
and wrapping it in a GuC action is delegated to a worker.
To detect that preemption has finished, we're using an additional piece
of the HWSP, and since we're not getting context switch interrupts,
we're also adding a user interrupt.
The fact that our special preempt context has completed unfortunately
doesn't mean that we're ready to submit new work. We also need to wait
for GuC to finish its own processing.
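For illustration, with I915_GEM_HWS_PREEMPT_INDEX and
GUC_PREEMPT_FINISHED as assumed names, completion of the preempt context
is detected by polling its dedicated HWSP dword:

	/*
	 * The preempt request ends by writing a magic value into its
	 * HWSP dword and raising a user interrupt; only then do we
	 * wait for the GuC-side processing to finish.
	 */
	static bool preempt_finished(struct intel_engine_cs *engine)
	{
		return intel_read_status_page(engine,
					      I915_GEM_HWS_PREEMPT_INDEX) ==
		       GUC_PREEMPT_FINISHED;
	}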
v2: Don't compile out the wait for GuC, handle workqueue flush on reset,
no need for ordered workqueue, put on a reviewer hat when looking at my own
patches (Chris)
Move struct work around in intel_guc, move user interrupt outside of
conditional (Michał)
Keep ring around rather than chase through intel_context
v3: Extract WA for flushing ggtt writes to a helper (Chris)
Keep work_struct in intel_guc rather than engine (Michał)
Use ordered workqueue for inject_preempt worker to avoid GuC quirks.
v4: Drop now unused INTEL_GUC_PREEMPT_OPTION_IMMEDIATE (Daniele)
Drop stray newlines, use container_of for intel_guc in worker,
check for presence of workqueue when flushing it, rather than
enable_guc_submission modparam, reorder preempt postprocessing (Chris)
v5: Make wq NULL after destroying it
v6: Swap struct guc_preempt_work members (Michał)
Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jeff McGee <jeff.mcgee@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Oscar Mateo <oscar.mateo@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20171026133558.19580-1-michal.winiarski@intel.com
In the idle worker we drop the prolonged GT wakeref used to cover such
essentials as interrupt delivery. (When a CS interrupt arrives, we also
assert that the GT is awake.) However, it turns out that 10ms is not
long enough to be assured that the last CS interrupt has been delivered,
so bump that to 200ms, and move the entirety of that wait to before we
take the struct_mutex to avoid blocking. As this is now a potentially
long wait, restore the earlier behaviour of bailing out early when a new
request arrives.
v2: Break out the repeated check for new requests into its own little
helper to try and improve the self-commentary.
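The v2 helper looks roughly like this (shape assumed):

	static bool
	new_requests_since_last_retire(const struct drm_i915_private *i915)
	{
		/* Bail out of the long wait early if fresh work arrived */
		return READ_ONCE(i915->gt.active_requests) ||
		       work_pending(&i915->gt.idle_work.work);
	}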
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171023213237.26536-1-chris@chris-wilson.co.uk
We free objects in bulk after they wait for their RCU grace period.
Currently, we take struct_mutex and unbind all the objects. This can lead
to a long lock duration, during which those objects' pages remain
unfreeable (i.e. the shrinker is prevented from reaping them). If
we only process a single object under the struct_mutex and then free the
pages, the number of objects locked away from the shrinker is minimal
and we allow regular clients better access to struct_mutex if they need
it.
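Illustratively (not the exact diff), the free loop becomes:

	llist_for_each_entry_safe(obj, on, freed, freed) {
		/* Hold struct_mutex only for this one object... */
		mutex_lock(&i915->drm.struct_mutex);
		/* ...unbind its vma and unlink it from the lists... */
		mutex_unlock(&i915->drm.struct_mutex);

		/*
		 * ...then release the backing pages outside the lock,
		 * so the shrinker is never blocked on more than one
		 * object at a time.
		 */
	}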
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171013202621.7276-9-chris@chris-wilson.co.uk
Inspired by Tvrtko's critique of the reaping of the stale contexts
before allocating a new one, also limit the freed object reaping to the
oldest stale object before allocating a fresh object. Unlike contexts,
objects may have radically different sizes of backing storage, but
similar to contexts, while we want to prevent starvation due to
excessive freed lists, we also do not want to delay fresh allocations
for too long. Only freeing the oldest on the freed object list before
each allocation is a reasonable compromise.
v2: Only a single consumer of llist_del_first() is allowed (although
multiple llist_add are still allowed in parallel). Unlike
i915_gem_context, i915_gem_flush_free_objects() is itself not serialized
and so we need to add our own spinlock. Otherwise KASAN eventually spots
a use-after-free for the race on *first->next.
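A sketch of the serialized pop described above (free_lock being the new
spinlock):

	static void i915_gem_flush_free_objects(struct drm_i915_private *i915)
	{
		struct llist_node *freed = NULL;

		if (!llist_empty(&i915->mm.free_list)) {
			/*
			 * Only one consumer of llist_del_first() is
			 * allowed; llist_add() callers may still run
			 * in parallel.
			 */
			spin_lock(&i915->mm.free_lock);
			freed = llist_del_first(&i915->mm.free_list);
			spin_unlock(&i915->mm.free_lock);
		}
		if (unlikely(freed))
			__i915_gem_free_objects(i915, freed);
	}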
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> #v1
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171013202621.7276-8-chris@chris-wilson.co.uk
Remove the struct_mutex requirement around dev_priv->mm.bound_list and
dev_priv->mm.unbound_list by giving them their own spinlock. This removes
one more dependency on struct_mutex and in the process gives us
slightly more accurate unbound_list tracking, which should improve the
shrinker - but the drawback is that we drop the retirement before
counting so i915_gem_object_is_active() may be stale and lead us to
underestimate the number of objects that may be shrunk (see commit
bed50aea61 ("drm/i915/shrinker: Flush active on objects before
counting")).
v2: Crosslink the spinlock to the lists it protects, and btw this
changes s/obj->global_link/obj->mm.link/
v3: Fix decoupling of old links in i915_gem_object_attach_phys()
v3.1: Fix the fix, only unlink if it was linked
v3.2: Use a local for to_i915(obj->base.dev)->mm.obj_lock
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171016114037.5556-1-chris@chris-wilson.co.uk