This adds an optional function table on GEM objects.
The main benefit is for drivers that support more than one type of
memory (shmem, vram, cma) for their buffers depending on the hardware
they run on. With the callbacks attached to the GEM object itself, it is
easier to have core helpers for the various buffer types. The driver
only has to make the decision about buffer type on GEM object creation
and all other callbacks can be handled by the chosen helper.
drm_driver->gem_prime_res_obj has not been added since there's a todo to
put a reservation_object into drm_gem_object.
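A minimal sketch of the intended usage (the mydrv_* names below are purely
illustrative, not part of this patch): the driver attaches a function table
once at object creation, and the core then consults obj->funcs before
falling back to the drm_driver callbacks.

  static const struct drm_gem_object_funcs mydrv_gem_funcs = {
      .free = mydrv_gem_free,                 /* illustrative driver hooks */
      .vmap = mydrv_gem_vmap,
      .vunmap = mydrv_gem_vunmap,
      .get_sg_table = mydrv_gem_get_sg_table,
  };

  /* At creation time the driver decides the buffer type once and attaches
   * the matching function table; all later callbacks go through obj->funcs
   * instead of the drm_driver hooks. */
  obj->funcs = &mydrv_gem_funcs;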
v3: Add todo entry
v2: Drop drm_gem_object_funcs->prime_mmap in favour of
drm_gem_prime_mmap() (Daniel Vetter)
v1:
- drm_gem_object_funcs.map -> .prime_map, let it only do PRIME mmap like
the function it supersedes (Daniel Vetter)
- Flip around the if ladders and make obj->funcs the first choice,
highlighting the fact that this is the new default way of doing it
(Daniel Vetter)
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181110145647.17580-4-noralf@tronnes.org
If the ioctl is not supported on a particular piece of HW/driver
combination, report ENOTSUP (aka EOPNOTSUPP) so that it can be easily
distinguished from both the lack of the ioctl and from a regular invalid
parameter.
v2: Across all the kms ioctls we had a mixture of reporting EINVAL,
ENODEV and a few ENOTSUPP (most were EINVAL) for a failed
drm_core_check_feature(). Update everybody to report ENOTSUPP.
v3: ENOTSUPP is an internal errno! Its value (524) does not correspond
to a POSIX errno; the one we want is ENOTSUP. However,
uapi/asm-generic/errno.h doesn't include ENOTSUP but man errno says
"ENOTSUP and EOPNOTSUPP have the same value on Linux,
but according to POSIX.1 these error values should be
distinct."
so use EOPNOTSUPP as its equivalent.
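As an illustration, the common pattern at the top of a kms ioctl now looks
like this (the surrounding ioctl is a made-up example, not a real handler):

  int mydrv_example_ioctl(struct drm_device *dev, void *data,
                          struct drm_file *file_priv)
  {
      /* Feature not supported on this HW/driver combination: report
       * EOPNOTSUPP (ENOTSUP to userspace) instead of EINVAL or ENODEV. */
      if (!drm_core_check_feature(dev, DRIVER_MODESET))
          return -EOPNOTSUPP;

      /* ... regular parameter validation keeps returning -EINVAL ... */
      return 0;
  }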
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> #v2
Link: https://patchwork.freedesktop.org/patch/msgid/20180913192050.24812-1-chris@chris-wilson.co.uk
Dma-bufs should already be device coherent, as they are only pulled into
the CPU domain via the begin/end cpu_access calls. As we cache the mapping
set up by dma_map_sg, a CPU sync at this point will not actually guarantee
proper coherency on non-coherent architectures, so we may as well stop
pretending.
This is an important performance fix for architectures which need explicit
cache synchronization and userspace doing lots of dma-buf imports.
This improves Weston performance on Etnaviv by 5x; before this patch, more than 90%
of Weston CPU time was spent synchronizing caches for buffers which are
already device coherent.
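Concretely, this amounts to mapping with DMA_ATTR_SKIP_CPU_SYNC in the PRIME
attach path; a rough sketch of the mapping call (not the literal diff):

  /* Map for the importing device without flushing/invalidating CPU caches;
   * CPU coherency is handled by begin/end_cpu_access when userspace actually
   * touches the buffer. */
  if (!dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents,
                        DMA_BIDIRECTIONAL, DMA_ATTR_SKIP_CPU_SYNC)) {
      ret = -ENOMEM;
      goto err_free_sg;
  }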
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20171130173428.8666-1-l.stach@pengutronix.de
Reference counting functions in the kernel typically use get/put suffixes. For
consistency with this coding style, introduce drm_dev_get() and drm_dev_put().
All callers of the drm_dev_ref() API have been converted in this patch, and it
has therefore been dropped, while the drm_dev_unref() API, which still has a
non-trivial number of users, remains for compatibility.
The semantic patch scripts/coccinelle/api/drm-get-put.cocci has been updated
with the new helper, for conversion of drm_dev_unref() to drm_dev_put().
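At a call site the conversion is a straight rename (before/after sketch):

  /* before */
  drm_dev_ref(ddev);
  drm_dev_unref(ddev);

  /* after */
  drm_dev_get(ddev);
  drm_dev_put(ddev);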
Signed-off-by: Aishwarya Pant <aishpant@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/6babda56134035a98220d5d37a4fd4048df214ce.1506413698.git.aishpant@gmail.com
For consistency with other reference counting APIs in the kernel, add
drm_gem_object_get() and drm_gem_object_put(), as well as an unlocked
variant of the latter, to reference count GEM buffer objects.
Compatibility aliases are added to keep existing code working. To help
speed up the transition, all the instances of the old functions in the
DRM core are already replaced in this commit.
The existing semantic patch for the DRM subsystem-wide conversion is
extended to account for these new helpers.
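The mapping between old and new names (the old ones remain as compatibility
aliases for now):

  drm_gem_object_reference(obj);            ->  drm_gem_object_get(obj);
  drm_gem_object_unreference(obj);          ->  drm_gem_object_put(obj);
  drm_gem_object_unreference_unlocked(obj); ->  drm_gem_object_put_unlocked(obj);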
Reviewed-by: Sean Paul <seanpaul@chromium.org>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170228144643.5668-6-thierry.reding@gmail.com
Currently the reference for the dmabuf->obj is incremented for the
dmabuf in drm_gem_prime_handle_to_fd() (at the high-level userspace
interface), but is released in drm_gem_dmabuf_release() (the low-level
handler). Improve the symmetry of the dmabuf->obj ownership by acquiring
the reference in drm_gem_dmabuf_export(). This makes it easier to use
the prime functions directly.
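Roughly, drm_gem_dmabuf_export() now looks like this (simplified sketch;
exp_info->priv is the GEM object being exported):

  struct dma_buf *drm_gem_dmabuf_export(struct drm_device *dev,
                                        struct dma_buf_export_info *exp_info)
  {
      struct dma_buf *dma_buf;

      dma_buf = dma_buf_export(exp_info);
      if (IS_ERR(dma_buf))
          return dma_buf;

      /* Acquire the obj reference here, paired with the put in
       * drm_gem_dmabuf_release(), so callers of the prime helpers no
       * longer have to transfer their own reference. */
      drm_gem_object_reference(exp_info->priv);

      return dma_buf;
  }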
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Update kerneldoc.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20161207214527.22533-1-chris@chris-wilson.co.uk
Currently we use a linear walk to lookup a handle and return a dma-buf,
and vice versa. A long overdue TODO task is to convert that to a
hashtable. Since the initial implementation of dma-buf/prime, we now
have resizeable hashtables we can use (and now a future task is to RCU
enable the lookup!). However, this patch opts to use an rbtree instead
to provide O(lgN) lookups (and insertion, deletion). rbtrees were chosen
over using the RCU backed resizable hashtable to firstly avoid the
reallocations (rbtrees can be embedded entirely within the parent
struct) and to favour simpler code with predictable worst case
behaviour. In simple testing, the difference between using the constant
lookup and insertion of the rhashtable and the rbtree was less than 10%
of the wall time (igt/benchmarks/prime_lookup) - both are dramatic
improvements over the existing linear lists.
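A sketch of the rbtree-backed lookup (field and function names are
illustrative; the real struct lives in drm_prime.c and keeps one rb_node per
lookup direction):

  struct drm_prime_member {
      struct dma_buf *dma_buf;
      uint32_t handle;
      struct rb_node dmabuf_rb;   /* keyed by dma_buf pointer */
      struct rb_node handle_rb;   /* keyed by handle */
  };

  static struct dma_buf *prime_lookup_buf(struct rb_root *handles,
                                          uint32_t handle)
  {
      struct rb_node *node = handles->rb_node;

      while (node) {
          struct drm_prime_member *member;

          member = rb_entry(node, struct drm_prime_member, handle_rb);
          if (member->handle == handle)
              return member->dma_buf;
          else if (member->handle < handle)
              node = node->rb_right;
          else
              node = node->rb_left;
      }

      return NULL;
  }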
v2: Favour rbtree over rhashtable
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=94631
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sean Paul <seanpaul@chromium.org>
Cc: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Sean Paul <seanpaul@chromium.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20160926204414.23222-1-chris@chris-wilson.co.uk
Currently drm_gem_prime_import() checks if gem_prime_import_sg_table()
is implemented in the DRM driver ops. However, this is not necessary for
internal imports (i.e. dma_buf->ops == &drm_gem_prime_dmabuf_ops
and obj->dev == dev), which only increment the reference count on the
respective GEM object.
This patch makes the helper check this condition only in case of
external imports, for which importing an sg table is indeed needed.
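A rough sketch of the resulting flow in the import helper (simplified, not
the exact diff):

  if (dma_buf->ops == &drm_gem_prime_dmabuf_ops) {
      obj = dma_buf->priv;
      if (obj->dev == dev) {
          /* Internal import: just take another reference on the
           * existing GEM object, no sg table needed. */
          drm_gem_object_get(obj);
          return obj;
      }
  }

  /* Only external imports actually need the driver hook. */
  if (!dev->driver->gem_prime_import_sg_table)
      return ERR_PTR(-EINVAL);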
Signed-off-by: Tomasz Figa <tfiga@chromium.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
At present, dma_buf_export() takes a series of parameters, which
makes it difficult to add any new parameters for exporters, if required.
Make it simpler by moving all these parameters into a struct, and pass
the struct * as parameter to dma_buf_export().
While at it, unite dma_buf_export_named() with dma_buf_export(), and
change all callers accordingly.
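An exporter now fills in a dma_buf_export_info and passes a pointer to it; a
minimal sketch (my_dmabuf_ops is a placeholder for the exporter's ops):

  DEFINE_DMA_BUF_EXPORT_INFO(exp_info);

  exp_info.ops = &my_dmabuf_ops;
  exp_info.size = size;
  exp_info.flags = O_RDWR;
  exp_info.priv = obj;

  dmabuf = dma_buf_export(&exp_info);
  if (IS_ERR(dmabuf))
      return PTR_ERR(dmabuf);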
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Daniel Thompson <daniel.thompson@linaro.org>
Acked-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
Acked-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
drm: Miscellaneous fixes for v3.19-rc1
This is a small collection of fixes that I've been carrying around for a
while now. Many of these have been posted and reviewed or acked. The few
that haven't been, I deemed too trivial to bother with.
* tag 'drm/fixes/for-3.19-rc1' of git://people.freedesktop.org/~tagr/linux:
video/hdmi: Relicense header under MIT license
drm/gma500: mdfld: Reuse video/mipi_display.h
drm: Make drm_mode_create_tv_properties() signature consistent
drm: Implement drm_get_pci_dev() dummy for !PCI
drm/prime: Use unsigned type for number of pages
drm/gem: Fix typo in kerneldoc
drm: Use const data when creating blob properties
drm: Use size_t for blob property sizes
The number of pages can never be negative, so an unsigned type is
enough. This also matches the type of the n_pages argument of the
sg_alloc_table_from_pages() function.
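For reference, the matching allocation helper takes an unsigned count
(signature roughly as declared in include/linux/scatterlist.h):

  int sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
                                unsigned int n_pages, unsigned int offset,
                                unsigned long size, gfp_t gfp_mask);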
Signed-off-by: Thierry Reding <treding@nvidia.com>
This patch fixes spelling typos found in drm.xml.
Because the file is generated from comments in the
source code, the typos have to be fixed within the source files.
Signed-off-by: Masanari Iida <standby24x7@gmail.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This way drivers can't grow crazy ideas any more, and it also
helps a bit in reviewing EXPORT_SYMBOLS.
v2: Even more stuff. Unfortunately we can't move drm_vm_open_locked
because exynos does some horrible stuff with it.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
For giant hilarity the DocBook reference overview is only generated
when in a level 2 section, not in a level 3 section. So we need to
move this up a bit as a side-by-side section to the main PRIME
documentation.
Whatever.
To have a complete set of references add the missing kerneldoc for all
functions exported to modules with the exception of the file private
init/destroy functions - drivers have no business calling those, so
let's just drop the EXPORT_SYMBOL instead.
Also reflow the function parameters to align correctly and break at 80
chars - my OCD couldn't stand them while writing the kerneldoc ;-)
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
dma_buf_map_attachment and dma_buf_vmap can return NULL or
ERR_PTR on an error. This encourages a common buggy pattern in
callers:

  sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
  if (IS_ERR_OR_NULL(sgt))
      return PTR_ERR(sgt);
This causes the caller to return 0 on an error. IS_ERR_OR_NULL
is almost always a sign of poorly-defined error handling.
This patch converts dma_buf_map_attachment to always return
ERR_PTR, and fixes the callers that incorrectly handled NULL.
There are a few more callers that were not checking for NULL
at all, which would have dereferenced a NULL pointer later.
There are also a few more callers that correctly handled NULL
and ERR_PTR differently, I left those alone but they could also
be modified to delete the NULL check.
This patch also converts dma_buf_vmap to always return NULL.
All the callers to dma_buf_vmap only check for NULL, and would
have dereferenced an ERR_PTR and panic'd if one was ever
returned. This is not consistent with the rest of the dma buf
APIs, but matches the expectations of all of the callers.
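With both conversions in place, the idiomatic caller patterns become (sketch):

  sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
  if (IS_ERR(sgt))
      return PTR_ERR(sgt);    /* always a real errno, never NULL */

  vaddr = dma_buf_vmap(dmabuf);
  if (!vaddr)
      return -ENOMEM;         /* vmap reports failure only as NULL */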
Signed-off-by: Colin Cross <ccross@android.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
... not only when the dma-buf is freshly created. In contrived
examples someone else could have exported/imported the dma-buf already
and handed us the gem object with a flink name. If such an object gets
reexported as a dma_buf we won't have it in the handle cache already,
which breaks the guarantee that for dma-buf imports we always hand
back an existing handle if there is one.
This is exercised by igt/prime_self_import/with_one_bo_two_files
Now if we extend the locked sections just a notch more we can also
plug the racy buf/handle cache setup in handle_to_fd:
If evil userspace races a concurrent gem close against a prime export
operation we can end up tearing down the gem handle before the dma buf
handle cache is set up. When handle_to_fd gets around to adding the
handle to the cache there will be no one left to clean it up,
effectively leaking the bo (and the dma-buf, since the handle cache
holds a ref on the dma-buf):
Thread A                                Thread B
handle_to_fd:
  lookup gem object from handle
  creates new dma_buf
                                        gem_close on the same handle
                                        obj->dma_buf is set, but file priv buf
                                        handle cache has no entry
                                        obj->handle_count drops to 0
  drm_prime_add_buf_handle sets up the handle cache
-> We have a dma-buf reference in the handle cache, but since the
handle_count of the gem object already dropped to 0 no one will clean
it up. When closing the drm device fd we'll hit the WARN_ON in
drm_prime_destroy_file_private.
The important change is to extend the critical section of the
filp->prime.lock to cover the gem handle lookup. This serializes with
a concurrent gem handle close.
This leak is exercised by igt/prime_self_import/export-vs-gem_close-race
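The shape of the fix in handle_to_fd, with the gem handle lookup pulled
inside prime.lock (heavily simplified sketch, argument lists abbreviated):

  mutex_lock(&file_priv->prime.lock);

  obj = drm_gem_object_lookup(file_priv, handle);
  if (!obj) {
      ret = -ENOENT;
      goto out_unlock;
  }

  /* export the dma-buf and insert it into the handle cache while still
   * holding prime.lock, so a concurrent gem_close on the same handle
   * cannot slip in between the lookup and the cache insertion */

out_unlock:
  mutex_unlock(&file_priv->prime.lock);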
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
... and move it to the top of the function to avoid a forward
declaration.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The export dma-buf cache is semantically similar to a flink name. So
semantically it makes sense to treat it the same and remove the name
(i.e. the dma_buf pointer) and its references when the last gem handle
disappears.
Again we need to be careful, but double so: Not just could someone
race and export with a gem close ioctl (so we need to recheck
obj->handle_count again when assigning the new name), but multiple
exports can also race against each other. This is prevented by
holding the dev->object_name_lock across the entire section which
touches obj->dma_buf.
With the new scheme we also need to reinstate the obj->dma_buf link at
import time (in case the only reference userspace has held in-between
was through the dma-buf fd and not through any native gem handle). For
simplicity we don't check whether it's a native object but
unconditionally set up that link - with the new scheme of removing the
obj->dma_buf reference when the last handle disappears we can do that.
To make it clear that this is not just for exported buffers anymore,
also rename it from export_dma_buf to dma_buf.
To make sure that no one can race an fd_to_handle or handle_to_fd with
gem_close we use the same tricks as in flink of extending the
dev->object_name_lock critical section. With this change we finally
have a guaranteed 1:1 relationship (at least for native objects)
between gem objects and dma-bufs, even accounting for races (which can
happen since the dma-buf itself holds a reference while in-flight).
This prevents igt/prime_self_import/export-vs-gem_close-race from
Oopsing the kernel. There is still a leak though since the per-file
priv dma-buf/handle cache handling is racy. That will be fixed in a
later patch.
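A simplified sketch of the export side under the extended
dev->object_name_lock critical section:

  mutex_lock(&dev->object_name_lock);

  /* Re-check under the lock: a concurrent gem close may have dropped the
   * last handle while the dma-buf was being set up. */
  if (!obj->handle_count) {
      mutex_unlock(&dev->object_name_lock);
      goto err;
  }

  obj->dma_buf = dmabuf;
  get_dma_buf(obj->dma_buf);

  mutex_unlock(&dev->object_name_lock);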
v2: Remove the bogus dma_buf_put from the export_and_register_object
failure path if we've raced with the handle count dropping to 0.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>