This commit adds a new led_cdev flag, LED_PANIC_INDICATOR, which
allows marking a specific LED to be switched to the "panic"
trigger on a kernel panic.
This is useful because it lets the user assign a regular trigger
to a given LED and still have that LED blink on a kernel panic.
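For example, a driver can opt one of its LEDs into this behaviour at
registration time; a minimal sketch (my_led_probe(),
my_led_brightness_set() and the name/trigger strings are made up for
illustration):

  static int my_led_probe(struct platform_device *pdev)
  {
          struct led_classdev *cdev;

          cdev = devm_kzalloc(&pdev->dev, sizeof(*cdev), GFP_KERNEL);
          if (!cdev)
                  return -ENOMEM;

          cdev->name = "status";
          cdev->default_trigger = "heartbeat";  /* regular trigger */
          cdev->flags |= LED_PANIC_INDICATOR;   /* blink via "panic" trigger on panic */
          cdev->brightness_set = my_led_brightness_set;

          return devm_led_classdev_register(&pdev->dev, cdev);
  }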
Signed-off-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
Reviewed-by: Matthias Brugger <mbrugger@suse.com>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Jacek Anaszewski <j.anaszewski@samsung.com>
fsl-dcu pixel clock polarity support
* 'for-next' of http://git.agner.ch/git/linux-drm-fsl-dcu:
drm/fsl-dcu: use bus_flags for pixel clock polarity
drm: introduce bus_flags in drm_display_info
This is the first big radeon/amdgpu pull request for 4.7. Highlights:
- Polaris support in amdgpu
Current display stack is on par with other ASICs; DAL is required for advanced features
Power management support
Support for GFX, Compute, SDMA, UVD, VCE
- VCE and UVD init/fini cleanup in radeon
- GPUVM improvements
- Scheduler improvements
- Clockgating improvements
- Powerplay improvements
- TTM changes to support driver specific LRU update mechanism
- Radeon support for new Mesa features
- ASYNC pageflip support for radeon
- Lots of bug fixes and code cleanups
* 'drm-next-4.7' of git://people.freedesktop.org/~agd5f/linux: (180 commits)
drm/amdgpu: Replace rcu_assign_pointer() with RCU_INIT_POINTER()
drm/amdgpu: use drm_mode_vrefresh() rather than mode->vrefresh
drm/amdgpu/uvd6: add bypass support for fiji (v3)
drm/amdgpu/fiji: set UVD CG state when enabling UVD DPM (v2)
drm/powerplay: add missing clockgating callback for tonga
drm/amdgpu: Constify some tables
drm/amd/powerplay: Delete dead struct declaration
drm/amd/powerplay/hwmgr: don't add invalid voltage
drm/amd/powerplay/hwmgr: prevent VDDC from exceeding 2V
MAINTAINERS: Remove unneded wildcard for the Radeon/AMDGPU drivers
drm/radeon: add cayman VM support for append packet.
drm/amd/amdgpu: Add debugfs entries for smc/didt/pcie
drm/amd/amdgpu: Drop print_status callbacks.
drm/amd/powerplay: revise reading/writing pptable on Polaris10
drm/amd/powerplay: revise reading/writing pptable on Tonga
drm/amd/powerplay: revise reading/writing pptable on Fiji
drm/amd/powerplay: revise caching the soft pptable and add it's size
drm/amd/powerplay: add dpm force multiple levels on cz/tonga/fiji/polaris (v2)
drm/amd/powerplay: fix fan speed percent setting error on Polaris10
drm/amd/powerplay: fix bug dpm can't work when resume back on Polaris
...
Merge fixes from Andrew Morton:
"14 fixes"
* emailed patches from Andrew Morton <akpm@linux-foundation.org>:
byteswap: try to avoid __builtin_constant_p gcc bug
lib/stackdepot: avoid to return 0 handle
mm: fix kcompactd hang during memory offlining
modpost: fix module autoloading for OF devices with generic compatible property
proc: prevent accessing /proc/<PID>/environ until it's ready
mm/zswap: provide unique zpool name
mm: thp: kvm: fix memory corruption in KVM with THP enabled
MAINTAINERS: fix Rajendra Nayak's address
mm, cma: prevent nr_isolated_* counters from going negative
mm: update min_free_kbytes from khugepaged after core initialization
huge pagecache: mmap_sem is unlocked when truncation splits pmd
rapidio/mport_cdev: fix uapi type definitions
mm: memcontrol: let v2 cgroups follow changes in system swappiness
mm: thp: correct split_huge_pages file permission
The dma_alloc_coherent() function returns a virtual address which can
be used for coherent access to the underlying memory. On some
architectures, like arm64, undefined behavior results if this memory is
also accessed via virtual mappings that are not coherent. Because of
their undefined nature, operations like virt_to_page() return garbage
when passed virtual addresses obtained from dma_alloc_coherent(). Any
subsequent mappings via vmap() of the garbage page values are unusable
and result in bad things like bus errors (synchronous aborts in ARM64
speak).
The mlx4 driver contains code that does the equivalent of
vmap(virt_to_page(dma_alloc_coherent(...))), which results in an oops
when the device is opened.
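Roughly (an illustrative sketch, not the actual mlx4 code; dev and size
are placeholders):

  dma_addr_t dma;
  void *buf = dma_alloc_coherent(dev, size, &dma, GFP_KERNEL);

  /* undefined for coherent memory on arm64: may return a garbage page */
  struct page *page = virt_to_page(buf);

  /* a mapping built from that garbage page is unusable -> bus error */
  void *va = vmap(&page, 1, VM_MAP, PAGE_KERNEL);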
Prevent the Ethernet driver from running this problematic code by
forcing it to allocate contiguous memory. The InfiniBand driver first
tries to allocate contiguous memory and, if that fails, falls back to
working with fragmented memory.
Signed-off-by: Haggai Abramovsky <hagaya@mellanox.com>
Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Reported-by: David Daney <david.daney@cavium.com>
Tested-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit bf6993276f ("jbd2: Use tracepoints for history file")
removed the members j_history, j_history_max and j_history_cur from struct
handle_s, but their descriptions were left behind. Remove them.
Signed-off-by: Luis de Bethencourt <luisbg@osg.samsung.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Jan Kara <jack@suse.cz>
Updates from Boris Brezillon:
This pull request contains the following infrastructure changes:
* introduction of the ECC algo concept to complement the existing ECC mode
* replacement of the nand_ecclayout infrastructure by something more
future-proof.
* addition of an mtd-activity led trigger to replace the nand-activity
one
And a bunch of specific NAND driver improvements/fixes. Here are the
changes that are worth mentioning:
* rework of the OMAP GPMC and NAND drivers
* prepare the sunxi NAND driver to receive DMA support
* handle bitflips in erased pages on GPMI revisions that do not support
this in hardware.
* tag 'nand/for-4.7' of github.com:linux-nand/linux: (152 commits)
mtd: brcmnand: respect ECC algorithm set by NAND subsystem
gpmi-nand: Handle ECC Errors in erased pages
Documentation: devicetree: deprecate "soft_bch" nand-ecc-mode value
mtd: nand: add support for "nand-ecc-algo" DT property
mtd: mtd: drop NAND_ECC_SOFT_BCH enum value
mtd: drop support for NAND_ECC_SOFT_BCH as "soft_bch" mapping
mtd: nand: read ECC algorithm from the new field
mtd: nand: fsmc: validate ECC setup by checking algorithm directly
mtd: nand: set ECC algorithm to Hamming on fallback
staging: mt29f_spinand: set ECC algorithm explicitly
CRIS v32: nand: set ECC algorithm explicitly
mtd: nand: atmel: set ECC algorithm explicitly
mtd: nand: davinci: set ECC algorithm explicitly
mtd: nand: bf5xx: set ECC algorithm explicitly
mtd: nand: omap2: Fix high memory dma prefetch transfer
mtd: nand: omap2: Start dma request before enabling prefetch
mtd: nandsim: add __init attribute
mtd: nand: move of_get_nand_xxx() helpers into nand_base.c
mtd: nand: sh_flctl: rely on generic DT parsing done in nand_scan_ident()
mtd: nand: mxc: rely on generic DT parsing done in nand_scan_ident()
...
This is another attempt to avoid a regression in wwn_to_u64() after it
started using get_unaligned_be64(), which in turn ran into a bug in
gcc-4.9 through 6.1.
The regression got introduced due to the combination of two separate
workarounds (commits e3bde9568d: "include/linux/unaligned: force
inlining of byteswap operations" and ef3fb2422f: "scsi: fc: use
get/put_unaligned64 for wwn access") that each try to sidestep distinct
problems with gcc behavior (code growth and increased stack usage).
Unfortunately after both have been applied, a more serious gcc bug has
been uncovered, leading to incorrect object code that discards part of a
function and causes undefined behavior.
Since part of the problem is how __builtin_constant_p gets evaluated on
an argument passed by reference into an inline function, this patch
avoids the use of __builtin_constant_p() for all architectures that set
CONFIG_ARCH_USE_BUILTIN_BSWAP. Most architectures do not set
ARCH_SUPPORTS_OPTIMIZED_INLINING, which means they probably do not
suffer from the problem in the qla2xxx driver, but they might still run
into it elsewhere.
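For reference, the pattern at the heart of this is roughly the
following (simplified; the exact macro names and structure in
include/uapi/linux/swab.h may differ slightly):

  #ifdef __HAVE_BUILTIN_BSWAP64__
  /* chosen when CONFIG_ARCH_USE_BUILTIN_BSWAP is set: the builtin folds
   * constants on its own, no __builtin_constant_p() needed */
  #define __swab64(x) (__u64)__builtin_bswap64((__u64)(x))
  #else
  /* fallback: evaluate __builtin_constant_p() to pick a compile-time
   * swap, which is where the gcc bug described above can bite */
  #define __swab64(x)                             \
          (__builtin_constant_p((__u64)(x)) ?     \
           ___constant_swab64(x) :                \
           __fswab64(x))
  #endif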
Both of the original workarounds were only merged in the 4.6 kernel, and
the bug that is fixed by this patch should only appear if both are
there, so we probably don't need to backport the fix. On the other
hand, it works by simplifying the code path and should not have any
negative effects.
[arnd@arndb.de: fix older gcc warnings]
(http://lkml.kernel.org/r/12243652.bxSxEgjgfk@wuerfel)
Link: https://lkml.org/lkml/headers/2016/4/12/1103
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66122
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70232
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70646
Fixes: e3bde9568d ("include/linux/unaligned: force inlining of byteswap operations")
Fixes: ef3fb2422f ("scsi: fc: use get/put_unaligned64 for wwn access")
Link: http://lkml.kernel.org/r/1780465.XdtPJpi8Tt@wuerfel
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Tested-by: Josh Poimboeuf <jpoimboe@redhat.com> # on gcc-5.3
Tested-by: Quinn Tran <quinn.tran@qlogic.com>
Cc: Martin Jambor <mjambor@suse.cz>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: James Bottomley <James.Bottomley@hansenpartnership.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Thomas Graf <tgraf@suug.ch>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Himanshu Madhani <himanshu.madhani@qlogic.com>
Cc: Jan Hubicka <hubicka@ucw.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
After the THP refcounting change, obtaining a compound page from
get_user_pages() no longer allows us to assume the entire compound page
is immediately mappable from a secondary MMU.
A secondary MMU doesn't want to call get_user_pages() more than once
for each compound page just to find out whether it can map the whole
compound page. So a secondary MMU needs to know, from a single
get_user_pages() invocation, when it can immediately map the entire
compound page, to avoid a flood of unnecessary secondary MMU faults and
spurious atomic_inc()/atomic_dec() calls (pages don't have to be pinned
by MMU notifier users).
Ideally, instead of the page->_mapcount < 1 check, get_user_pages()
should return the granularity of the "page" mapping in the "mm" passed
to get_user_pages(). However, it is a non-trivial change to pass the
"pmd" status belonging to the "mm" walked by get_user_pages() up the
stack (up to the caller of get_user_pages()). So the fix simply checks
whether there is not a single pte mapping on the page returned by
get_user_pages(), in which case the caller can assume that the whole
compound page is mapped in the current "mm" (by a pmd_trans_huge()
mapping). In that case the entire compound page is safe to map into the
secondary MMU without additional get_user_pages() calls on the
surrounding tail/head pages. In addition to being faster, not having to
run other get_user_pages() calls also reduces the memory footprint of
the secondary MMU fault in case the pmd split happened as a result of
memory pressure.
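In code, the described check amounts to something like the sketch below
(the helper name is made up here; it is not necessarily the helper this
patch adds):

  static bool page_has_no_pte_mapping(struct page *page)
  {
          /*
           * _mapcount == -1 means no pte maps this (sub)page; for a
           * PageTransCompound() page that implies it is only mapped as
           * a whole, via pmd_trans_huge(), in the current mm.
           */
          return PageTransCompound(page) &&
                 atomic_read(&page->_mapcount) < 0;
  }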
Without this fix, after a MADV_DONTNEED (as invoked by QEMU during
postcopy live migration or ballooning) or after generic swapping (with a
failure in split_huge_page() that would only result in pmd splitting and
not a physical page split), KVM would map the whole compound page into
the shadow pagetables, even though regular faults or userfaults (like
UFFDIO_COPY) may map regular pages into the primary MMU as a result of
the pte faults, leading to guest mode and userland mode going out of
sync and not working on the same memory at all times.
Any other secondary MMU notifier manager (KVM is just one of many MMU
notifier users) will need the same information if it doesn't want to
run a flood of get_user_pages_fast() calls and it supports multiple
granularities in its secondary MMU mappings, so I think it is justified
to expose this not just to KVM.
The other option would be to move transparent_hugepage_adjust() to
mm/huge_memory.c, but that function currently has all kinds of KVM data
structures in it, so it's definitely not cut-and-paste work, and I
couldn't come up with a fix cleaner than this one for 4.6.
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: "Li, Liang Z" <liang.z.li@intel.com>
Cc: Amit Shah <amit.shah@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cgroup2 currently doesn't have a per-cgroup swappiness setting. We
might want to add one later - that's a different discussion - but until
we do, the cgroups should always follow the system setting. Otherwise
it will be unchangeably set to whatever the ancestor inherited from the
system setting at the time of cgroup creation.
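Conceptually, the swappiness lookup then behaves like this sketch (not
necessarily the exact helper as merged):

  static inline int mem_cgroup_swappiness(struct mem_cgroup *memcg)
  {
          /* cgroup2 has no per-cgroup swappiness: follow the system setting */
          if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
                  return vm_swappiness;

          /* the root group also follows the system setting */
          if (mem_cgroup_disabled() || !memcg->css.parent)
                  return vm_swappiness;

          return memcg->swappiness;
  }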
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Vladimir Davydov <vdavydov@virtuozzo.com>
Cc: <stable@vger.kernel.org> [4.5]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull asm-generic syscall fix from Arnd Bergmann:
"My last pull request for asm-generic had just one patch that added two
new system calls to asm/unistd.h, but unfortunately it turned out to
be wrong, pointing arch/tile compat mode at the native handlers rather
than the compat ones.
This was spotted by Yury Norov, who is working on ILP32 mode for
arch/arm64, which would have the same problem when merged. This fixes
the table to use the correct compat syscalls, like the other 64-bit
architectures do.
I'll try to find the time to come up with a solution that prevents
this problem from happening again, by allowing all future system calls
to just get added in a single file for use by all architectures"
* tag 'asm-generic-4.6' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic:
asm-generic: use compat version for preadv2 and pwritev2
This value should not be part of nand_ecc_modes_t, as it specifies an
algorithm, not a mode. We have successfully introduced the new "algo"
field, which is respected now.
Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Now that all drivers go through nand_set_flash_node() to parse the generic
NAND properties, we can move all of_get_nand_xxx() helpers into
nand_base.c, make them static, and remove of_mtd.c and of_mtd.h.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Now that all MTD drivers have moved to the mtd_ooblayout_ops model we can
safely remove the struct nand_ecclayout definition, and all the remaining
places where it was still used.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Now that all NAND drivers have switched to mtd_ooblayout_ops, we can kill
the ecc->layout field.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Implementing the mtd_ooblayout_ops interface is the new way of exposing
ECC/OOB layout to MTD users. Modify the onenand drivers to switch to this
approach.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Now that mtd_ooblayout_ecc() returns the ECC byte position using the
OOB free method, we can get rid of the fsmc_nand_eccplace struct.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Implementing the mtd_ooblayout_ops interface is the new way of exposing
ECC/OOB layout to MTD users.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Commit 326e1dbb57 ("block: remove management of bi_remaining when
restoring original bi_end_io") made bio_inc_remaining() private to bio.c
because the only use-case that made sense was confined to the
bio_chain() interface.
Since that time DM thinp went on to use bio_chain() in its relatively
complex implementation of async discard support. That implementation,
even when converted over to use the new async __blkdev_issue_discard()
interface, depends on deferred completion of the original discard bio --
which is most appropriately implemented using bio_inc_remaining().
DM thinp foolishly duplicated bio_inc_remaining(), local to dm-thin.c as
__bio_inc_remaining(), so re-exporting bio_inc_remaining() allows us to
put an end to that foolishness.
All said, bio_inc_remaining() should really only be used in conjunction
with bio_chain(). It isn't intended for generic bio reference counting.
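The intended usage pattern looks roughly like this (an illustrative
sketch, not the dm-thin code):

  struct bio *child = bio_alloc(GFP_NOIO, 0);

  bio_chain(child, parent);       /* child holds one completion ref on parent */
  bio_inc_remaining(parent);      /* driver holds one more, for deferred work  */
  generic_make_request(child);

  /* ... once the deferred work (e.g. the async discard) has finished: */
  bio_endio(parent);              /* drop the driver's extra reference */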
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Acked-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Introduce bus_flags to specify display bus properties like signal
polarities. This is useful for parallel display buses, e.g. to
specify the pixel clock or data enable polarity.
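A driver would then advertise e.g. the pixel clock edge and data enable
polarity through the connector's display info, along these lines
(illustrative fragment; the DRM_BUS_FLAG_* names are assumed to be the
ones introduced for bus_flags):

  struct drm_display_info *info = &connector->display_info;

  /* pixel data on the negative clock edge, data enable active high */
  info->bus_flags = DRM_BUS_FLAG_PIXDATA_NEGEDGE | DRM_BUS_FLAG_DE_HIGH;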
Suggested-by: Thierry Reding <thierry.reding@gmail.com>
Acked-by: Philipp Zabel <p.zabel@pengutronix.de>
Acked-by: Manfred Schlaegl <manfred.schlaegl@gmx.at>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Stefan Agner <stefan@agner.ch>
On mx6ul the General Purpose Register 1 (GPR1) contains the following
bits for configuring the direction of the SAI MCLKs:
SAI1_MCLK_DIR, SAI2_MCLK_DIR, SAI3_MCLK_DIR
Introduce the "fsl,sai-mclk-direction-output" optional property to allow
configuring the SAI_MCLK outputs.
Tested on an imx6ul-evk board.
Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com>
Acked-by: Nicolin Chen <nicoleotsuka@gmail.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Currently, we support set names of up to 16 bytes. Align this with the
maximum length we can use in ipset, to make it easier when considering
a migration to nf_tables.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
This patch introduces nf_ct_resolve_clash() to resolve race condition on
conntrack insertions.
This is particularly a problem for connection-less protocols such as
UDP, with no initial handshake. Two or more packets may race to insert
the entry resulting in packet drops.
Another problematic scenario is packets enqueued to userspace via
NFQUEUE after the raw table, which makes it easier to trigger this
race.
To resolve this, the idea is to reset the conntrack entry to the one
that won the race. Packet and byte counters are also merged.
The 'insert_failed' stat still accounts for this situation; after this
patch, the drop counter is bumped whenever we drop packets, so we can
watch for unresolved clashes.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
We already include the netns address in the hash and compare the netns
pointers during lookup, so even if namespaces have overlapping
addresses, entries will be spread across the table.
Assuming a 64k bucket size, this change saves 0.5 MB per namespace on a
64-bit system.
The NAT bysrc and expectation hashes are still per namespace; those
will be changed soon as well.
A future patch will also make the conntrack object slab cache global again.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
ACPICA commit b2294cae776f5a66a7697414b21949d307e6856f
This patch removes unwanted spaces for typedef. This solution doesn't cover
function types.
Note that the linuxized result of this commit is very large and should
have many conflicts against the current Linux upstream. Thus the
linuxized result of this commit and the commits around it need to be
modified manually in order to be merged into the Linux upstream. Since
this is very costly, we should do it only once; if we can't ensure it is
done only once, we need to revert the Linux code to the wrong
indentation result before merging the linuxized result of this commit.
Lv Zheng.
Link: https://github.com/acpica/acpica/commit/b2294cae
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Signed-off-by: Bob Moore <robert.moore@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Commit:
2665784850 ("perf/core: Verify we have a single perf_hw_context PMU")
forcefully prevents multiple PMUs from sharing perf_hw_context, as this
generally doesn't make sense. It is a common bug for uncore PMUs to
use perf_hw_context rather than perf_invalid_context, which this detects.
However, systems exist with heterogeneous CPUs (and hence heterogeneous
HW PMUs), for which sharing perf_hw_context is necessary, and possible
in some limited cases.
To make this work we have to perform some gymnastics, as we did in these
commits:
66eb579e66 ("perf: allow for PMU-specific event filtering")
c904e32a69 ("arm: perf: filter unschedulable events")
To allow those systems to work, we must allow PMUs for heterogeneous
CPUs to share perf_hw_context, though we must still disallow sharing
otherwise to detect the common misuse of perf_hw_context.
This patch adds a new PERF_PMU_CAP_HETEROGENEOUS_CPUS for this, updates
the core logic to account for this, and makes use of it in the arm_pmu
code that is used for systems with heterogeneous CPUs. Comments are
added to make the rationale clear and hopefully avoid accidental abuse.
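In the arm_pmu code this boils down to advertising the new capability
on each CPU PMU before registration, roughly:

  /* in the per-CPU-PMU init path: this PMU may share perf_hw_context
   * with the other heterogeneous CPU PMUs */
  cpu_pmu->pmu.capabilities |= PERF_PMU_CAP_HETEROGENEOUS_CPUS;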
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/20160426103346.GA20836@leverpostej
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Many instruction tracing PMUs out there support address range-based
filtering, which would, for example, generate trace data only for a
given range of instruction addresses, which is useful for tracing
individual functions, modules or libraries. Other PMUs may also
utilize this functionality to allow filtering to or filtering out
code at certain address ranges.
This patch introduces the interface for userspace to specify these
filters and for the PMU drivers to apply these filters to hardware
configuration.
The user interface is an ASCII string that is passed via an ioctl() and
specifies address ranges within certain object files or within the
kernel. There is no special treatment for kernel modules yet, but it
might be a worthy pursuit.
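From userspace this could look like the following sketch, assuming the
filter string is passed through the existing PERF_EVENT_IOC_SET_FILTER
ioctl (the filter syntax shown is only illustrative):

  #include <sys/ioctl.h>
  #include <linux/perf_event.h>

  static int set_addr_filter(int perf_fd)
  {
          /* trace only a 0x2000-byte range inside a given object file */
          const char *filter = "filter 0x4e20/0x2000@/usr/lib/libfoo.so";

          return ioctl(perf_fd, PERF_EVENT_IOC_SET_FILTER, filter);
  }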
The PMU driver interface basically adds two extra callbacks to the PMU
driver structure: one validates the filter configuration proposed by
the user against what the hardware is actually capable of doing, and
the other translates the hardware-independent filter configuration into
something that can be programmed into the hardware.
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: vince@deater.net
Link: http://lkml.kernel.org/r/1461771888-10409-6-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This uses the previous changes to add reference counts
to drm connector objects.
v2: move fbdev changes to their own patch.
add some kerneldoc
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Daniel Stone <daniels@collabora.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Ofc I promise just a few leftovers for drm-misc and somehow it's the
biggest pull. But really mostly trivial stuff:
- MAINTAINERS updates from Emil
- rename async to nonblock in atomic_commit to avoid the confusion between
nonblocking ioctl and async flip (= not vblank synced), from Maarten.
Needs to be regened with newer drivers, but probably only after -rc1 to
catch them all.
- actually lockless gem_object_free, plus acked driver conversion patches.
All the trickier prep stuff already is in drm-next.
- Noralf's nice work for generic defio support in our fbdev emulation.
Keeps the udl hack, and qxl is tested by Gerd.
* tag 'topic/drm-misc-2016-05-04' of git://anongit.freedesktop.org/drm-intel: (47 commits)
drm: Fixup locking WARN_ON mistake around gem_object_free_unlocked
drm/etnaviv: Use lockless gem BO free callback
drm/imx: Use lockless gem BO free callback
drm/radeon: Use lockless gem BO free callback
drm/amdgpu: Use lockless gem BO free callback
drm/gem: support BO freeing without dev->struct_mutex
MAINTAINERS: Add myself for the new VC4 (RPi GPU) graphics driver.
MAINTAINERS: Add a bunch of legacy (UMS) DRM drivers
MAINTAINERS: Add a few DRM drivers by Dave Airlie
MAINTAINERS: List the correct git repo for the Renesas DRM drivers
MAINTAINERS: Update the files list for the Renesas DRM drivers
MAINTAINERS: Update the files list for the Armada DRM driver
MAINTAINERS: Update the files list for the Rockchip DRM driver
MAINTAINERS: Update the files list for the Exynos DRM driver
MAINTAINERS: Add maintainer entry for the VMWGFX DRM driver
MAINTAINERS: Add maintainer entry for the MSM DRM driver
MAINTAINERS: Add maintainer entry for the Nouveau DRM driver
MAINTAINERS: Update the files list for the Etnaviv DRM driver
MAINTAINERS: Remove unneded wildcard for the i915 DRM driver
drm/atomic: Add WARN_ON when state->acquire_ctx is not set.
...