skb_gso_network_seglen is not enough for checking fragment sizes if
the skb is using GSO_BY_FRAGS, as we have to check frag by frag.
This patch introduces skb_gso_validate_mtu, based on the former, which
wraps that use case inside it, as all calls to skb_gso_network_seglen
were to validate whether the skb fits a given MTU, and improves the check.
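A rough sketch of the shape such a helper can take (details of the real
net/core/skbuff.c implementation may differ; skb_gso_network_seglen(),
skb_walk_frags() and skb_headlen() are existing helpers):

bool skb_gso_validate_mtu(const struct sk_buff *skb, unsigned int mtu)
{
	const struct skb_shared_info *shinfo = skb_shinfo(skb);
	const struct sk_buff *iter;
	unsigned int hlen;

	hlen = skb_gso_network_seglen(skb);

	if (shinfo->gso_size != GSO_BY_FRAGS)
		return hlen <= mtu;

	/* Undo the GSO_BY_FRAGS marker so only the header length remains. */
	hlen -= GSO_BY_FRAGS;

	/* Each frag skb must fit the MTU together with the headers. */
	skb_walk_frags(skb, iter) {
		if (hlen + skb_headlen(iter) > mtu)
			return false;
	}

	return true;
}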
Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Tested-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch allows segmenting an skb based on its frag sizes instead of
on a fixed value.
Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Tested-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Pull irq fixes from Thomas Gleixner:
- a few simple fixes for fallout from the recent gic-v3 changes
- a workaround for a Cavium thunderX erratum
- a bugfix for the pic32 irqchip to make external interrupts work properly
- a missing return value in the generic IPI management code
* 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
irqchip/irq-pic32-evic: Fix bug with external interrupts.
irqchip/gicv3-its: numa: Enable workaround for Cavium thunderx erratum 23144
irqchip/gic-v3: Fix quiescence check in gic_enable_redist
irqchip/gic-v3: Fix copy+paste mistakes in defines
irqchip/gic-v3: Fix ICC_SGI1R_EL1.INTID decoding mask
genirq: Fix missing return value in irq_destroy_ipi()
Pull timer bugfix from Thomas Gleixner:
"A single bugfix for the error check wreckage we introduced in the
merge window"
* 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
time: Make settimeofday error checking work again
Just fallout from switching from asciidoc to sphinx/rst.
v2: Found more. Also s/\//#/ in the vgpu ascii-art - sphinx treats
those as comments and switches to variable-width, which wrecks the
layout.
v3: Undo some of the hacks, rebasing onto latest version of Jani's
series fixed it.
Acked-by: Liviu Dudau <Liviu.Dudau@arm.com>
Acked-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Apparently not everyone has been super dutiful with updating this
stuff.
I still decided to leave out the documentation for all the *_property
pointers we have in drm_mode_config.
v2: Feedback from Liviu.
Acked-by: Liviu Dudau <Liviu.Dudau@arm.com>
Acked-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Move the contents of the video/omapdss.h header file to a local header
file under omapdrm/dss and remove the original global header. The omapfb
stack is using video/omapfb_dss.h, so this change completes the separation
of the two driver implementations.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Copy the content of video/omapdss.h to a new (video/omapfb_dss.h) header
file and convert the omapfb drivers to use this new file.
The new header file is needed to complete the separation of the omapdrm and
omapfb implementations of DSS.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
The num_devices, **devices and *default_device are leftovers from the past.
They can be removed as they are not used.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
The driver only supports a composite connection when booted in legacy mode,
so omap_dss_venc_type can be dropped from the pdata.
At the same time the video/omapdss.h include can be removed as it is no
longer needed.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
The panel is not used by any legacy board files so the legacy (pdata) boot
support can be dropped.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
The panel is not used by any legacy board files so the legacy (pdata) boot
support can be dropped.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
The panel is not used by any legacy board files so the legacy (pdata) boot
support can be dropped.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
The panel is not used by any legacy board files so the legacy (pdata) boot
support can be dropped.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
The panel is not used by any legacy board files so the legacy (pdata) boot
support can be dropped.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
The panel is not used by any legacy board files so the legacy (pdata) boot
support can be dropped.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
The panel is not used by any legacy board files so the legacy (pdata) boot
support can be dropped.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
The panel is not used by any legacy board files so the legacy (pdata) boot
support can be dropped.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
The panel is not used by any legacy board files so the legacy (pdata) boot
support can be dropped.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
omap_display_init() is implemented in mach-omap2/display.c, so the
declaration should have been there as well.
At the same time, change the board files to include display.h to avoid
build breakage.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Tony Lindgren <tony@atomide.com>
commit 93c667ca25
("of: *node argument to of_parse_phandle_with_args should be const")
made struct device_node *np const, but it covers the CONFIG_OF case
only; the !CONFIG_OF case needs it too.
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Rob Herring <robh@kernel.org>
This patch allows device drivers to initialize more than one reserved
memory region assigned to a given device. When a driver needs to use more
than one reserved memory region, it should allocate child devices and
initialize regions by index for each of its child devices.
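As an illustration of the by-index API, a hypothetical driver sketch
(child-device allocation and error handling are simplified; the child
device here is assumed to be registered elsewhere by the driver):

#include <linux/of_reserved_mem.h>
#include <linux/platform_device.h>

/* Assumed to have been allocated and registered elsewhere by the driver. */
static struct device *foo_child_dev;

static int foo_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	int ret;

	/* Bind the first "memory-region" entry (index 0) to the main device. */
	ret = of_reserved_mem_device_init_by_idx(dev, dev->of_node, 0);
	if (ret)
		return ret;

	/* A second region (index 1) goes to a child device, since each
	 * struct device can only have a single reserved region assigned.
	 */
	ret = of_reserved_mem_device_init_by_idx(foo_child_dev, dev->of_node, 1);
	if (ret)
		of_reserved_mem_device_release(dev);

	return ret;
}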
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Add a helper function for device drivers to set the DMA max_seg_size.
Setting it to the largest possible value lets the DMA-mapping API always
create contiguous mappings in the DMA address space. This is essential for
all devices that use the dma-contig videobuf2 memory allocator with shared
buffers.
Until now, the only case where vb2-dma-contig really 'worked' was when
userspace provided a USERPTR buffer which was in fact an mmapped
contiguous buffer from another v4l2/drm device. A DMABUF made of a
contiguous buffer also worked only when its exporter did not split it into
several chunks in the scatter-list. Any other buffer failed, regardless
of the arch/platform used and the presence of an IOMMU on the device bus.
This patch provides an interface to fix this issue.
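A minimal usage sketch of the new helper from a capture driver's probe()
(the 32-bit limit here is only an example):

#include <linux/dma-mapping.h>
#include <linux/platform_device.h>
#include <media/videobuf2-dma-contig.h>

static int cap_probe(struct platform_device *pdev)
{
	int ret;

	/* Allow arbitrarily large segments so dma_map_sg() can merge the
	 * whole buffer into one contiguous mapping in DMA address space.
	 */
	ret = vb2_dma_contig_set_max_seg_size(&pdev->dev, DMA_BIT_MASK(32));
	if (ret)
		return ret;

	/* ... regular vb2 queue and v4l2 device setup ... */
	return 0;
}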
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Commit:
78ce248faa ("efi: Iterate over efi.memmap in for_each_efi_memory_desc()")
introduced a regression for systems booted with the 'noefi' kernel option.
In particular, I observed an early kernel hang in efi_find_mirror()'s
for_each_efi_memory_desc() call. As we don't have an EFI memmap on this
system, we enter this iterator with the following parameters:
efi.memmap.map = 0, efi.memmap.map_end = 0, efi.memmap.desc_size = 28
... then for_each_efi_memory_desc_in_map() does the following comparison:
(md) <= (efi_memory_desc_t *)((m)->map_end - (m)->desc_size);
... where md = 0, (m)->map_end = 0 and (m)->desc_size = 28, but when we
subtract something from a NULL pointer a wraparound happens and we end up
returning an invalid pointer and crashing.
Fix it by using correct pointer arithmetic.
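The fix amounts to doing the bounds check in byte (void *) arithmetic on
the current descriptor instead of subtracting desc_size from a
possibly-NULL map_end; roughly (the exact macro layout in
include/linux/efi.h may differ):

/* Sketch of the fixed iterator; the old condition was
 *	(md) <= (efi_memory_desc_t *)((m)->map_end - (m)->desc_size)
 * which wraps around when map_end is NULL.
 */
#define for_each_efi_memory_desc_in_map(m, md)				   \
	for ((md) = (m)->map;						   \
	     ((void *)(md) + (m)->desc_size) <= (m)->map_end;		   \
	     (md) = (void *)(md) + (m)->desc_size)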
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: K. Y. Srinivasan <kys@microsoft.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mark Salter <msalter@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Fixes: 78ce248faa ("efi: Iterate over efi.memmap in for_each_efi_memory_desc()")
Link: http://lkml.kernel.org/r/1464690224-4503-2-git-send-email-matt@codeblueprint.co.uk
[ Made the changelog more readable. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The perf_event_aux() function iterates all PMUs and all events in
their respective per-CPU contexts to find the events to deliver
side-band records to.
For example, the brk test case in lkp triggers many mmap() operations,
which, if we're also running perf, results in many perf_event_aux()
invocations.
If we enable uncore PMU support (even when uncore events are not used),
dozens of uncore PMUs will be iterated, which can significantly
decrease brk_test's throughput.
For example, the brk throughput:
without uncore PMUs: 2647573 ops_per_sec
with uncore PMUs: 1768444 ops_per_sec
... a 33% reduction.
To get at the per-CPU events that need side-band records, this patch
puts these events on a per-CPU list, which avoids iterating the PMUs
and any events that do not need side-band records.
Per-task events are unchanged to avoid extra overhead on the context
switch paths.
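Conceptually the change looks something like the sketch below; the
structure and member names here only follow the description above and are
not guaranteed to match the patch exactly:

/* One list per CPU holding only the events that want side-band records. */
struct pmu_event_list {
	raw_spinlock_t		lock;
	struct list_head	list;
};

static DEFINE_PER_CPU(struct pmu_event_list, pmu_sb_events);

/* Called when a side-band-interested CPU event is created: queue it on
 * its CPU's list (via a new sb_list member in struct perf_event) instead
 * of leaving it to be found by a walk over every PMU's per-CPU context.
 */
static void attach_sb_event(struct perf_event *event)
{
	struct pmu_event_list *pel = per_cpu_ptr(&pmu_sb_events, event->cpu);

	raw_spin_lock(&pel->lock);
	list_add_rcu(&event->sb_list, &pel->list);
	raw_spin_unlock(&pel->lock);
}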
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reported-by: Huang, Ying <ying.huang@linux.intel.com>
Signed-off-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Link: http://lkml.kernel.org/r/1458757477-3781-1-git-send-email-kan.liang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Generally, a task_struct is only protected by RCU if it was found on an
RCU-protected list (say, for_each_process() or find_task_by_vpid()).
As Kirill pointed out, rq->curr isn't protected by RCU: the scheduler
drops the (potentially) last reference without an RCU grace period, which
means that we need to fix the code which uses foreign_rq->curr under
rcu_read_lock().
Add a new helper which can be used to dereference rq->curr or any
other pointer to task_struct assuming that it should be cleared or
updated before the final put_task_struct(). It returns non-NULL
only if this task can't go away before rcu_read_unlock().
( Also add try_get_task_struct() to make it easier to use this API
correctly. )
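A usage sketch, assuming a runqueue pointer is in scope (printing the comm
is only for illustration):

static void peek_remote_curr(struct rq *rq)
{
	struct task_struct *p;

	rcu_read_lock();
	/* Non-NULL only if the task cannot go away before rcu_read_unlock(). */
	p = task_rcu_dereference(&rq->curr);
	if (p)
		pr_info("remote curr: %s\n", p->comm);
	rcu_read_unlock();

	/* Or take a real reference that survives past the RCU section. */
	p = try_get_task_struct(&rq->curr);
	if (p) {
		pr_info("pinned: %s\n", p->comm);
		put_task_struct(p);
	}
}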
Suggested-by: Kirill Tkhai <ktkhai@parallels.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
[ Updated comments; added try_get_task_struct()]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Kirill Tkhai <tkhai@yandex.ru>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vladimir Davydov <vdavydov@parallels.com>
Link: http://lkml.kernel.org/r/20160518170218.GY3192@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Commit 50755bc1c3 ("seqlock: fix raw_read_seqcount_latch()") broke
raw_read_seqcount_latch().
If you look at the comment that was modified, the thing that changes is
the seq count, not the latch pointer.
* void latch_modify(struct latch_struct *latch, ...)
* {
* smp_wmb(); <- Ensure that the last data[1] update is visible
* latch->seq++;
* smp_wmb(); <- Ensure that the seqcount update is visible
*
* modify(latch->data[0], ...);
*
* smp_wmb(); <- Ensure that the data[0] update is visible
* latch->seq++;
* smp_wmb(); <- Ensure that the seqcount update is visible
*
* modify(latch->data[1], ...);
* }
*
* The query will have a form like:
*
* struct entry *latch_query(struct latch_struct *latch, ...)
* {
* struct entry *entry;
* unsigned seq, idx;
*
* do {
* seq = lockless_dereference(latch->seq);
So here we have:
seq = READ_ONCE(latch->seq);
smp_read_barrier_depends();
Which is exactly what we want; the new code:
seq = ({ p = READ_ONCE(latch);
smp_read_barrier_depends(); p })->seq;
is just wrong, because it loses the volatile read on seq, which can now
be torn or, worse, 'optimized'. And the read-depends barrier is also
placed wrong; we want it after the load of seq, to match the above data[]
up-to-date wmb()s.
Such that when we dereference latch->data[] below, we're guaranteed to
observe the right data.
*
* idx = seq & 0x01;
* entry = data_query(latch->data[idx], ...);
*
* smp_rmb();
* } while (seq != latch->seq);
*
* return entry;
* }
So yes, not passing a pointer is not pretty, but the code was correct,
and isn't anymore now.
Change to explicit READ_ONCE()+smp_read_barrier_depends() to avoid
confusion and allow strict lockless_dereference() checking.
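With that, raw_read_seqcount_latch() ends up looking roughly like:

static inline int raw_read_seqcount_latch(seqcount_t *s)
{
	/* The volatile load of ->sequence must not be torn or elided ... */
	int seq = READ_ONCE(s->sequence);

	/* ... and the dependent-read barrier goes after it, pairing with
	 * the wmb()s in raw_write_seqcount_latch() so the data[] reads that
	 * follow observe at least as new a copy as this sequence value.
	 */
	smp_read_barrier_depends();
	return seq;
}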
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 50755bc1c3 ("seqlock: fix raw_read_seqcount_latch()")
Link: http://lkml.kernel.org/r/20160527111117.GL3192@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The new QED firmware contains several fixes, including:
- Wrong classification of packets in 4-port devices.
- Anti-spoof interoperability with encapsulated packets.
- Tx-switching of encapsulated packets.
It also slightly improves Tx performance of the device.
In addition, this firmware contains the necessary logic for
supporting iSCSI & RDMA, for which we plan on pushing protocol
drivers in the near future.
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Regmap irq implements the generic interrupt service routine which
is common for most devices. Some devices, like the MAX77620 and MAX20024,
need special handling before and after the generic servicing of the
interrupt. For example, the MAX77620 programming guidelines for
interrupt servicing say:
1. When interrupt occurs from PMIC, mask the PMIC interrupt by setting
GLBLM.
2. Read IRQTOP and service the interrupt accordingly.
3. Once all interrupts have been checked and serviced, the interrupt
service routine un-masks the hardware interrupt line by clearing
GLBLM.
Step (2) is implemented in regmap irq as the generic routine. For
steps (1) and (3), add callbacks from regmap irq to the client driver
to handle the chip-specific configuration.
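A hedged sketch of how a client driver could hook these callbacks; the
register address and mask bit below are placeholders, not the real
MAX77620 definitions:

#include <linux/bitops.h>
#include <linux/regmap.h>

#define FOO_REG_GLBLCFG		0x00	/* placeholder: register holding GLBLM */
#define FOO_GLBLM		BIT(0)	/* placeholder: global-mask bit */

/* Step (1): mask the PMIC interrupt by setting GLBLM. */
static int foo_pre_irq(void *irq_drv_data)
{
	struct regmap *rmap = irq_drv_data;

	return regmap_update_bits(rmap, FOO_REG_GLBLCFG, FOO_GLBLM, FOO_GLBLM);
}

/* Step (3): un-mask the hardware interrupt line by clearing GLBLM. */
static int foo_post_irq(void *irq_drv_data)
{
	struct regmap *rmap = irq_drv_data;

	return regmap_update_bits(rmap, FOO_REG_GLBLCFG, FOO_GLBLM, 0);
}

static struct regmap_irq_chip foo_irq_chip = {
	.name			= "foo-top",
	/* ... status/mask registers and irqs[] as usual ... */
	.handle_pre_irq		= foo_pre_irq,
	.handle_post_irq	= foo_post_irq,
	/* irq_drv_data (here the chip's regmap) is passed to both callbacks */
};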
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Currently the plane's index is determined by walking the list of all
planes in the mode_config and finding the position of that plane in the
list. A linear walk, especially a linear walk within a linear walk as
frequently conceived by i915.ko [O(N^2)], quickly comes to dominate
profiles.
The plane's index is constant for as long as no earlier planes are
removed from the list. For all drivers, planes are static, determined
at boot and then untouched until shutdown. In fact, there is no locking
provided to allow for dynamic removal of planes/encoders/crtcs.
v2: Convert drm_crtc_index() and drm_encoder_index() as well.
v3: Stop adjusting the indices upon removal; consider the list
construct-only.
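The idea, sketched (the helper name below is made up; the index field and
counter may differ slightly from the actual patch):

/* When the plane is added to mode_config.plane_list its position is
 * fixed, so cache it in a plane->index field.
 */
static void drm_plane_register_sketch(struct drm_device *dev,
				      struct drm_plane *plane)
{
	list_add_tail(&plane->head, &dev->mode_config.plane_list);
	plane->index = dev->mode_config.num_total_plane++;
}

/* drm_plane_index() then collapses from a list walk to a cached lookup. */
unsigned int drm_plane_index(struct drm_plane *plane)
{
	return plane->index;
}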
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Matt Roper <matthew.d.roper@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
[danvet: Fixup typo in kerneldoc that Matt spotted.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1464375900-2542-1-git-send-email-chris@chris-wilson.co.uk
Now a drm_pending_event can either send a real drm_event or signal a
fence, or both. It allows us to signal via fences when the buffer is
displayed on the screen, which in turn means that the previous buffer
is not in use anymore and can be freed or sent back to another driver
for processing.
v2: Comments from Daniel Vetter
- call fence_signal in drm_send_event_locked()
- remove unneeded !e->event check
v3: Remove drm_pending_event->destroy to fix a leak when e->file_priv
is not set.
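Roughly what the send path ends up doing (4.7-era naming, before struct
fence was renamed to dma_fence; the event queueing at the end is elided):

void drm_send_event_locked(struct drm_device *dev, struct drm_pending_event *e)
{
	assert_spin_locked(&dev->event_lock);

	/* Signal completion through the fence, if one was attached. */
	if (e->fence) {
		fence_signal(e->fence);
		fence_put(e->fence);
	}

	/* Fence-only events carry no drm_event for userspace. */
	if (!e->file_priv) {
		kfree(e);
		return;
	}

	/* ... queue e->event on e->file_priv and wake readers as before ... */
}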
Reviewed-by: Sean Paul <seanpaul@chromium.org>
Signed-off-by: Gustavo Padovan <gustavo.padovan@collabora.co.uk> (v2)
[danvet: fix one e->destroy in arcpgu due to rebasing.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1464818821-5736-13-git-send-email-daniel.vetter@ffwll.ch
The modularity of cpufreq_stats is quite problematic.
First off, the usage of policy notifiers for the initialization
and cleanup in the cpufreq_stats module is inherently racy with
respect to CPU offline/online and the initialization and cleanup
of the cpufreq driver.
Second, fast frequency switching (used by the schedutil governor)
cannot be enabled if any transition notifiers are registered, so
if the cpufreq_stats module (that registers a transition notifier
for updating transition statistics) is loaded, the schedutil governor
cannot use fast frequency switching.
On the other hand, allowing cpufreq_stats to be built as a module
doesn't really add much value. Arguably, there's not much reason
for that code to be modular at all.
For the above reasons, make the cpufreq stats code non-modular,
modify the core to invoke functions provided by that code directly
and drop the notifiers from it.
Make the stats sysfs attributes appear empty if fast frequency
switching is enabled as the statistics will not be updated in that
case anyway (and returning -EBUSY from those attributes breaks
powertop).
While at it, clean up Kconfig help for the CPU_FREQ_STAT and
CPU_FREQ_STAT_DETAILS options.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
The 'initialized' field in struct cpufreq_governor is only used by
the conservative governor (as a usage counter) and the way that
happens is far from straightforward and arguably incorrect.
Namely, the value of 'initialized' is checked by
cpufreq_dbs_governor_init() and cpufreq_dbs_governor_exit() and
the results of those checks are passed (as the second argument) to
the ->init() and ->exit() callbacks in struct dbs_governor. Those
callbacks are only implemented by the ondemand and conservative
governors and ondemand doesn't use their second argument at all.
In turn, the conservative governor uses it to decide whether or not
to either register or unregister a transition notifier.
That whole mechanism is not only unnecessarily convoluted, but also
racy, because the 'initialized' field of struct cpufreq_governor is
updated in cpufreq_init_governor() and cpufreq_exit_governor() under
policy->rwsem which doesn't help if one of these functions is run
twice in parallel for different policies (which isn't impossible in
principle), for example.
Instead of it, add a proper usage counter to the conservative
governor and update it from cs_init() and cs_exit() which is
guaranteed to be non-racy, as those functions are only called
under gov_dbs_data_mutex which is global.
With that in place, drop the 'initialized' field from struct
cpufreq_governor as it is not used any more.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
The design of the cpufreq governor API is not very straightforward,
as struct cpufreq_governor provides only one callback to be invoked
from different code paths for different purposes. The purpose it is
invoked for is determined by its second "event" argument, causing it
to act as a "callback multiplexer" of sorts.
Unfortunately, that leads to extra complexity in governors, some of
which implement the ->governor() callback as a switch statement
that simply checks the event argument and invokes a separate function
to handle that specific event.
That extra complexity can be eliminated by replacing the all-purpose
->governor() callback with a family of callbacks to carry out specific
governor operations: initialization and exit, start and stop, and policy
limits updates. That also turns out to reduce the code size, so
do it.
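The resulting callback family looks roughly like this (see
include/linux/cpufreq.h for the authoritative definition):

struct cpufreq_governor {
	char	name[CPUFREQ_NAME_LEN];
	int	(*init)(struct cpufreq_policy *policy);
	void	(*exit)(struct cpufreq_policy *policy);
	int	(*start)(struct cpufreq_policy *policy);
	void	(*stop)(struct cpufreq_policy *policy);
	void	(*limits)(struct cpufreq_policy *policy);
	/* ... setspeed show/store hooks, list head and flags as before ... */
};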
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
The cpuidle_devices per-CPU variable is only defined when CPU_IDLE is
enabled. Commit c8cc7d4de7 ("sched/idle: Reorganize the idle loop")
removed the #ifdef CONFIG_CPU_IDLE around cpuidle_idle_call() with the
compiler optimising away __this_cpu_read(cpuidle_devices). However, with
CONFIG_UBSAN && !CONFIG_CPU_IDLE, this optimisation no longer happens
and the kernel fails to link since cpuidle_devices is not defined.
This patch introduces an accessor function for the current CPU cpuidle
device (returning NULL when !CONFIG_CPU_IDLE) and uses it in
cpuidle_idle_call().
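A sketch of such an accessor (the actual helper in cpuidle.h may differ in
detail); with !CONFIG_CPU_IDLE it compiles to NULL, so cpuidle_idle_call()
no longer needs the per-CPU variable to exist:

#ifdef CONFIG_CPU_IDLE
static inline struct cpuidle_device *cpuidle_get_device(void)
{
	return __this_cpu_read(cpuidle_devices);
}
#else
static inline struct cpuidle_device *cpuidle_get_device(void)
{
	return NULL;
}
#endif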
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: 4.5+ <stable@vger.kernel.org> # 4.5+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Like zlib compression in pstore, this patch adds lzo and lz4
compression support so that users can have more options and better
compression ratios.
The original code treats the compressed data together with the
uncompressed ECC correction notice by using zlib decompress. The
ECC correction notice gets lost in the decompression process, and the
treatment also keeps lzo and lz4 from working. So I treat them
separately, using pstore_decompress() for the compressed
data and memcpy() for the uncompressed ECC correction notice.
Signed-off-by: Geliang Tang <geliangtang@163.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
ICC_SGI1R_AFFINITY_{2,3}_MASK are unused, which is good
because they were defined with the wrong shifts.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
The INTID mask is wrong, and is made a signed value, which has
interesting effects in the KVM emulation. Let's sanitize it.
Cc: stable@vger.kernel.org
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Eric nicely pointed these out, but I failed at git add and lost them.
This fixes up
commit 2f196b7c4b
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Thu Jun 2 16:21:44 2016 +0200
drm/atomic: Add drm_atomic_crtc_state_for_each_plane_state
to actually do what it says on the tin^Wcommit message.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
It's kinda pointless to have 2 separate mallocs for these. And when we
add more per-plane state in the future it's even more pointless.
Right now there's no such thing planned, but both Gustavo's per-crtc
fence patches, and some nonblocking commit helpers I'm playing around
with will add more per-crtc stuff. It makes sense to also consolidate
planes, just for consistency.
In the future we can use this to store a pointer to the preceding
state, making an atomic update entirely free-standing. This will be
needed to be able to queue them up with a depth > 1.
Cc: Gustavo Padovan <gustavo@padovan.org>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1464818821-5736-11-git-send-email-daniel.vetter@ffwll.ch
It's kinda pointless to have 2 separate mallocs for these. And when we
add more per-connector state in the future it's even more pointless.
Right now there's no such thing planned, but both Gustavo's per-crtc
fence patches, and some nonblocking commit helpers I'm playing around
with will add more per-crtc stuff. It makes sense to also consolidate
connectors, just for consistency.
In the future we can use this to store a pointer to the preceding
state, making an atomic update entirely free-standing. This will be
needed to be able to queue them up with a depth > 1.
Cc: Gustavo Padovan <gustavo@padovan.org>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1464818821-5736-10-git-send-email-daniel.vetter@ffwll.ch