The num_devices, **devices and *default_device members are leftovers from
the past. They can be removed as they are not used.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
The driver only supports the composite connection when booted in legacy
mode, so omap_dss_venc_type can be dropped from the pdata.
At the same time the video/omapdss.h include can be removed as it is no
longer needed.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
The panel is not used by any legacy board files so the legacy (pdata) boot
support can be dropped.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
The panel is not used by any legacy board files so the legacy (pdata) boot
support can be dropped.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
The panel is not used by any legacy board files so the legacy (pdata) boot
support can be dropped.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
The panel is not used by any legacy board files so the legacy (pdata) boot
support can be dropped.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
The panel is not used by any legacy board files so the legacy (pdata) boot
support can be dropped.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
The panel is not used by any legacy board files so the legacy (pdata) boot
support can be dropped.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
The panel is not used by any legacy board files so the legacy (pdata) boot
support can be dropped.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
The panel is not used by any legacy board files so the legacy (pdata) boot
support can be dropped.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
The panel is not used by any legacy board files so the legacy (pdata) boot
support can be dropped.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
omap_display_init() is implemented in mach-omap2/display.c, so its
declaration should have been there as well.
At the same time, change the board files to include display.h to avoid
build breakage.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Tony Lindgren <tony@atomide.com>
commit 93c667ca25
("of: *node argument to of_parse_phandle_with_args should be const")
changed the struct device_node *np argument to const,
but it covered the CONFIG_OF case only; the !CONFIG_OF case needs it too.
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Rob Herring <robh@kernel.org>
This patch allows device drivers to initialize more than one reserved
memory region assigned to a given device. When a driver needs to use more
than one reserved memory region, it should allocate child devices and
initialize the regions by index for each of its child devices.
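A hedged sketch of the intended usage, built on the
of_reserved_mem_device_init_by_idx() helper; the function and
child-device names below are illustrative, not from this patch:

	#include <linux/of_reserved_mem.h>
	#include <linux/platform_device.h>

	/* Bind reserved-memory regions 0 and 1 of the parent's DT node
	 * to two child devices, one region per child. */
	static int example_init_rmem(struct platform_device *pdev,
				     struct device *child0,
				     struct device *child1)
	{
		int ret;

		ret = of_reserved_mem_device_init_by_idx(child0,
							 pdev->dev.of_node, 0);
		if (ret)
			return ret;

		return of_reserved_mem_device_init_by_idx(child1,
							  pdev->dev.of_node, 1);
	}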
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Add a helper function for device drivers to set the DMA max_seg_size.
Setting it to the largest possible value lets the DMA-mapping API always
create contiguous mappings in the DMA address space. This is essential
for all devices that use the videobuf2 dma-contig memory allocator and
shared buffers.
Until now, the only case when vb2-dma-contig really 'worked' was when
userspace provided a USERPTR buffer which was in fact a mmapped
contiguous buffer from another v4l2/drm device. A DMABUF made of a
contiguous buffer also worked, but only when its exporter did not split
it into several chunks in the scatter-list. Any other buffer failed,
regardless of the arch/platform used and the presence of an IOMMU on the
device bus. This patch provides an interface to fix this issue.
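A hedged sketch of how a driver would use the new helper at probe time
(the helper name matches the one added by this patch; the rest is
illustrative):

	#include <linux/dma-mapping.h>
	#include <media/videobuf2-dma-contig.h>

	static int example_probe(struct platform_device *pdev)
	{
		int ret;

		/* Let the DMA-mapping API merge the whole buffer into
		 * a single contiguous segment. */
		ret = vb2_dma_contig_set_max_seg_size(&pdev->dev,
						      DMA_BIT_MASK(32));
		if (ret)
			return ret;

		/* rest of probe */
		return 0;
	}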
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Commit:
78ce248faa ("efi: Iterate over efi.memmap in for_each_efi_memory_desc()")
introduced a regression for systems booted with the 'noefi' kernel option.
In particular, I observed an early kernel hang in efi_find_mirror()'s
for_each_efi_memory_desc() call. As we don't have an EFI memmap on this
system, we enter this iterator with the following parameters:
efi.memmap.map = 0, efi.memmap.map_end = 0, efi.memmap.desc_size = 28
... then for_each_efi_memory_desc_in_map() does the following comparison:
(md) <= (efi_memory_desc_t *)((m)->map_end - (m)->desc_size);
... where md = 0, (m)->map_end = 0 and (m)->desc_size = 28. When we
subtract something from a NULL pointer, a wraparound happens, we end up
returning an invalid pointer, and we crash.
Fix it by using the correct pointer arithmetic.
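A sketch of the safer comparison: do the arithmetic on void * against
map_end instead of subtracting from a possibly-NULL descriptor pointer
(my reconstruction of the idea, not necessarily the exact committed
macro):

	#define for_each_efi_memory_desc_in_map(m, md)			\
		for ((md) = (m)->map;					\
		     ((void *)(md) + (m)->desc_size) <= (m)->map_end;	\
		     (md) = (void *)(md) + (m)->desc_size)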
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: K. Y. Srinivasan <kys@microsoft.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mark Salter <msalter@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Fixes: 78ce248faa ("efi: Iterate over efi.memmap in for_each_efi_memory_desc()")
Link: http://lkml.kernel.org/r/1464690224-4503-2-git-send-email-matt@codeblueprint.co.uk
[ Made the changelog more readable. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The perf_event_aux() function iterates all PMUs and all events in
their respective per-CPU contexts to find the events to deliver
side-band records to.
For example, the brk test case in lkp triggers many mmap() operations,
which, if we're also running perf, results in many perf_event_aux()
invocations.
If we enable uncore PMU support (even when uncore events are not used),
dozens of uncore PMUs will be iterated, which can significantly
decrease brk_test's throughput.
For example, the brk throughput:
without uncore PMUs: 2647573 ops_per_sec
with uncore PMUs: 1768444 ops_per_sec
... a 33% reduction.
To get at the per-CPU events that need side-band records, this patch
puts these events on a per-CPU list; this avoids iterating the PMUs
and any events that do not need side-band records.
Per-task events are left unchanged to avoid extra overhead on the
context switch paths.
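The idea, sketched with hypothetical names (sb_event_list and the
sb_list member below are illustrative; the real list and its locking
live in kernel/events/core.c):

	/* Hypothetical per-CPU list of events wanting side-band records. */
	static DEFINE_PER_CPU(struct list_head, sb_event_list);

	static void sb_iterate_cpu(void (*output)(struct perf_event *, void *),
				   void *data)
	{
		struct perf_event *event;

		/* Walk only the events that asked for side-band records,
		 * instead of every event of every PMU. */
		list_for_each_entry_rcu(event, this_cpu_ptr(&sb_event_list),
					sb_list)
			output(event, data);
	}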
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reported-by: Huang, Ying <ying.huang@linux.intel.com>
Signed-off-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Link: http://lkml.kernel.org/r/1458757477-3781-1-git-send-email-kan.liang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Generally, task_struct is only protected by RCU if it was found on an
RCU-protected list (say, for_each_process() or find_task_by_vpid()).
As Kirill pointed out, rq->curr isn't protected by RCU: the scheduler
drops the (potentially) last reference without an RCU grace period.
This means that we need to fix the code which uses foreign_rq->curr
under rcu_read_lock().
Add a new helper which can be used to dereference rq->curr or any
other pointer to task_struct assuming that it should be cleared or
updated before the final put_task_struct(). It returns non-NULL
only if this task can't go away before rcu_read_unlock().
( Also add try_get_task_struct() to make it easier to use this API
correctly. )
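A hedged usage sketch of the two helpers on a foreign runqueue (rq and
examine() are illustrative):

	struct task_struct *curr;

	rcu_read_lock();
	curr = task_rcu_dereference(&rq->curr); /* NULL if it may be gone */
	if (curr)
		examine(curr);	/* valid until rcu_read_unlock() */
	rcu_read_unlock();

	/* Or take a real reference that outlives the RCU section: */
	curr = try_get_task_struct(&rq->curr);
	if (curr) {
		examine(curr);
		put_task_struct(curr);
	}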
Suggested-by: Kirill Tkhai <ktkhai@parallels.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
[ Updated comments; added try_get_task_struct()]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Kirill Tkhai <tkhai@yandex.ru>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vladimir Davydov <vdavydov@parallels.com>
Link: http://lkml.kernel.org/r/20160518170218.GY3192@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Commit 50755bc1c3 ("seqlock: fix raw_read_seqcount_latch()") broke
raw_read_seqcount_latch().
If you look at the comment that was modified, the thing that changes is
the seq count, not the latch pointer.
* void latch_modify(struct latch_struct *latch, ...)
* {
* smp_wmb(); <- Ensure that the last data[1] update is visible
* latch->seq++;
* smp_wmb(); <- Ensure that the seqcount update is visible
*
* modify(latch->data[0], ...);
*
* smp_wmb(); <- Ensure that the data[0] update is visible
* latch->seq++;
* smp_wmb(); <- Ensure that the seqcount update is visible
*
* modify(latch->data[1], ...);
* }
*
* The query will have a form like:
*
* struct entry *latch_query(struct latch_struct *latch, ...)
* {
* struct entry *entry;
* unsigned seq, idx;
*
* do {
* seq = lockless_dereference(latch->seq);
So here we have:
seq = READ_ONCE(latch->seq);
smp_read_barrier_depends();
Which is exactly what we want; the new code:
seq = ({ p = READ_ONCE(latch);
smp_read_barrier_depends(); p })->seq;
is just wrong; it loses the volatile read on seq, which can now
be torn or, worse, 'optimized'. And the read_depend barrier is also
misplaced; we want it after the load of seq, to match the data[]
up-to-date wmb()s above.
Such that when we dereference latch->data[] below, we're guaranteed to
observe the right data.
*
* idx = seq & 0x01;
* entry = data_query(latch->data[idx], ...);
*
* smp_rmb();
* } while (seq != latch->seq);
*
* return entry;
* }
So yes, not passing a pointer is not pretty, but the code was correct,
and it isn't anymore.
Change to explicit READ_ONCE()+smp_read_barrier_depends() to avoid
confusion and allow strict lockless_dereference() checking.
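The fixed reader then looks roughly like this:

	static inline int raw_read_seqcount_latch(seqcount_t *s)
	{
		/* Volatile load: cannot be torn or optimized away. */
		int seq = READ_ONCE(s->sequence);

		/* Ordered after the load of seq, matching the write
		 * side's wmb()s above. */
		smp_read_barrier_depends();
		return seq;
	}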
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 50755bc1c3 ("seqlock: fix raw_read_seqcount_latch()")
Link: http://lkml.kernel.org/r/20160527111117.GL3192@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The new QED firmware contains several fixes, including:
- Wrong classification of packets in 4-port devices.
- Anti-spoof interoperability with encapsulated packets.
- Tx-switching of encapsulated packets.
It also slightly improves Tx performance of the device.
In addition, this firmware contains the necessary logic for
supporting iSCSI & RDMA, for which we plan on pushing protocol
drivers in the near future.
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Regmap irq implements the generic interrupt service routine, which
is common for most devices. Some devices, like the MAX77620 and
MAX20024, need special handling before and after the generic interrupt
servicing. For example, the MAX77620 programming guidelines for
interrupt servicing say:
1. When interrupt occurs from PMIC, mask the PMIC interrupt by setting
GLBLM.
2. Read IRQTOP and service the interrupt accordingly.
3. Once all interrupts have been checked and serviced, the interrupt
service routine un-masks the hardware interrupt line by clearing
GLBLM.
Step (2) is implemented in regmap irq as the generic routine. For
steps (1) and (3), add callbacks from regmap irq to the client driver
to handle chip-specific configuration.
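A hedged sketch of how a client driver could hook steps (1) and (3)
via the new handle_pre_irq/handle_post_irq callbacks
(max77620_set_glblm() is illustrative, not a real function):

	static int max77620_pre_irq(void *irq_drv_data)
	{
		/* Step 1: mask the PMIC interrupt line by setting GLBLM. */
		return max77620_set_glblm(irq_drv_data, true);
	}

	static int max77620_post_irq(void *irq_drv_data)
	{
		/* Step 3: un-mask the line by clearing GLBLM. */
		return max77620_set_glblm(irq_drv_data, false);
	}

	static struct regmap_irq_chip max77620_top_irq_chip = {
		.name		 = "max77620-top",
		/* status/mask register setup elided */
		.handle_pre_irq	 = max77620_pre_irq,
		.handle_post_irq = max77620_post_irq,
	};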
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Currently the plane's index is determined by walking the list of all
planes in the mode config and finding the position of that plane in the
list. A linear walk, especially a linear walk within a linear walk as
frequently conceived by i915.ko [O(N^2)], quickly comes to dominate profiles.
The plane's index is constant for as long as no earlier planes are
removed from the list. For all drivers, planes are static, determined
at boot and then untouched until shutdown. In fact, there is no locking
provided to allow for dynamic removal of planes/encoders/crtcs.
v2: Convert drm_crtc_index() and drm_encoder_index() as well.
v3: Stop adjusting the indices upon removal; consider the list
construct-only.
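A sketch of the change, assuming the index field this patch adds to
struct drm_plane:

	/* At init, once the plane is added to the mode config's list: */
	plane->index = config->num_total_plane++;

	/* The lookup then becomes O(1): */
	static inline unsigned int drm_plane_index(struct drm_plane *plane)
	{
		return plane->index;
	}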
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Matt Roper <matthew.d.roper@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
[danvet: Fixup typo in kerneldoc that Matt spotted.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1464375900-2542-1-git-send-email-chris@chris-wilson.co.uk
Now a drm_pending_event can either send a real drm_event or signal a
fence, or both. It allows us to signal via fences when the buffer is
displayed on the screen, which in turn means that the previous buffer
is not in use anymore and can be freed or sent back to another driver
for processing.
v2: Comments from Daniel Vetter
- call fence_signal in drm_send_event_locked()
- remove unneeded !e->event check
v3: Remove drm_pending_event->destroy to fix a leak when e->file_priv
is not set.
Reviewed-by: Sean Paul <seanpaul@chromium.org>
Signed-off-by: Gustavo Padovan <gustavo.padovan@collabora.co.uk> (v2)
[danvet: fix one e->destroy in arcpgu due to rebasing.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1464818821-5736-13-git-send-email-daniel.vetter@ffwll.ch
The modularity of cpufreq_stats is quite problematic.
First off, the usage of policy notifiers for the initialization
and cleanup in the cpufreq_stats module is inherently racy with
respect to CPU offline/online and the initialization and cleanup
of the cpufreq driver.
Second, fast frequency switching (used by the schedutil governor)
cannot be enabled if any transition notifiers are registered, so
if the cpufreq_stats module (that registers a transition notifier
for updating transition statistics) is loaded, the schedutil governor
cannot use fast frequency switching.
On the other hand, allowing cpufreq_stats to be built as a module
doesn't really add much value. Arguably, there's not much reason
for that code to be modular at all.
For the above reasons, make the cpufreq stats code non-modular,
modify the core to invoke functions provided by that code directly
and drop the notifiers from it.
Make the stats sysfs attributes appear empty if fast frequency
switching is enabled as the statistics will not be updated in that
case anyway (and returning -EBUSY from those attributes breaks
powertop).
While at it, clean up Kconfig help for the CPU_FREQ_STAT and
CPU_FREQ_STAT_DETAILS options.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
The 'initialized' field in struct cpufreq_governor is only used by
the conservative governor (as a usage counter) and the way that
happens is far from straightforward and arguably incorrect.
Namely, the value of 'initialized' is checked by
cpufreq_dbs_governor_init() and cpufreq_dbs_governor_exit() and
the results of those checks are passed (as the second argument) to
the ->init() and ->exit() callbacks in struct dbs_governor. Those
callbacks are only implemented by the ondemand and conservative
governors and ondemand doesn't use their second argument at all.
In turn, the conservative governor uses it to decide whether to
register or unregister a transition notifier.
That whole mechanism is not only unnecessarily convoluted, but also
racy, because the 'initialized' field of struct cpufreq_governor is
updated in cpufreq_init_governor() and cpufreq_exit_governor() under
policy->rwsem which doesn't help if one of these functions is run
twice in parallel for different policies (which isn't impossible in
principle), for example.
Instead of it, add a proper usage counter to the conservative
governor and update it from cs_init() and cs_exit() which is
guaranteed to be non-racy, as those functions are only called
under gov_dbs_data_mutex which is global.
With that in place, drop the 'initialized' field from struct
cpufreq_governor as it is not used any more.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
The design of the cpufreq governor API is not very straightforward,
as struct cpufreq_governor provides only one callback to be invoked
from different code paths for different purposes. The purpose it is
invoked for is determined by its second "event" argument, causing it
to act as a "callback multiplexer" of sorts.
Unfortunately, that leads to extra complexity in governors, some of
which implement the ->governor() callback as a switch statement
that simply checks the event argument and invokes a separate function
to handle that specific event.
That extra complexity can be eliminated by replacing the all-purpose
->governor() callback with a family of callbacks to carry out specific
governor operations: initialization and exit, start and stop, and
policy limits updates. That also turns out to reduce the code size, so
do it.
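The resulting callback family, roughly as it ends up in struct
cpufreq_governor (a sketch; unrelated fields elided):

	struct cpufreq_governor {
		char	name[CPUFREQ_NAME_LEN];
		int	(*init)(struct cpufreq_policy *policy);
		void	(*exit)(struct cpufreq_policy *policy);
		int	(*start)(struct cpufreq_policy *policy);
		void	(*stop)(struct cpufreq_policy *policy);
		void	(*limits)(struct cpufreq_policy *policy);
		/* ... */
	};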
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
The cpuidle_devices per-CPU variable is only defined when CPU_IDLE is
enabled. Commit c8cc7d4de7 ("sched/idle: Reorganize the idle loop")
removed the #ifdef CONFIG_CPU_IDLE around cpuidle_idle_call() with the
compiler optimising away __this_cpu_read(cpuidle_devices). However, with
CONFIG_UBSAN && !CONFIG_CPU_IDLE, this optimisation no longer happens
and the kernel fails to link since cpuidle_devices is not defined.
This patch introduces an accessor function for the current CPU's
cpuidle device (returning NULL when !CONFIG_CPU_IDLE) and uses it in
cpuidle_idle_call().
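The accessor then looks roughly like this, with a NULL stub when
CPU_IDLE is disabled:

	#ifdef CONFIG_CPU_IDLE
	static inline struct cpuidle_device *cpuidle_get_device(void)
	{
		return __this_cpu_read(cpuidle_devices);
	}
	#else
	static inline struct cpuidle_device *cpuidle_get_device(void)
	{
		return NULL;
	}
	#endif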
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: 4.5+ <stable@vger.kernel.org> # 4.5+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Like the zlib compression in pstore, this patch adds lzo and lz4
compression support so that users can have more options and a better
compression ratio.
The original code treats the compressed data together with the
uncompressed ECC correction notice, running zlib decompression on both.
The ECC correction notice goes missing in the decompression process,
and the treatment also makes lzo and lz4 not work. So I treat them
separately, using pstore_decompress() for the compressed data and
memcpy() for the uncompressed ECC correction notice.
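The shape of the fix, sketched with illustrative buffer names:

	/* Decompress only the record payload... */
	if (compressed) {
		unzipped_len = pstore_decompress(buf, workspace,
						 size, workspace_size);
		/* use workspace/unzipped_len for the record text */
	}
	/* ...and copy the uncompressed ECC notice verbatim after it. */
	memcpy(dest + len, buf + size, ecc_notice_size);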
Signed-off-by: Geliang Tang <geliangtang@163.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
ICC_SGI1R_AFFINITY_{2,3}_MASK are unused, which is good
because they were defined with the wrong shifts.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
The INTID mask is wrong, and is made a signed value, which has
interesting effects in the KVM emulation. Let's sanitize it.
Cc: stable@vger.kernel.org
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Eric nicely pointed these out, but I failed at git add and lost them.
This fixes up
commit 2f196b7c4b
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Thu Jun 2 16:21:44 2016 +0200
drm/atomic: Add drm_atomic_crtc_state_for_each_plane_state
to actually do what it says on the tin^Wcommit message.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
It's kinda pointless to have 2 separate mallocs for these. And when we
add more per-plane state in the future it's even more pointless.
Right now there's no such thing planned, but both Gustavo's per-crtc
fence patches, and some nonblocking commit helpers I'm playing around
with will add more per-crtc stuff. It makes sense to also consolidate
planes, just for consistency.
In the future we can use this to store a pointer to the preceding
state, making an atomic update entirely free-standing. This will be
needed to be able to queue them up with a depth > 1.
Cc: Gustavo Padovan <gustavo@padovan.org>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1464818821-5736-11-git-send-email-daniel.vetter@ffwll.ch
It's kinda pointless to have 2 separate mallocs for these. And when we
add more per-connector state in the future it's even more pointless.
Right now there's no such thing planned, but both Gustavo's per-crtc
fence patches, and some nonblocking commit helpers I'm playing around
with will add more per-crtc stuff. It makes sense to also consolidate
connectors, just for consistency.
In the future we can use this to store a pointer to the preceding
state, making an atomic update entirely free-standing. This will be
needed to be able to queue them up with a depth > 1.
Cc: Gustavo Padovan <gustavo@padovan.org>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1464818821-5736-10-git-send-email-daniel.vetter@ffwll.ch
Add IDs for the watchdog and Security SubSystem to Exynos5410. Use the
same numbers as for Exynos5420, just in case these drivers are merged
in the future.
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Reviewed-by: Javier Martinez Canillas <javier@osg.samsung.com>
Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
struct fence_array inherits from struct fence and carries a
collection of fences that need to be waited on together.
It is useful to translate a sync_file to a fence, to remove the
complexity of dealing with sync_files in DRM drivers. So even if there
are many fences in the sync_file that need to be waited on for a commit
to happen, they all get added to the fence_collection and passed for
DRM use as a standard struct fence.
That means that no changes are needed to any driver besides supporting
fences.
fence_array's fence allocates a new timeline if needed (when
combining fences from different timelines).
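A hedged sketch of creating an array fence from a sync_file's fences
(argument order per this series; the created array takes ownership of
the fences array):

	struct fence_array *array;

	array = fence_array_create(num_fences, fences,
				   fence_context_alloc(1), 1,
				   false /* signal when all signal */);
	if (!array)
		return -ENOMEM;

	/* Hand &array->base to code that expects a plain struct fence. */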
v2: Comments by Daniel Vetter:
- merge fence_collection_init() and fence_collection_add()
- only add callbacks at ->enable_signalling()
- remove fence_collection_put()
- check for type on to_fence_collection()
- adjust fence_is_later() and fence_later() to WARN_ON() if they
are used with collection fences.
v3: - Initialize fence_cb.node at fence init.
Comments by Chris Wilson:
- return "unbound" on fence_collection_get_timeline_name()
- don't stop adding callbacks if one fails
- remove redundant !! on fence_collection_enable_signaling()
- remove redundant () on fence_collection_signaled
- use fence_default_wait() instead
v4 (chk): Rework, simplification and cleanup:
- Drop FENCE_NO_CONTEXT handling, always allocate a context.
- Rename to fence_array.
- Return fixed driver name.
- Register only one callback at a time.
- Document that create function takes ownership of array.
v5 (chk): More work and fixes:
- Avoid deadlocks by adding all callbacks at once again.
- Stop trying to remove the callbacks.
- Provide context and sequence number for the array fence.
v6 (chk): Fixes found during testing
- Fix stupid typo in _enable_signaling().
Signed-off-by: Gustavo Padovan <gustavo.padovan@collabora.co.uk>
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Gustavo Padovan <gustavo.padovan@collabora.co.uk>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
[danvet: Improve commit message as suggested by Gustavo.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1464786612-5010-3-git-send-email-deathsimple@vodafone.de
Fence contexts are created on the fly (for example) by the GPU
scheduler used in the amdgpu driver as a result of a userspace request.
Because of this, userspace could in theory force a wraparound of the
32-bit context number if it doesn't behave well.
Avoid this by increasing the context number to 64 bits. This way, even
if userspace manages to allocate a billion contexts per second, it
takes more than 500 years (2^64 / 10^9 per second is roughly 584 years)
for the context number to wrap around.
v2: fix printf formats as well.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Gustavo Padovan <gustavo.padovan@collabora.co.uk>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1464786612-5010-2-git-send-email-deathsimple@vodafone.de
Pablo Neira Ayuso says:
====================
Netfilter fixes for net
The following patchset contains Netfilter fixes for your net tree,
they are:
1) Fix incorrect timestamp in nfnetlink_queue introduced when addressing
y2038 safe timestamp, from Florian Westphal.
2) Get rid of leftover conntrack definition from the previous merge
window, oneliner from Florian.
3) Make nf_queue handler pernet to resolve race on dereferencing the
hook state structure with netns removal, from Eric Biederman.
4) Ensure clean exit on unregistered helper ports, from Taehee Yoo.
5) Restore FLOWI_FLAG_KNOWN_NH in nf_dup_ipv6. This got lost while
generalizing xt_TEE to add packet duplication support in nf_tables,
from Paolo Abeni.
6) Insufficient netlink NFTA_SET_TABLE attribute check in
nf_tables_getset(), from Phil Turnbull.
7) Reject helper registration on duplicated ports via modparams.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
In commit e6afc8ace6 ("udp: remove headers from UDP packets before
queueing"), the udp_csum_pull_header() helper was added, but missed the
fact that CHECKSUM_UNNECESSARY packets were now converted to
CHECKSUM_NONE and skb->csum_valid was set to 1 for them.
Since csum_partial() is quite expensive, even for an 8-byte area, it is
worth adding a test.
We can also use skb->data instead of udp_hdr() as we are pulling the
UDP header, as it is slightly faster.
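A sketch of the resulting helper (my reconstruction, not necessarily
the exact committed code):

	static inline void udp_csum_pull_header(struct sk_buff *skb)
	{
		/* Only fold the header in when the checksum hasn't
		 * already been validated. */
		if (!skb->csum_valid && skb->ip_summed == CHECKSUM_NONE)
			skb->csum = csum_partial(skb->data,
						 sizeof(struct udphdr),
						 skb->csum);
		skb_pull_rcsum(skb, sizeof(struct udphdr));
	}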
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Change the return value of the spi-nor device read and write methods
to allow returning the amount of data transferred and errors, as
read(2)/write(2) do.
Also, start handling positive returns in spi_nor_read(), since we want
to convert drivers to start returning the read length both via *retlen
and the return code. (We don't need the same transition process for
spi_nor_write(), since ->write() didn't use to have a return code at
all.)
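A hedged sketch of the transition handling in spi_nor_read() (variable
names illustrative):

	ret = nor->read(nor, from, len, buf);
	if (ret < 0)
		goto read_err;

	if (ret > 0) {
		/* New-style driver: return value is the byte count. */
		*retlen += ret;
	}
	/* Legacy drivers return 0 and report the length via *retlen only. */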
Signed-off-by: Michal Suchanek <hramrach@gmail.com>
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Tested-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Acked-by: Michal Suchanek <hramrach@gmail.com>
Tested-by: Michal Suchanek <hramrach@gmail.com>
At least on the n900 we have phy-twl4030-usb generating only the cable
interrupts, with a separate USB PHY.
In order for musb to know the real cable status, we need to
clear any cached state until musb is ready. Otherwise the cable
status interrupts will just get ignored if the status does
not change from the initial state.
To do this, let's add a return value to musb_mailbox(), and
reset the cached linkstat to MUSB_UNKNOWN on error. Sorry for
a bit of churn here; I should have added that already the last
time musb_mailbox() was patched.
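The caller side then looks roughly like this in phy-twl4030-usb
(a sketch):

	err = musb_mailbox(status);
	if (err)
		twl->linkstat = MUSB_UNKNOWN;	/* drop cached state */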
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Bin Liu <b-liu@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>