I want to merge in the new Broadwell support as a late hw enabling
pull request. But since the internal branch was based upon our
drm-intel-nightly integration branch I need to resolve all the
outstanding conflicts in drm/i915 with a backmerge to make the 60+
patches apply properly.
We'll probably have some fun because Linus will come up with a
slightly different merge solution.
Conflicts:
drivers/gpu/drm/i915/i915_dma.c
drivers/gpu/drm/i915/i915_drv.c
drivers/gpu/drm/i915/intel_crt.c
drivers/gpu/drm/i915/intel_ddi.c
drivers/gpu/drm/i915/intel_display.c
drivers/gpu/drm/i915/intel_dp.c
drivers/gpu/drm/i915/intel_drv.h
All rather simple adjacent lines changed or partial backports from
-next to -fixes, with the exception of the thaw code in i915_dma.c.
That one needed a bit of shuffling to restore the intent.
Oh, and the massive header file reordering in intel_drv.h is a bit of
trouble. But not much.
v2: Also don't forget the fixup for the silent conflict that results
in compile fail ...
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Pull networking fixes from David Miller:
"I'm sending a pull request of these lingering bug fixes for networking
before the normal merge window material because some of this stuff I'd
like to get to -stable ASAP"
1) cxgb3 stopped working on 32-bit machines, fix from Ben Hutchings.
2) Structures passed via netlink for netfilter logging are not fully
initialized. From Mathias Krause.
3) Properly unlink upper openvswitch device during notifications, from
Alexei Starovoitov.
4) Fix race conditions involving access to the IP compression scratch
buffer, from Michal Kubrecek.
5) We don't handle the expiration of MTU information contained in ipv6
routes sometimes, fix from Hannes Frederic Sowa.
6) With Fast Open we can miscompute the TCP SYN/ACK RTT, from Yuchung
Cheng.
7) Don't take TCP RTT sample when an ACK doesn't acknowledge new data,
also from Yuchung Cheng.
8) The decreased IPSEC garbage collection threshold causes problems for
some people, bump it back up. From Steffen Klassert.
9) Fix skb->truesize calculated by tcp_tso_segment(), from Eric
Dumazet.
10) flow_dissector doesn't validate packet lengths sufficiently, from
Jason Wang.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (41 commits)
net/mlx4_core: Fix call to __mlx4_unregister_mac
net: sctp: do not trigger BUG_ON in sctp_cmd_delete_tcb
net: flow_dissector: fail on evil iph->ihl
xfrm: Fix null pointer dereference when decoding sessions
can: kvaser_usb: fix usb endpoints detection
can: c_can: Fix RX message handling, handle lost message before EOB
doc:net: Fix typo in Documentation/networking
bgmac: don't update slot on skb alloc/dma mapping error
ibm emac: Fix locking for enable/disable eob irq
ibm emac: Don't call napi_complete if napi_reschedule failed
virtio-net: correctly handle cpu hotplug notifier during resuming
bridge: pass correct vlan id to multicast code
net: x25: Fix dead URLs in Kconfig
netfilter: xt_NFQUEUE: fix --queue-bypass regression
xen-netback: use jiffies_64 value to calculate credit timeout
cxgb3: Fix length calculation in write_ofld_wr() on 32-bit architectures
bnx2x: Disable VF access on PF removal
bnx2x: prevent FW assert on low mem during unload
tcp: gso: fix truesize tracking
xfrm: Increase the garbage collector threshold
...
ASoC: Final updates for v3.13
A few final updates for v3.13, all driver updates apart from some DPCM
and Coverity fixes which should have minor impact on practical systems.
Joby Poriyath provided a xen-netback patch to reduce the size of the
xenvif structure, as some netdev allocations could fail under
memory pressure/fragmentation.
This patch handles the problem at the core level, allowing
any netdev structure to use vmalloc() if kmalloc() failed.
As vmalloc() adds overhead on a critical network path, add __GFP_REPEAT
to kzalloc() flags to do this fallback only when really needed.
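A minimal sketch of the fallback being described, assuming a plain
kzalloc()/vzalloc() pair (the helper name below is illustrative, not the
actual alloc_netdev_mqs() hunk):

  #include <linux/slab.h>
  #include <linux/vmalloc.h>

  /* Illustrative helper: try a kmalloc-backed allocation first and only
   * pay the vmalloc() overhead when it really cannot succeed. */
  static void *netdev_struct_alloc(size_t alloc_size)
  {
  	/* __GFP_REPEAT makes kzalloc() try harder before giving up */
  	void *p = kzalloc(alloc_size, GFP_KERNEL | __GFP_REPEAT);

  	if (!p)
  		p = vzalloc(alloc_size);
  	return p;
  }

The corresponding free path then needs to check is_vmalloc_addr() before
choosing between kfree() and vfree().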
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Joby Poriyath <joby.poriyath@citrix.com>
Cc: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently, skb_checksum walks over 1) linearized, 2) frags[], and
3) frag_list data and calculates the one's complement, a 32 bit
result suitable for feeding into itself or csum_tcpudp_magic(),
but unsuitable for SCTP as we're calculating CRC32c there.
Hence, in order to not re-implement the very same function in
SCTP (and maybe other protocols) over and over again, use an
update() + combine() callback internally to allow for walking
over the skb with different algorithms.
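A sketch of the callback interface described above (declarations assumed
to match include/linux/skbuff.h, shown here for illustration):

  /* update() feeds one chunk of data into the running checksum,
   * combine() folds two partial results together. */
  struct skb_checksum_ops {
  	__wsum (*update)(const void *mem, int len, __wsum wsum);
  	__wsum (*combine)(__wsum csum, __wsum csum2, int offset, int len);
  };

  __wsum __skb_checksum(const struct sk_buff *skb, int offset, int len,
  		      __wsum csum, const struct skb_checksum_ops *ops);

SCTP can then walk the skb with CRC32c-based update/combine callbacks
instead of re-implementing the walking logic.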
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds a combinator to merge two or more crc32{,c}s
into a new one. This is useful for checksum computations of
fragmented skbs that use crc32/crc32c as checksums.
The arithmetic for combining both in GF(2) was taken and
slightly modified from zlib. Passing only two crcs is insufficient,
as the two crcs and the length of the second piece are needed for
merging. The code is made generic, so that only the polynomials
need to be passed for crc32_le resp. crc32c_le.
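As a hedged usage sketch (the seed convention for the second fragment is
an assumption, based on the description that only the two crcs and the
second length are needed for merging):

  #include <linux/crc32.h>

  /* Combine the crc of fragment A (computed with an arbitrary seed) with
   * the crc of fragment B (computed with a zero seed) into the crc of the
   * concatenation A||B, without touching B's data again. */
  static u32 crc_of_concat(u32 crc_a, const u8 *b, size_t blen)
  {
  	u32 crc_b = crc32_le(0, b, blen);

  	return crc32_le_combine(crc_a, crc_b, blen);
  }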
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
Add this empty macro definition so that users can be compiled without
having to exclude this macro call with preprocessor directives when
CONFIG_OF is disabled.
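This log excerpt does not name the macro, so purely as a hypothetical
illustration of the pattern:

  #ifdef CONFIG_OF
  #define of_do_something(np)	__of_do_something(np)	/* real implementation */
  #else
  #define of_do_something(np)	/* empty: callers still compile with CONFIG_OF=n */
  #endif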
Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Negative message lengths make no sense -- so don't do negative queue
lengths or identifier counts. Prevent them from going negative.
Also change the underlying data types to be unsigned to avoid hairy
surprises with sign extensions in cases where those variables get
evaluated in unsigned expressions with bigger data types, e.g. size_t.
In case a user still wants to have "unlimited" sizes she could just use
INT_MAX instead.
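A small, purely illustrative example of the sign-extension surprise being
avoided (not code from this patch):

  #include <stddef.h>

  /* With a signed limit, "-1 means unlimited" converts to SIZE_MAX when it
   * meets the wider unsigned size_t, so the check silently never triggers. */
  static int over_limit(size_t request, int max_bytes)
  {
  	return request > (size_t)max_bytes;	/* (size_t)-1 == SIZE_MAX */
  }

Keeping the underlying types unsigned avoids this class of conversion.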
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
header_desc was completely unused and union_desc was never used
outside cdc_ncm_bind_common.
Cc: Alexey Orishko <alexey.orishko@gmail.com>
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
Moving the call to cdc_ncm_setup() after the endpoint
setup removes the last remaining reference to ncm_parm
outside cdc_ncm_setup.
Collecting all the ncm_parm based calculations in
cdc_ncm_setup improves readability.
Cc: Alexey Orishko <alexey.orishko@gmail.com>
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
These fields are only used to prevent printing the same speeds
multiple times if we receive multiple identical speed notifications.
The value of these printk's is questionable, and even more so when
we filter out some of the notifications sent us by the firmware. If
we are going to print any of these, then we should print them all.
Removing little used fields is a bonus.
Cc: Alexey Orishko <alexey.orishko@gmail.com>
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
Too many pointers back and forth are likely to confuse developers,
creating subtle bugs whenever we forget to synchronize them all.
As a usbnet driver, we should stick with the standard struct
usbnet fields as much as possible. The netdevice is one such
field.
Cc: Greg Suarez <gsuarez@smithmicro.com>
Cc: Alexey Orishko <alexey.orishko@gmail.com>
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
No need to duplicate stuff already in the common usbnet
struct. We still need to keep our special find_endpoints
function because we need explicit control over the selected
altsetting.
Cc: Alexey Orishko <alexey.orishko@gmail.com>
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
This is always a duplicate of the "control" field. It causes
confusion wrt intf_data updates and cleanups.
Cc: Alexey Orishko <alexey.orishko@gmail.com>
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
__wait_event_interruptible_lock_irq_timeout() needs the timeout
parameter passed instead of "ret".
This magically compiled since the only user has a local ret
variable. Luckily we got a build warning:
CC drivers/s390/scsi/zfcp_qdio.o
drivers/s390/scsi/zfcp_qdio.c: In function 'zfcp_qdio_sbal_get':
include/linux/wait.h:780:15: warning: 'ret' may be used uninitialized
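A simplified, hypothetical illustration (not the real wait.h macro) of why
passing the wrong identifier still compiled:

  /* Hypothetical macro whose body uses 'ret' instead of its 'tmo' parameter. */
  #define WAIT_TIMEOUT(cond, tmo) \
  ({ \
  	long __t = ret; /* should have been (tmo) */ \
  	while (!(cond) && __t) \
  		__t--; \
  	__t; \
  })

  /* The only caller happens to have a local 'ret', so the expansion compiles,
   * but it reads 'ret' before it is set -- hence the warning above. */
  static long caller(int flag)
  {
  	long ret;

  	ret = WAIT_TIMEOUT(flag, 50);
  	return ret;
  }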
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20131031114814.GB5551@osiris
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Resolve cherry-picking conflicts:
Conflicts:
mm/huge_memory.c
mm/memory.c
mm/mprotect.c
See this upstream merge commit for more details:
52469b4fcd Merge branch 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This adds a driver for touchscreens using the zforce infrared
technology from Neonode connected via i2c to the host system.
It supports multitouch with up to two fingers and tracking of the
contacts in hardware.
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Most of the kernel assumes that PFN0 is the start of the physical
memory (RAM). This assumption is not true on most ARM SoCs, and if one
tries to update the ARM port to follow the assumption, we end up
breaking the dma bounce limit for a few block layer drivers.
One such example: trying to unify the meaning of max*_pfn on ARM,
as the bootmem layer expects, breaks the dma bounce limit of a few
block layer drivers.
To fix this problem, we introduce a generic dma_max_pfn(dev) helper with
the possibility of an override from the architecture code. The helper
converts a DMA bitmask to a block-layer PFN. In all the generic cases
it is just "dev->dma_mask >> PAGE_SHIFT", and hence the default behavior
is maintained as is.
Subsequent patches will make use of the helper. No functional change.
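A sketch of the generic default, per the description above (the actual
declaration lives in the dma-mapping header and can be overridden per
architecture):

  #ifndef dma_max_pfn
  static inline unsigned long dma_max_pfn(struct device *dev)
  {
  	return *dev->dma_mask >> PAGE_SHIFT;
  }
  #endif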
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Many drivers contain code such as:
dev->dma_mask = &dev->coherent_dma_mask;
dev->coherent_dma_mask = MASK;
Let's move this pattern out of drivers and have the DMA API provide a
helper for it. This helper uses dma_set_mask_and_coherent() to allow
platform issues to be properly dealt with via dma_set_mask()/
dma_is_supported().
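For illustration, the conversion in a driver then looks roughly like this,
assuming the helper is named dma_coerce_mask_and_coherent() as in this
series:

  /* before: open-coded in the driver */
  dev->dma_mask = &dev->coherent_dma_mask;
  dev->coherent_dma_mask = DMA_BIT_MASK(32);

  /* after: let the DMA API do it, via dma_set_mask_and_coherent() */
  ret = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(32));
  if (ret)
  	return ret;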
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
AMBA Primecell devices always treat streaming and coherent DMA exactly
the same, so there's no point in having the masks separated.
Acked-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch adds support for hardware syncpoint bases. This creates
a simple mechanism to stall the command FIFO until an operation is
completed.
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Functions host1x_syncpt_request() and _host1x_syncpt_alloc() have
been taking a separate boolean flag ('client_managed') for indicating
if the syncpoint value should be tracked by the host1x driver.
This patch converts the field into generic 'flags' field so that
we can easily add more information while requesting a syncpoint.
Clients are adapted to use the new interface accordingly.
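A sketch of the interface change (the flag name is an assumption based on
the 'client_managed' boolean it replaces):

  /* before: a single boolean */
  struct host1x_syncpt *host1x_syncpt_request(struct device *dev,
  					    bool client_managed);

  /* after: a generic flags field, leaving room for more bits later */
  #define HOST1X_SYNCPT_CLIENT_MANAGED	(1 << 0)

  struct host1x_syncpt *host1x_syncpt_request(struct device *dev,
  					    unsigned long flags);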
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Initialize and power the 3D unit on Tegra20, Tegra30 and Tegra114 and
register a channel with the Tegra DRM driver so that the unit can be
used from userspace.
Signed-off-by: Thierry Reding <thierry.reding@avionic-design.de>
Signed-off-by: Thierry Reding <treding@nvidia.com>
The Tegra DRM driver currently uses some infrastructure to defer the DRM
core initialization until all required devices have registered. The same
infrastructure can potentially be used by any other driver that requires
more than a single sub-device of the host1x module.
Make the infrastructure more generic and keep only the DRM specific code
in the DRM part of the driver. Eventually this will make it easy to move
the DRM driver part back to the DRM subsystem.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Expose the buffer objects, syncpoint and channel functionality in the
public header so that drivers can use them.
Signed-off-by: Thierry Reding <treding@nvidia.com>
This structure derives from host1x_client. DRM-specific fields are moved
from host1x_client to this structure, so that host1x_client can remain
agnostic of DRM.
Signed-off-by: Thierry Reding <treding@nvidia.com>
In some environments it is preferable to postpone the resume of the card
device until runtime_resume is being carried out, since it will mean a
significant decrease of the total system resume time.
The reason for the decreased resume time is simply that the actual
re-initialization of the card, which typically takes hundreds of
milliseconds, is performed outside the resume sequence and thus won't
affect it.
For a removable card, the detect work tries to re-detect the card to make
sure it is still present; as a part of that sequence the card will also
be runtime_resumed and thus also fully resumed.
For a non-removable card, typically an mmc blk request will trigger a
runtime_resume and thus fully resume the card. This also means the
first request will likely suffer from an initial latency since the
re-initialization of the card needs to be performed.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
By adding a card state that records whether it is suspended or resumed, we
can accept asynchronous suspend/resume requests for the mmc and sd
bus_ops.
With MMC_CAP_AGGRESSIVE_PM, the runtime bus_ops callbacks will, upon
request inactivity, execute a suspend of the card. In the state where
this has been done, we can receive a suspend request for the mmc bus,
which for sd and mmc forced the card to the active state with a
pm_runtime_get_sync. In other words, the card was resumed and then
immediately suspended again, completely unnecessarily.
Since the suspend/resume bus_ops callbacks for sd and mmc are now
capable of handling asynchronous requests, we no longer need to force
the card to the active state before executing suspend. This evidently
prevents the above sequence for MMC_CAP_AGGRESSIVE_PM.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
There are no more users of the deprecated mmc_suspend|resume_host API,
so let's remove it.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
The negotiated ocr mask is directly related to the card. Once a card
gets removed, the mask shall be dropped. Moving the cached ocr mask
from the host struct to the card struct accomplishes this.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
Some switch operations, like poweroff notify, shall according to the
spec not be followed by any other new commands. For these cases, and
when the host doesn't support MMC_CAP_WAIT_WHILE_BUSY, we must not
send status commands to poll for busy detection. Instead, wait for
the timeout stated in the EXT_CSD before completing the request.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Jaehoon Chung <jh80.chung@samsung.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
There are a few special cases, like exynos5440, which don't send a POSTCHANGE
notification from their ->target() routine but instead defer that work to some
kind of bottom half (work/tasklet/etc.), from which they finally send the
POSTCHANGE notification.
It's better if we distinguish them from other cpufreq drivers in some way so
that the core can handle them specially. So this patch introduces another
flag: CPUFREQ_ASYNC_NOTIFICATION, which will be set by such drivers.
This also changes exynos5440-cpufreq.c and powernow-k8 in order to set this
flag.
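As a hedged illustration, such a driver advertises the flag roughly like
this (field values are illustrative):

  static struct cpufreq_driver exynos5440_driver = {
  	/* POSTCHANGE is sent asynchronously from a bottom half,
  	 * not from ->target() itself */
  	.flags	= CPUFREQ_ASYNC_NOTIFICATION,
  	.target	= exynos_target,
  	/* ... */
  };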
Acked-by: Amit Daniel Kachhap <amit.daniel@samsung.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The server does allow NFS over v4.2, even if it doesn't add any new
operations yet.
I also switch to using constants to represent the last operation for
each minor version since this makes the code cleaner and easier to
understand at a quick glance.
Signed-off-by: Anna Schumaker <bjschuma@netapp.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Conflicts:
arch/arm/kernel/head.S
This series has been well tested and it would be great to get this
merged now.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
this_cpu_sub() is implemented as negation and addition.
This patch casts the adjustment to the counter type before negation to
sign extend the adjustment. This helps in cases where the counter type
is wider than an unsigned adjustment. An alternative to this patch is
to declare such operations unsupported, but it seemed useful to avoid
surprises.
This patch specifically helps the following example:
  unsigned int delta = 1;
  preempt_disable();
  this_cpu_write(long_counter, 0);
  this_cpu_sub(long_counter, delta);
  preempt_enable();
Before this change long_counter on a 64 bit machine ends with value
0xffffffff, rather than 0xffffffffffffffff. This is because
this_cpu_sub(pcp, delta) boils down to this_cpu_add(pcp, -delta),
which is basically:
long_counter = 0 + 0xffffffff
Also apply the same cast to:
__this_cpu_sub()
__this_cpu_sub_return()
this_cpu_sub_return()
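The cast in question looks roughly like this, a sketch of the fixed
definitions per the description above:

  /* Negate after casting to the counter's type, so an unsigned 'val' is
   * sign-extended to the (possibly wider) per-cpu counter type. */
  #define this_cpu_sub(pcp, val)		this_cpu_add((pcp), -(typeof(pcp))(val))
  #define this_cpu_sub_return(pcp, val)	this_cpu_add_return((pcp), -(typeof(pcp))(val))
  #define __this_cpu_sub(pcp, val)		__this_cpu_add((pcp), -(typeof(pcp))(val))
  #define __this_cpu_sub_return(pcp, val)	__this_cpu_add_return((pcp), -(typeof(pcp))(val))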
All percpu_test.ko passes, especially the following cases which
previously failed:
l -= ui_one;
__this_cpu_sub(long_counter, ui_one);
CHECK(l, long_counter, -1);
l -= ui_one;
this_cpu_sub(long_counter, ui_one);
CHECK(l, long_counter, -1);
CHECK(l, long_counter, 0xffffffffffffffff);
ul -= ui_one;
__this_cpu_sub(ulong_counter, ui_one);
CHECK(ul, ulong_counter, -1);
CHECK(ul, ulong_counter, 0xffffffffffffffff);
ul = this_cpu_sub_return(ulong_counter, ui_one);
CHECK(ul, ulong_counter, 2);
ul = __this_cpu_sub_return(ulong_counter, ui_one);
CHECK(ul, ulong_counter, 1);
Signed-off-by: Greg Thelen <gthelen@google.com>
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
We currently use some ad-hoc arch variables tied to legacy KVM device
assignment to manage emulation of instructions that depend on whether
non-coherent DMA is present. Create an interface for this, adapting
legacy KVM device assignment and adding VFIO via the KVM-VFIO device.
For now we assume that non-coherent DMA is possible any time we have a
VFIO group. Eventually an interface can be developed as part of the
VFIO external user interface to query the coherency of a group.
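The interface described amounts to a registration/query API along these
lines (prototypes assumed from the description):

  void kvm_arch_register_noncoherent_dma(struct kvm *kvm);
  void kvm_arch_unregister_noncoherent_dma(struct kvm *kvm);
  bool kvm_arch_has_noncoherent_dma(struct kvm *kvm);

Legacy device assignment and the KVM-VFIO device both call the register/
unregister pair, while the instruction-emulation paths query
kvm_arch_has_noncoherent_dma().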
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Default to operating in coherent mode. This simplifies the logic when
we switch to a model of registering and unregistering noncoherent I/O
with KVM.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
So far we've succeeded at making KVM and VFIO mostly unaware of each
other, but areas are cropping up where a connection beyond eventfds
and irqfds needs to be made. This patch introduces a KVM-VFIO device
that is meant to be a gateway for such interaction. The user creates
the device and can add and remove VFIO groups to it via file
descriptors. When a group is added, KVM verifies the group is valid
and gets a reference to it via the VFIO external user interface.
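From userspace, creating the device and adding a group looks roughly like
this (a hedged sketch; constants and structures per the KVM uapi header):

  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Create the KVM-VFIO device on a VM fd and hand it a VFIO group fd. */
  static int kvm_vfio_add_group(int vm_fd, int vfio_group_fd)
  {
  	struct kvm_create_device cd = { .type = KVM_DEV_TYPE_VFIO };
  	struct kvm_device_attr attr = {
  		.group	= KVM_DEV_VFIO_GROUP,
  		.attr	= KVM_DEV_VFIO_GROUP_ADD,
  		.addr	= (__u64)(unsigned long)&vfio_group_fd,
  	};

  	if (ioctl(vm_fd, KVM_CREATE_DEVICE, &cd))
  		return -1;

  	return ioctl(cd.fd, KVM_SET_DEVICE_ATTR, &attr);
  }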
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The edma header defines DMA_COMPLETE; this causes issues as commit adfedd9a32
moved DMA_SUCCESS to DMA_COMPLETE. edma should properly namespace its defines
and needs a future fix.
Reported-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>