nand_base.c shouldn't have to know the implementation details of
nand_bbt's in-memory BBT. Specifically, nand_base shouldn't perform the
bit masking and shifting to isolate a BBT entry.
Instead, just move some of the BBT code into a new nand_markbad_bbt()
interface. This interface allows external users (i.e., nand_base) to
mark a single block as bad in the BBT. Then nand_bbt will take care of
modifying the in-memory BBT and updating the flash-based BBT (if
applicable).
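A minimal sketch of how nand_base might call the new interface, assuming the signature implied by the description above (the caller name and marker-writing details are illustrative only):

    /* Sketch: nand_base marks a block bad without poking at BBT internals. */
    static int nand_block_markbad_lowlevel(struct mtd_info *mtd, loff_t ofs)
    {
    	struct nand_chip *chip = mtd->priv;
    	int ret = 0;

    	/* ... write the bad block marker(s) to the block's OOB area ... */

    	if (chip->bbt)
    		ret = nand_markbad_bbt(mtd, ofs);	/* new nand_bbt interface: updates the
    							 * in-memory BBT and, if applicable,
    							 * the flash-based BBT */
    	return ret;
    }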
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Below are the equations in the original code:
tps65217_uv1_ranges:
 0 ... 24:  uV = vsel * 25000 + 900000;
25 ... 52:  uV = (vsel - 24) * 50000 + 1500000;
               = (vsel - 25) * 50000 + 1550000;
53 ... 55:  uV = (vsel - 52) * 100000 + 2900000;
               = (vsel - 53) * 100000 + 3000000;
56 ... 62:  uV = 3300000;
tps65217_uv2_ranges:
 0 ...  8:  uV = vsel * 50000 + 1500000;
 9 ... 13:  uV = (vsel - 8) * 100000 + 1900000;
               = (vsel - 9) * 100000 + 2000000;
14 ... 31:  uV = (vsel - 13) * 50000 + 2400000;
               = (vsel - 14) * 50000 + 2450000;
The voltage tables are composed of linear ranges.
This patch converts the driver to use the multiple linear ranges APIs.
In the original code, the voltage range for DCDC1 is 900000 ~ 1800000 uV and the
voltage range for DCDC3 is 900000 ~ 1500000 uV. This patch splits the range 25~52
in the tps65217_uv1_ranges table into two linear ranges, 25~30 and 31~52.
This change makes it possible to reuse the same linear_ranges table for all the DCDCx regulators.
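A sketch of what the resulting range tables could look like, derived directly from the equations above and the 25~30 / 31~52 split; the REGULATOR_LINEAR_RANGE macro arguments (min_uV, min_sel, max_sel, uV_step) are an assumption about the helper API, not copied from the patch:

    /* Illustrative only -- values follow from the equations above. */
    static const struct regulator_linear_range tps65217_uv1_ranges[] = {
    	REGULATOR_LINEAR_RANGE(900000,  0,  24, 25000),
    	REGULATOR_LINEAR_RANGE(1550000, 25, 30, 50000),	/* DCDC1 tops out at sel 30 (1.8V) */
    	REGULATOR_LINEAR_RANGE(1850000, 31, 52, 50000),
    	REGULATOR_LINEAR_RANGE(3000000, 53, 55, 100000),
    	REGULATOR_LINEAR_RANGE(3300000, 56, 62, 0),
    };

    static const struct regulator_linear_range tps65217_uv2_ranges[] = {
    	REGULATOR_LINEAR_RANGE(1500000, 0,  8,  50000),
    	REGULATOR_LINEAR_RANGE(2000000, 9,  13, 100000),
    	REGULATOR_LINEAR_RANGE(2450000, 14, 31, 50000),
    };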
Signed-off-by: Axel Lin <axel.lin@ingics.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
The current system requires everyone to set up notifiers, manage directory
locking, etc.
What we really want to do is have the rpc_client create its directory,
and then create all the entries.
This patch will allow the RPCSEC_GSS and NFS code to register all the
objects that they want to have appear in the directory, and then have
the sunrpc code call them back to actually create/destroy their pipefs
dentries when the rpc_client creates/destroys the parent.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
The clnt->cl_principal is being used exclusively to store the service
target name for RPCSEC_GSS/krb5 callbacks. Replace it with something that
is stored only in the RPCSEC_GSS-specific code.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
The Versatile Express TC2 board, which we use as our main emulated
platform in QEMU, defines 160+32 == 192 interrupts, so limiting the
number of interrupts to 128 is not quite going to cut it for real board
emulation.
Note that this was not a problem until recently, because QEMU was buggy and
only defined 128 interrupts.
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
The default phase meets most cards' requirements, but it is not the
optimal one. In some extreme situations, the rx phase point produced by
the subsequent tuning process will drift quite a distance.
Before tuning a UHS card, this patch sets a more suitable initial tx
phase point, calculated from statistical data, which achieves a much
better tx signal quality.
Signed-off-by: Wei WANG <wei_wang@realsil.com.cn>
Acked-by: Lee Jones <lee.jones@linaro.org>
Acked-by: Chris Ball <cjb@laptop.org>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
In the old panel device model we had omap_dss_output entities,
representing the encoders in the DSS block. This entity had a "device"
field, which pointed to the panel that was using the omap_dss_output.
With the new panel device model, the omap_dss_output is integrated into
omap_dss_device, which now represents a "display entity". Thus the "device"
field, now in omap_dss_device, points to the next entity in the display
entity-chain.
This patch renames the "device" field to "dst", which much better tells
what the field points to.
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
Reviewed-by: Archit Taneja <archit@ti.com>
In the old panel device model we had "outputs", which were the encoders
inside OMAP DSS block, and panel devices (omap_dss_device). The panel
devices had a reference to the source of the video data, i.e. reference
to an "output", in a field named "output".
That was somewhat confusing even in the old panel device model, but even
more so with the panel device model where we can have longer chains of
display entities.
This patch renames the "output" field to "src", which much better tells
what the field points to.
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
Reviewed-by: Archit Taneja <archit@ti.com>
With all the old panels removed and all the old panel model APIs removed
from the DSS encoders, we can now remove the custom omapdss-bus which
was used in the old panel model.
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
Reviewed-by: Archit Taneja <archit@ti.com>
__GFP_ZERO is an uncommon flag and is perhaps better
not used. A static inline dma_zalloc_coherent already exists,
so convert the uses of dma_alloc_coherent with __GFP_ZERO
to the more common kernel style with zalloc.
Remove the memset from the static inline dma_zalloc_coherent
and add just one use of __GFP_ZERO there instead.
This trivially reduces the size of the existing uses of
dma_zalloc_coherent.
Realign arguments as appropriate.
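As a sketch, the helper ends up looking roughly like this (mirroring the description above; the exact header contents may differ), with callers dropping their explicit flag:

    /* Rough shape of the helper after this change: __GFP_ZERO lives here only. */
    static inline void *dma_zalloc_coherent(struct device *dev, size_t size,
    					dma_addr_t *dma_handle, gfp_t flag)
    {
    	return dma_alloc_coherent(dev, size, dma_handle, flag | __GFP_ZERO);
    }

    /*
     * Typical caller conversion:
     *   - ring = dma_alloc_coherent(dev, sz, &ring_dma, GFP_KERNEL | __GFP_ZERO);
     *   + ring = dma_zalloc_coherent(dev, sz, &ring_dma, GFP_KERNEL);
     */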
Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
- Uses perfect flow match (not stochastic hash like SFQ/FQ_codel)
- Uses the new_flow/old_flow separation from FQ_codel
- New flows get an initial credit allowing IW10 without added delay.
- Special FIFO queue for high prio packets (no need for PRIO + FQ)
- Uses a hash table of RB trees to locate the flows at enqueue() time (see
  the per-flow state sketch after this list)
- Smart on demand gc (at enqueue() time, RB tree lookup evicts old
unused flows)
- Dynamic memory allocations.
- Designed to allow millions of concurrent flows per Qdisc.
- Small memory footprint : ~8K per Qdisc, and 104 bytes per flow.
- Single high resolution timer for throttled flows (if any).
- One RB tree to link throttled flows.
- Ability to have a max rate per flow. We might add a socket option
to add per socket limitation.
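A rough, purely illustrative sketch of the per-flow state implied by the points above (not the actual sch_fq layout):

    /* Illustrative only: one flow as seen by the qdisc. */
    struct fq_flow_sketch {
    	struct rb_node	hash_node;	  /* entry in one of the 'buckets' RB trees */
    	struct sk_buff	*head, *tail;	  /* per-flow FIFO of queued packets */
    	int		qlen;		  /* checked against flow_limit */
    	int		credit;		  /* bytes left this RR round (quantum / initial_quantum) */
    	u64		time_next_packet; /* pacing: when in the future, the flow is throttled */
    	struct rb_node	rate_node;	  /* entry in the single RB tree of throttled flows */
    };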
Attempts have been made to add TCP pacing to the TCP stack itself, but this
seems to add complex code to an already complex stack.
TCP pacing is welcome for flows having idle times, as the cwnd
permits the TCP stack to queue a possibly large number of packets.
This removes the 'slow start after idle' choice, which badly hurts
large-BDP flows and applications delivering chunks of data
such as video streams.
Nicely spaced packets:
Here the interface is 10Gbit, but the flow bottleneck is ~20Mbit.
cwnd is big, yet FQ avoids the typical bursts generated by TCP
(as in netperf TCP_RR -- -r 100000,100000)
15:01:23.545279 IP A > B: . 78193:81089(2896) ack 65248 win 3125 <nop,nop,timestamp 1115 11597805>
15:01:23.545394 IP B > A: . ack 81089 win 3668 <nop,nop,timestamp 11597985 1115>
15:01:23.546488 IP A > B: . 81089:83985(2896) ack 65248 win 3125 <nop,nop,timestamp 1115 11597805>
15:01:23.546565 IP B > A: . ack 83985 win 3668 <nop,nop,timestamp 11597986 1115>
15:01:23.547713 IP A > B: . 83985:86881(2896) ack 65248 win 3125 <nop,nop,timestamp 1115 11597805>
15:01:23.547778 IP B > A: . ack 86881 win 3668 <nop,nop,timestamp 11597987 1115>
15:01:23.548911 IP A > B: . 86881:89777(2896) ack 65248 win 3125 <nop,nop,timestamp 1115 11597805>
15:01:23.548949 IP B > A: . ack 89777 win 3668 <nop,nop,timestamp 11597988 1115>
15:01:23.550116 IP A > B: . 89777:92673(2896) ack 65248 win 3125 <nop,nop,timestamp 1115 11597805>
15:01:23.550182 IP B > A: . ack 92673 win 3668 <nop,nop,timestamp 11597989 1115>
15:01:23.551333 IP A > B: . 92673:95569(2896) ack 65248 win 3125 <nop,nop,timestamp 1115 11597805>
15:01:23.551406 IP B > A: . ack 95569 win 3668 <nop,nop,timestamp 11597991 1115>
15:01:23.552539 IP A > B: . 95569:98465(2896) ack 65248 win 3125 <nop,nop,timestamp 1115 11597805>
15:01:23.552576 IP B > A: . ack 98465 win 3668 <nop,nop,timestamp 11597992 1115>
15:01:23.553756 IP A > B: . 98465:99913(1448) ack 65248 win 3125 <nop,nop,timestamp 1115 11597805>
15:01:23.554138 IP A > B: P 99913:100001(88) ack 65248 win 3125 <nop,nop,timestamp 1115 11597805>
15:01:23.554204 IP B > A: . ack 100001 win 3668 <nop,nop,timestamp 11597993 1115>
15:01:23.554234 IP B > A: . 65248:68144(2896) ack 100001 win 3668 <nop,nop,timestamp 11597993 1115>
15:01:23.555620 IP B > A: . 68144:71040(2896) ack 100001 win 3668 <nop,nop,timestamp 11597993 1115>
15:01:23.557005 IP B > A: . 71040:73936(2896) ack 100001 win 3668 <nop,nop,timestamp 11597993 1115>
15:01:23.558390 IP B > A: . 73936:76832(2896) ack 100001 win 3668 <nop,nop,timestamp 11597993 1115>
15:01:23.559773 IP B > A: . 76832:79728(2896) ack 100001 win 3668 <nop,nop,timestamp 11597993 1115>
15:01:23.561158 IP B > A: . 79728:82624(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
15:01:23.562543 IP B > A: . 82624:85520(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
15:01:23.563928 IP B > A: . 85520:88416(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
15:01:23.565313 IP B > A: . 88416:91312(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
15:01:23.566698 IP B > A: . 91312:94208(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
15:01:23.568083 IP B > A: . 94208:97104(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
15:01:23.569467 IP B > A: . 97104:100000(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
15:01:23.570852 IP B > A: . 100000:102896(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
15:01:23.572237 IP B > A: . 102896:105792(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
15:01:23.573639 IP B > A: . 105792:108688(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
15:01:23.575024 IP B > A: . 108688:111584(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
15:01:23.576408 IP B > A: . 111584:114480(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
15:01:23.577793 IP B > A: . 114480:117376(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
TCP timestamps show that most packets from B were queued in the same ms
timeframe (TSval 1159799{3,4}), but FQ managed to send them right
on time to avoid a big burst.
In slow start or steady state, very few packets are throttled [1].
FQ gets a bunch of tunables:
  limit : max number of packets on the whole Qdisc (default 10000)
  flow_limit : max number of packets per flow (default 100)
  quantum : the credit per RR round (default is 2 MTU)
  initial_quantum : initial credit for new flows (default is 10 MTU)
  maxrate : max per-flow rate (default : unlimited)
  buckets : number of RB trees in the hash table (default : 1024,
            consumes 8 bytes per bucket)
  [no]pacing : disable/enable pacing (default is enabled)
All of them can be changed on a live qdisc.
$ tc qd add dev eth0 root fq help
Usage: ... fq [ limit PACKETS ] [ flow_limit PACKETS ]
[ quantum BYTES ] [ initial_quantum BYTES ]
[ maxrate RATE ] [ buckets NUMBER ]
[ [no]pacing ]
$ tc -s -d qd
qdisc fq 8002: dev eth0 root refcnt 32 limit 10000p flow_limit 100p buckets 256 quantum 3028 initial_quantum 15140
Sent 216532416 bytes 148395 pkt (dropped 0, overlimits 0 requeues 14)
backlog 0b 0p requeues 14
511 flows, 511 inactive, 0 throttled
110 gc, 0 highprio, 0 retrans, 1143 throttled, 0 flows_plimit
[1] Except if the initial srtt is overestimated, e.g. when using the
cached srtt from tcp metrics. We'll provide a fix for this issue.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Need to get my stuff out the door ;-) Highlights:
- pc8+ support from Paulo
- more vma patches from Ben.
- Kconfig option to enable preliminary support by default (Josh
Triplett)
- Optimized cpu cache flush handling and support for write-through caching
of display planes on Iris (Chris)
- rc6 tuning from Stéphane Marchesin for more stability
- VECS seqno wrap/semaphores fix (Ben)
- a pile of smaller cleanups and improvements all over
Note that I've ditched Ben's execbuf vma conversion for 3.12 since it's not
yet ready. But there's still other vma conversion stuff in here.
* tag 'drm-intel-next-2013-08-23' of git://people.freedesktop.org/~danvet/drm-intel: (62 commits)
drm/i915: Print seqnos as unsigned in debugfs
drm/i915: Fix context size calculation on SNB/IVB/VLV
drm/i915: Use POSTING_READ in lcpll code
drm/i915: enable Package C8+ by default
drm/i915: add i915.pc8_timeout function
drm/i915: add i915_pc8_status debugfs file
drm/i915: allow package C8+ states on Haswell (disabled)
drm/i915: fix SDEIMR assertion when disabling LCPLL
drm/i915: grab force_wake when restoring LCPLL
drm/i915: drop WaMbcDriverBootEnable workaround
drm/i915: Cleaning up the relocate entry function
drm/i915: merge HSW and SNB PM irq handlers
drm/i915: fix how we mask PMIMR when adding work to the queue
drm/i915: don't queue PM events we won't process
drm/i915: don't disable/reenable IVB error interrupts when not needed
drm/i915: add dev_priv->pm_irq_mask
drm/i915: don't update GEN6_PMIMR when it's not needed
drm/i915: wrap GEN6_PMIMR changes
drm/i915: wrap GTIMR changes
drm/i915: add the FCLK case to intel_ddi_get_cdclk_freq
...
Let applications know whether the kernel supports asynchronous page
flipping.
Signed-off-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
This requests that the driver perform the page flip as soon as
possible, not necessarily waiting for vblank.
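A userspace sketch of how the two pieces fit together, assuming the capability and flag names introduced by these patches (error handling trimmed):

    #include <stdint.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    /* Sketch: flip asynchronously only if the kernel advertises support. */
    int try_async_flip(int fd, uint32_t crtc_id, uint32_t fb_id, void *user_data)
    {
    	uint64_t cap = 0;

    	if (drmGetCap(fd, DRM_CAP_ASYNC_PAGE_FLIP, &cap) < 0 || !cap)
    		return -1;	/* no async page flip support */

    	return drmModePageFlip(fd, crtc_id, fb_id,
    			       DRM_MODE_PAGE_FLIP_EVENT | DRM_MODE_PAGE_FLIP_ASYNC,
    			       user_data);
    }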
Signed-off-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
This lets drivers see the flags requested by the application
[airlied: fixup for rcar/imx/msm]
Signed-off-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
Unfortunately, I haven't been thorough enough in:
commit ddecb10cf4
Author: Lespiau, Damien <damien.lespiau@intel.com>
Date: Tue Aug 20 00:53:04 2013 +0100
drm: Remove drm_mode_create_dithering_property()
and forgot to remove the dithering_mode_property member of struct
drm_mode_config.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
Render nodes provide an API for userspace to use non-privileged GPU
commands without any running DRM-Master. It is useful for offscreen
rendering, GPGPU clients, and normal render clients which do not perform
modesetting.
Compared to legacy clients, render clients no longer need any
authentication to perform client ioctls. Instead, user-space controls
render/client access to GPUs via filesystem access-modes on the
render-node. Once a render-node has been opened, a client has full access to
the client/render operations on the GPU. However, no modesetting or ioctls
that affect global state are allowed on render nodes.
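For illustration (the device path is just an example), a render client simply opens the render node; there is no DRM-Master or authentication handshake, and the file mode/ACL on the node is the access control:

    #include <fcntl.h>

    /* Sketch: open a render node directly -- no DRM-Master, no authentication. */
    static int open_render_node(void)
    {
    	return open("/dev/dri/renderD128", O_RDWR | O_CLOEXEC);
    }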
To prevent privilege-escalation, drivers must explicitly state that they
support render nodes. They must mark their render-only ioctls as
DRM_RENDER_ALLOW so render clients can use them. Furthermore, they must
support clients without any attached master.
If filesystem access-modes are not enough for fine-grained access control
to render nodes (very unlikely, considering the versatility of FS-ACLs),
you may still fall back to fd-passing from server to client (which allows
arbitrary access-control). However, note that revoking access is
currently impossible and unlikely to get implemented.
Note: Render clients no longer have any associated DRM-Master as they are
supposed to be independent of any server state. DRM core highly depends on
file_priv->master to be non-NULL for modesetting/ctx/etc. commands.
Therefore, drivers must be very careful to not require DRM-Master if they
support DRIVER_RENDER.
So far render-nodes are protected by "drm_rnodes". As long as this
module-parameter is not set to 1, a driver will not create render nodes.
This allows us to experiment with the API a bit before we stabilize it.
v2: drop insecure GEM_FLINK to force use of dmabuf
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
We just got rid of the version of hdmi_vendor_infoframe that had a byte
array for anyone to poke at. It's now time to shuffle around the naming
of hdmi_hdmi_infoframe to make hdmi_vendor_infoframe become the HDMI
vendor specific structure.
Cc: Thierry Reding <thierry.reding@gmail.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
With this last bit, hdmi_infoframe_pack() is now able to pack any
infoframe we support.
At the same time, because it's impractical to make two commits out of
this, we get rid of the version that encourages the open coding of the
vendor infoframe packing. We can do so because the only user of this API
has been ported in:
Author: Damien Lespiau <damien.lespiau@intel.com>
Date: Mon Aug 12 18:08:37 2013 +0100
gpu: host1x: Port the HDMI vendor infoframe code the common helpers
v2: Change oui to be an unsigned int (Ville Syrjälä)
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
We'll need the HDMI OUI for the HDMI vendor infoframe data, so let's
move the DRM one to hdmi.h; we might as well use the hdmi header to store
some hdmi defines.
(Note that, in fact, infoframes are part of the CEA-861 standard, and
only the HDMI vendor specific infoframe is special to HDMI, but
details..)
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
Provide the same programming model as the other infoframe types.
The generic _pack() function can't handle those yet, as we need to move
the vendor OUI into the generic hdmi_vendor_infoframe structure to know
which kind of vendor infoframe we are dealing with.
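A sketch of that programming model, with field and enum names assumed from the description (the 3D mode is chosen arbitrarily):

    #include <linux/hdmi.h>

    /* Sketch: init, fill in, pack -- same flow as the AVI/SPD/audio infoframes. */
    static ssize_t pack_vendor_frame(u8 *buf, size_t len)
    {
    	struct hdmi_vendor_infoframe frame;
    	int ret;

    	ret = hdmi_vendor_infoframe_init(&frame);
    	if (ret < 0)
    		return ret;

    	frame.s3d_struct = HDMI_3D_STRUCTURE_SIDE_BY_SIDE_HALF;	/* example 3D mode */

    	return hdmi_vendor_infoframe_pack(&frame, buf, len);
    }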
v2: Fix the value of Side-by-side (half), hmdi typo, pack 3D_Ext_Data
(Ville Syrjälä)
v3: Future proof the sending of 3D_Ext_Data (Ville Syrjälä), Fix
multi-lines comment style (Thierry Reding)
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
To set the active aspect ratio value in the AVI infoframe today, you not
only have to set the active_aspect field, but also the active_info_valid
bit. Out of the one user of this API, we had 100% misuse: it forgot the
_valid bit. This was fixed in:
Author: Damien Lespiau <damien.lespiau@intel.com>
Date: Tue Aug 6 20:32:17 2013 +0100
drm: Don't generate invalid AVI infoframes for CEA modes
We can do better and derive the _valid bit from the user wanting to set
the active aspect ratio.
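In other words (a sketch, with the enum value picked arbitrarily), a caller now only touches the aspect field and the pack code infers the valid bit:

    #include <linux/hdmi.h>

    /* Sketch: no separate active_info_valid flag to remember any more. */
    static void set_active_aspect(struct hdmi_avi_infoframe *frame)
    {
    	frame->active_aspect = HDMI_ACTIVE_ASPECT_PICTURE;
    }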
v2: Fix multi-lines comment style (Thierry Reding)
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
Usually the received CAN frames can be processed/routed as many as 'max_hops'
times (which is given at module load time of the can-gw module).
Introduce a new configuration option to reduce the number of possible hops
for a specific gateway rule to a value smaller than max_hops.
Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
While poking at something using the for-3.12/* trees, I hit the
following compile error:
drivers/built-in.o: In function `tegra_pcie_map_irq':
/builddir/build/BUILD/kernel-3.10.fc20/linux-3.11.0-0.rc6.git4.1.fc20.armv7hl/drivers/pci/host/pci-tegra.c:640:
undefined reference to `tegra_cpuidle_pcie_irqs_in_use'
drivers/built-in.o: In function `tegra_msi_map':
/builddir/build/BUILD/kernel-3.10.fc20/linux-3.11.0-0.rc6.git4.1.fc20.armv7hl/drivers/pci/host/pci-tegra.c:1227:
undefined reference to `tegra_cpuidle_pcie_irqs_in_use'
make: *** [vmlinux] Error 1
This was because our .config had CONFIG_CPU_IDLE off. We should probably
provide an empty function to handle this, to avoid cluttering up pci-tegra.c
with conditionals.
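A minimal sketch of that empty-function approach, assuming a header shared by the cpuidle and PCIe code (the real patch may differ in detail):

    /* Sketch: pci-tegra.c keeps calling the function unconditionally. */
    #ifdef CONFIG_CPU_IDLE
    void tegra_cpuidle_pcie_irqs_in_use(void);
    #else
    static inline void tegra_cpuidle_pcie_irqs_in_use(void)
    {
    }
    #endif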
Signed-off-by: Kyle McMartin <kyle@redhat.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
[swarren, removed unnecessary return statement]
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
For at91 boards, there are different ADC IPs. Different IPs have different
STARTUP and PRESCAL masks in ADC_MR.
Signed-off-by: Josh Wu <josh.wu@atmel.com>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
We currently allow for different fanout scheduling policies in pf_packet,
such as scheduling by skb's rxhash, round-robin, by cpu, and rollover.
This patch also allows for a random, equidistributed selection of the socket
from the fanout process group.
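A userspace sketch of joining a fanout group with the new policy, assuming the PACKET_FANOUT_RND constant added here (the group id 42 is arbitrary):

    #include <sys/socket.h>
    #include <linux/if_packet.h>

    /* Sketch: group id in the low 16 bits, scheduling policy in the high 16 bits. */
    static int join_random_fanout(int sock)
    {
    	int fanout_arg = 42 | (PACKET_FANOUT_RND << 16);

    	return setsockopt(sock, SOL_PACKET, PACKET_FANOUT,
    			  &fanout_arg, sizeof(fanout_arg));
    }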
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This is identical to of_parse_phandle_with_args(), except that the
number of argument cells is fixed, rather than being parsed out of the
node referenced by each phandle.
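A sketch of the intended usage, with a made-up property name and DT fragment; the referenced node needs no '#...-cells' property because the cell count (2 here) is passed by the caller:

    /*
     * Sketch DT fragment:
     *	widget: widget@0 { ... };                 // no #foo-cells needed
     *	consumer { foos = <&widget 1 2>; };       // always two argument cells
     */
    static int parse_first_foo(struct device_node *np, struct of_phandle_args *args)
    {
    	/* list name, fixed cell count, index of the phandle to parse */
    	return of_parse_phandle_with_fixed_args(np, "foos", 2, 0, args);
    }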
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
From Haojian Zhuang:
Move irq driver out of mach-mmp to support multiplatform
* tag 'mmp-irq' of git://git.kernel.org/pub/scm/linux/kernel/git/hzhuang1/linux:
irqchip: mmp: avoid to include irqs head file
ARM: mmp: avoid to include head file in mach-mmp
irqchip: mmp: support irqchip
irqchip: move mmp irq driver
This patch adds a lower_dev_list list_head to net_device, which is the same
as upper_dev_list, only for lower devices, and begins to use it in the same
way as the upper list.
It also changes the way the whole adjacent device lists work - now they
contain *all* of upper/lower devices, not only the first level. The first
level devices are distinguished by the bool neighbour field in
netdev_adjacent, also added by this patch.
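A rough sketch of the list element this describes (field names taken from the text; the real structure may differ in detail):

    /* Illustrative only. */
    struct netdev_adjacent {
    	struct net_device *dev;		/* the adjacent (upper or lower) device */
    	bool neighbour;			/* true only for first-level, directly linked devices */
    	u16 ref_nr;			/* how many times this device was added to the list */
    	struct list_head list;		/* entry in upper_dev_list / lower_dev_list */
    	struct rcu_head rcu;
    };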
There are cases when a device can be added several times to the adjacent
list, the simplest would be:
       /---- eth0.10 ----\
eth0 --                   -- bond0
       \---- eth0.20 ----/
where both bond0 and eth0 'see' each other in the adjacent lists two times.
To avoid duplication of netdev_adjacent structures ref_nr is being kept as
the number of times the device was added to the list.
The 'full view' is achieved by adding, on link creation, all of the
upper_dev's upper_dev_list devices as upper devices to all of the
lower_dev's lower_dev_list devices (and to the lower_dev itself), and vice
versa. On unlink they are removed using the same logic.
I've tested it with thousands of vlans/bonds/bridges; everything works ok and
there are no observable lags even on a huge number of interfaces.
The memory footprint for 128 devices interconnected with each other via both
upper and lower lists (which is impossible, but useful for comparison) would be:
128 * 128 * 2 * sizeof(netdev_adjacent) = 1.5MB
but in the real world we usually have at most several devices with slaves
and a lot of vlans, so the footprint will be much lower.
CC: "David S. Miller" <davem@davemloft.net>
CC: Eric Dumazet <edumazet@google.com>
CC: Jiri Pirko <jiri@resnulli.us>
CC: Alexander Duyck <alexander.h.duyck@intel.com>
CC: Cong Wang <amwang@redhat.com>
Signed-off-by: Veaceslav Falico <vfalico@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Steffen Klassert says:
====================
This pull request fixes some issues that arise when 6in4 or 4in6 tunnels
are used in combination with IPsec, all from Hannes Frederic Sowa and a
null pointer dereference when queueing packets to the policy hold queue.
1) We might access the local error handler of the wrong address family if
a 6in4 or 4in6 tunnel is protected by ipsec. Fix this by adding a pointer
to the correct local_error to xfrm_state_afinfo.
2) Add a helper function to always refer to the correct interpretation
of skb->sk.
3) Call skb_reset_inner_headers to record the position of the inner headers
when adding a new one in various ipv6 tunnels. This is needed to identify
the addresses to which errors should be sent back in the xfrm layer.
4) Dereference inner ipv6 header if encapsulated to always call the
right error handler.
5) Choose the protocol family by skb protocol so we do not call the wrong
xfrm{4,6}_local_error handler in case an ipv6 socket is used
in ipv4 mode.
6) Partly revert "xfrm: introduce helper for safe determination of mtu"
because this introduced pmtu discovery problems.
7) Set skb->protocol on tcp, raw and ip6_append_data generated skbs.
We need this to get the correct mtu information in xfrm.
8) Fix null pointer dereference in xdst_queue_output.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
The current protocol for handling hot remove of containers is very
fragile and causes acpi_eject_store() to acquire acpi_scan_lock
which may deadlock with the removal of the device that it is called
for (the reason is that device sysfs attributes cannot be removed
while their callbacks are being executed and ACPI device objects
are removed under acpi_scan_lock).
The problem is related to the fact that containers are handled by
acpi_bus_device_eject() in a special way, which is to emit an
offline uevent instead of just removing the container. Then, user
space is expected to handle that uevent and use the container's
"eject" attribute to actually remove it. That is fragile, because
user space may fail to complete the ejection (for example, by not
using the container's "eject" attribute at all) leaving the BIOS
kind of in a limbo. Moreover, if the eject event is not signaled
for a container itself, but for its parent device object (or
generally, for an ancestor above it in the ACPI namespace), the
container will be removed straight away without doing that whole
dance.
For this reason, modify acpi_bus_device_eject() to remove containers
synchronously like any other objects (user space will get its uevent
anyway in case it does some other things in response to it) and
remove the eject_pending ACPI device flag that is not used any more.
This way acpi_eject_store() doesn't have a reason to acquire
acpi_scan_lock any more and one possible deadlock scenario goes
away (plus the code is simplified a bit).
Reported-and-tested-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: Toshi Kani <toshi.kani@hp.com>
device_hotplug_lock is held around the acpi_bus_trim() call in
acpi_scan_hot_remove() which generally removes devices (it removes
ACPI device objects at least, but it may also remove "physical"
device objects through .detach() callbacks of ACPI scan handlers).
Thus, potentially, device sysfs attributes are removed under that
lock and to remove those attributes it is necessary to hold the
s_active references of their directory entries for writing.
On the other hand, the execution of a .show() or .store() callback
from a sysfs attribute is carried out with that attribute's s_active
reference held for reading. Consequently, if any device sysfs
attribute that may be removed from within acpi_scan_hot_remove()
through acpi_bus_trim() has a .store() or .show() callback which
acquires device_hotplug_lock, the execution of that callback may
deadlock with the removal of the attribute. [Unfortunately, the
"online" device attribute of CPUs and memory blocks is one of them.]
To avoid such deadlocks, make all of the sysfs attribute callbacks
that need to lock device hotplug, for example store_online(), use
a special function, lock_device_hotplug_sysfs(), to lock device
hotplug and return the result of that function immediately if it is
not zero. This will cause the s_active reference of the directory
entry in question to be released and the syscall to be restarted
if device_hotplug_lock cannot be acquired.
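A sketch of the resulting pattern; the retry mechanics shown here (trylock, short sleep, restart_syscall()) are an assumption about how such a helper is typically written, not quoted from the patch:

    /* Sketch of the helper and of a store_online()-style callback using it. */
    int lock_device_hotplug_sysfs(void)
    {
    	if (mutex_trylock(&device_hotplug_lock))
    		return 0;

    	msleep(5);			/* back off briefly, then retry the whole syscall */
    	return restart_syscall();	/* s_active is released while we restart */
    }

    static ssize_t store_online(struct device *dev, struct device_attribute *attr,
    			    const char *buf, size_t count)
    {
    	int ret = lock_device_hotplug_sysfs();

    	if (ret)
    		return ret;	/* syscall is restarted once s_active is dropped */

    	/* ... device_online()/device_offline() under device_hotplug_lock ... */

    	unlock_device_hotplug();
    	return count;
    }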
[show_online() actually doesn't need to lock device hotplug, but
it is useful to serialize it with respect to device_offline() and
device_online() for the same device (in case user space attempts to
run them concurrently) which can be done with the help of
device_lock().]
Reported-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Reported-and-tested-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: Toshi Kani <toshi.kani@hp.com>
After hearing many people over the past years complain about TSO being
bursty or even buggy, we are proud to present automatic sizing of TSO
packets.
One part of the problem is that tcp_tso_should_defer() uses a heuristic
relying on upcoming ACKs instead of a timer, but more generally, having
big TSO packets makes little sense for low rates, as it tends to create
micro bursts on the network, and the general consensus is to reduce the
amount of buffering.
This patch introduces a per-socket sk_pacing_rate that approximates
the current sending rate and allows us to size the TSO packets so
that we try to send one packet every ms.
This field could be set by other transports.
Patch has no impact for high speed flows, where having large TSO packets
makes sense to reach line rate.
For other flows, this helps better packet scheduling and ACK clocking.
This patch increases performance of TCP flows in lossy environments.
A new sysctl (tcp_min_tso_segs) is added, to specify the
minimal size of a TSO packet (default being 2).
A follow-up patch will provide a new packet scheduler (FQ), using
sk_pacing_rate as an input to perform optional per flow pacing.
This explains why we chose to set sk_pacing_rate to twice the current
rate, allowing 'slow start' ramp up.
sk_pacing_rate = 2 * cwnd * mss / srtt
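As a simplified sketch of that formula (field names and srtt units are assumptions; the in-kernel computation differs in detail):

    /* Sketch: pacing rate in bytes per second, set to twice the current rate. */
    static void update_pacing_rate(struct sock *sk, u32 cwnd, u32 mss, u32 srtt_us)
    {
    	u64 rate = 2ULL * cwnd * mss * USEC_PER_SEC;

    	if (!srtt_us)
    		return;

    	sk->sk_pacing_rate = div_u64(rate, srtt_us);
    }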
v2: Neal Cardwell reported a suspect deferring of last two segments on
initial write of 10 MSS, I had to change tcp_tso_should_defer() to take
into account tp->xmit_size_goal_segs
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Van Jacobson <vanj@google.com>
Cc: Tom Herbert <therbert@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch implements RFC6980: Drop fragmented ndisc packets by
default. If a fragmented ndisc packet is received, the user is informed
that it is possible to disable the check.
Cc: Fernando Gont <fernando@gont.com.ar>
Cc: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reduce cacheline usage from 2 cachelines to 1 for the sctp_globals structure.
By reordering elements, we can close the holes and achieve the following:
Current situation:
/* size: 80, cachelines: 2, members: 10 */
/* sum members: 57, holes: 4, sum holes: 16 */
/* padding: 7 */
/* last cacheline: 16 bytes */
Afterwards:
/* size: 64, cachelines: 1, members: 10 */
/* padding: 7 */
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The event stream is not always parsable because the format of a sample
is dependent on the sample_type of the selected event. When there is
more than one selected event and the sample_types are not the same then
parsing becomes problematic. A sample can be matched to its selected
event using the ID that is allocated when the event is opened.
Unfortunately, to get the ID from the sample means first parsing it.
This patch adds a new sample format bit PERF_SAMPLE_IDENTIFIER that puts
the ID at a fixed position so that the ID can be retrieved without
parsing the sample. For sample events, that is the first position
immediately after the header. For non-sample events, that is the last
position.
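A userspace sketch of requesting the fixed-position ID (the attribute setup is generic perf_event_open() usage, not taken from the patch):

    #include <linux/perf_event.h>
    #include <string.h>

    /* Sketch: with PERF_SAMPLE_IDENTIFIER set, the u64 ID sits right after the
     * header in sample records and at the end of non-sample records. */
    static void init_attr(struct perf_event_attr *attr)
    {
    	memset(attr, 0, sizeof(*attr));
    	attr->size = sizeof(*attr);
    	attr->sample_type = PERF_SAMPLE_IDENTIFIER | PERF_SAMPLE_IP | PERF_SAMPLE_TID;
    }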
In this respect parsing samples requires that the sample_type and ID
values are recorded. For example, perf tools records struct
perf_event_attr and the IDs within the perf.data file. Those must be
read first before it is possible to parse samples found later in the
perf.data file.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Stephane Eranian <eranian@google.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1377591794-30553-6-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
From Christian Daudt, SoC changes for Broadcom.
* 'armsoc/for-3.12/soc' of git://github.com/broadcom/bcm11351: (673 commits)
ARM: bcm: Make secure API call optional
ARM: DT: binding fixup to align with vendor-prefixes.txt (drivers)
ARM: mmc: fix NONREMOVABLE test in sdhci-bcm-kona
ARM: bcm: Rename board_bcm
mmc: sdhci-bcm-kona: make linker-section warning go away
ARM: configs: disable DEBUG_LL in bcm_defconfig
ARM: bcm281xx: Board specific reboot code
ARM bcm281xx: Turn on socket & network support.
ARM: bcm281xx: Turn on L2 cache.
+ Linux 3.11-rc4
Signed-off-by: Olof Johansson <olof@lixom.net>