Add support in cros_ec.c to handle EC host command protocol v3.
For v3+, probe for maximum shared protocol version and max
request, response, and passthrough sizes. For now, this will
always fall back to v2, since there is no bus-specific code
for handling proto v3 packets.
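A rough sketch of the probe-and-fallback flow described above; the
helper name and fields touched here are illustrative, not the driver's
exact code:

static void cros_ec_probe_protocol(struct cros_ec_device *ec_dev)
{
	struct ec_response_get_protocol_info info;

	/*
	 * Without a bus-specific proto v3 packet handler there is
	 * nothing to probe: fall back to the fixed-size v2 protocol.
	 */
	if (!ec_dev->pkt_xfer) {
		ec_dev->proto_version = 2;
		return;
	}

	/* Ask the EC for its protocol info (EC_CMD_GET_PROTOCOL_INFO). */
	if (cros_ec_host_command_proto_query(ec_dev, &info) == 0) {
		/* v3+: remember the maximum request/response sizes. */
		ec_dev->max_request = info.max_request_packet_size;
		ec_dev->max_response = info.max_response_packet_size;
	} else {
		ec_dev->proto_version = 2;
	}
}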
Signed-off-by: Stephen Barber <smbarber@chromium.org>
Signed-off-by: Javier Martinez Canillas <javier.martinez@collabora.co.uk>
Reviewed-by: Gwendal Grignou <gwendal@chromium.org>
Tested-by: Gwendal Grignou <gwendal@chromium.org>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Acked-by: Lee Jones <lee.jones@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Commit 1b84f2a4cd ("mfd: cros_ec: Use fixed size arrays to transfer
data with the EC") modified the struct cros_ec_command fields to not
use pointers for the input and output buffers and use fixed length
arrays instead.
This change was made because the cros_ec ioctl API uses that struct
cros_ec_command to allow user-space to send commands to the EC and
to get data from the EC. So using pointers made the API not 64-bit
safe. Unfortunately this approach was not flexible enough for all
the use-cases since there may be a need to send larger commands
on newer versions of the EC command protocol.
So to avoid choosing a constant length that may be too big for most
commands, thus wasting memory and CPU cycles on copies from and to
user-space, or a length that is too small for some big commands, use a
zero-length array that is both 64-bit safe and flexible. The same
buffer is used for both output and input data, so the maximum of the
two sizes should be used to allocate it.
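A minimal sketch of the resulting structure and of how a transfer
buffer could be sized, assuming the field names below; the allocation
helper is illustrative rather than part of the patch:

struct cros_ec_command {
	uint32_t version;
	uint32_t command;
	uint32_t outsize;	/* bytes going to the EC */
	uint32_t insize;	/* max bytes expected back from the EC */
	uint32_t result;
	uint8_t data[0];	/* shared in/out buffer, 64-bit safe */
};

static struct cros_ec_command *cros_ec_alloc_cmd(u32 outsize, u32 insize)
{
	/* One buffer serves both directions; size it for the larger one. */
	return kmalloc(sizeof(struct cros_ec_command) + max(outsize, insize),
		       GFP_KERNEL);
}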
Suggested-by: Gwendal Grignou <gwendal@chromium.org>
Signed-off-by: Javier Martinez Canillas <javier.martinez@collabora.co.uk>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Acked-by: Lee Jones <lee.jones@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Move the contents of the include/linux/nx842.h header file into the
drivers/crypto/nx/nx-842.h header file. Remove the nx842.h header
file and its entry in the MAINTAINERS file.
The include/linux/nx842.h header originally was there because the
crypto/842.c driver needed it to communicate with the nx-842 hw
driver. However, that crypto compression driver was moved into
the drivers/crypto/nx/ directory, and now can directly include the
nx-842.h header. Nothing else needs the public include/linux/nx842.h
header file, as all use of the nx-842 hardware driver will be through
the "842-nx" crypto compression driver, since the direct nx-842 api is
very limited in the buffer alignments and sizes that it will accept,
and the crypto compression interface handles those limitations and
allows any alignment and size buffers.
Signed-off-by: Dan Streetman <ddstreet@ieee.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
KERNCZ is the new AMD SB/FCH generation name, like HUDSON2.
0x790b is the device ID for this generation.
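For reference, a sketch of the new ID following the existing naming
style in pci_ids.h (the exact macro name used by the patch is an
assumption here):

#define PCI_DEVICE_ID_AMD_KERNCZ_SMBUS	0x790b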
Signed-off-by: Wan ZongShun <Vincent.Wan@amd.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
The docbook descriptions for these members are missing. Add them.
Warning(include/linux/regulator/machine.h:147): No description
found for parameter 'soft_start'
Warning(include/linux/regulator/driver.h:197): No description
found for parameter 'set_soft_start'
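Illustrative kernel-doc additions (the exact wording in the patch may
differ):

/* include/linux/regulator/machine.h, struct regulation_constraints:
 *   @soft_start: Enable soft start so that voltage ramps slowly
 *
 * include/linux/regulator/driver.h, struct regulator_ops:
 *   @set_soft_start: Enable soft start for the regulator
 */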
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
Replace rwlock_t with spinlock_t in "struct ip_set" and change the locking
accordingly. Convert the comment extension into an RCU-aware object. Also,
simplify the timeout routines.
Signed-off-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
When elements are added to a hash:* type of set and a resize is
triggered, a parallel listing could start to list the original set
(before resizing) and "continue" with listing the new set. Fix it by
taking references and using the original hash table for listing.
Therefore the destroying of the original hash table may happen from
either the resizing or the listing functions.
Signed-off-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
Commit "Simplify cidr handling for hash:*net* types" broke the cidr
handling for the hash:*net* types when the sets were used by the SET
target: entries with invalid cidr values were added to the sets.
Reported by Jonathan Johnson.
Testsuite entry is added to verify the fix.
Signed-off-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
__skb_header_pointer() returns a pointer that must be checked.
This fixes an infinite loop reported by Alexei, and adds __must_check
to catch such errors earlier.
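A simplified sketch of the caller pattern the annotation enforces; the
extension-header walk here is illustrative, not the dissector's exact
code:

static bool dissect_ext_hdr(const struct sk_buff *skb, int nhoff,
			    void *data, int hlen)
{
	struct ipv6_opt_hdr _hdr;
	const struct ipv6_opt_hdr *hdr;

	hdr = __skb_header_pointer(skb, nhoff, sizeof(_hdr), data, hlen, &_hdr);
	if (!hdr)	/* an unchecked NULL here is what caused the loop */
		return false;

	/* ... advance nhoff by the header length and continue ... */
	return true;
}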
Fixes: 6a74fcf426 ("flow_dissector: add support for dst, hop-by-hop and routing ext hdrs")
Reported-by: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Tested-by: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch just redefines the unique IDs of the supported external
connectors without the 'enum extcon' type, because the unique ID will
be used in devicetree files (*.dts) to indicate a specific external
connector, like a key number in the input framework. The plan is to
move these definitions to the following header file, which will contain
the unique IDs of the supported external connectors:
- include/dt-bindings/extcon/extcon.h
Fixes: 2a9de9c0f0 ("extcon: Use the unique id for external connector instead of string")
Signed-off-by: Chanwoo Choi <cw00.choi@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
In order to read the HCA's cycle counter efficiently in
user space, we need to map the HCA's register.
This is done through an mmap call.
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Pull VT-d hardware workarounds from David Woodhouse:
"This contains a workaround for hardware issues which I *thought* were
never going to be seen on production hardware. I'm glad I checked
that before the 4.1 release...
Firstly, PASID support is so broken on existing chips that we're just
going to declare the old capability bit 28 as 'reserved' and change
the VT-d spec to move PASID support to another bit. So any existing
hardware doesn't support SVM; it only sets that (now) meaningless bit
28.
That patch *wasn't* imperative for 4.1 because we don't have PASID
support yet. But *even* the extended context tables are broken — if
you just enable the wider tables and use none of the new bits in them,
which is precisely what 4.1 does, you find that translations don't
work. It's this problem which I thought was caught in time to be
fixed before production, but wasn't.
To avoid triggering this issue, we now *only* enable the extended
context tables on hardware which also advertises "we have PASID
support and we actually tested it this time" with the new PASID
feature bit.
In addition, I've added an 'intel_iommu=ecs_off' command line
parameter to allow us to disable it manually if we need to"
* git://git.infradead.org/intel-iommu:
iommu/vt-d: Only enable extended context tables if PASID is supported
iommu/vt-d: Change PASID support to bit 40 of Extended Capability Register
Now that we can have ICGs set for both the source and destination (using
the icg field of struct data_chunk) or for only the source or the
destination (using the dst_icg or src_icg respectively), and that these
fields can be ignored depending on other parameters (src_inc, src_sgl,
etc.), the logic to get the actual ICG value can be quite tricky.
The XDMAC driver was already implementing it, but since we will need it in
other drivers, we can move it to the main header file.
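A sketch of the selection logic being moved to the header; the helper
names may differ slightly from the final dmaengine.h:

static inline size_t dmaengine_get_icg(bool inc, bool sgl, size_t icg,
				       size_t dir_icg)
{
	if (inc) {
		if (dir_icg)		/* explicit dst_icg/src_icg wins */
			return dir_icg;
		else if (sgl)		/* otherwise fall back to icg */
			return icg;
	}

	return 0;			/* ICG ignored for this direction */
}

static inline size_t dmaengine_get_dst_icg(struct dma_interleaved_template *xt,
					   struct data_chunk *chunk)
{
	return dmaengine_get_icg(xt->dst_inc, xt->dst_sgl,
				 chunk->icg, chunk->dst_icg);
}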
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We store the rule blob per (possible) cpu. Unfortunately this means we
can waste a lot of memory on big smp machines. The ipt_entry structure
('rule head') is 112 bytes, so e.g. with maxcpu=64 one single rule eats
close to 8k of RAM.
Since the previous patch made the counters percpu, it appears there is
nothing left in the rule blob that needs to be percpu.
On my test system (144 possible cpus, 400k dummy rules) this
change saves close to 9 Gigabyte of RAM.
Reported-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
The binary arp/ip/ip6tables ruleset is stored per cpu.
The only reason left as to why we need percpu duplication is the rule
counters embedded into ipt_entry et al -- since each cpu has its own
copy of the rules, all counters can be lockless.
The downside is that the more cpus are supported, the more memory is
required. Rules are not just duplicated per online cpu but for each
possible cpu, i.e. if maxcpu is 144, then the rule set is duplicated
144 times, not just for the e.g. 64 cores present.
To save some memory and also improve utilization of shared caches it
would be preferable to only store the rule blob once.
So we first need to separate counters and the rule blob.
Instead of using entry->counters, allocate this percpu and store the
percpu address in entry->counters.pcnt on CONFIG_SMP.
This change makes no sense as-is; it is merely an intermediate step to
remove the percpu duplication of the rule set in a followup patch.
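A sketch of the allocation, assuming CONFIG_SMP; the helper name is
illustrative and the non-SMP case (where the counters stay embedded) is
omitted:

static u64 ipt_counters_alloc_percpu(void)
{
	struct xt_counters __percpu *res;

	res = alloc_percpu(struct xt_counters);
	if (!res)
		return (u64)-ENOMEM;

	/* The percpu address is what gets stored in entry->counters.pcnt. */
	return (__force u64)res;
}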
Suggested-by: Eric Dumazet <edumazet@google.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Reported-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Some regulators can limit their input current (typically annotated
as ilim). Add an op (set_input_current_limit) and a DT property +
constraint to support this.
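A sketch of how a driver might wire up the new op; the foo_* names and
register write are illustrative, and the DT property name mentioned in
the comment is the one I would expect from the binding (an assumption
here):

/* DT: e.g. a regulator-input-current-limit-microamp property */
static int foo_set_input_current_limit(struct regulator_dev *rdev, int lim_uA)
{
	/* Program the device-specific ilim register here. */
	return 0;
}

static const struct regulator_ops foo_ops = {
	.set_input_current_limit = foo_set_input_current_limit,
	/* ... other ops ... */
};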
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
Some regulators support a "soft start" feature where the voltage
ramps up slowly when the regulator is enabled. Add an op
(set_soft_start) and a DT property + constraint to support this.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
Since commit d6b915e29f
("ip_fragment: don't forward defragmented DF packet") the largest
fragment size is available in the IPCB.
Therefore we no longer need to care about the 'encapsulation' overhead
of stripped PPPOE/VLAN headers, since ip_do_fragment doesn't use the
device MTU in such cases.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
IPv6 fragmented packets are not forwarded on an ethernet bridge
with netfilter ip6_tables loaded. Steps to reproduce:
1) create a simple bridge like this
modprobe br_netfilter
brctl addbr br0
brctl addif br0 eth0
brctl addif br0 eth2
ifconfig eth0 up
ifconfig eth2 up
ifconfig br0 up
2) place a host with an IPv6 address on each side of the bridge
set IPv6 address on host A:
ip -6 addr add fd01:2345:6789:1::1/64 dev eth0
set IPv6 address on host B:
ip -6 addr add fd01:2345:6789:1::2/64 dev eth0
3) run a simple ping command on host A with packets > MTU
ping6 -s 4000 fd01:2345:6789:1::2
4) wait some time and run e.g. "ip6tables -t nat -nvL" on the bridge
IPv6 fragmented packets traverse the bridge cleanly until somebody
runs "ip6tables -t nat -nvL". As soon as it is run (and the netfilter
modules are loaded), IPv6 fragmented packets no longer traverse the
bridge (you see no more responses in ping's output).
After applying this patch IPv6 fragmented packets traverse the bridge
cleanly in above scenario.
Signed-off-by: Bernhard Thaler <bernhard.thaler@wvnet.at>
[pablo@netfilter.org: small changes to br_nf_dev_queue_xmit]
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Some regulators need to be configured to enable a pull-down resistor
when the regulator is disabled. Add an op (set_pull_down) and a
DT property + constraint to support this.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
Currently frag_max_size is a member of br_input_skb_cb and is copied
back and forth using IPCB(skb) and BR_INPUT_SKB_CB(skb) each time it is
changed or used.
Attach frag_max_size to nf_bridge_info and set value in pre_routing and
forward functions. Use its value in forward and xmit functions.
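A sketch of the change to the skb metadata; the surrounding
nf_bridge_info members are elided:

struct nf_bridge_info {
	/* ... existing members ... */
	u16 frag_max_size;	/* set in pre_routing/forward, used on xmit */
};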
Signed-off-by: Bernhard Thaler <bernhard.thaler@wvnet.at>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
IPv4 iptables allows REDIRECT/DNAT/SNAT of any traffic over a bridge.
e.g. REDIRECT
$ sysctl -w net.bridge.bridge-nf-call-iptables=1
$ iptables -t nat -A PREROUTING -p tcp -m tcp --dport 8080 \
-j REDIRECT --to-ports 81
This does not work with ip6tables on a bridge in a NAT66 scenario
because the REDIRECT/DNAT/SNAT is not correctly detected.
The bridge pre-routing (finish) netfilter hook has to check for a
possible redirect and then fix the destination mac address. This allows
ip6tables rules for local REDIRECT/DNAT/SNAT to be used in the same way
as with the IPv4 iptables version.
e.g. REDIRECT
$ sysctl -w net.bridge.bridge-nf-call-ip6tables=1
$ ip6tables -t nat -A PREROUTING -p tcp -m tcp --dport 8080 \
-j REDIRECT --to-ports 81
This patch makes it possible to use IPv6 NAT66 on a bridge. It was tested
on a bridge with two interfaces using SNAT/DNAT NAT66 rules.
Reported-by: Artie Hamilton <artiemhamilton@yahoo.com>
Signed-off-by: Sven Eckelmann <sven@open-mesh.com>
[bernhard.thaler@wvnet.at: rebased, add indirect call to ip6_route_input()]
[bernhard.thaler@wvnet.at: rebased, split into separate patches]
Signed-off-by: Bernhard Thaler <bernhard.thaler@wvnet.at>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Some regulators have a fixed load that isn't captured by
consumers that the kernel knows about. Add a constraint to
support this.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
Add a new function to register a PWM chip with channels that have their
initial polarity as specified by an additional parameter. This benefits
drivers of controllers that by default operate with inverted polarity
by removing the need to modify the polarity during initialization.
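A usage sketch for such a controller; the foo_* names are assumed to be
defined elsewhere in the driver:

static int foo_pwm_probe(struct platform_device *pdev)
{
	struct pwm_chip *chip;

	chip = devm_kzalloc(&pdev->dev, sizeof(*chip), GFP_KERNEL);
	if (!chip)
		return -ENOMEM;

	chip->dev = &pdev->dev;
	chip->ops = &foo_pwm_ops;
	chip->npwm = 1;

	/* Register with every channel starting out inverted. */
	return pwmchip_add_with_polarity(chip, PWM_POLARITY_INVERSED);
}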
Signed-off-by: Tim Kryger <tim.kryger@gmail.com>
Signed-off-by: Jonathan Richardson <jonathar@broadcom.com>
[thierry.reding@gmail.com: export pwmchip_add_with_polarity()]
Signed-off-by: Thierry Reding <thierry.reding@gmail.com>
Currently, leapsecond adjustments are done at tick time. As a result,
the leapsecond was applied at the first timer tick *after* the
leapsecond (~1-10ms late depending on HZ), rather than exactly on the
second edge.
This was in part historical from back when we were always tick based,
but correcting it has since been avoided because it adds extra
conditional checks in the gettime fastpath, which has a performance
overhead.
However, it was recently pointed out that ABS_TIME CLOCK_REALTIME
timers set for right after the leapsecond could fire a second early,
since some timers may be expired before we trigger the timekeeping
timer, which then applies the leapsecond.
This isn't quite as bad as it sounds, since behaviorally it is similar
to what is possible with ntpd making leapsecond adjustments without
using the kernel discipline, where, due to latencies, timers may fire
just prior to the settimeofday call. (Also, one should note that all
applications using CLOCK_REALTIME timers should always be careful,
since they are prone to quirks from settimeofday() disturbances.)
However, the purpose of having the kernel do the leap adjustment is to
avoid such latencies, so I think this is worth fixing.
So in order to properly keep those timers from firing a second early,
this patch modifies the ntp and timekeeping logic so that we keep
enough state for the update_base_offsets_now accessor, which provides
the hrtimer core with the current time, to check and apply the
leapsecond adjustment on the second edge. This prevents the hrtimer
core from expiring timers too early.
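A conceptual sketch only; the real state lives in the ntp/timekeeping
code and the names below are illustrative, not the patch's API:

static ktime_t leap_adjusted_offs_real(ktime_t offs_real, ktime_t now,
				       ktime_t next_leap)
{
	/*
	 * If a leap insertion is pending and 'now' is already past the
	 * second edge, hand the hrtimer core the post-leap offset so
	 * that ABS_TIME CLOCK_REALTIME timers do not expire a second
	 * early.
	 */
	if (ktime_to_ns(next_leap) && ktime_compare(now, next_leap) >= 0)
		return ktime_sub(offs_real, ktime_set(1, 0));

	return offs_real;
}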
This patch does not modify any other time read path, so no additional
overhead is incurred. However, this also means that the leap-second
continues to be applied at tick time for all other read-paths.
Apologies to Richard Cochran, who pushed for similar changes years
ago, which I resisted due to the concerns about the performance
overhead.
While I suspect this isn't extremely critical, folks who care about
strict leap-second correctness will likely want to watch
this. Potentially a -stable candidate eventually.
Originally-suggested-by: Richard Cochran <richardcochran@gmail.com>
Reported-by: Daniel Bristot de Oliveira <bristot@redhat.com>
Reported-by: Prarit Bhargava <prarit@redhat.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jiri Bohac <jbohac@suse.cz>
Cc: Shuah Khan <shuahkh@osg.samsung.com>
Cc: Ingo Molnar <mingo@kernel.org>
Link: http://lkml.kernel.org/r/1434063297-28657-4-git-send-email-john.stultz@linaro.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Enable HW cacheline start padding and align RX WQE size to cacheline
while considering HW start padding. Also, fix dma_unmap call to use
the correct SKB data buffer size.
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Previously we configured the HW MTU to be netdev->mtu, but actually we
need to configure netdev->mtu + (ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN).
Also, querying the MTU cannot fail, hence make the relevant helper a
void function, and add mlx5e_set_dev_port_mtu, a helper function to
handle MTU setting.
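A sketch of the software-to-hardware MTU conversion described above;
the macro name follows the driver's style and is an assumption here:

#define MLX5E_SW2HW_MTU(mtu)  ((mtu) + ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN)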
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Samsung updates for v4.2
- add failure (exception) handling for of_iomap(),
  of_find_device_by_node() and kstrdup()
- add a common PS_HOLD based poweroff for all Exynos SoCs
- add exynos_get/set_boot_addr() helpers
- constify platform_device_id and irq_domain_ops
- get current parent clock for power domain on/off
- use core_initcall to register power domain driver
- make exynos_core_restart() less verbose
- add coupled CPUidle support for exynos3250
- fix exynos_boot_secondary() return value on timeout
- fix clk_enable() in s3c24xx adc
- fix missing of_node_put() for power domains
* tag 'samsung-mach-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kgene/linux-samsung: (301 commits)
ARM: EXYNOS: register power domain driver from core_initcall
ARM: EXYNOS: use PS_HOLD based poweroff for all supported SoCs
ARM: SAMSUNG: Constify platform_device_id
ARM: EXYNOS: Constify irq_domain_ops
ARM: EXYNOS: add coupled cpuidle support for Exynos3250
ARM: EXYNOS: add exynos_get_boot_addr() helper
ARM: EXYNOS: add exynos_set_boot_addr() helper
ARM: EXYNOS: make exynos_core_restart() less verbose
ARM: EXYNOS: fix exynos_boot_secondary() return value on timeout
ARM: EXYNOS: Get current parent clock for power domain on/off
ARM: SAMSUNG: fix clk_enable() WARNing in S3C24XX ADC
ARM: EXYNOS: Add missing of_node_put() when parsing power domains
ARM: EXYNOS: Handle of_find_device_by_node() and kstrdup() failures
ARM: EXYNOS: Handle of of_iomap() failure
Linux 4.1-rc4
....
Declare the nfcmrvl platform_data structure and a few DT parameters
for the nfcmrvl driver.
Signed-off-by: Vincent Cuissard <cuissard@marvell.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
This patch supplies a new framework API: mbox_request_channel_byname().
It works by supplying the usual client pointer as the first argument
and a string as the second. The API searches the client's node for a
'mbox-names' property, then requests a channel in the normal way, using
the index of the requested string as the usual second 'index' argument.
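A usage sketch; the "rx" channel name and the client setup are
illustrative, and the DT node is expected to carry matching 'mboxes'
and 'mbox-names' properties:

static struct mbox_chan *foo_get_rx_channel(struct device *dev,
					    struct mbox_client *cl)
{
	cl->dev = dev;
	cl->rx_callback = foo_rx_callback;	/* assumed defined elsewhere */

	return mbox_request_channel_byname(cl, "rx");
}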
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>