Replace the generic kernel lirc API with implementations that use
rc-core, further reducing the lirc_dev members.
Signed-off-by: Sean Young <sean@mess.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
This is done to further remove the lirc kernel API. Ensure that every
fops function checks for this.
Signed-off-by: Sean Young <sean@mess.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Since the only mode lirc devices can handle is raw IR, handle this
in a plain kfifo.
Remove lirc_buffer since this is no longer needed.
Signed-off-by: Sean Young <sean@mess.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Calculate lirc features when necessary, and add LIRC_{S,G}ET_REC_MODE
cases to ir_lirc_ioctl.
This makes lirc_dev_fop_ioctl() unnecessary since all cases are
already handled by ir_lirc_ioctl().
Signed-off-by: Sean Young <sean@mess.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
The lirc user interface exists as a raw decoder, which does not make
much sense for transmit-only devices.
In addition, we want to have lirc char devices for devices which do not
use raw IR, i.e. scancode only devices.
Note that rc-core, lirc_dev and ir-lirc-codec are now calling functions
of each other, so they have been merged into one module, rc-core, to
avoid circular dependencies.
Since ir-lirc-codec no longer exists as a separate codec module, there
is no need for RC_DRIVER_IR_RAW_TX type drivers to call
ir_raw_event_register().
Signed-off-by: Sean Young <sean@mess.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
If the lirc device supports it, set the carrier for the protocol.
Signed-off-by: Sean Young <sean@mess.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
This introduces a new lirc mode: scancode. Any device which can send raw IR
can now also send scancodes.
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/lirc.h>

int main(void)
{
	int mode, fd = open("/dev/lirc0", O_RDWR);

	mode = LIRC_MODE_SCANCODE;
	if (ioctl(fd, LIRC_SET_SEND_MODE, &mode)) {
		// kernel too old or lirc does not support transmit
	}

	struct lirc_scancode scancode = {
		.scancode = 0x1e3d,
		.rc_proto = RC_PROTO_RC5,
	};

	write(fd, &scancode, sizeof(scancode));
	close(fd);
	return 0;
}
The other fields of lirc_scancode must be set to 0.
Note that toggle (rc5, rc6) and repeats (nec) are not implemented. Nor is
there a method for holding down a key for a period.
Signed-off-by: Sean Young <sean@mess.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
LIRCCODE is a lirc mode where a driver produces driver-dependent
codes for receive and transmit. No driver uses this any more. The
LIRC_GET_LENGTH ioctl was used for this mode only.
Signed-off-by: Sean Young <sean@mess.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
This code implements the transmitter currently found in the staging
lirc_zilog driver.
The new code does not need a signal database; in other words, the
haup-ir-blaster.bin firmware file is no longer needed, and the driver
does not know anything about the keycodes in that file.
Instead, the new driver can send raw IR, but the hardware is limited
to a few different lengths of pulses and spaces, so it is best to use
generated IR rather than recorded IR.
Signed-off-by: Sean Young <sean@mess.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
With the parent set for the rc device, the messages clearly state
that it is attached via i2c. The additional printk is unnecessary.
These are the old messages:
rc rc1: i2c IR (Hauppauge WinTV PVR-150 as /devices/virtual/rc/rc1
ir-kbd-i2c: i2c IR (Hauppauge WinTV PVR-150 detected at i2c-10/10-0071/ir0 [ivtv i2c driver #0]
Now we simply get:
rc rc1: Hauppauge WinTV PVR-150 as /devices/pci0000:00/0000:00:1e.0/0000:02:00.0/i2c-10/10-0071/rc/rc1
Note that we no longer copy the name. I've checked all call sites
to verify this is not a problem.
Signed-off-by: Sean Young <sean@mess.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
struct nand_buffers is malloc'ed in nand_scan_tail() just to contain
three pointers. Squash this struct into nand_chip.
Move and rename as follows:
chip->buffers->ecccalc -> chip->ecc.calc_buf
chip->buffers->ecccode -> chip->ecc.code_buf
chip->buffers->databuf -> chip->data_buf
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Right now, the chip->data_interface field is populated in
nand_scan_tail(), so after the whole NAND detection has taken place.
This is fine because these timings are not yet used by the core so
early in the probe process, but the situation is about to change with
the introduction of ->exec_op().
Also, by convention, nand_scan_ident() is not supposed to allocate
resources, only nand_scan_tail() can, so this prevents us from
allocating and initializing the data_interface object in
nand_scan_ident().
In order to solve this problem, directly embed a data_interface object
in nand_chip so that we don't have to allocate it, and initialize it to
ONFI SDR mode 0 at the very beginning of nand_scan_ident().
Signed-off-by: Miquel Raynal <miquel.raynal@free-electrons.com>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
The core currently sends the READ0 and SEQIN+PAGEPROG commands in
nand_do_read/write_ops(). This is inconsistent with the
->read/write_oob[_raw]() hooks' behavior, which are expected to send
these commands themselves.
There's already a flag (NAND_ECC_CUSTOM_PAGE_ACCESS) to inform the core
that a specific controller wants to send the READ/SEQIN+PAGEPROG
commands on its own, but it's an opt-in flag, and existing drivers are
unlikely to be updated to pass it.
Moreover, some controllers cannot dissociate the READ/PAGEPROG commands
from the associated data transfer and ECC engine activation, and
developers have to hack things in their ->cmdfunc() implementation to
handle such complex cases, or have to accept the performance penalty of
sending the same command twice.
To address this problem we are planning on adding a new interface which
is passed all information about a NAND operation (including the amount
of data to transfer) and replacing all calls to ->cmdfunc() with calls
to this new ->exec_op() hook. But, in order to do that, we need to have
all ->cmdfunc() calls placed near their associated
->read/write_buf/byte() calls.
Modify the core and relevant drivers to make NAND_ECC_CUSTOM_PAGE_ACCESS
the default case, and remove this flag.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
[miquel.raynal@free-electrons.com: tested, fixed and rebased on nand/next]
Signed-off-by: Miquel Raynal <miquel.raynal@free-electrons.com>
Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
This is part of the process of removing direct calls to ->cmdfunc()
outside of the core in order to introduce a better interface to execute
NAND operations.
Here we provide several helpers and make use of them to remove all
direct calls to ->cmdfunc(). This way, we can easily modify those
helpers to make use of the new ->exec_op() interface when available.
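As an illustration, a raw page read through one of these helpers might
look roughly like this (a sketch only; the helper name and signature,
nand_read_page_op(), are assumptions for illustration):

static int example_read_page_raw(struct mtd_info *mtd,
				 struct nand_chip *chip, u8 *buf,
				 int oob_required, int page)
{
	/* Replaces a direct chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page)
	 * followed by chip->read_buf(); the helper can later be routed
	 * to ->exec_op() without touching the driver.
	 */
	return nand_read_page_op(chip, page, 0, buf, mtd->writesize);
}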
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
[miquel.raynal@free-electrons.com: rebased and fixed some conflicts]
Signed-off-by: Miquel Raynal <miquel.raynal@free-electrons.com>
Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
After the vcpu_load/vcpu_put pushdown, the handling of asynchronous VCPU
ioctls is already much clearer in that it is obvious that they bypass
vcpu_load and vcpu_put.
However, it is still not perfect in that the different state of the VCPU
mutex is still hidden in the caller. Separate those ioctls into a new
function kvm_arch_vcpu_async_ioctl that returns -ENOIOCTLCMD for more
"traditional" synchronous ioctls.
Cc: James Hogan <jhogan@kernel.org>
Cc: Paul Mackerras <paulus@ozlabs.org>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Suggested-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
As we're about to call vcpu_load() from architecture-specific
implementations of the KVM vcpu ioctls, but yet we access data
structures protected by the vcpu->mutex in the generic code, factor
this logic out from vcpu_load().
x86 is the only architecture which calls vcpu_load() outside of the main
vcpu ioctl function, and these calls will no longer take the vcpu mutex
following this patch. However, with the exception of
kvm_arch_vcpu_postcreate (see below), the callers are either in the
creation or destruction path of the VCPU, which means there cannot be
any concurrent access to the data structure, because the file descriptor
is not yet accessible, or is already gone.
kvm_arch_vcpu_postcreate makes the newly created vcpu potentially
accessible by other in-kernel threads through the kvm->vcpus array, and
we therefore take the vcpu mutex in this case directly.
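Sketched, the postcreate path then does roughly the following
(illustrative only; the point is that the mutex is taken explicitly
rather than implicitly by vcpu_load()):

	mutex_lock(&vcpu->mutex);
	vcpu_load(vcpu);
	/* ... arch-specific post-creation setup ... */
	vcpu_put(vcpu);
	mutex_unlock(&vcpu->mutex);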
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Commit f371b304f1 ("bpf/tracing: allow user space to
query prog array on the same tp") introduced a perf
ioctl command to query prog array attached to the
same perf tracepoint. The commit introduced a
compilation error under certain config conditions, e.g.,
(1). CONFIG_BPF_SYSCALL is not defined, or
(2). CONFIG_TRACING is defined but neither CONFIG_UPROBE_EVENTS
nor CONFIG_KPROBE_EVENTS is defined.
Error message:
kernel/events/core.o: In function `perf_ioctl':
core.c:(.text+0x98c4): undefined reference to `bpf_event_query_prog_array'
This patch fixes the error by guarding the real definition under
CONFIG_BPF_EVENTS and providing a static inline dummy function
when CONFIG_BPF_EVENTS is not defined.
It renames the function from bpf_event_query_prog_array to
perf_event_query_prog_array and moves the definition from linux/bpf.h
to linux/trace_events.h so the definition is in proximity to
other prog_array related functions.
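The guard follows the usual kernel stub pattern, roughly (a sketch of
what is described above; the dummy's exact error code is an
assumption):

#ifdef CONFIG_BPF_EVENTS
int perf_event_query_prog_array(struct perf_event *event,
				void __user *info);
#else
static inline int
perf_event_query_prog_array(struct perf_event *event, void __user *info)
{
	return -EOPNOTSUPP;
}
#endif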
Fixes: f371b304f1 ("bpf/tracing: allow user space to query prog array on the same tp")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
phylink_get_fixed_state() currently consults an optional "link_gpio"
GPIO descriptor; expand this mechanism to allow specifying a custom
callback. This is necessary to support out-of-band link notification
(e.g. from an interrupt within an MMIO register).
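A rough usage sketch (the registration helper name and the callback
signature below are assumptions based on the description above):

/* Out-of-band link state, e.g. filled in after an interrupt handler
 * has read an MMIO link-status register.
 */
static void example_fixed_state(struct net_device *ndev,
				struct phylink_link_state *state)
{
	state->link = example_read_link_bit(ndev);
}

/* Registered once at setup time instead of relying on link_gpio: */
phylink_fixed_state_cb(priv->phylink, example_fixed_state);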
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
In order to let subsystems like DSA fully utilize PHYLINK, we need to be able
to communicate phy_device::flags from of_phy_{connect,attach} even when using
PHYLINK APIs.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Prior to this patch, active Fast Open is paused on a specific
destination IP address if the previous connections to that
IP address have experienced recurring timeouts. But recent
experiments by Microsoft (https://goo.gl/cykmn7) and Mozilla
browsers indicate the issue is often caused by broken middle-boxes
sitting close to the client. Therefore it is a much better user
experience if Fast Open is disabled outright globally, to avoid
experiencing further timeouts on connections toward other
destinations.
This patch changes the destination-IP disablement to global
disablement when a connection experiences recurring timeouts
or aborts due to timeout. Repeated incidents would still
exponentially increase the pause time, starting from an hour.
This is extremely conservative but an unfortunate compromise to
minimize bad experience due to broken middle-boxes.
Reported-by: Dragana Damjanovic <ddamjanovic@mozilla.com>
Reported-by: Patrick McManus <mcmanus@ducksong.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Reviewed-by: Wei Wang <weiwan@google.com>
Reviewed-by: Neal Cardwell <ncardwell@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In commit 3a9b76fd0d ("tcp: allow drivers to tweak TSQ logic")
I gave a code sample to set sk->sk_pacing_shift that was not complete.
Better add a helper that can be used by drivers without worries,
and maybe amended in the future.
A wifi driver might use it from its ndo_start_xmit().
The following call would set up TCP to allow up to ~8ms of queued data
per flow:
sk_pacing_shift_update(skb->sk, 7);
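In context, a driver's transmit path might use it like this (a minimal
sketch; the driver function is hypothetical):

static netdev_tx_t example_wifi_start_xmit(struct sk_buff *skb,
					   struct net_device *dev)
{
	/* Allow up to ~8ms of queued data per flow for this device. */
	if (skb->sk)
		sk_pacing_shift_update(skb->sk, 7);
	/* ... hand the frame to the hardware queues ... */
	return NETDEV_TX_OK;
}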
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Before this patch the bridge used a fixed 256 element hash table which
was fine for small use cases (in my tests it starts to degrade
above 1000 entries), but it wasn't enough for medium or large
scale deployments. Modern setups have thousands of participants in a
single bridge; even just enabling vlans and adding a few thousand vlan
entries will cause a few thousand fdbs to be automatically inserted per
participating port. So we need to scale the fdb table considerably to
cope with modern workloads, and this patch converts it to use a
rhashtable for its operations thus improving the bridge scalability.
Tests show the following results (10 runs each), at up to 1000 entries
rhashtable is ~3% slower, at 2000 rhashtable is 30% faster, at 3000 it
is 2 times faster and at 30000 it is 50 times faster.
Obviously this happens because of the properties of the two constructs
and is expected, rhashtable keeps pretty much a constant time even with
10000000 entries (tested), while the fixed hash table struggles
considerably even above 10000.
As a side effect this also reduces the net_bridge struct size from 3248
bytes to 1344 bytes. Also note that the key struct is 8 bytes.
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add pcim_set_mwi(), a device-managed version of pci_set_mwi().
First user is the Realtek r8169 driver.
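Usage mirrors pci_set_mwi(), except the setting is undone automatically
when the driver detaches; a sketch (the probe function is
hypothetical):

static int example_probe(struct pci_dev *pdev,
			 const struct pci_device_id *id)
{
	int rc = pcim_enable_device(pdev);

	if (rc)
		return rc;
	/* As with pci_set_mwi(), a failure here is not fatal. */
	pcim_set_mwi(pdev);
	return 0;
}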
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
First, rename __inet_twsk_hashdance() to inet_twsk_hashdance().
Then, remove one inet_twsk_put() by setting tw_refcnt to 3 instead
of 4, but add a fat warning that we do not have the right to access
tw anymore after inet_twsk_hashdance().
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The IPv4 stack reacts to changes to a small MTU by disabling itself
under RTNL.
But there is a window where threads not using RTNL can see a wrong
device MTU. This can lead to surprises in igmp code, where it is
assumed the MTU is suitable.
Fix this by reading the device MTU once and checking the IPv4 minimal
MTU.
This patch adds the missing IPV4_MIN_MTU define, so that ETH_MIN_MTU
is no longer abused.
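For reference, the new define is simply the RFC 791 minimum, along the
lines of:

#define IPV4_MIN_MTU	68	/* RFC 791 */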
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
No DM target provides num_write_bios and none has since dm-cache's
brief use in 2013.
Having the possibility of num_write_bios > 1 complicates bio
allocation. So remove the interface and assume there is only one bio
needed.
If a target ever needs more, it must provide a suitable bioset and
allocate bios itself based on its particular needs.
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Some PHYs need the refclk to be a continuous clock. Therefore they don't
allow turning it off and on again during operation. Nonetheless such
clock switching is performed by some ETH drivers (namely FEC [1]) for
power-saving reasons. An example of an affected PHY is the
SMSC/Microchip LAN8720 in "REF_CLK In Mode".
In order to provide a uniform method to overcome this problem, this
patch adds a new phy_driver flag (PHY_RST_AFTER_CLK_EN) and a
corresponding function, phy_reset_after_clk_enable(), to the phylib.
These should be used to trigger a reset of the PHY after the refclk is
switched on again.
[1] commit e8fcfcd568 ("net: fec: optimize the clock management to save power")
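A MAC driver that gates the refclk would then do roughly the following
(a sketch; the private struct, clock handle and attached phydev are
hypothetical):

static int example_enable_refclk(struct example_priv *priv,
				 struct net_device *ndev)
{
	int ret;

	/* Re-enable the PHY reference clock, then reset PHYs that set
	 * PHY_RST_AFTER_CLK_EN so they come back up in the right mode.
	 */
	ret = clk_prepare_enable(priv->clk_ref);
	if (ret)
		return ret;
	phy_reset_after_clk_enable(ndev->phydev);
	return 0;
}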
Signed-off-by: Richard Leitner <richard.leitner@skidata.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Some PHYs need a minimum delay after the reset GPIO is asserted and/or
deasserted. To ensure we meet these timing requirements, add two new
optional devicetree parameters for the PHY: reset-delay-us and
reset-post-delay-us.
Signed-off-by: Richard Leitner <richard.leitner@skidata.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
There are a set of values in the drm_display_info structure for each
connector which hold information derived from EDID. These are computed
in drm_add_display_info. Before this patch, that was only called in
drm_add_edid_modes. This meant that they were only set when EDID was
present and never reset when EDID was not, as happened when the
display was disconnected.
One of these fields, non_desktop, is used from
drm_mode_connector_update_edid_property, the function responsible for
assigning the new edid value to the application-visible property.
Various drivers call these two functions (drm_add_edid_modes and
drm_mode_connector_update_edid_property) in different orders. This
means that even when EDID is present, the drm_display_info fields may
not have been computed at the time that
drm_mode_connector_update_edid_property used the non_desktop value to
set the non_desktop property.
I've added a public function (drm_reset_display_info) that resets the
drm_display_info field values to default values and then made the
drm_add_display_info function public. These two functions are now
called directly from drm_mode_connector_update_edid_property so that
the drm_display_info fields are always computed from the current EDID
information before being used in that function.
This means that the drm_display_info values are often computed twice,
once when the EDID property is set and a second time when EDID is used
to compute modes for the device. The alternative would be to uniformly
ensure that the values were computed once before being used, which
would require that all drivers reliably invoke the two paths in the
same order. The computation is inexpensive enough that it seems more
maintainable in the long term to simply compute them in both paths.
The API to drm_add_display_info has been changed so that it no longer
takes the set of edid-based quirks as a parameter. Rather, it now
computes those quirks itself and returns them for further use by
drm_add_edid_modes.
This patch also includes a number of 'const' additions caused by
drm_mode_connector_update_edid_property taking a 'const struct edid *'
parameter and wanting to pass that along to drm_add_display_info.
v2: after review by Daniel Vetter <daniel.vetter@ffwll.ch>
Removed EXPORT_SYMBOL_GPL for drm_reset_display_info and
drm_add_display_info.
Added FIXME in drm_mode_connector_update_edid_property about
potentially merging that with drm_add_edid_modes to avoid
the need for two driver calls.
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20171213084427.31199-1-keithp@keithp.com
(danvet: cherry picked from commit 12a889bf4bca ("drm: rework delayed
connector cleanup in connector_iter") from drm-misc-next since
functional conflict with changes in -next and we need to make sure
both have the right version and nothing gets lost.)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
There are a set of values in the drm_display_info structure for each
connector which hold information derived from EDID. These are computed
in drm_add_display_info. Before this patch, that was only called in
drm_add_edid_modes. This meant that they were only set when EDID was
present and never reset when EDID was not, as happened when the
display was disconnected.
One of these fields, non_desktop, is used from
drm_mode_connector_update_edid_property, the function responsible for
assigning the new edid value to the application-visible property.
Various drivers call these two functions (drm_add_edid_modes and
drm_mode_connector_update_edid_property) in different orders. This
means that even when EDID is present, the drm_display_info fields may
not have been computed at the time that
drm_mode_connector_update_edid_property used the non_desktop value to
set the non_desktop property.
I've added a public function (drm_reset_display_info) that resets the
drm_display_info field values to default values and then made the
drm_add_display_info function public. These two functions are now
called directly from drm_mode_connector_update_edid_property so that
the drm_display_info fields are always computed from the current EDID
information before being used in that function.
This means that the drm_display_info values are often computed twice,
once when the EDID property is set and a second time when EDID is used
to compute modes for the device. The alternative would be to uniformly
ensure that the values were computed once before being used, which
would require that all drivers reliably invoke the two paths in the
same order. The computation is inexpensive enough that it seems more
maintainable in the long term to simply compute them in both paths.
The API to drm_add_display_info has been changed so that it no longer
takes the set of edid-based quirks as a parameter. Rather, it now
computes those quirks itself and returns them for further use by
drm_add_edid_modes.
This patch also includes a number of 'const' additions caused by
drm_mode_connector_update_edid_property taking a 'const struct edid *'
parameter and wanting to pass that along to drm_add_display_info.
v2: after review by Daniel Vetter <daniel.vetter@ffwll.ch>
Removed EXPORT_SYMBOL_GPL for drm_reset_display_info and
drm_add_display_info.
Added FIXME in drm_mode_connector_update_edid_property about
potentially merging that with drm_add_edid_modes to avoid
the need for two driver calls.
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20171213084427.31199-1-keithp@keithp.com
The existing format modifier definitions were merged prematurely, and
recent work has unveiled that the definitions are suboptimal in several
ways:
- The format modifiers, except for one, are not Tegra-specific, but
the names don't reflect that.
- The number space is split into two, reserving 32 bits for some
"parameter" which most of the modifiers are not going to have.
- Symbolic names for the modifiers are not using the standard
DRM_FORMAT_MOD_* prefix, which makes them awkward to use.
- The vendor prefix NV is somewhat ambiguous.
Fortunately, nobody's started using these modifiers, so we can still fix
the above issues. Do so by using the standard prefix. Also, remove TEGRA
from the name of those modifiers that exist on NVIDIA GPUs as well. In
case of the block linear modifiers, make the "parameter" smaller (4
bits, though only 6 values are valid) and don't let that leak into any
of the other modifiers.
Finally, also use the more canonical NVIDIA instead of the ambiguous NV
prefix.
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Avoid compiler warnings when the val parameter is an expression.
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Move Tegra186 support to the consolidated PMC driver to reduce some of
the duplication and also gain I/O pad functionality on the new SoC as a
side-effect.
Signed-off-by: Thierry Reding <treding@nvidia.com>
As opposed to earlier incarnations, the memory controller on Tegra186
no longer implements an SMMU. Instead, the SMMU is a regular ARM SMMU
in a separate IP block.
However, the memory controller programs the SMMU stream IDs for each of
the memory clients. Add a header file with definitions for each of these
stream IDs and mark the #iommu-cells property as required on Tegra30 to
Tegra210 in the device tree bindings.
Signed-off-by: Thierry Reding <treding@nvidia.com>
To set the host/peripheral mode by using an extcon notifier, this patch
adds a new callback, "notifier", to renesas_usbhs_platform_callback.
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
There is an OF/ACPI function to obtain the driver data. We want to hide
OF/ACPI details from device drivers and abstract this following the
device_* family of functions.
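A driver probe can then fetch its match data without caring about the
firmware interface; a sketch (the config struct and probe function are
hypothetical, and the helper is assumed to be device_get_match_data()):

static int example_probe(struct platform_device *pdev)
{
	const struct example_cfg *cfg;

	cfg = device_get_match_data(&pdev->dev);
	if (!cfg)
		return -ENODEV;
	/* ... use cfg regardless of OF or ACPI enumeration ... */
	return 0;
}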
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Reviewed-by: Rob Herring <robh@kernel.org>
Acked-by: Sakari Ailus <sakari.ailus@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
OF has the of_device_get_match_data() function to extract a
driver-specific data structure. Add a similar function for ACPI.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add this API to restore the status of the SPI flash chip to its
defaults, such as the addressing mode, whenever the driver is detached
from the device or the system reboots.
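Callers would typically invoke the new entry point from their remove or
shutdown paths; a sketch (assuming the API is spi_nor_restore() and a
driver-specific remove function):

static int example_remove(struct platform_device *pdev)
{
	struct spi_nor *nor = platform_get_drvdata(pdev);
	int ret = mtd_device_unregister(&nor->mtd);

	/* Put the flash back into its power-on defaults, e.g. 3-byte
	 * addressing, before handing it to the next user.
	 */
	spi_nor_restore(nor);
	return ret;
}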
Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
Signed-off-by: Cyrille Pitchen <cyrille.pitchen@wedev4u.fr>
For Micron SPI NOR devices, when an erase/program operation fails,
especially when the failure results from attempting to modify
protected space, the spi-nor upper layers still see a return value
indicating that the operation succeeded. This is because the current
spi_nor_fsr_ready() only uses FSR bit 7 (flag status register) to
check whether the device is ready.
This patch fixes the issue by checking the relevant error bits in the
FSR.
The FSR is a powerful tool to investigate the status of the device,
providing information on what the memory is actually doing and
detecting possible error conditions.
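A sketch of the check being described (the bit positions follow
Micron's FSR documentation and are assumptions here: bit 7 ready,
bit 5 erase failure, bit 4 program failure, bit 1 protection error):

static int example_fsr_ready(u8 fsr)
{
	if (fsr & (BIT(5) | BIT(4) | BIT(1)))
		return -EIO;		/* erase/program/protection error */
	return !!(fsr & BIT(7));	/* 1 when the device is ready */
}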
Signed-off-by: beanhuo <beanhuo@micron.com>
Signed-off-by: Cyrille Pitchen <cyrille.pitchen@wedev4u.fr>
Add the rg_crc field to store a crc32 of the gfs2_rgrp structure. This
allows us to check resource group headers' integrity and removes the
requirement to check them against the rindex entries in fsck. If this
field is found to be zero, it should be ignored (or updated with an
accurate value).
Signed-off-by: Andrew Price <anprice@redhat.com>
Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Add rg_data0, rg_data and rg_bitbytes to struct gfs2_rgrp. The fields
are identical to their counterparts in struct gfs2_rindex and are
intended to reduce the use of the rindex. For now the fields are only
written back as the in-memory equivalents in struct gfs2_rgrpd are set
using values from the rindex. However, they are needed at this point so
that userspace can make use of them, allowing a migration away from the
rindex over time.
The new fields take up previously reserved space which was explicitly
zeroed on write so, in clusters with mixed kernels, these fields could
get zeroed after being set and this should not be treated as an error.
Signed-off-by: Andrew Price <anprice@redhat.com>
Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Add a new rg_skip field to struct gfs2_rgrp, replacing __pad. The
rg_skip field has the following meaning:
- If rg_skip is zero, it is considered unset and not useful.
- If rg_skip is non-zero, its value will be the number of blocks between
this rgrp's address and the next rgrp's address. This can be used as a
hint by fsck.gfs2 when rebuilding a bad rindex, for example.
This will provide less dependency on the rindex in future, and allow
tools such as fsck.gfs2 to iterate the resource groups without keeping
the rindex around.
The field is updated in gfs2_rgrp_out() so that existing file systems
will have it set. This means that any resource groups that aren't ever
written will not be updated. The final rgrp is a special case as there
is no next rgrp, so it will always have a rg_skip of 0 (unless the fs is
extended).
Before this patch, gfs2_rgrp_out() zeroes the __pad field explicitly, so
the rg_skip field can get set back to 0 in cases where nodes with and
without this patch are mixed in a cluster. In some cases, the field may
bounce between being set by one node and then zeroed by another which
may harm performance slightly, e.g. when two nodes create many small
files. In testing this situation is rare but it becomes more likely as
the filesystem fills up and there are fewer resource groups to choose
from. The problem goes away when all nodes are running with this patch.
Dipping into the space currently occupied by the rg_reserved field would
have resulted in the same problem as it is also explicitly zeroed, so
unfortunately there is no other way around it.
Signed-off-by: Andrew Price <anprice@redhat.com>
Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Error injection is sloppy and very ad-hoc. BPF could fill this niche
perfectly with its kprobe functionality. We could make sure errors are
only triggered in specific call chains that we care about, in very
specific situations. Accomplish this with the bpf_override_function
helper. This will modify the probed function's return value to the
specified value and set the PC to an override function that simply
returns, bypassing the originally probed function. This gives us a
nice clean way to implement systematic error injection for all of our
code paths.
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Using BPF we can override kprobed functions and return arbitrary
values. Obviously this can be a bit unsafe, so make this feature opt-in
for functions. Simply tag a function with KPROBE_ERROR_INJECT_SYMBOL in
order to give BPF access to that function for error injection purposes.
Signed-off-by: Josef Bacik <jbacik@fb.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Commit e87c6bc385 ("bpf: permit multiple bpf attachments
for a single perf event") added support to attach multiple
bpf programs to a single perf event.
Although this provides flexibility, users may want to know
what other bpf programs are attached to the same tp interface.
Besides getting visibility for the underlying bpf system,
such information may also help consolidate multiple bpf programs,
understand potential performance issues due to a large array,
and debug (e.g., one bpf program which overwrites return code
may impact subsequent program results).
Commit 2541517c32 ("tracing, perf: Implement BPF programs
attached to kprobes") utilized the existing perf ioctl
interface and added the command PERF_EVENT_IOC_SET_BPF
to attach a bpf program to a tracepoint. This patch adds a new
ioctl command, given a perf event fd, to query the bpf program
array attached to the same perf tracepoint event.
The new uapi ioctl command:
PERF_EVENT_IOC_QUERY_BPF
The new uapi/linux/perf_event.h structure:
struct perf_event_query_bpf {
	__u32 ids_len;
	__u32 prog_cnt;
	__u32 ids[0];
};
User space provides the buffer "ids" for the kernel to copy into.
When returning from the kernel, the number of available
programs in the array is set in "prog_cnt".
The usage:
struct perf_event_query_bpf *query =
	malloc(sizeof(*query) + sizeof(__u32) * ids_len);

query->ids_len = ids_len;
err = ioctl(pmu_efd, PERF_EVENT_IOC_QUERY_BPF, query);
if (err == 0) {
	/* query->prog_cnt is the number of available progs,
	 * number of progs in ids: (ids_len == 0) ? 0 : query->prog_cnt
	 */
} else if (errno == ENOSPC) {
	/* query->ids_len number of progs copied,
	 * query->prog_cnt is the number of available progs
	 */
} else {
	/* other errors */
}
Signed-off-by: Yonghong Song <yhs@fb.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>