Add KEY_FLAG_TRUSTED to indicate that a key either comes from a trusted source
or had a cryptographic signature chain that led back to a trusted key the
kernel already possessed.
Add KEY_FLAG_TRUSTED_ONLY to indicate that a keyring will only accept links to
keys marked with KEY_FLAG_TRUSTED.
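As a sketch, the link-time check this implies looks something like this
(flag names from this patch; the surrounding logic is illustrative):

  /* A keyring marked KEY_FLAG_TRUSTED_ONLY refuses links to keys
   * that lack KEY_FLAG_TRUSTED. */
  if (test_bit(KEY_FLAG_TRUSTED_ONLY, &keyring->flags) &&
      !test_bit(KEY_FLAG_TRUSTED, &key->flags))
          return -EPERM;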
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Separate the kernel signature checking keyring from module signing so that it
can be used by code other than the module-signing code.
Signed-off-by: David Howells <dhowells@redhat.com>
Store public key algorithm ID in public_key_signature struct for reference
purposes. This allows a public_key_signature struct to be embedded in
struct x509_certificate and other places more easily.
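A condensed sketch of the struct with the algorithm ID embedded (the
field set is illustrative, not the exact in-tree layout):

  struct public_key_signature {
          u8 *digest;
          u8 digest_size;                         /* bytes in digest */
          enum pkey_algo pkey_algo : 8;           /* ID stored by this patch */
          enum pkey_hash_algo pkey_hash_algo : 8;
          MPI mpi[2];                             /* signature values */
  };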
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Josh Boyer <jwboyer@redhat.com>
Store public key algo ID in public_key struct for reference purposes. This
allows it to be removed from the x509_certificate struct and used to find a
default in public_key_verify_signature().
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Josh Boyer <jwboyer@redhat.com>
This resolves the merge problem with two iio drivers that Stephen
Rothwell pointed out.
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Move the public-key algorithm pointer array from x509_public_key.c to
public_key.c as it isn't X.509 specific.
Note that to make this configure correctly, the public key part must be
dependent on the RSA module rather than the other way round. This needs a
further patch to make use of the crypto module loading stuff rather than using
a fixed table.
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Josh Boyer <jwboyer@redhat.com>
Rename the arrays of public key parameters (public key algorithm names, hash
algorithm names and ID type names) so that the array name ends in "_name".
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Josh Boyer <jwboyer@redhat.com>
The old rcu_is_cpu_idle() function is just __rcu_is_watching() with
preemption disabled. This commit therefore renames rcu_is_cpu_idle()
to rcu_is_watching().
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
There is currently no way for kernel code to determine whether it
is safe to enter an RCU read-side critical section, in other words,
whether or not RCU is paying attention to the currently running CPU.
Given the large and increasing quantity of code shared by the idle loop
and non-idle code, this shortcoming is becoming increasingly painful.
This commit therefore adds __rcu_is_watching(), which returns true if
it is safe to enter an RCU read-side critical section on the currently
running CPU. This function is quite fast, using only a __this_cpu_read().
However, the caller must disable preemption.
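Usage sketch (the preempt_disable()/preempt_enable() pair satisfies the
caller requirement; the surrounding code is illustrative):

  preempt_disable();
  if (__rcu_is_watching()) {
          /* RCU is watching this CPU; read-side critical
           * sections are safe. */
          rcu_read_lock();
          /* ... access RCU-protected data ... */
          rcu_read_unlock();
  }
  preempt_enable();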
Reported-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
The NFC Forum NCI specification defines both a hardware and software
protocol when using a SPI physical transport to connect an NFC NCI
Chipset. The hardware requirement is that, after having raised the chip
select line, the SPI driver must wait for the INT line from the NFC
chipset to rise before it sends the data. The chip select must be
raised first though, because this is the signal that the NFC chipset
will detect to wake up and then raise its INT line. If the INT line
doesn't rise in a timely fashion, the SPI driver should abort the
operation.
When data is transferred from Device host (DH) to NFC Controller (NFCC),
the signaling sequence is the following:
Data Transfer from DH to NFCC
• 1-Master asserts SPI_CSN
• 2-Slave asserts SPI_INT
• 3-Master sends NCI-over-SPI protocol header and payload data
• 4-Slave deasserts SPI_INT
• 5-Master deasserts SPI_CSN
When data must be transferred from NFCC to DH, things are a little bit
different.
Data Transfer from NFCC to DH
• 1-Slave asserts SPI_INT -> NFC chipset irq handler called -> process
reading from SPI
• 2-Master asserts SPI_CSN
• 3-Master sends 2-octet NCI-over-SPI protocol header
• 4-Slave sends 2-octet NCI-over-SPI protocol payload length
• 5-Slave sends NCI-over-SPI protocol payload
• 6-Master deasserts SPI_CSN
In this case, SPI driver should function normally as it does today. Note
that the INT line can and will be lowered anytime between beginning of
step 3 and end of step 5. A low INT is therefore valid after chip select
has been raised.
This would be easily implemented in a single driver. Unfortunately, we
don't write the SPI driver and I had to come up with a workaround trick
to get the SPI and NFC drivers to work in a synchronized fashion. The
trick is the following:
- send an empty spi message: this will raise the chip select line, and
send nothing. We expect the /CS line to stay raised because we asked
for it in the spi_transfer cs_change field
- wait for a completion that will be completed by the NFC driver IRQ
handler when it knows we are in the process of sending data (NFC spec
says that we use SPI in a half duplex mode, so we are either sending or
receiving).
- when completed, proceed with the normal data send.
This has been tested and verified to work very consistently on a Nexus
10 (spi-s3c64xx driver). It may not work the same with other spi
drivers.
The previously defined nci_spi_ops{}, whose intended purpose was to
address this problem, is no longer used and is therefore removed entirely.
nci_spi_send() takes a new optional write_handshake_completion
completion pointer. If non-NULL, the nci spi layer will run the above
trick when sending data to the NFC chip. If NULL, the data is sent
normally, all at once, and it is then the NFC driver's responsibility
to know what it's doing.
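Sketch of the trick as run by the nci spi layer (assumes the standard
spi_message/spi_transfer API; nspi, timeout and error handling are
condensed):

  struct spi_transfer t = { .len = 0, .cs_change = 1 };
  struct spi_message m;

  /* Empty transfer: raises the chip select line and keeps it up. */
  spi_message_init(&m);
  spi_message_add_tail(&t, &m);
  spi_sync(nspi->spi, &m);

  /* Completed by the NFC driver IRQ handler once INT is high. */
  if (!wait_for_completion_timeout(write_handshake_completion,
                                   msecs_to_jiffies(timeout)))
          return -ETIME;

  /* ... proceed with the normal data send ... */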
Signed-off-by: Eric Lapuyade <eric.lapuyade@intel.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Move the workaround for double sending AUDIO_CODEC and AUDIO_DAC writes
into the SPI core, aiding refactoring to eliminate the ASoC custom I/O
functions and avoiding the extra writes for I2C.
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Previously, nci_spi_recv_frame() would directly transmit incoming frames
to the NCI Core. However, it turns out that some NFC NCI Chips will add
additional proprietary headers that must be handled/removed before NCI
Core gets a chance to handle the frame. With this modification, the chip
phy or driver are now responsible to transmit incoming frames to NCI
Core after proper treatment, and NCI SPI becomes a driver helper instead
of sitting between the NFC driver and NCI Core.
As a general rule in NFC, *_recv_frame() APIs are used to deliver an
incoming frame to an upper layer. To better suit the actual purpose of
nci_spi_recv_frame(), and go along with its nci_spi_send()
counterpart, the function is renamed to nci_spi_read().
The skb is returned as the function result.
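A chip driver's threaded IRQ handler might then look like this (all
names except nci_spi_read() and nci_recv_frame() are hypothetical):

  static irqreturn_t chip_irq_thread_fn(int irq, void *data)
  {
          struct chip_dev *cdev = data;   /* hypothetical driver struct */
          struct sk_buff *skb;

          skb = nci_spi_read(cdev->nspi); /* skb returned as result */
          if (!skb)
                  return IRQ_HANDLED;

          /* Strip the proprietary header before NCI core sees it. */
          chip_strip_vendor_header(skb);  /* hypothetical helper */
          nci_recv_frame(cdev->ndev, skb);

          return IRQ_HANDLED;
  }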
Signed-off-by: Eric Lapuyade <eric.lapuyade@intel.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
When using per-cpu preempt_count variables we need to save/restore the
preempt_count on context switch (into per task storage; for instance
the old thread_info::preempt_count variable) because of
PREEMPT_ACTIVE.
However, this means that on fork() the preempt_count value of the last
context switch gets copied and if we had a PREEMPT_ACTIVE switch right
before cloning a child task the child task will now too have
PREEMPT_ACTIVE set and start its life with an extra PREEMPT_ACTIVE
count.
Therefore we need to make init_task_preempt_count() unconditional;
this resets whatever preempt_count we inherited from our parent
process.
Doing so for !per-cpu implementations is harmless.
For !PREEMPT_COUNT kernels we need to be careful not to start life
with an increased preempt_count.
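A minimal sketch of the unconditional reset, assuming a
thread_info-backed count (PREEMPT_DISABLED collapses to PREEMPT_ENABLED
on !PREEMPT_COUNT kernels, so no extra count is inherited there):

  #ifdef CONFIG_PREEMPT_COUNT
  #define PREEMPT_DISABLED        (PREEMPT_ENABLED + 1)
  #else
  #define PREEMPT_DISABLED        PREEMPT_ENABLED
  #endif

  /* Unconditional: wipes out any PREEMPT_ACTIVE inherited on fork(). */
  #define init_task_preempt_count(p) do { \
          task_thread_info(p)->preempt_count = PREEMPT_DISABLED; \
  } while (0)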
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-4k0b7oy1rcdyzochwiixuwi9@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Rewrite the preempt_count macros in order to extract the 3 basic
preempt_count value modifiers:
__preempt_count_add()
__preempt_count_sub()
and the new:
__preempt_count_dec_and_test()
And since we're at it anyway, replace the unconventional
$op_preempt_count names with the more conventional preempt_count_$op.
Since these basic operators are equivalent to the previous _notrace()
variants, do away with the _notrace() versions.
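Sketched in terms of the preempt_count_ptr() accessor from the
companion patch (illustrative; the generic versions):

  static __always_inline void __preempt_count_add(int val)
  {
          *preempt_count_ptr() += val;
  }

  static __always_inline void __preempt_count_sub(int val)
  {
          *preempt_count_ptr() -= val;
  }

  /* Decrement and test for zero; used on the preempt_enable() path. */
  static __always_inline bool __preempt_count_dec_and_test(void)
  {
          return !--*preempt_count_ptr();
  }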
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-ewbpdbupy9xpsjhg960zwbv8@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
We need a few special preempt_count accessors:
- task_preempt_count() for when we're interested in the preemption
count of another (non-running) task.
- init_task_preempt_count() for properly initializing the preemption
count.
- init_idle_preempt_count() a special case of the above for the idle
threads.
With these no generic code ever touches thread_info::preempt_count
anymore and architectures could choose to remove it.
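Sketch of the task-targeted accessors, assuming the (now removable)
thread_info::preempt_count backing; init_task_preempt_count() is
sketched with the fork fix above:

  #define task_preempt_count(p) \
          (task_thread_info(p)->preempt_count)

  /* Idle threads run with preemption enabled once fully set up. */
  #define init_idle_preempt_count(p, cpu) do { \
          task_thread_info(p)->preempt_count = PREEMPT_ENABLED; \
  } while (0)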
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-jf5swrio8l78j37d06fzmo4r@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
In order to combine the preemption and need_resched test we need to
fold the need_resched information into the preempt_count value.
Since the NEED_RESCHED flag is set across CPUs this needs to be an
atomic operation, however we very much want to avoid making
preempt_count atomic, therefore we keep the existing TIF_NEED_RESCHED
infrastructure in place but at 3 sites test it and fold its value into
preempt_count; namely:
- resched_task() when setting TIF_NEED_RESCHED on the current task
- scheduler_ipi() when resched_task() sets TIF_NEED_RESCHED on a
remote task it follows it up with a reschedule IPI
and we can modify the cpu local preempt_count from
there.
- cpu_idle_loop() for when resched_task() found tsk_is_polling().
We use an inverted bitmask to indicate need_resched so that a 0 means
both need_resched and !atomic.
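A sketch of the inverted encoding (bit position illustrative): the
NEED_RESCHED bit is kept cleared while a reschedule is pending, so a
preempt_count of 0 answers both questions in one test:

  #define PREEMPT_NEED_RESCHED    0x80000000

  static __always_inline void set_preempt_need_resched(void)
  {
          *preempt_count_ptr() &= ~PREEMPT_NEED_RESCHED; /* 0 => pending */
  }

  static __always_inline void clear_preempt_need_resched(void)
  {
          *preempt_count_ptr() |= PREEMPT_NEED_RESCHED;
  }

  /* True only when the count is zero AND need_resched is set. */
  static __always_inline bool should_resched(void)
  {
          return unlikely(!*preempt_count_ptr());
  }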
Also remove the barrier() in preempt_enable() between
preempt_enable_no_resched() and preempt_check_resched() to avoid
having to reload the preemption value and allow the compiler to use
the flags of the previous decrement. I couldn't come up with any sane
reason for this barrier() to be there as preempt_enable_no_resched()
already has a barrier() before doing the decrement.
Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-7a7m5qqbn5pmwnd4wko9u6da@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Replace the single preempt_count() 'function' that's an lvalue with
two proper functions:
preempt_count() - returns the preempt_count value as an rvalue
preempt_count_set() - allows setting the preempt_count value
Also provide preempt_count_ptr() as a convenience wrapper to implement
all modifying operations.
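Sketched against a thread_info-backed count (per-cpu implementations
swap the pointer out):

  static __always_inline int *preempt_count_ptr(void)
  {
          return &current_thread_info()->preempt_count;
  }

  /* rvalue only; no more direct assignment to preempt_count(). */
  static __always_inline int preempt_count(void)
  {
          return *preempt_count_ptr();
  }

  static __always_inline void preempt_count_set(int pc)
  {
          *preempt_count_ptr() = pc;
  }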
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-orxrbycjozopqfhb4dxdkdvb@git.kernel.org
[ Fixed build failure. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Mike reported that commit 7d1a9417 ("x86: Use generic idle loop")
regressed several workloads and caused excessive reschedule
interrupts.
The patch in question failed to notice that the x86 code had an
inverted sense of the polling state versus the new generic code (x86:
default polling, generic: default !polling).
Fix the two prominent x86 mwait based idle drivers and introduce a few
new generic polling helpers (fixing the wrong smp_mb__after_clear_bit
usage).
Also switch the idle routines to using tif_need_resched() which is an
immediate TIF_NEED_RESCHED test as opposed to need_resched which will
end up being slightly different.
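tif_need_resched() is a direct test of the thread flag, and the
polling helpers pair the polling store with the flag test; a sketch
(barrier choice illustrative):

  #define tif_need_resched()      test_thread_flag(TIF_NEED_RESCHED)

  /* Set polling, then check we didn't race with a resched request. */
  static inline bool current_set_polling_and_test(void)
  {
          __current_set_polling();
          smp_mb();       /* order polling store vs flag load */
          return unlikely(tif_need_resched());
  }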
Reported-by: Mike Galbraith <bitbucket@online.de>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: lenb@kernel.org
Cc: tglx@linutronix.de
Link: http://lkml.kernel.org/n/tip-nc03imb0etuefmzybzj7sprf@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The x86/AMD64 EFI stubs must use a call wrapper to convert between
the Linux and EFI ABIs, so void pointers are sufficient. For ARM,
the ABIs are compatible, so we can directly invoke the function
pointers. The functions that are used by the ARM stub are updated
to match the EFI definitions.
Also add some EFI types used by EFI functions.
Signed-off-by: Roy Franz <roy.franz@linaro.org>
Acked-by: Mark Salter <msalter@redhat.com>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Change the iommu driver to call the io_page_fault trace event. This
iommu_error class event can be enabled to trigger when an iommu error
occurs. Trace information includes driver name, device name, iova, and flags.
Testing:
Added trace calls to iommu_prepare_identity_map() for testing some of the
conditions that are hard to trigger. Here is the trace from the testing:
swapper/0-1 [003] .... 2.003774: io_page_fault: IOMMU:pci 0000:00:02.0 iova=0x00000000cb800000 flags=0x0002
swapper/0-1 [003] .... 2.004098: io_page_fault: IOMMU:pci 0000:00:1d.0 iova=0x00000000cadc6000 flags=0x0002
swapper/0-1 [003] .... 2.004115: io_page_fault: IOMMU:pci 0000:00:1a.0 iova=0x00000000cadc6000 flags=0x0002
swapper/0-1 [003] .... 2.004129: io_page_fault: IOMMU:pci 0000:00:1f.0 iova=0x0000000000000000 flags=0x0002
Signed-off-by: Shuah Khan <shuah.kh@samsung.com>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
The iommu_error class event can be enabled to trigger when an iommu
error occurs. This trace event is intended to be called to report the
error information. Trace information includes driver name, device name,
iova, and flags.
iommu_error:io_page_fault
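A condensed sketch of such an event definition (field layout
illustrative; the in-tree version lives under include/trace/events/):

  TRACE_EVENT(io_page_fault,

          TP_PROTO(struct device *dev, unsigned long iova, int flags),

          TP_ARGS(dev, iova, flags),

          TP_STRUCT__entry(
                  __string(device, dev_name(dev))
                  __field(unsigned long, iova)
                  __field(int, flags)
          ),

          TP_fast_assign(
                  __assign_str(device, dev_name(dev));
                  __entry->iova = iova;
                  __entry->flags = flags;
          ),

          TP_printk("IOMMU:%s iova=0x%016lx flags=0x%04x",
                    __get_str(device), __entry->iova, __entry->flags)
  );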
Signed-off-by: Shuah Khan <shuah.kh@samsung.com>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
Much of of_irq.h is needlessly ifdef'ed. Clean this up and minimize the
amount of ifdef'ed code. This fixes some build warnings when CONFIG_OF
is not enabled (seen on i386 and x86_64):
include/linux/of_irq.h:82:7: warning: 'struct device_node' declared inside parameter list [enabled by default]
include/linux/of_irq.h:82:7: warning: its scope is only this definition or declaration, which is probably not what you want [enabled by default]
include/linux/of_irq.h:87:47: warning: 'struct device_node' declared inside parameter list [enabled by default]
Compile tested on i386, sparc and arm.
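The usual shape of such a fix, sketched with irq_of_parse_and_map():
keep a forward declaration visible in all configurations so !CONFIG_OF
prototypes don't declare struct device_node inside a parameter list:

  struct device_node;     /* visible regardless of CONFIG_OF */

  #ifdef CONFIG_OF_IRQ
  extern unsigned int irq_of_parse_and_map(struct device_node *node,
                                           int index);
  #else
  static inline unsigned int irq_of_parse_and_map(struct device_node *node,
                                                  int index)
  {
          return 0;
  }
  #endif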
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
In order to send and receive ISO7816 APDUs to and from NFC embedded
secure elements, we define a specific netlink command.
In a typical SE use case, host applications will send very few APDUs
(less than 10) per transaction. This is why we decided to go for a
simple netlink API. Defining another NFC socket protocol for such low
traffic would have been overengineered.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
SENS_RES has no specific endianness attached to it; the kernel ABI is the
following one: Byte 2 (as described by the NFC Forum Digital spec) is
the u16 most significant byte.
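In other words (sketch; resp holds the two SENS_RES bytes in Digital
spec order):

  u8 byte1 = resp[0], byte2 = resp[1];  /* Bytes 1 and 2 of SENS_RES */
  u16 sens_res = (byte2 << 8) | byte1;  /* Byte 2 = most significant */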
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
This adds support for NFC-A technology at 106 kbits/s. The stack can
detect tags of type 1 and 2. There is no support for collision
detection. Tags can be read and written by using a user space
application or a daemon like neard.
The flow of polling operations for NFC-A detection is as follows:
1 - The digital stack sends the SENS_REQ command to the NFC device.
2 - The NFC device receives a SENS_RES response from a peer device and
passes it to the digital stack.
3 - If the SENS_RES response identifies a type 1 tag, detection ends.
NFC core is notified through nfc_targets_found().
4 - Otherwise, the digital stack sets the cascade level of NFCID1 to
CL1 and sends the SDD_REQ command.
5 - The digital stack selects SEL_CMD and SEL_PAR according to the
cascade level and sends the SDD_REQ command.
6 - The digital stack receives a SDD_RES response for the cascade level
passed in the SDD_REQ command.
7 - The digital stack analyses (part of) NFCID1 and verifies BCC.
8 - The digital stack sends the SEL_REQ command with the NFCID1
received in the SDD_RES.
9 - The peer device replies with a SEL_RES response.
10 - Detection ends if NFCID1 is complete. NFC core is notified of the
new target by nfc_targets_found().
11 - If NFCID1 is not complete, the cascade level is incremented (up
to and including CL3) and the execution continues at step 5 to
get the remaining bytes of NFCID1.
Once target detection is done, type 1 and 2 tag commands must be
handled by a user space application (e.g. neard) through the NFC core.
Responses for type 1 tag are returned directly to user space via NFC
core.
Responses of type 2 commands are handled differently. The digital stack
doesn't analyse the type of commands sent through im_transceive() and
must differentiate valid responses from error ones.
The response process flow is as follows:
1 - If the response length is 16 bytes, it is a valid response to a
READ command. The packet is returned to the NFC core through the
callback passed to im_transceive(). Processing stops.
2 - If the response is 1 byte long and is an ACK byte (0x0A), it is a
valid response to a WRITE command, for example. The first packet byte
is set to 0 for no-error and passed back to the NFC core.
Processing stops.
3 - Any other response is treated as an error and -EIO error code is
returned to the NFC core through the response callback.
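A condensed sketch of that discrimination (constant names
illustrative; error paths simplified):

  #define DIGITAL_READ_RES_LEN    16
  #define DIGITAL_ACK_BYTE        0x0A

  static int digital_in_recv_t2t_resp(struct sk_buff *resp)
  {
          if (resp->len == DIGITAL_READ_RES_LEN)
                  return 0;                      /* valid READ response */

          if (resp->len == 1 && resp->data[0] == DIGITAL_ACK_BYTE) {
                  resp->data[0] = 0;             /* no-error marker */
                  return 0;
          }

          return -EIO;                           /* NACK or malformed */
  }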
Moreover, since the driver can't differentiate a success response from a
NACK response, the digital stack has to handle CRC calculation.
Thus, this patch also adds support for CRC calculation. If the driver
doesn't handle it, the digital stack will calculate the CRC and add it
to sent frames. The CRC will also be checked and removed from received
frames. Pointers to the correct CRC calculation functions are stored in
the digital stack device structure when a target is detected. This
avoids the need to check the current target type for every call to
im_transceive() and for every response received from a peer device.
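For NFC-A frames the stack can lean on the kernel's crc_ccitt()
helper; sketch (the 0x6363 seed and LSB-first byte order follow NFC-A
framing):

  #include <linux/crc-ccitt.h>

  static void digital_skb_add_crc_a(struct sk_buff *skb)
  {
          u16 crc = crc_ccitt(0x6363, skb->data, skb->len);

          /* CRC_A is appended least significant byte first. */
          *(u8 *)skb_put(skb, 1) = crc & 0xff;
          *(u8 *)skb_put(skb, 1) = (crc >> 8) & 0xff;
  }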
Signed-off-by: Thierry Escande <thierry.escande@linux.intel.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
This implements the mechanism used to send commands to the driver in
initiator mode through in_send_cmd().
Commands are serialized and sent to the driver by using a work item
on the system workqueue. Responses are handled asynchronously by
another work item. Once the digital stack receives the response through
the command_complete callback, the next command is sent to the driver.
This also implements the polling mechanism. It's handled by a work item
cycling on all supported protocols. The start poll command for a given
protocol is sent to the driver using the mechanism described above.
The process continues until a peer is discovered or stop_poll is
called. This patch implements the poll function for NFC-A that sends a
SENS_REQ command and waits for the SENS_RES response.
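A condensed sketch of the serialization (struct and field names
illustrative):

  static void digital_wq_cmd(struct work_struct *work)
  {
          struct nfc_digital_dev *ddev =
                  container_of(work, struct nfc_digital_dev, cmd_work);
          struct digital_cmd *cmd;

          mutex_lock(&ddev->cmd_lock);
          cmd = list_first_entry_or_null(&ddev->cmd_queue,
                                         struct digital_cmd, queue);
          mutex_unlock(&ddev->cmd_lock);

          if (!cmd)
                  return;

          /* The completion callback dequeues the command and schedules
           * this work item again for the next queued one. */
          ddev->ops->in_send_cmd(ddev, cmd->req, cmd->timeout,
                                 digital_send_cmd_complete, cmd);
  }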
Signed-off-by: Thierry Escande <thierry.escande@linux.intel.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Revert commit 3b38722efd ("memcg, vmscan: integrate soft reclaim
tighter with zone shrinking code")
I merged this prematurely - Michal and Johannes still disagree about the
overall design direction and the future remains unclear.
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Revert commit a5b7c87f92 ("vmscan, memcg: do softlimit reclaim also
for targeted reclaim")
I merged this prematurely - Michal and Johannes still disagree about the
overall design direction and the future remains unclear.
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Revert commit de57780dc6 ("memcg: enhance memcg iterator to support
predicates")
I merged this prematurely - Michal and Johannes still disagree about the
overall design direction and the future remains unclear.
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
watchdog_thresh controls how often the nmi perf event counter checks the
per-cpu hrtimer_interrupts counter and blows up if the counter hasn't
changed since the last check. The counter is updated by the per-cpu
watchdog_hrtimer hrtimer, which is scheduled with a 2/5 watchdog_thresh
period, which guarantees that the hrtimer is scheduled 2 times per main
period. Both the hrtimer and the perf event are started together when the
watchdog is enabled.
So far so good. But...
But what happens when watchdog_thresh is updated from sysctl handler?
proc_dowatchdog will set a new sampling period and hrtimer callback
(watchdog_timer_fn) will use the new value in the next round. The
problem, however, is that nobody tells the perf event that the sampling
period has changed, so it keeps ticking with the period configured when
it was set up.
This might result in an ear-ripping dissonance between the perf and
hrtimer parts if watchdog_thresh is increased. And even worse, it might
lead to KABOOM if the watchdog is configured to panic on such a spurious
lockup.
This patch fixes the issue by updating both the nmi perf event counter and
the hrtimers if the threshold value has changed.
The nmi one is disabled and then reinitialized from scratch. This has
the unpleasant side effect that the allocation of the new event might
theoretically fail, so the hard lockup detector would be disabled for
such cpus. On the other hand, such a memory allocation failure is very
unlikely because the original event is deallocated right before.
It would be much nicer if we could just change the perf event period, but
there doesn't seem to be any API to do that right now. It is also
unfortunate that perf_event_alloc uses a GFP_KERNEL allocation
unconditionally, so we cannot use on_each_cpu() and do the same thing
from the per-cpu context.
The update from the current CPU should be safe because
perf_event_disable removes the event atomically before it clears the
per-cpu watchdog_ev, so it cannot change anything under the running
handler's feet.
The hrtimer is simply restarted (thanks to Don Zickus who has pointed
this out) if it is queued, because we cannot rely on it firing and
adapting to the new sampling period before a new nmi event triggers
(when the threshold is decreased).
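The resulting update path, sketched (condensed from the
kernel/watchdog.c change; restart_watchdog_hrtimer is the per-cpu
helper that requeues the hrtimer):

  static void update_timers(int cpu)
  {
          /*
           * No API to change the perf event period in flight, so tear
           * the event down and set it up again with the new period,
           * and kick the hrtimer so it adopts the new sampling period
           * right away.
           */
          watchdog_nmi_disable(cpu);
          smp_call_function_single(cpu, restart_watchdog_hrtimer,
                                   NULL, 1);
          watchdog_nmi_enable(cpu);
  }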
[akpm@linux-foundation.org: the UP version of __smp_call_function_single ended up in the wrong place]
Signed-off-by: Michal Hocko <mhocko@suse.cz>
Acked-by: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Fabio Estevam <festevam@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This is the initial commit of the NFC Digital Protocol stack
implementation.
It offers an interface for devices that don't have an embedded NFC
Digital protocol stack. The driver instantiates the digital stack by
calling nfc_digital_allocate_device(). Within the nfc_digital_ops
structure, the driver specifies a set of function pointers for driver
operations. These functions must be implemented by the driver and are:
in_configure_hw:
Hardware configuration for RF technology and communication framing in
initiator mode. This is a synchronous function.
in_send_cmd:
Initiator mode data exchange using RF technology and framing previously
set with in_configure_hw. The peer response is returned through
callback cb. If an io error occurs or the peer didn't reply within the
specified timeout (ms), the error code is passed back through the resp
pointer. This is an asynchronous function.
tg_configure_hw:
Hardware configuration for RF technology and communication framing in
target mode. This is a synchronous function.
tg_send_cmd:
Target mode data exchange using RF technology and framing previously
set with tg_configure_hw. The peer next command is returned through
callback cb. If an io error occurs or the peer didn't reply within the
specified timeout (ms), the error code is passed back through the resp
pointer. This is an asynchronous function.
tg_listen:
Put the device in listen mode waiting for data from the peer device.
This is an asynchronous function.
tg_listen_mdaa:
If supported, put the device in automatic listen mode with mode
detection and automatic anti-collision. In this mode, the device
automatically detects the RF technology and executes the
anti-collision detection using the command responses specified in
mdaa_params. The mdaa_params structure contains SENS_RES, NFCID1, and
SEL_RES for 106A RF tech. NFCID2 and system code (sc) for 212F and
424F. The driver returns the NFC-DEP ATR_REQ command through cb. The
digital stack deduces the RF tech by analyzing the SoD of the frame
containing the ATR_REQ command. This is an asynchronous function.
switch_rf:
Turns the device radio on or off. The stack does not explicitly call
switch_rf to turn the radio on; a call to in|tg_configure_hw must turn
the device radio on.
abort_cmd:
Discard the last sent command.
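Condensed into a sketch, the ops table a driver fills in before
calling nfc_digital_allocate_device() looks roughly like this
(signatures condensed from the descriptions above; see the in-tree
header for the authoritative ones):

  struct nfc_digital_ops {
          int  (*in_configure_hw)(struct nfc_digital_dev *ddev,
                                  int type, int param);
          int  (*in_send_cmd)(struct nfc_digital_dev *ddev,
                              struct sk_buff *skb, u16 timeout,
                              nfc_digital_cmd_complete_t cb, void *arg);
          int  (*tg_configure_hw)(struct nfc_digital_dev *ddev,
                                  int type, int param);
          int  (*tg_send_cmd)(struct nfc_digital_dev *ddev,
                              struct sk_buff *skb, u16 timeout,
                              nfc_digital_cmd_complete_t cb, void *arg);
          int  (*tg_listen)(struct nfc_digital_dev *ddev, u16 timeout,
                            nfc_digital_cmd_complete_t cb, void *arg);
          int  (*tg_listen_mdaa)(struct nfc_digital_dev *ddev,
                                 struct digital_tg_mdaa_params *params,
                                 u16 timeout,
                                 nfc_digital_cmd_complete_t cb,
                                 void *arg);
          int  (*switch_rf)(struct nfc_digital_dev *ddev, bool on);
          void (*abort_cmd)(struct nfc_digital_dev *ddev);
  };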
Then the driver registers itself against the digital stack by using
nfc_digital_register_device() which in turn registers the digital stack
against the NFC core layer. The digital stack implements common NFC
operations like dev_up(), dev_down(), start_poll(), stop_poll(), etc.
This patch is only a skeleton and NFC operations are just stubs.
Signed-off-by: Thierry Escande <thierry.escande@linux.intel.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The NCI SPI layer should not manage the nci dev; this is the job of the nci
chipset driver. This layer should be limited to framing/deframing nci
packets, and optionally checking integrity (crc) and managing the ack/nak
protocol.
The NCI SPI must not be mixed up with an NCI dev. spi_[dev|device] are
therefore renamed to a simple spi for more clarity.
The header and crc sizes are moved to nci.h so that drivers can use
them to reserve space in outgoing skbs.
nci_spi_send() is exported to be accessible by drivers.
Signed-off-by: Eric Lapuyade <eric.lapuyade@intel.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
struct nfc_phy_ops is not an HCI-only structure; it can also be used by
NCI or direct NFC Core drivers.
Signed-off-by: Eric Lapuyade <eric.lapuyade@intel.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
An hci dev is an hdev. An nci dev is an ndev. Calling an nci spi dev an
ndev is misleading since it's not the same thing. The nci dev contained
in the nci spi dev is also named inconsistently.
Signed-off-by: Eric Lapuyade <eric.lapuyade@intel.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Use a more standard kernel style macro logging name.
Standardize the spacing of the "NFC: " prefix.
Add \n to uses, remove from macro.
Fix the defective uses that already had a \n.
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Use the generic kernel function instead of a home-grown
one that does the same thing.
Add \n to the uses, not at the macro. Don't add \n where
the nfc_dev_dbg macro mistakenly had them already.
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Remove NEED_MACH_GPIO_H config select option for ARCH_DAVINCI
to start using gpiolib interface for davinci platforms. This makes
it easier to use the gpio driver on other platforms as it breaks
dependency on mach-davinci.
Latencies for the gpio_get/set APIs will increase. On measurement,
latency was found to have increased by 18 microseconds with the
gpiolib API as compared to the inline APIs.
Measurement was done on DA850 EVM for gpio_get_value() API by
taking the printk timing across the call with interrupts disabled.
inline gpio API with interrupt disabled
[ 29.734337] before gpio_get
[ 29.736847] after gpio_get
Time difference 0.00251
gpio library with interrupt disabled
[ 272.876763] before gpio_get
[ 272.879291] after gpio_get
Time difference 0.002528
Latency increased by (0.002528 - 0.00251) s = 18 microseconds.
While at it, remove GPIO_TYPE_DAVINCI enum definition as
gpio-davinci.c is converted to Linux device driver model.
Signed-off-by: Philip Avinash <avinashphilip@ti.com>
Signed-off-by: Lad, Prabhakar <prabhakar.csengg@gmail.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
[nsekhar@ti.com: minor edits to commit message]
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
'.done' is used to mark the completion of 'async_pf_execute()', but
'cancel_work_sync()' returns true when the work was canceled, so we
use it instead.
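The pattern, sketched (surrounding cleanup simplified):

  /* If cancel_work_sync() returns true, the work was queued but never
   * ran, so nobody else will free it; no separate '.done' needed. */
  if (cancel_work_sync(&work->work)) {
          mmdrop(work->mm);
          kmem_cache_free(async_pf_cache, work);
  }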
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We currently accept cookies that were created less than 4 minutes ago
(i.e., cookies with counter delta 0-3). Combined with the 8 mss table
values, this yields 32 possible values (out of 2**32) that will be valid.
Reducing the lifetime to < 2 minutes halves the guessing chance while
still providing a large enough period.
While at it, get rid of jiffies value -- they overflow too quickly on
32 bit platforms.
getnstimeofday is used to create a counter that increments every 64s.
perf shows getnstimeofday's cost is negligible compared to sha_transform;
normal tcp initial sequence number generation uses getnstimeofday, too.
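A sketch of such a counter (matching the 64-second granularity
described above):

  static inline u32 tcp_cookie_time(void)
  {
          struct timespec now;

          getnstimeofday(&now);
          return now.tv_sec >> 6;         /* 64 seconds granularity */
  }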
Reported-by: Jakob Lell <jakob@jakoblell.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
According to the Intel VT-d specs, the offset of the Invalidation complete
status register should be 0x9C, not 0x98.
See Intel's VT-d spec, Revision 1.3, Chapter 10.4, Page 98.
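That is (macro name as used by the Intel IOMMU driver):

  #define DMAR_ICS_REG    0x9c    /* Invalidation complete status */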
Signed-off-by: Li, Zhen-Hua <zhen-hual@hp.com>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
Add tracing feature to iommu to report various iommu events. Classes
iommu_group, iommu_device, and iommu_map_unmap are defined.
iommu_group class events can be enabled to trigger when devices get added
to and removed from an iommu group. Trace information includes iommu group
id and device name.
iommu:add_device_to_group
iommu:remove_device_from_group
iommu_device class events can be enabled to trigger when devices are attached
to and detached from a domain. Trace information includes device name.
iommu:attach_device_to_domain
iommu:detach_device_from_domain
iommu_map_unmap class events can be enabled to trigger on iommu map and
unmap ops. Trace information includes iova, physical address (map event
only), and size.
iommu:map
iommu:unmap
Signed-off-by: Shuah Khan <shuah.kh@samsung.com>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
Add support for per-user_namespace registers of persistent per-UID kerberos
caches held within the kernel.
This allows the kerberos cache to be retained beyond the life of all a user's
processes so that the user's cron jobs can work.
The kerberos cache is envisioned as a keyring/key tree looking something like:
struct user_namespace
\___ .krb_cache keyring - The register
\___ _krb.0 keyring - Root's Kerberos cache
\___ _krb.5000 keyring - User 5000's Kerberos cache
\___ _krb.5001 keyring - User 5001's Kerberos cache
\___ tkt785 big_key - A ccache blob
\___ tkt12345 big_key - Another ccache blob
Or possibly:
struct user_namespace
\___ .krb_cache keyring - The register
\___ _krb.0 keyring - Root's Kerberos cache
\___ _krb.5000 keyring - User 5000's Kerberos cache
\___ _krb.5001 keyring - User 5001's Kerberos cache
\___ tkt785 keyring - A ccache
\___ krbtgt/REDHAT.COM@REDHAT.COM big_key
\___ http/REDHAT.COM@REDHAT.COM user
\___ afs/REDHAT.COM@REDHAT.COM user
\___ nfs/REDHAT.COM@REDHAT.COM user
\___ krbtgt/KERNEL.ORG@KERNEL.ORG big_key
\___ http/KERNEL.ORG@KERNEL.ORG big_key
What goes into a particular Kerberos cache is entirely up to userspace. Kernel
support is limited to giving you the Kerberos cache keyring that you want.
The user asks for their Kerberos cache by:
krb_cache = keyctl_get_krbcache(uid, dest_keyring);
The uid is -1 or the user's own UID for the user's own cache or the uid of some
other user's cache (requires CAP_SETUID). This permits rpc.gssd or whatever to
mess with the cache.
The cache returned is a keyring named "_krb.<uid>" that the possessor can read,
search, clear, invalidate, unlink from and add links to. Active LSMs get a
chance to rule on whether the caller is permitted to make a link.
Each uid's cache keyring is created when it is first accessed and is given a
timeout that is extended each time this function is called so that the keyring
goes away after a while. The timeout is configurable by sysctl but defaults to
three days.
Each user_namespace struct gets a lazily-created keyring that serves as the
register. The cache keyrings are added to it. This means that standard key
search and garbage collection facilities are available.
The user_namespace struct's register goes away when it does and anything left
in it is then automatically gc'd.
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Simo Sorce <simo@redhat.com>
cc: Serge E. Hallyn <serge.hallyn@ubuntu.com>
cc: Eric W. Biederman <ebiederm@xmission.com>
Implement a big key type that can save its contents to tmpfs and thus
swapspace when memory is tight. This is useful for Kerberos ticket caches.
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Simo Sorce <simo@redhat.com>
Expand the capacity of a keyring to be able to hold a lot more keys by using
the previously added associative array implementation. Currently the maximum
capacity is:
(PAGE_SIZE - sizeof(header)) / sizeof(struct key *)
which, on a 64-bit system, is a little more than 500. However, since this is being
used for the NFS uid mapper, we need more than that. The new implementation
gives us effectively unlimited capacity.
With some alterations, the keyutils testsuite runs successfully to completion
after this patch is applied. The alterations are because (a) keyrings that
are simply added to no longer appear ordered and (b) some of the errors have
changed a bit.
Signed-off-by: David Howells <dhowells@redhat.com>
Add a generic associative array implementation that can be used as the
container for keyrings, thereby massively increasing the capacity available
whilst also speeding up searching in keyrings that contain a lot of keys.
This may also be useful in FS-Cache for tracking cookies.
Documentation is added into Documentation/associative_array.txt
Some of the properties of the implementation are:
(1) Objects are opaque pointers. The implementation does not care where they
point (if anywhere) or what they point to (if anything).
[!] NOTE: Pointers to objects _must_ be zero in the two least significant
bits.
(2) Objects do not need to contain linkage blocks for use by the array. This
permits an object to be located in multiple arrays simultaneously.
Rather, the array is made up of metadata blocks that point to objects.
(3) Objects are labelled as being one of two types (the type is a bool value).
This information is stored in the array, but has no consequence to the
array itself or its algorithms.
(4) Objects require index keys to locate them within the array.
(5) Index keys must be unique. Inserting an object with the same key as one
already in the array will replace the old object.
(6) Index keys can be of any length and can be of different lengths.
(7) Index keys should encode the length early on, before any variation due to
length is seen.
(8) Index keys can include a hash to scatter objects throughout the array.
(9) The array can be iterated over. The objects will not necessarily come out
in key order.
(10) The array can be iterated whilst it is being modified, provided the RCU
readlock is being held by the iterator. Note, however, under these
circumstances, some objects may be seen more than once. If this is a
problem, the iterator should lock against modification. Objects will not
be missed, however, unless deleted.
(11) Objects in the array can be looked up by means of their index key.
(12) Objects can be looked up whilst the array is being modified, provided the
RCU readlock is being held by the thread doing the look up.
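For example, a lookup per properties (11) and (12) needs only the RCU
read lock (my_array, my_ops and index_key are caller-provided; the
names here are illustrative):

  void *obj;

  rcu_read_lock();
  obj = assoc_array_find(&my_array, &my_ops, &index_key); /* or NULL */
  rcu_read_unlock();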
The implementation uses a tree of 16-pointer nodes internally that are indexed
on each level by nibbles from the index key. To improve memory efficiency,
shortcuts can be emplaced to skip over what would otherwise be a series of
single-occupancy nodes. Further, nodes pack leaf object pointers into spare
space in the node rather than making an extra branch until such time as an
object needs to be added to a full node.
Signed-off-by: David Howells <dhowells@redhat.com>
Define a __key_get() wrapper to use rather than atomic_inc() on the key usage
count as this makes it easier to hook in refcount error debugging.
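The wrapper itself is a one-liner around the usage count (sketch):

  static inline struct key *__key_get(struct key *key)
  {
          atomic_inc(&key->usage);
          return key;
  }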
Signed-off-by: David Howells <dhowells@redhat.com>