Backmerge tag 'v4.14-rc7' into drm-next

Linux 4.14-rc7

Requested by Ben Skeggs for nouveau to avoid major conflicts,
and things were already getting a bit conflict-prone, especially
around the amdgpu reverts.
Dave Airlie
2017-11-02 12:40:41 +10:00
1048 changed files with 11268 additions and 6829 deletions

View File

@@ -1092,8 +1092,8 @@ config PROVE_LOCKING
         select DEBUG_MUTEXES
         select DEBUG_RT_MUTEXES if RT_MUTEXES
         select DEBUG_LOCK_ALLOC
-        select LOCKDEP_CROSSRELEASE
-        select LOCKDEP_COMPLETIONS
+        select LOCKDEP_CROSSRELEASE if BROKEN
+        select LOCKDEP_COMPLETIONS if BROKEN
         select TRACE_IRQFLAGS
         default n
         help
@@ -1590,6 +1590,54 @@ config LATENCYTOP
 source kernel/trace/Kconfig
 
+config PROVIDE_OHCI1394_DMA_INIT
+        bool "Remote debugging over FireWire early on boot"
+        depends on PCI && X86
+        help
+          If you want to debug problems which hang or crash the kernel early
+          on boot and the crashing machine has a FireWire port, you can use
+          this feature to remotely access the memory of the crashed machine
+          over FireWire. This employs remote DMA as part of the OHCI1394
+          specification which is now the standard for FireWire controllers.
+
+          With remote DMA, you can monitor the printk buffer remotely using
+          firescope and access all memory below 4GB using fireproxy from gdb.
+          Even controlling a kernel debugger is possible using remote DMA.
+
+          Usage:
+
+          If ohci1394_dma=early is used as boot parameter, it will initialize
+          all OHCI1394 controllers which are found in the PCI config space.
+
+          As all changes to the FireWire bus such as enabling and disabling
+          devices cause a bus reset and thereby disable remote DMA for all
+          devices, be sure to have the cable plugged and FireWire enabled on
+          the debugging host before booting the debug target for debugging.
+
+          This code (~1k) is freed after boot. By then, the firewire stack
+          in charge of the OHCI-1394 controllers should be used instead.
+
+          See Documentation/debugging-via-ohci1394.txt for more information.
+
+config DMA_API_DEBUG
+        bool "Enable debugging of DMA-API usage"
+        depends on HAVE_DMA_API_DEBUG
+        help
+          Enable this option to debug the use of the DMA API by device drivers.
+          With this option you will be able to detect common bugs in device
+          drivers like double-freeing of DMA mappings or freeing mappings that
+          were never allocated.
+
+          This also attempts to catch cases where a page owned by DMA is
+          accessed by the cpu in a way that could cause data corruption. For
+          example, this enables cow_user_page() to check that the source page is
+          not undergoing DMA.
+
+          This option causes a performance degradation. Use only if you want to
+          debug device drivers and dma interactions.
+
+          If unsure, say N.
+
 menu "Runtime Testing"
 
 config LKDTM
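The DMA_API_DEBUG help text added above describes the class of driver bugs the option catches at runtime. As a rough illustration (a hypothetical driver fragment, not part of this diff), the debug layer would flag the double unmap below, because the second dma_unmap_single() releases a mapping that no longer exists:

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>

/* Hypothetical helper: maps a buffer for a device-to-memory transfer and
 * then unmaps it twice by mistake. With CONFIG_DMA_API_DEBUG=y the second
 * unmap triggers a "freeing DMA memory it has not allocated" style warning
 * instead of silently corrupting the tracking state. */
static int demo_dma_double_unmap(struct device *dev, void *buf, size_t len)
{
        dma_addr_t handle;

        handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
        if (dma_mapping_error(dev, handle))
                return -ENOMEM;

        /* ... the hardware would perform its DMA here ... */

        dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);
        dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE); /* bug: double free */
        return 0;
}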
@@ -1749,56 +1797,6 @@ config TEST_PARMAN
           If unsure, say N.
 
 endmenu # runtime tests
 
-config PROVIDE_OHCI1394_DMA_INIT
-        bool "Remote debugging over FireWire early on boot"
-        depends on PCI && X86
-        help
-          If you want to debug problems which hang or crash the kernel early
-          on boot and the crashing machine has a FireWire port, you can use
-          this feature to remotely access the memory of the crashed machine
-          over FireWire. This employs remote DMA as part of the OHCI1394
-          specification which is now the standard for FireWire controllers.
-
-          With remote DMA, you can monitor the printk buffer remotely using
-          firescope and access all memory below 4GB using fireproxy from gdb.
-          Even controlling a kernel debugger is possible using remote DMA.
-
-          Usage:
-
-          If ohci1394_dma=early is used as boot parameter, it will initialize
-          all OHCI1394 controllers which are found in the PCI config space.
-
-          As all changes to the FireWire bus such as enabling and disabling
-          devices cause a bus reset and thereby disable remote DMA for all
-          devices, be sure to have the cable plugged and FireWire enabled on
-          the debugging host before booting the debug target for debugging.
-
-          This code (~1k) is freed after boot. By then, the firewire stack
-          in charge of the OHCI-1394 controllers should be used instead.
-
-          See Documentation/debugging-via-ohci1394.txt for more information.
-
-config DMA_API_DEBUG
-        bool "Enable debugging of DMA-API usage"
-        depends on HAVE_DMA_API_DEBUG
-        help
-          Enable this option to debug the use of the DMA API by device drivers.
-          With this option you will be able to detect common bugs in device
-          drivers like double-freeing of DMA mappings or freeing mappings that
-          were never allocated.
-
-          This also attempts to catch cases where a page owned by DMA is
-          accessed by the cpu in a way that could cause data corruption. For
-          example, this enables cow_user_page() to check that the source page is
-          not undergoing DMA.
-
-          This option causes a performance degradation. Use only if you want to
-          debug device drivers and dma interactions.
-
-          If unsure, say N.
-
 config TEST_LKM
         tristate "Test module loading with 'hello world' module"
         default n
@@ -1873,18 +1871,6 @@ config TEST_UDELAY
           If unsure, say N.
 
-config MEMTEST
-        bool "Memtest"
-        depends on HAVE_MEMBLOCK
-        ---help---
-          This option adds a kernel parameter 'memtest', which allows memtest
-          to be set.
-                memtest=0, mean disabled; -- default
-                memtest=1, mean do 1 test pattern;
-                ...
-                memtest=17, mean do 17 test patterns.
-          If you are unsure how to answer this question, answer N.
-
 config TEST_STATIC_KEYS
         tristate "Test static keys"
         default n
@@ -1894,16 +1880,6 @@ config TEST_STATIC_KEYS
           If unsure, say N.
 
-config BUG_ON_DATA_CORRUPTION
-        bool "Trigger a BUG when data corruption is detected"
-        select DEBUG_LIST
-        help
-          Select this option if the kernel should BUG when it encounters
-          data corruption in kernel memory structures when they get checked
-          for validity.
-
-          If unsure, say N.
-
 config TEST_KMOD
         tristate "kmod stress tester"
         default n
@@ -1941,6 +1917,29 @@ config TEST_DEBUG_VIRTUAL
           If unsure, say N.
 
 endmenu # runtime tests
 
+config MEMTEST
+        bool "Memtest"
+        depends on HAVE_MEMBLOCK
+        ---help---
+          This option adds a kernel parameter 'memtest', which allows memtest
+          to be set.
+                memtest=0, mean disabled; -- default
+                memtest=1, mean do 1 test pattern;
+                ...
+                memtest=17, mean do 17 test patterns.
+          If you are unsure how to answer this question, answer N.
+
+config BUG_ON_DATA_CORRUPTION
+        bool "Trigger a BUG when data corruption is detected"
+        select DEBUG_LIST
+        help
+          Select this option if the kernel should BUG when it encounters
+          data corruption in kernel memory structures when they get checked
+          for validity.
+
+          If unsure, say N.
+
 source "samples/Kconfig"

View File

@@ -598,21 +598,31 @@ static bool assoc_array_insert_into_terminal_node(struct assoc_array_edit *edit,
                 if ((edit->segment_cache[ASSOC_ARRAY_FAN_OUT] ^ base_seg) == 0)
                         goto all_leaves_cluster_together;
 
-                /* Otherwise we can just insert a new node ahead of the old
-                 * one.
+                /* Otherwise all the old leaves cluster in the same slot, but
+                 * the new leaf wants to go into a different slot - so we
+                 * create a new node (n0) to hold the new leaf and a pointer to
+                 * a new node (n1) holding all the old leaves.
+                 *
+                 * This can be done by falling through to the node splitting
+                 * path.
                  */
-                goto present_leaves_cluster_but_not_new_leaf;
+                pr_devel("present leaves cluster but not new leaf\n");
         }
 
 split_node:
         pr_devel("split node\n");
 
-        /* We need to split the current node; we know that the node doesn't
-         * simply contain a full set of leaves that cluster together (it
-         * contains meta pointers and/or non-clustering leaves).
+        /* We need to split the current node. The node must contain anything
+         * from a single leaf (in the one leaf case, this leaf will cluster
+         * with the new leaf) and the rest meta-pointers, to all leaves, some
+         * of which may cluster.
          *
+         * It won't contain the case in which all the current leaves plus the
+         * new leaves want to cluster in the same slot.
+         *
          * We need to expel at least two leaves out of a set consisting of the
-         * leaves in the node and the new leaf.
+         * leaves in the node and the new leaf. The current meta pointers can
+         * just be copied as they shouldn't cluster with any of the leaves.
          *
          * We need a new node (n0) to replace the current one and a new node to
         * take the expelled nodes (n1).
@@ -717,33 +727,6 @@ found_slot_for_multiple_occupancy:
pr_devel("<--%s() = ok [split node]\n", __func__);
return true;
present_leaves_cluster_but_not_new_leaf:
/* All the old leaves cluster in the same slot, but the new leaf wants
* to go into a different slot, so we create a new node to hold the new
* leaf and a pointer to a new node holding all the old leaves.
*/
pr_devel("present leaves cluster but not new leaf\n");
new_n0->back_pointer = node->back_pointer;
new_n0->parent_slot = node->parent_slot;
new_n0->nr_leaves_on_branch = node->nr_leaves_on_branch;
new_n1->back_pointer = assoc_array_node_to_ptr(new_n0);
new_n1->parent_slot = edit->segment_cache[0];
new_n1->nr_leaves_on_branch = node->nr_leaves_on_branch;
edit->adjust_count_on = new_n0;
for (i = 0; i < ASSOC_ARRAY_FAN_OUT; i++)
new_n1->slots[i] = node->slots[i];
new_n0->slots[edit->segment_cache[0]] = assoc_array_node_to_ptr(new_n0);
edit->leaf_p = &new_n0->slots[edit->segment_cache[ASSOC_ARRAY_FAN_OUT]];
edit->set[0].ptr = &assoc_array_ptr_to_node(node->back_pointer)->slots[node->parent_slot];
edit->set[0].to = assoc_array_node_to_ptr(new_n0);
edit->excised_meta[0] = assoc_array_node_to_ptr(node);
pr_devel("<--%s() = ok [insert node before]\n", __func__);
return true;
all_leaves_cluster_together:
/* All the leaves, new and old, want to cluster together in this node
* in the same slot, so we have to replace this node with a shortcut to

View File

@@ -87,6 +87,12 @@ static int digsig_verify_rsa(struct key *key,
         down_read(&key->sem);
         ukp = user_key_payload_locked(key);
+        if (!ukp) {
+                /* key was revoked before we acquired its semaphore */
+                err = -EKEYREVOKED;
+                goto err1;
+        }
+
         if (ukp->datalen < sizeof(*pkh))
                 goto err1;
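The added check above closes a race with key revocation. As a hedged sketch of the general pattern (a hypothetical helper, not taken from this patch): a key can be revoked between look-up and use, so the payload returned by user_key_payload_locked() must be re-validated once key->sem is held.

#include <linux/errno.h>
#include <linux/key.h>
#include <keys/user-type.h>

/* Hypothetical illustration of the pattern used in the hunk above. */
static int demo_use_key_payload(struct key *key)
{
        const struct user_key_payload *ukp;
        int err = 0;

        down_read(&key->sem);
        ukp = user_key_payload_locked(key);
        if (!ukp) {
                /* key was revoked before we took the semaphore */
                err = -EKEYREVOKED;
                goto out;
        }

        /* ... ukp->data and ukp->datalen are safe to use here ... */
out:
        up_read(&key->sem);
        return err;
}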

View File

@@ -146,8 +146,8 @@ EXPORT_SYMBOL(idr_get_next_ext);
  * idr_alloc() and idr_remove() (as long as the ID being removed is not
  * the one being replaced!).
  *
- * Returns: 0 on success. %-ENOENT indicates that @id was not found.
- * %-EINVAL indicates that @id or @ptr were not valid.
+ * Returns: the old value on success. %-ENOENT indicates that @id was not
+ * found. %-EINVAL indicates that @id or @ptr were not valid.
  */
 void *idr_replace(struct idr *idr, void *ptr, int id)
 {
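The kerneldoc fix above matches what idr_replace() really hands back: the displaced pointer on success, or an ERR_PTR()-encoded error. A hypothetical caller (names are illustrative, not from this diff) would therefore free the old entry rather than test for a zero return:

#include <linux/idr.h>
#include <linux/err.h>
#include <linux/slab.h>

static DEFINE_IDR(demo_idr);

/* Swap the object stored under @id and release the previous one. */
static int demo_swap_entry(int id, void *new_entry)
{
        void *old;

        old = idr_replace(&demo_idr, new_entry, id);
        if (IS_ERR(old))
                return PTR_ERR(old);    /* -ENOENT or -EINVAL */

        kfree(old);                     /* caller owns the displaced entry */
        return 0;
}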

View File

@@ -294,6 +294,26 @@ static void cleanup_uevent_env(struct subprocess_info *info)
 }
 #endif
 
+static void zap_modalias_env(struct kobj_uevent_env *env)
+{
+        static const char modalias_prefix[] = "MODALIAS=";
+        int i;
+
+        for (i = 0; i < env->envp_idx;) {
+                if (strncmp(env->envp[i], modalias_prefix,
+                            sizeof(modalias_prefix) - 1)) {
+                        i++;
+                        continue;
+                }
+
+                if (i != env->envp_idx - 1)
+                        memmove(&env->envp[i], &env->envp[i + 1],
+                                sizeof(env->envp[i]) * env->envp_idx - 1);
+
+                env->envp_idx--;
+        }
+}
+
 /**
  * kobject_uevent_env - send an uevent with environmental data
  *
@@ -409,16 +429,29 @@ int kobject_uevent_env(struct kobject *kobj, enum kobject_action action,
                 }
         }
 
-        /*
-         * Mark "add" and "remove" events in the object to ensure proper
-         * events to userspace during automatic cleanup. If the object did
-         * send an "add" event, "remove" will automatically generated by
-         * the core, if not already done by the caller.
-         */
-        if (action == KOBJ_ADD)
+        switch (action) {
+        case KOBJ_ADD:
+                /*
+                 * Mark "add" event so we can make sure we deliver "remove"
+                 * event to userspace during automatic cleanup. If
+                 * the object did send an "add" event, "remove" will
+                 * automatically generated by the core, if not already done
+                 * by the caller.
+                 */
                 kobj->state_add_uevent_sent = 1;
-        else if (action == KOBJ_REMOVE)
+                break;
+
+        case KOBJ_REMOVE:
                 kobj->state_remove_uevent_sent = 1;
+                break;
+
+        case KOBJ_UNBIND:
+                zap_modalias_env(env);
+                break;
+
+        default:
+                break;
+        }
 
         mutex_lock(&uevent_sock_mutex);
 
         /* we will send an event, so request a new sequence number */
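For context on the new switch (a hedged usage sketch, not part of the patch): callers pass kobject_uevent_env() one of the same kobject_action values together with optional "KEY=value" strings, and those strings land in the env block that zap_modalias_env() filters for KOBJ_UNBIND.

#include <linux/device.h>
#include <linux/kobject.h>

/* Hypothetical driver-side notification carrying one extra variable. */
static void demo_notify_change(struct device *dev)
{
        char *envp[] = { "DEMO_EVENT=resync", NULL };

        kobject_uevent_env(&dev->kobj, KOBJ_CHANGE, envp);
}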

View File

@@ -2031,11 +2031,13 @@ void locking_selftest(void)
         print_testname("mixed read-lock/lock-write ABBA");
         pr_cont(" |");
         dotest(rlock_ABBA1, FAILURE, LOCKTYPE_RWLOCK);
+#ifdef CONFIG_PROVE_LOCKING
         /*
          * Lockdep does indeed fail here, but there's nothing we can do about
          * that now. Don't kill lockdep for it.
          */
         unexpected_testcase_failures--;
+#endif
 
         pr_cont(" |");
         dotest(rwsem_ABBA1, FAILURE, LOCKTYPE_RWSEM);

View File

@@ -85,8 +85,8 @@ static FORCE_INLINE int LZ4_decompress_generic(
         const BYTE * const lowLimit = lowPrefix - dictSize;
         const BYTE * const dictEnd = (const BYTE *)dictStart + dictSize;
-        const unsigned int dec32table[] = { 0, 1, 2, 1, 4, 4, 4, 4 };
-        const int dec64table[] = { 0, 0, 0, -1, 0, 1, 2, 3 };
+        static const unsigned int dec32table[] = { 0, 1, 2, 1, 4, 4, 4, 4 };
+        static const int dec64table[] = { 0, 0, 0, -1, 0, 1, 2, 3 };
         const int safeDecode = (endOnInput == endOnInputSize);
         const int checkOffset = ((safeDecode) && (dictSize < (int)(64 * KB)));
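A minimal illustration of why the change above helps (hypothetical function, not from the LZ4 sources): a non-static const array is re-initialised on the stack on every call, whereas a static const table is placed in read-only data once and costs nothing per invocation.

/* Hypothetical lookup helper; the table lives in .rodata, so a hot
 * decompression loop would no longer rebuild it on each entry. */
static int demo_step(unsigned int idx)
{
        static const int step_table[8] = { 0, 0, 0, -1, 0, 1, 2, 3 };

        return step_table[idx & 7];
}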

View File

@@ -48,7 +48,9 @@ int ___ratelimit(struct ratelimit_state *rs, const char *func)
         if (time_is_before_jiffies(rs->begin + rs->interval)) {
                 if (rs->missed) {
                         if (!(rs->flags & RATELIMIT_MSG_ON_RELEASE)) {
-                                pr_warn("%s: %d callbacks suppressed\n", func, rs->missed);
+                                printk_deferred(KERN_WARNING
+                                                "%s: %d callbacks suppressed\n",
+                                                func, rs->missed);
                                 rs->missed = 0;
                         }
                 }
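The switch to printk_deferred() above matters because ___ratelimit() can be reached from paths that already hold printk/console locks, where a direct printk could deadlock. A hedged caller-side sketch (hypothetical names) of the rate-limit API this function implements:

#include <linux/ratelimit.h>
#include <linux/printk.h>

/* Allow at most 10 messages per 5 seconds from this call site. */
static DEFINE_RATELIMIT_STATE(demo_rs, 5 * HZ, 10);

static void demo_report(int err)
{
        if (__ratelimit(&demo_rs))
                printk_deferred(KERN_WARNING "demo: error %d\n", err);
}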

View File

@@ -11,7 +11,7 @@
  * ==========================================================================
  *
  * A finite state machine consists of n states (struct ts_fsm_token)
- * representing the pattern as a finite automation. The data is read
+ * representing the pattern as a finite automaton. The data is read
  * sequentially on an octet basis. Every state token specifies the number
  * of recurrences and the type of value accepted which can be either a
  * specific character or ctype based set of characters. The available

View File

@@ -27,7 +27,7 @@
  *
  *   [1] Cormen, Leiserson, Rivest, Stein
  *       Introdcution to Algorithms, 2nd Edition, MIT Press
- *   [2] See finite automation theory
+ *   [2] See finite automaton theory
  */
 
 #include <linux/module.h>
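Both ts_kmp and ts_fsm (the file in the previous hunk) are back ends of the generic textsearch interface, so the automaton terminology in those comments reaches callers only through that API. A hedged usage sketch (hypothetical function and pattern; error handling kept minimal):

#include <linux/textsearch.h>
#include <linux/err.h>
#include <linux/gfp.h>
#include <linux/kernel.h>

/* Look for "needle" in a flat buffer with the KMP back end; the FSM back
 * end would be selected the same way by passing "fsm" and a token table
 * as the pattern instead. */
static int demo_find(const void *data, unsigned int len)
{
        struct ts_config *conf;
        struct ts_state state;
        unsigned int pos;

        conf = textsearch_prepare("kmp", "needle", 6, GFP_KERNEL, TS_AUTOLOAD);
        if (IS_ERR(conf))
                return PTR_ERR(conf);

        pos = textsearch_find_continuous(conf, &state, data, len);
        textsearch_destroy(conf);

        return pos == UINT_MAX ? -ENOENT : (int)pos;
}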