Merge git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile

Pull arch/tile updates from Chris Metcalf:
 "This is an even quieter cycle than usual"

* git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile:
  Fix typo
  Fix typo
  Fix typo
  tile: sort the "select" lines in the TILE/TILEGX configs
  tile: clarify barrier semantics of atomic_add_return
  tile/defconfigs: Remove CONFIG_IPV6_PRIVACY
This commit is contained in:
Linus Torvalds
2016-05-23 19:05:11 -07:00
Current commit d6542d76ec
7 changed files with 50 additions and 50 deletions

@@ -37,12 +37,25 @@ static inline void atomic_add(int i, atomic_t *v)
 	__insn_fetchadd4((void *)&v->counter, i);
 }
 
+/*
+ * Note a subtlety of the locking here. We are required to provide a
+ * full memory barrier before and after the operation. However, we
+ * only provide an explicit mb before the operation. After the
+ * operation, we use barrier() to get a full mb for free, because:
+ *
+ * (1) The barrier directive to the compiler prohibits any instructions
+ * being statically hoisted before the barrier;
+ * (2) the microarchitecture will not issue any further instructions
+ * until the fetchadd result is available for the "+ i" add instruction;
+ * (3) the smp_mb() before the fetchadd ensures that no other memory
+ * operations are in flight at this point.
+ */
 static inline int atomic_add_return(int i, atomic_t *v)
 {
 	int val;
 	smp_mb(); /* barrier for proper semantics */
 	val = __insn_fetchadd4((void *)&v->counter, i) + i;
-	barrier(); /* the "+ i" above will wait on memory */
+	barrier(); /* equivalent to smp_mb(); see block comment above */
 	return val;
 }
 
@@ -95,7 +108,7 @@ static inline long atomic64_add_return(long i, atomic64_t *v)
 	int val;
 	smp_mb(); /* barrier for proper semantics */
 	val = __insn_fetchadd((void *)&v->counter, i) + i;
-	barrier(); /* the "+ i" above will wait on memory */
+	barrier(); /* equivalent to smp_mb; see atomic_add_return() */
 	return val;
 }
 