From: Linus Torvalds
Date: Mon, 31 Mar 2014 17:59:39 +0000 (-0700)
Subject: Merge branch 'core-locking-for-linus' of git://git.kernel.org/pub/scm/linux/kernel...
X-Git-Tag: v3.15-rc1~184
X-Git-Url: http://git.cascardo.info/?p=cascardo%2Flinux.git;a=commitdiff_plain;h=462bf234a82ae1ae9d7628f59bc81022591e1348

Merge branch 'core-locking-for-linus' of git://git./linux/kernel/git/tip/tip

Pull core locking updates from Ingo Molnar:

 "The biggest change is the MCS spinlock generalization changes from
  Tim Chen, Peter Zijlstra, Jason Low et al.  There's also lockdep
  fixes/enhancements from Oleg Nesterov, in particular a false negative
  fix related to lockdep_set_novalidate_class() usage"

* 'core-locking-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (22 commits)
  locking/mutex: Fix debug checks
  locking/mutexes: Add extra reschedule point
  locking/mutexes: Introduce cancelable MCS lock for adaptive spinning
  locking/mutexes: Unlock the mutex without the wait_lock
  locking/mutexes: Modify the way optimistic spinners are queued
  locking/mutexes: Return false if task need_resched() in mutex_can_spin_on_owner()
  locking: Move mcs_spinlock.h into kernel/locking/
  m68k: Skip futex_atomic_cmpxchg_inatomic() test
  futex: Allow architectures to skip futex_atomic_cmpxchg_inatomic() test
  Revert "sched/wait: Suppress Sparse 'variable shadowing' warning"
  lockdep: Change lockdep_set_novalidate_class() to use _and_name
  lockdep: Change mark_held_locks() to check hlock->check instead of lockdep_no_validate
  lockdep: Don't create the wrong dependency on hlock->check == 0
  lockdep: Make held_lock->check and "int check" argument bool
  locking/mcs: Allow architecture specific asm files to be used for contended case
  locking/mcs: Order the header files in Kbuild of each architecture in alphabetical order
  sched/wait: Suppress Sparse 'variable shadowing' warning
  hung_task/Documentation: Fix hung_task_warnings description
  locking/mcs: Allow architectures to hook in to contended paths
  locking/mcs: Micro-optimize the MCS code, add extra comments
  ...

---

462bf234a82ae1ae9d7628f59bc81022591e1348
diff --cc arch/avr32/include/asm/Kbuild
index c7c64a63c29f,8b398ff96974..00a0f3ccd6eb
--- a/arch/avr32/include/asm/Kbuild
+++ b/arch/avr32/include/asm/Kbuild
@@@ -1,22 -1,22 +1,23 @@@
- generic-y += clkdev.h
- generic-y += cputime.h
- generic-y += delay.h
- generic-y += device.h
- generic-y += div64.h
- generic-y += emergency-restart.h
- generic-y += exec.h
- generic-y += futex.h
- generic-y += preempt.h
- generic-y += irq_regs.h
- generic-y += param.h
- generic-y += local.h
- generic-y += local64.h
- generic-y += percpu.h
- generic-y += scatterlist.h
- generic-y += sections.h
- generic-y += topology.h
- generic-y += trace_clock.h
+ generic-y += clkdev.h
+ generic-y += cputime.h
+ generic-y += delay.h
+ generic-y += device.h
+ generic-y += div64.h
+ generic-y += emergency-restart.h
+ generic-y += exec.h
+ generic-y += futex.h
+ generic-y += hash.h
+ generic-y += irq_regs.h
+ generic-y += local.h
+ generic-y += local64.h
+ generic-y += mcs_spinlock.h
+ generic-y += param.h
+ generic-y += percpu.h
+ generic-y += preempt.h
+ generic-y += scatterlist.h
+ generic-y += sections.h
+ generic-y += topology.h
+ generic-y += trace_clock.h
 +generic-y += vga.h
- generic-y += xor.h
- generic-y += hash.h
+ generic-y += xor.h
diff --cc kernel/futex.c
index 08ec814ad9d2,5d17e3a83f8c..67dacaf93e56
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@@ -2893,21 -2882,10 +2914,11 @@@ static int __init futex_init(void
  					&futex_shift, NULL,
  					futex_hashsize, futex_hashsize);
  	futex_hashsize = 1UL << futex_shift;
- 	/*
- 	 * This will fail and we want it. Some arch implementations do
- 	 * runtime detection of the futex_atomic_cmpxchg_inatomic()
- 	 * functionality. We want to know that before we call in any
- 	 * of the complex code paths. Also we want to prevent
- 	 * registration of robust lists in that case. NULL is
- 	 * guaranteed to fault and we get -EFAULT on functional
- 	 * implementation, the non-functional ones will return
- 	 * -ENOSYS.
- 	 */
- 	if (cmpxchg_futex_value_locked(&curval, NULL, 0, 0) == -EFAULT)
- 		futex_cmpxchg_enabled = 1;
+ 
+ 	futex_detect_cmpxchg();
  
  	for (i = 0; i < futex_hashsize; i++) {
+ 		atomic_set(&futex_queues[i].waiters, 0);
  		plist_head_init(&futex_queues[i].chain);
  		spin_lock_init(&futex_queues[i].lock);
  	}