    mutex: implement adaptive spinning · 0d66bf6d
    Peter Zijlstra authored
    Change mutex contention behaviour such that it will sometimes busy wait on
    acquisition - moving its behaviour closer to that of spinlocks.
    
    This concept got ported to mainline from the -rt tree, where it was originally
    implemented for rtmutexes by Steven Rostedt, based on work by Gregory Haskins.
    
    Testing with Ingo's test-mutex application (http://lkml.org/lkml/2006/1/8/50)
    gave a 345% boost for VFS scalability on my testbox (first run below with
    adaptive spinning, second without):
    
     # ./test-mutex-shm V 16 10 | grep "^avg ops"
     avg ops/sec:               296604
    
     # ./test-mutex-shm V 16 10 | grep "^avg ops"
     avg ops/sec:               85870
    
    The key criterion for the busy wait is that the lock owner has to be running on
    a (different) cpu. The idea is that as long as the owner is running, there is a
    fair chance it'll release the lock soon, and thus we'll be better off spinning
    instead of blocking/scheduling.
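    A minimal userspace sketch of this heuristic (hypothetical names and a
    simulated "owner is running" flag — not the kernel implementation): the
    contender busy-waits only while the owner is on a CPU, and stops spinning
    (where the real slowpath would block) once the owner goes off-cpu.

    ```c
    #include <assert.h>
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    struct demo_mutex {
    	atomic_int locked;          /* 0 = free, 1 = held */
    	atomic_bool owner_running;  /* simulated "owner is on a cpu" */
    };

    static struct demo_mutex m = { 0, false };

    /* Spin while the owner keeps running; give up once it goes off-cpu. */
    static bool spin_acquire(struct demo_mutex *lock)
    {
    	for (;;) {
    		int expected = 0;
    		if (atomic_compare_exchange_strong(&lock->locked, &expected, 1))
    			return true;                     /* got it by spinning */
    		if (!atomic_load(&lock->owner_running))
    			break;                           /* owner off-cpu: stop spinning */
    	}
    	/* one last try before we would fall back to blocking */
    	int expected = 0;
    	return atomic_compare_exchange_strong(&lock->locked, &expected, 1);
    }

    static void *owner_fn(void *arg)
    {
    	(void)arg;
    	usleep(10000);                           /* hold the lock briefly */
    	atomic_store(&m.locked, 0);              /* release while "running" */
    	atomic_store(&m.owner_running, false);
    	return NULL;
    }

    int main(void)
    {
    	pthread_t owner;

    	/* Owner starts out holding the lock, marked as running. */
    	atomic_store(&m.locked, 1);
    	atomic_store(&m.owner_running, true);
    	pthread_create(&owner, NULL, owner_fn, NULL);

    	bool got = spin_acquire(&m);             /* contender busy-waits */
    	pthread_join(owner, NULL);

    	assert(got);                             /* owner released while running */
    	printf("acquired by spinning: %s\n", got ? "yes" : "no");
    	return 0;
    }
    ```

    Since the owner releases the lock before being marked off-cpu, the
    contender here always wins by spinning rather than blocking.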
    
    Since regular mutexes (as opposed to rtmutexes) do not atomically track the
    owner, we add the owner in a non-atomic fashion and deal with the races in
    the slowpath.
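    Because the owner field is written without atomics, the slowpath has to
    tolerate a stale or NULL value. A tiny sketch of that race-tolerant
    decision (hypothetical names, single-threaded — not the kernel code): a
    NULL owner simply means "don't spin".

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct thread_info;                 /* opaque stand-in */

    struct mutex_sketch {
    	int count;
    	struct thread_info *owner;      /* set non-atomically, may be stale */
    };

    /* Non-atomic owner tracking, as in the patch: */
    static void set_owner(struct mutex_sketch *m, struct thread_info *ti)
    {
    	m->owner = ti;
    }

    static void clear_owner(struct mutex_sketch *m)
    {
    	m->owner = NULL;
    }

    /* Slowpath decision: no recorded owner means block, don't spin. */
    static bool should_spin(const struct mutex_sketch *m)
    {
    	return m->owner != NULL;
    }

    int main(void)
    {
    	struct mutex_sketch m = { 1, NULL };
    	struct thread_info *me = (struct thread_info *)&m;  /* dummy value */

    	assert(!should_spin(&m));   /* no owner recorded: block instead */
    	set_owner(&m, me);
    	assert(should_spin(&m));    /* owner visible: worth spinning */
    	clear_owner(&m);
    	assert(!should_spin(&m));
    	printf("owner-tracking sketch ok\n");
    	return 0;
    }
    ```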
    
    Furthermore, to ease testing of the performance impact of this new code,
    there is a means to disable this behaviour at runtime (without having to
    reboot the system), when scheduler debugging is enabled
    (CONFIG_SCHED_DEBUG=y), by issuing the following command:
    
     # echo NO_OWNER_SPIN > /debug/sched_features
    
    This command re-enables spinning (which is also the default):
    
     # echo OWNER_SPIN > /debug/sched_features
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
mutex-debug.h
/*
 * Mutexes: blocking mutual exclusion locks
 *
 * started by Ingo Molnar:
 *
 *  Copyright (C) 2004, 2005, 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
 *
 * This file contains mutex debugging related internal declarations,
 * prototypes and inline functions, for the CONFIG_DEBUG_MUTEXES case.
 * More details are in kernel/mutex-debug.c.
 */

/*
 * This must be called with lock->wait_lock held.
 */
extern void debug_mutex_lock_common(struct mutex *lock,
				    struct mutex_waiter *waiter);
extern void debug_mutex_wake_waiter(struct mutex *lock,
				    struct mutex_waiter *waiter);
extern void debug_mutex_free_waiter(struct mutex_waiter *waiter);
extern void debug_mutex_add_waiter(struct mutex *lock,
				   struct mutex_waiter *waiter,
				   struct thread_info *ti);
extern void mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
				struct thread_info *ti);
extern void debug_mutex_unlock(struct mutex *lock);
extern void debug_mutex_init(struct mutex *lock, const char *name,
			     struct lock_class_key *key);

static inline void mutex_set_owner(struct mutex *lock)
{
	lock->owner = current_thread_info();
}

static inline void mutex_clear_owner(struct mutex *lock)
{
	lock->owner = NULL;
}

#define spin_lock_mutex(lock, flags)			\
	do {						\
		struct mutex *l = container_of(lock, struct mutex, wait_lock); \
							\
		DEBUG_LOCKS_WARN_ON(in_interrupt());	\
		local_irq_save(flags);			\
		__raw_spin_lock(&(lock)->raw_lock);	\
		DEBUG_LOCKS_WARN_ON(l->magic != l);	\
	} while (0)

#define spin_unlock_mutex(lock, flags)			\
	do {						\
		__raw_spin_unlock(&(lock)->raw_lock);	\
		local_irq_restore(flags);		\
		preempt_check_resched();		\
	} while (0)