From patchwork Sun Apr 28 17:03:59 2013
X-Patchwork-Submitter: Maarten Lankhorst <maarten.lankhorst@canonical.com>
X-Patchwork-Id: 16465
From: Maarten Lankhorst <maarten.lankhorst@canonical.com>
To: linux-kernel@vger.kernel.org
Cc: linux-arch@vger.kernel.org, peterz@infradead.org, x86@kernel.org,
 dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org,
 robclark@gmail.com, rostedt@goodmis.org, daniel@ffwll.ch,
 tglx@linutronix.de, mingo@elte.hu, linux-media@vger.kernel.org
Date: Sun, 28 Apr 2013 19:03:59 +0200
Message-ID: <20130428170359.17075.14895.stgit@patser>
In-Reply-To: <20130428165914.17075.57751.stgit@patser>
References: <20130428165914.17075.57751.stgit@patser>
User-Agent: StGit/0.15
Subject: [Linaro-mm-sig] [PATCH v3 1/3] arch: make __mutex_fastpath_lock_retval return whether fastpath succeeded or not.

This will allow me to call functions that have multiple arguments if the
fastpath fails. This is required to support ticket mutexes, because they
need to be able to pass an extra argument to the fail function.

Originally I duplicated the functions by adding
__mutex_fastpath_lock_retval_arg. This ended up being just a duplication of
the existing function, so a way to test whether the fastpath was called
ended up being better.

This also cleaned up the reservation mutex patch somewhat, by making it
possible to call atomic_set() instead of atomic_xchg(), and by making it
easier to detect if the wrong unlock function was previously used.
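[Editorial sketch, not part of this series: the snippet below illustrates the
kind of caller the new convention enables, mirroring the
mutex_lock_interruptible() change later in this patch. All ticket_* names are
hypothetical.]

  #include <linux/mutex.h>

  /* Hypothetical types, for illustration only. */
  struct ticket_ctx;
  struct ticket_mutex {
          struct mutex base;
  };

  extern int ticket_mutex_lock_slowpath(struct ticket_mutex *lock,
                                        struct ticket_ctx *ctx);

  static inline int ticket_mutex_lock(struct ticket_mutex *lock,
                                      struct ticket_ctx *ctx)
  {
          /* New contract: 0 if the fastpath took the lock, -1 otherwise. */
          if (likely(!__mutex_fastpath_lock_retval(&lock->base.count))) {
                  mutex_set_owner(&lock->base);
                  return 0;
          }
          /*
           * The caller picks the slowpath itself, so it can pass the extra
           * context argument that the old int (*fail_fn)(atomic_t *)
           * signature could not express.
           */
          return ticket_mutex_lock_slowpath(lock, ctx);
  }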
Changes since v1, pointed out by Francesco Lavra:
- fix a small comment issue in mutex_32.h
- fix the __mutex_fastpath_lock_retval macro for mutex-null.h

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 arch/ia64/include/asm/mutex.h    |   10 ++++------
 arch/powerpc/include/asm/mutex.h |   10 ++++------
 arch/sh/include/asm/mutex-llsc.h |    4 ++--
 arch/x86/include/asm/mutex_32.h  |   11 ++++-------
 arch/x86/include/asm/mutex_64.h  |   11 ++++-------
 include/asm-generic/mutex-dec.h  |   10 ++++------
 include/asm-generic/mutex-null.h |    2 +-
 include/asm-generic/mutex-xchg.h |   10 ++++------
 kernel/mutex.c                   |   32 ++++++++++++++------------------
 9 files changed, 41 insertions(+), 59 deletions(-)

diff --git a/arch/ia64/include/asm/mutex.h b/arch/ia64/include/asm/mutex.h
index bed73a6..f41e66d 100644
--- a/arch/ia64/include/asm/mutex.h
+++ b/arch/ia64/include/asm/mutex.h
@@ -29,17 +29,15 @@ __mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
  * __mutex_fastpath_lock_retval - try to take the lock by moving the count
  * from 1 to a 0 value
  * @count: pointer of type atomic_t
- * @fail_fn: function to call if the original value was not 1
  *
- * Change the count from 1 to a value lower than 1, and call <fail_fn> if
- * it wasn't 1 originally. This function returns 0 if the fastpath succeeds,
- * or anything the slow path function returns.
+ * Change the count from 1 to a value lower than 1. This function returns 0
+ * if the fastpath succeeds, or -1 otherwise.
  */
 static inline int
-__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
+__mutex_fastpath_lock_retval(atomic_t *count)
 {
 	if (unlikely(ia64_fetchadd4_acq(count, -1) != 1))
-		return fail_fn(count);
+		return -1;
 	return 0;
 }
 
diff --git a/arch/powerpc/include/asm/mutex.h b/arch/powerpc/include/asm/mutex.h
index 5399f7e..127ab23 100644
--- a/arch/powerpc/include/asm/mutex.h
+++ b/arch/powerpc/include/asm/mutex.h
@@ -82,17 +82,15 @@ __mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
  * __mutex_fastpath_lock_retval - try to take the lock by moving the count
  * from 1 to a 0 value
  * @count: pointer of type atomic_t
- * @fail_fn: function to call if the original value was not 1
  *
- * Change the count from 1 to a value lower than 1, and call <fail_fn> if
- * it wasn't 1 originally. This function returns 0 if the fastpath succeeds,
- * or anything the slow path function returns.
+ * Change the count from 1 to a value lower than 1. This function returns 0
+ * if the fastpath succeeds, or -1 otherwise.
  */
 static inline int
-__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
+__mutex_fastpath_lock_retval(atomic_t *count)
 {
 	if (unlikely(__mutex_dec_return_lock(count) < 0))
-		return fail_fn(count);
+		return -1;
 	return 0;
 }
 
diff --git a/arch/sh/include/asm/mutex-llsc.h b/arch/sh/include/asm/mutex-llsc.h
index 090358a..dad29b6 100644
--- a/arch/sh/include/asm/mutex-llsc.h
+++ b/arch/sh/include/asm/mutex-llsc.h
@@ -37,7 +37,7 @@ __mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
 }
 
 static inline int
-__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
+__mutex_fastpath_lock_retval(atomic_t *count)
 {
 	int __done, __res;
 
@@ -51,7 +51,7 @@ __mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
 	: "t");
 
 	if (unlikely(!__done || __res != 0))
-		__res = fail_fn(count);
+		__res = -1;
 
 	return __res;
 }
diff --git a/arch/x86/include/asm/mutex_32.h b/arch/x86/include/asm/mutex_32.h
index 03f90c8..0208c3c 100644
--- a/arch/x86/include/asm/mutex_32.h
+++ b/arch/x86/include/asm/mutex_32.h
@@ -42,17 +42,14 @@ do {								\
  * __mutex_fastpath_lock_retval - try to take the lock by moving the count
  * from 1 to a 0 value
  * @count: pointer of type atomic_t
- * @fail_fn: function to call if the original value was not 1
  *
- * Change the count from 1 to a value lower than 1, and call <fail_fn> if it
- * wasn't 1 originally. This function returns 0 if the fastpath succeeds,
- * or anything the slow path function returns
+ * Change the count from 1 to a value lower than 1. This function returns 0
+ * if the fastpath succeeds, or -1 otherwise.
  */
-static inline int __mutex_fastpath_lock_retval(atomic_t *count,
-					       int (*fail_fn)(atomic_t *))
+static inline int __mutex_fastpath_lock_retval(atomic_t *count)
 {
 	if (unlikely(atomic_dec_return(count) < 0))
-		return fail_fn(count);
+		return -1;
 	else
 		return 0;
 }
diff --git a/arch/x86/include/asm/mutex_64.h b/arch/x86/include/asm/mutex_64.h
index 68a87b0..2c543ff 100644
--- a/arch/x86/include/asm/mutex_64.h
+++ b/arch/x86/include/asm/mutex_64.h
@@ -37,17 +37,14 @@ do {								\
  * __mutex_fastpath_lock_retval - try to take the lock by moving the count
  * from 1 to a 0 value
  * @count: pointer of type atomic_t
- * @fail_fn: function to call if the original value was not 1
  *
- * Change the count from 1 to a value lower than 1, and call <fail_fn> if
- * it wasn't 1 originally. This function returns 0 if the fastpath succeeds,
- * or anything the slow path function returns
+ * Change the count from 1 to a value lower than 1. This function returns 0
+ * if the fastpath succeeds, or -1 otherwise.
  */
-static inline int __mutex_fastpath_lock_retval(atomic_t *count,
-					       int (*fail_fn)(atomic_t *))
+static inline int __mutex_fastpath_lock_retval(atomic_t *count)
 {
 	if (unlikely(atomic_dec_return(count) < 0))
-		return fail_fn(count);
+		return -1;
 	else
 		return 0;
 }
diff --git a/include/asm-generic/mutex-dec.h b/include/asm-generic/mutex-dec.h
index f104af7..d4f9fb4 100644
--- a/include/asm-generic/mutex-dec.h
+++ b/include/asm-generic/mutex-dec.h
@@ -28,17 +28,15 @@ __mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
  * __mutex_fastpath_lock_retval - try to take the lock by moving the count
  * from 1 to a 0 value
  * @count: pointer of type atomic_t
- * @fail_fn: function to call if the original value was not 1
  *
- * Change the count from 1 to a value lower than 1, and call <fail_fn> if
- * it wasn't 1 originally. This function returns 0 if the fastpath succeeds,
- * or anything the slow path function returns.
+ * Change the count from 1 to a value lower than 1. This function returns 0
+ * if the fastpath succeeds, or -1 otherwise.
  */
 static inline int
-__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
+__mutex_fastpath_lock_retval(atomic_t *count)
 {
 	if (unlikely(atomic_dec_return(count) < 0))
-		return fail_fn(count);
+		return -1;
 	return 0;
 }
 
diff --git a/include/asm-generic/mutex-null.h b/include/asm-generic/mutex-null.h
index e1bbbc7..61069ed 100644
--- a/include/asm-generic/mutex-null.h
+++ b/include/asm-generic/mutex-null.h
@@ -11,7 +11,7 @@
 #define _ASM_GENERIC_MUTEX_NULL_H
 
 #define __mutex_fastpath_lock(count, fail_fn)		fail_fn(count)
-#define __mutex_fastpath_lock_retval(count, fail_fn)	fail_fn(count)
+#define __mutex_fastpath_lock_retval(count)		(-1)
 #define __mutex_fastpath_unlock(count, fail_fn)		fail_fn(count)
 #define __mutex_fastpath_trylock(count, fail_fn)	fail_fn(count)
 #define __mutex_slowpath_needs_to_unlock()		1
diff --git a/include/asm-generic/mutex-xchg.h b/include/asm-generic/mutex-xchg.h
index c04e0db..f169ec0 100644
--- a/include/asm-generic/mutex-xchg.h
+++ b/include/asm-generic/mutex-xchg.h
@@ -39,18 +39,16 @@ __mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
  * __mutex_fastpath_lock_retval - try to take the lock by moving the count
  * from 1 to a 0 value
  * @count: pointer of type atomic_t
- * @fail_fn: function to call if the original value was not 1
  *
- * Change the count from 1 to a value lower than 1, and call <fail_fn> if it
- * wasn't 1 originally. This function returns 0 if the fastpath succeeds,
- * or anything the slow path function returns
+ * Change the count from 1 to a value lower than 1. This function returns 0
+ * if the fastpath succeeds, or -1 otherwise.
  */
 static inline int
-__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
+__mutex_fastpath_lock_retval(atomic_t *count)
 {
 	if (unlikely(atomic_xchg(count, 0) != 1))
 		if (likely(atomic_xchg(count, -1) != 1))
-			return fail_fn(count);
+			return -1;
 	return 0;
 }
 
diff --git a/kernel/mutex.c b/kernel/mutex.c
index 52f2301..84a5f07 100644
--- a/kernel/mutex.c
+++ b/kernel/mutex.c
@@ -351,10 +351,10 @@ __mutex_unlock_slowpath(atomic_t *lock_count)
  * mutex_lock_interruptible() and mutex_trylock().
  */
 static noinline int __sched
-__mutex_lock_killable_slowpath(atomic_t *lock_count);
+__mutex_lock_killable_slowpath(struct mutex *lock);
 
 static noinline int __sched
-__mutex_lock_interruptible_slowpath(atomic_t *lock_count);
+__mutex_lock_interruptible_slowpath(struct mutex *lock);
 
 /**
  * mutex_lock_interruptible - acquire the mutex, interruptible
@@ -372,12 +372,12 @@ int __sched mutex_lock_interruptible(struct mutex *lock)
 	int ret;
 
 	might_sleep();
-	ret = __mutex_fastpath_lock_retval
-			(&lock->count, __mutex_lock_interruptible_slowpath);
-	if (!ret)
+	ret = __mutex_fastpath_lock_retval(&lock->count);
+	if (likely(!ret)) {
 		mutex_set_owner(lock);
-
-	return ret;
+		return 0;
+	} else
+		return __mutex_lock_interruptible_slowpath(lock);
 }
 
 EXPORT_SYMBOL(mutex_lock_interruptible);
@@ -387,12 +387,12 @@ int __sched mutex_lock_killable(struct mutex *lock)
 	int ret;
 
 	might_sleep();
-	ret = __mutex_fastpath_lock_retval
-			(&lock->count, __mutex_lock_killable_slowpath);
-	if (!ret)
+	ret = __mutex_fastpath_lock_retval(&lock->count);
+	if (likely(!ret)) {
 		mutex_set_owner(lock);
-
-	return ret;
+		return 0;
+	} else
+		return __mutex_lock_killable_slowpath(lock);
 }
 
 EXPORT_SYMBOL(mutex_lock_killable);
@@ -405,18 +405,14 @@ __mutex_lock_slowpath(atomic_t *lock_count)
 }
 
 static noinline int __sched
-__mutex_lock_killable_slowpath(atomic_t *lock_count)
+__mutex_lock_killable_slowpath(struct mutex *lock)
 {
-	struct mutex *lock = container_of(lock_count, struct mutex, count);
-
 	return __mutex_lock_common(lock, TASK_KILLABLE, 0, NULL, _RET_IP_);
 }
 
 static noinline int __sched
-__mutex_lock_interruptible_slowpath(atomic_t *lock_count)
+__mutex_lock_interruptible_slowpath(struct mutex *lock)
 {
-	struct mutex *lock = container_of(lock_count, struct mutex, count);
-
 	return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, 0, NULL, _RET_IP_);
 }
 #endif
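
[Editorial note: the following is a self-contained userspace model of the
before/after contract, not kernel code; every name in it is made up for the
example. It only demonstrates why returning 0/-1 lets the caller pass extra
arguments to its slowpath.]

  #include <stdio.h>

  /* Old convention: the fastpath had to call a fixed-signature fail function. */
  static int old_fastpath_lock_retval(int *cnt, int (*fail_fn)(int *))
  {
          if (--(*cnt) != 0)
                  return fail_fn(cnt);    /* no way to pass extra context */
          return 0;
  }

  /* New convention: the fastpath only reports success (0) or failure (-1). */
  static int new_fastpath_lock_retval(int *cnt)
  {
          return (--(*cnt) != 0) ? -1 : 0;
  }

  /* A slowpath taking an extra argument, now callable directly by the caller. */
  static int slowpath_with_ctx(int *cnt, const char *ctx)
  {
          printf("slowpath(%s), count=%d\n", ctx, *cnt);
          return 0;               /* pretend we blocked, then acquired the lock */
  }

  static int legacy_slowpath(int *cnt)
  {
          printf("legacy slowpath, count=%d\n", *cnt);
          return 0;
  }

  int main(void)
  {
          int a = 1, b = 0;       /* a is free; b is already contended */

          /* Old style: the fail function's signature is fixed. */
          old_fastpath_lock_retval(&a, legacy_slowpath);

          /* New style: the caller chooses the slowpath and its arguments. */
          if (new_fastpath_lock_retval(&b))
                  slowpath_with_ctx(&b, "ticket");
          else
                  printf("fastpath acquired, count=%d\n", b);
          return 0;
  }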