From patchwork Tue May 28 14:48:51 2013
X-Patchwork-Submitter: Maarten Lankhorst
X-Patchwork-Id: 17250
To: linux-kernel@vger.kernel.org
From: Maarten Lankhorst
Date: Tue, 28 May 2013 16:48:51 +0200
Message-ID: <20130528144851.4538.75070.stgit@patser>
In-Reply-To: <20130528144420.4538.70725.stgit@patser>
References: <20130528144420.4538.70725.stgit@patser>
User-Agent: StGit/0.15
Cc: linux-arch@vger.kernel.org, peterz@infradead.org, x86@kernel.org,
 dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org,
 robclark@gmail.com, rostedt@goodmis.org, daniel@ffwll.ch,
 tglx@linutronix.de, mingo@elte.hu, linux-media@vger.kernel.org
Subject: [Linaro-mm-sig] [PATCH v4 4/4] mutex: w/w mutex slowpath debugging

From: Daniel Vetter

Injects EDEADLK conditions at pseudo-random intervals, with exponential
backoff up to UINT_MAX (to ensure that every lock operation still
completes in a reasonable time). This way we can test the wound slowpath
even for ww mutex users where contention is never expected, and the ww
deadlock avoidance algorithm is only needed for correctness against
malicious userspace.
An example would be protecting kernel modesetting properties, which
thanks to single-threaded X isn't really expected to contend, ever.

I've looked into using the CONFIG_FAULT_INJECTION infrastructure, but
decided against it for two reasons:

- EDEADLK handling is mandatory for ww mutex users and should never
  affect the outcome of a syscall. This is in contrast to -ENOMEM
  injection. So fine-grained configurability isn't required.

- The fault injection framework only allows setting a simple probability
  for failure. Now the probability that a ww mutex acquire stage with N
  locks will never complete (due to too many injected EDEADLK backoffs)
  is zero. But the expected number of ww_mutex_lock operations for the
  completely uncontended case would be O(exp(N)). The per-acquire-ctx
  exponential backoff solution chosen here only results in O(log N)
  overhead due to injection and so O(log N * N) lock operations. This
  way we can fail with high probability (and so have good test coverage
  even for fancy backoff and lock acquisition paths) without running
  into pathological cases.

Note that EDEADLK will only ever be injected when we managed to acquire
the lock. This prevents any behaviour changes for users which rely on
the EALREADY semantics.

v2: Drop the cargo-culted __sched (I should read docs next time around)
and annotate the non-debug case with inline to prevent gcc from doing
something horrible.

v3: Rebase on top of Maarten's latest patches.

v4: Actually make this stuff compile; I had misplaced the hunk in the
wrong #ifdef block.

v5: Simplify ww_mutex_deadlock_injection definition, and fix
lib/locking-selftest.c warnings. Fix lib/Kconfig.debug definition to
work correctly. (mlankhorst)

v6: Do not inject -EDEADLK when ctx->acquired == 0, because the _slow
paths are merged now.
(mlankhorst)

Cc: Steven Rostedt
Signed-off-by: Daniel Vetter
Signed-off-by: Maarten Lankhorst
---
 include/linux/mutex.h  |  8 ++++++++
 kernel/mutex.c         | 44 +++++++++++++++++++++++++++++++++++++++++---
 lib/Kconfig.debug      | 13 +++++++++++++
 lib/locking-selftest.c |  5 +++++
 4 files changed, 67 insertions(+), 3 deletions(-)

diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index f3ad181..2ff9178 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -95,6 +95,10 @@ struct ww_acquire_ctx {
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 	struct lockdep_map dep_map;
 #endif
+#ifdef CONFIG_DEBUG_WW_MUTEX_SLOWPATH
+	unsigned deadlock_inject_interval;
+	unsigned deadlock_inject_countdown;
+#endif
 };
 
 struct ww_mutex {
@@ -280,6 +284,10 @@ static inline void ww_acquire_init(struct ww_acquire_ctx *ctx,
 			   &ww_class->acquire_key, 0);
 	mutex_acquire(&ctx->dep_map, 0, 0, _RET_IP_);
 #endif
+#ifdef CONFIG_DEBUG_WW_MUTEX_SLOWPATH
+	ctx->deadlock_inject_interval = 1;
+	ctx->deadlock_inject_countdown = ctx->stamp & 0xf;
+#endif
 }
 
 /**
diff --git a/kernel/mutex.c b/kernel/mutex.c
index 75fc7c4..e40004b 100644
--- a/kernel/mutex.c
+++ b/kernel/mutex.c
@@ -508,22 +508,60 @@ mutex_lock_interruptible_nested(struct mutex *lock, unsigned int subclass)
 
 EXPORT_SYMBOL_GPL(mutex_lock_interruptible_nested);
 
+static inline int
+ww_mutex_deadlock_injection(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+{
+#ifdef CONFIG_DEBUG_WW_MUTEX_SLOWPATH
+	unsigned tmp;
+
+	if (ctx->deadlock_inject_countdown-- == 0) {
+		tmp = ctx->deadlock_inject_interval;
+		if (tmp > UINT_MAX/4)
+			tmp = UINT_MAX;
+		else
+			tmp = tmp*2 + tmp + tmp/2;
+
+		ctx->deadlock_inject_interval = tmp;
+		ctx->deadlock_inject_countdown = tmp;
+		ctx->contending_lock = lock;
+
+		ww_mutex_unlock(lock);
+
+		return -EDEADLK;
+	}
+#endif
+
+	return 0;
+}
+
 int __sched
 __ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 {
+	int ret;
+
 	might_sleep();
-	return __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE,
+	ret =  __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE,
 				   0, &ctx->dep_map, _RET_IP_, ctx);
+	if (!ret && ctx->acquired > 0)
+		return ww_mutex_deadlock_injection(lock, ctx);
+
+	return ret;
 }
 EXPORT_SYMBOL_GPL(__ww_mutex_lock);
 
 int __sched
 __ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 {
+	int ret;
+
 	might_sleep();
-	return __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE,
-				   0, &ctx->dep_map, _RET_IP_, ctx);
+	ret = __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE,
+				  0, &ctx->dep_map, _RET_IP_, ctx);
+
+	if (!ret && ctx->acquired > 0)
+		return ww_mutex_deadlock_injection(lock, ctx);
+
+	return ret;
 }
 EXPORT_SYMBOL_GPL(__ww_mutex_lock_interruptible);
 
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 28be08c..06538ee 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -547,6 +547,19 @@ config DEBUG_MUTEXES
 	 This feature allows mutex semantics violations to be detected
 	 and reported.
 
+config DEBUG_WW_MUTEX_SLOWPATH
+	bool "Wait/wound mutex debugging: Slowpath testing"
+	depends on DEBUG_KERNEL && TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
+	select DEBUG_LOCK_ALLOC
+	select DEBUG_SPINLOCK
+	select DEBUG_MUTEXES
+	help
+	 This feature enables slowpath testing for w/w mutex users by
+	 injecting additional -EDEADLK wound/backoff cases. Together with
+	 the full mutex checks enabled with (CONFIG_PROVE_LOCKING) this
+	 will test all possible w/w mutex interface abuse with the
+	 exception of simply not acquiring all the required locks.
+
 config DEBUG_LOCK_ALLOC
 	bool "Lock debugging: detect incorrect freeing of live locks"
 	depends on DEBUG_KERNEL && TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index b18f1d3..7f0bacc 100644
--- a/lib/locking-selftest.c
+++ b/lib/locking-selftest.c
@@ -199,7 +199,12 @@ static void init_shared_classes(void)
 #define RSU(x)			up_read(&rwsem_##x)
 #define RWSI(x)			init_rwsem(&rwsem_##x)
 
+#ifndef CONFIG_DEBUG_WW_MUTEX_SLOWPATH
 #define WWAI(x)			ww_acquire_init(x, &ww_lockdep)
+#else
+#define WWAI(x)			do { ww_acquire_init(x, &ww_lockdep); (x)->deadlock_inject_countdown = ~0U; } while (0)
+
+#endif
 #define WWAD(x)			ww_acquire_done(x)
 #define WWAF(x)			ww_acquire_fini(x)