From patchwork Thu Apr 5 16:59:06 2018
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 132870
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
    mingo@kernel.org, boqun.feng@gmail.com, paulmck@linux.vnet.ibm.com,
    catalin.marinas@arm.com, Will Deacon
Subject: [PATCH 09/10] locking/qspinlock: Make queued_spin_unlock use smp_store_release
Date: Thu, 5 Apr 2018 17:59:06 +0100
Message-Id: <1522947547-24081-10-git-send-email-will.deacon@arm.com>
In-Reply-To: <1522947547-24081-1-git-send-email-will.deacon@arm.com>
References: <1522947547-24081-1-git-send-email-will.deacon@arm.com>
A qspinlock can be unlocked simply by writing zero to the locked byte.
This can be implemented in the generic code, so do that and remove the
arch-specific override for x86 in the !PV case.

Cc: Peter Zijlstra
Cc: Ingo Molnar
Signed-off-by: Will Deacon
---
 arch/x86/include/asm/qspinlock.h | 17 ++++++-----------
 include/asm-generic/qspinlock.h  |  2 +-
 2 files changed, 7 insertions(+), 12 deletions(-)

-- 
2.1.4

diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index 90b0b0ed8161..cc77cbb01432 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -7,6 +7,12 @@
 #include
 #include
 
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+extern void native_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+extern void __pv_init_lock_hash(void);
+extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+extern void __raw_callee_save___pv_queued_spin_unlock(struct qspinlock *lock);
+
 #define queued_spin_unlock queued_spin_unlock
 /**
  * queued_spin_unlock - release a queued spinlock
@@ -19,12 +25,6 @@ static inline void native_queued_spin_unlock(struct qspinlock *lock)
 	smp_store_release(&lock->locked, 0);
 }
 
-#ifdef CONFIG_PARAVIRT_SPINLOCKS
-extern void native_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
-extern void __pv_init_lock_hash(void);
-extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
-extern void __raw_callee_save___pv_queued_spin_unlock(struct qspinlock *lock);
-
 static inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 {
 	pv_queued_spin_lock_slowpath(lock, val);
@@ -40,11 +40,6 @@ static inline bool vcpu_is_preempted(long cpu)
 {
 	return pv_vcpu_is_preempted(cpu);
 }
-#else
-static inline void queued_spin_unlock(struct qspinlock *lock)
-{
-	native_queued_spin_unlock(lock);
-}
 #endif
 
 #ifdef CONFIG_PARAVIRT
diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index b37b4ad7eb94..a8ed0a352d75 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -100,7 +100,7 @@ static __always_inline void queued_spin_unlock(struct qspinlock *lock)
 	/*
 	 * unlock() needs release semantics:
 	 */
-	(void)atomic_sub_return_release(_Q_LOCKED_VAL, &lock->val);
+	smp_store_release(&lock->locked, 0);
 }
 
 #endif