From patchwork Tue Jun 9 16:23:39 2015
X-Patchwork-Submitter: Lina Iyer
X-Patchwork-Id: 49666
From: Lina Iyer
To: ohad@wizery.com
Cc: linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, Lina Iyer, Jeffrey Hugo, Andy Gross
Subject: [PATCH RFC v2 1/2] hwspinlock: Introduce raw capability for hwspinlocks
Date: Tue, 9 Jun 2015 10:23:39 -0600
Message-Id: <1433867020-7746-2-git-send-email-lina.iyer@linaro.org>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1433867020-7746-1-git-send-email-lina.iyer@linaro.org>
References: <1433867020-7746-1-git-send-email-lina.iyer@linaro.org>

The hwspinlock framework uses a s/w spinlock around the hw spinlock to
ensure that only one process acquires the lock at any time. This is the
most general use case. A special case is where a hwspinlock may be
acquired in Linux and a remote entity may release the lock. In such a
case, the s/w spinlock would never be unlocked, because Linux never calls
hwspin_unlock on the hwlock.

This special case is needed for serializing the processors across context
switches from Linux to firmware. Multiple cores may enter cpu idle and
would switch context to firmware to power off.
A cpu holding the hwspinlock would cause other cpus to wait to acquire
the lock, until the lock is released by the firmware. The last core to
power down, as seen by Linux, has the correct state of the shared
resources and should be the one considered by the firmware. However, a
cpu may be stuck handling FIQs, and therefore the last-man view of Linux
and the firmware may differ. A hwspinlock avoids this problem by
serializing the entry from Linux to firmware.

Introduce a hwcaps member for each hwspinlock. The hwcaps member
represents the hw capability of each hwlock. The platform driver is
responsible for specifying this capability for each lock in the bank. A
lock that has HWL_CAP_ALLOW_RAW set indicates to the framework that the
platform ensures locking correctness. Since no s/w spinlock guards the
hwspinlock, it is the responsibility of the platform driver to ensure
that a unique value is written to the hwspinlock to ensure locking
correctness. Drivers may use the hwspin_trylock_raw() and
hwspin_unlock_raw() APIs to lock and unlock a hwlock with raw capability.

Cc: Jeffrey Hugo
Cc: Ohad Ben-Cohen
Cc: Andy Gross
Signed-off-by: Lina Iyer
---
 Documentation/hwspinlock.txt             | 16 +++++++
 drivers/hwspinlock/hwspinlock_core.c     | 75 +++++++++++++++++++-------------
 drivers/hwspinlock/hwspinlock_internal.h |  6 +++
 include/linux/hwspinlock.h               | 41 +++++++++++++++++
 4 files changed, 108 insertions(+), 30 deletions(-)

diff --git a/Documentation/hwspinlock.txt b/Documentation/hwspinlock.txt
index 62f7d4e..ca6de6c 100644
--- a/Documentation/hwspinlock.txt
+++ b/Documentation/hwspinlock.txt
@@ -122,6 +122,15 @@ independent, drivers.
      notably -EBUSY if the hwspinlock was already taken). The function will
      never sleep.
 
+  int hwspin_trylock_raw(struct hwspinlock *hwlock);
+   - attempt to lock a previously-assigned hwspinlock, but immediately fail if
+     it is already taken. The lock must have been declared by the platform
+     drv code with raw capability support.
+     Returns 0 on success and an appropriate error code otherwise (most
+     notably -EBUSY if the hwspinlock was already taken).
+     This function does not use a s/w spinlock around the hwlock. The
+     responsibility for locking correctness is guaranteed by the platform code.
+
   void hwspin_unlock(struct hwspinlock *hwlock);
    - unlock a previously-locked hwspinlock. Always succeed, and can be called
      from any context (the function never sleeps). Note: code should _never_
@@ -144,6 +153,13 @@ independent, drivers.
      and the state of the local interrupts is restored to the state saved at
      the given flags. This function will never sleep.
 
+  void hwspin_unlock_raw(struct hwspinlock *hwlock);
+   - unlock a previously-locked hwspinlock. Always succeed, and can be called
+     from any context (the function never sleeps). Note: code should _never_
+     unlock an hwspinlock which is already unlocked (there is no protection
+     against this). The platform driver must support raw capability for this
+     hwlock.
+
   int hwspin_lock_get_id(struct hwspinlock *hwlock);
    - retrieve id number of a given hwspinlock. This is needed when an
      hwspinlock is dynamically assigned: before it can be used to achieve
diff --git a/drivers/hwspinlock/hwspinlock_core.c b/drivers/hwspinlock/hwspinlock_core.c
index 461a0d7..18ed7cc 100644
--- a/drivers/hwspinlock/hwspinlock_core.c
+++ b/drivers/hwspinlock/hwspinlock_core.c
@@ -79,7 +79,10 @@ static DEFINE_MUTEX(hwspinlock_tree_lock);
 * whether he wants their previous state to be saved. It is up to the user
 * to choose the appropriate @mode of operation, exactly the same way users
 * should decide between spin_trylock, spin_trylock_irq and
- * spin_trylock_irqsave.
+ * spin_trylock_irqsave, and even no spinlock if the hwspinlock is always
+ * acquired in an interrupt disabled context. The platform driver that
+ * registers such a lock would explicitly specify the capability for the
+ * lock with the HWL_CAP_ALLOW_RAW capability flag.
 *
 * Returns 0 if we successfully locked the hwspinlock or -EBUSY if
 * the hwspinlock was already taken.
@@ -91,6 +94,7 @@ int __hwspin_trylock(struct hwspinlock *hwlock, int mode, unsigned long *flags)
 
 	BUG_ON(!hwlock);
 	BUG_ON(!flags && mode == HWLOCK_IRQSTATE);
+	BUG_ON((hwlock->hwcaps & HWL_CAP_ALLOW_RAW) && (mode != HWLOCK_NOLOCK));
 
 	/*
 	 * This spin_lock{_irq, _irqsave} serves three purposes:
@@ -105,32 +109,36 @@ int __hwspin_trylock(struct hwspinlock *hwlock, int mode, unsigned long *flags)
 	 * problems with hwspinlock usage (e.g. scheduler checks like
 	 * 'scheduling while atomic' etc.)
 	 */
-	if (mode == HWLOCK_IRQSTATE)
-		ret = spin_trylock_irqsave(&hwlock->lock, *flags);
-	else if (mode == HWLOCK_IRQ)
-		ret = spin_trylock_irq(&hwlock->lock);
-	else
-		ret = spin_trylock(&hwlock->lock);
-
-	/* is lock already taken by another context on the local cpu ? */
-	if (!ret)
-		return -EBUSY;
-
-	/* try to take the hwspinlock device */
-	ret = hwlock->bank->ops->trylock(hwlock);
-
-	/* if hwlock is already taken, undo spin_trylock_* and exit */
-	if (!ret) {
+	if (mode != HWLOCK_NOLOCK) {
 		if (mode == HWLOCK_IRQSTATE)
-			spin_unlock_irqrestore(&hwlock->lock, *flags);
+			ret = spin_trylock_irqsave(&hwlock->lock, *flags);
 		else if (mode == HWLOCK_IRQ)
-			spin_unlock_irq(&hwlock->lock);
+			ret = spin_trylock_irq(&hwlock->lock);
 		else
-			spin_unlock(&hwlock->lock);
+			ret = spin_trylock(&hwlock->lock);
 
-		return -EBUSY;
+		/* is lock already taken by another context on the local cpu?
+		 */
+		if (!ret)
+			return -EBUSY;
 	}
 
+	/* try to take the hwspinlock device */
+	ret = hwlock->bank->ops->trylock(hwlock);
+
+	if (mode != HWLOCK_NOLOCK) {
+		/* if hwlock is already taken, undo spin_trylock_* and exit */
+		if (!ret) {
+			if (mode == HWLOCK_IRQSTATE)
+				spin_unlock_irqrestore(&hwlock->lock, *flags);
+			else if (mode == HWLOCK_IRQ)
+				spin_unlock_irq(&hwlock->lock);
+			else
+				spin_unlock(&hwlock->lock);
+		}
+	}
+
+	if (!ret)
+		return -EBUSY;
 
 	/*
 	 * We can be sure the other core's memory operations
 	 * are observable to us only _after_ we successfully take
@@ -222,7 +230,10 @@ EXPORT_SYMBOL_GPL(__hwspin_lock_timeout);
 * if yes, whether he wants their previous state to be restored. It is up
 * to the user to choose the appropriate @mode of operation, exactly the
 * same way users decide between spin_unlock, spin_unlock_irq and
- * spin_unlock_irqrestore.
+ * spin_unlock_irqrestore, and even no spinlock if the hwspinlock is always
+ * acquired in an interrupt disabled context. The platform driver that
+ * registers such a lock would explicitly specify the capability for the
+ * lock with the HWL_CAP_ALLOW_RAW capability flag.
 *
 * The function will never sleep.
 */
@@ -230,6 +241,7 @@ void __hwspin_unlock(struct hwspinlock *hwlock, int mode, unsigned long *flags)
 {
 	BUG_ON(!hwlock);
 	BUG_ON(!flags && mode == HWLOCK_IRQSTATE);
+	BUG_ON((hwlock->hwcaps & HWL_CAP_ALLOW_RAW) && (mode != HWLOCK_NOLOCK));
 
 	/*
 	 * We must make sure that memory operations (both reads and writes),
@@ -247,13 +259,15 @@ void __hwspin_unlock(struct hwspinlock *hwlock, int mode, unsigned long *flags)
 
 	hwlock->bank->ops->unlock(hwlock);
 
-	/* Undo the spin_trylock{_irq, _irqsave} called while locking */
-	if (mode == HWLOCK_IRQSTATE)
-		spin_unlock_irqrestore(&hwlock->lock, *flags);
-	else if (mode == HWLOCK_IRQ)
-		spin_unlock_irq(&hwlock->lock);
-	else
-		spin_unlock(&hwlock->lock);
+	if (mode != HWLOCK_NOLOCK) {
+		/* Undo the spin_trylock{_irq, _irqsave} called while locking */
+		if (mode == HWLOCK_IRQSTATE)
+			spin_unlock_irqrestore(&hwlock->lock, *flags);
+		else if (mode == HWLOCK_IRQ)
+			spin_unlock_irq(&hwlock->lock);
+		else
+			spin_unlock(&hwlock->lock);
+	}
 }
 EXPORT_SYMBOL_GPL(__hwspin_unlock);
@@ -342,7 +356,8 @@ int hwspin_lock_register(struct hwspinlock_device *bank, struct device *dev,
 	for (i = 0; i < num_locks; i++) {
 		hwlock = &bank->lock[i];
 
-		spin_lock_init(&hwlock->lock);
+		if (!(hwlock->hwcaps & HWL_CAP_ALLOW_RAW))
+			spin_lock_init(&hwlock->lock);
 
 		hwlock->bank = bank;
 
 		ret = hwspin_lock_register_single(hwlock, base_id + i);
diff --git a/drivers/hwspinlock/hwspinlock_internal.h b/drivers/hwspinlock/hwspinlock_internal.h
index d26f78b..24a4d79 100644
--- a/drivers/hwspinlock/hwspinlock_internal.h
+++ b/drivers/hwspinlock/hwspinlock_internal.h
@@ -21,6 +21,9 @@
 #include <linux/spinlock.h>
 #include <linux/device.h>
 
+/* hwspinlock capability properties */
+#define HWL_CAP_ALLOW_RAW	BIT(1)
+
 struct hwspinlock_device;
 
 /**
@@ -44,11 +47,14 @@ struct hwspinlock_ops {
 * @bank: the hwspinlock_device structure which owns this lock
 * @lock: initialized and used by hwspinlock core
 * @priv: private data, owned by the underlying platform-specific hwspinlock drv
+ * @hwcaps: hardware capability, like raw lock, that does not need a s/w
+ *	    spinlock around the hwspinlock.
 */
 struct hwspinlock {
 	struct hwspinlock_device *bank;
 	spinlock_t lock;
 	void *priv;
+	int hwcaps;
 };
 
 /**
diff --git a/include/linux/hwspinlock.h b/include/linux/hwspinlock.h
index 3343298..21232d0 100644
--- a/include/linux/hwspinlock.h
+++ b/include/linux/hwspinlock.h
@@ -24,6 +24,7 @@
 /* hwspinlock mode argument */
 #define HWLOCK_IRQSTATE	0x01	/* Disable interrupts, save state */
 #define HWLOCK_IRQ	0x02	/* Disable interrupts, don't save state */
+#define HWLOCK_NOLOCK	0xFF	/* Don't take any lock */
 
 struct device;
 struct hwspinlock;
@@ -189,6 +190,27 @@ static inline int hwspin_trylock(struct hwspinlock *hwlock)
 }
 
 /**
+ * hwspin_trylock_raw() - attempt to lock a specific hwspinlock without s/w
+ * spinlocks
+ * @hwlock: the hwspinlock which we want to trylock
+ *
+ * This function attempts to lock the hwspinlock without acquiring a s/w
+ * spinlock. The function will return failure if the lock is already taken.
+ *
+ * The function can only be used on a hwlock that has been initialized with
+ * raw capability by the platform drv.
+ *
+ * The function is expected to be called in an interrupt disabled context.
+ *
+ * Returns 0 if we successfully locked the hwspinlock, -EBUSY if the hwspinlock
+ * is already taken.
+ */
+static inline int hwspin_trylock_raw(struct hwspinlock *hwlock)
+{
+	return __hwspin_trylock(hwlock, HWLOCK_NOLOCK, NULL);
+}
+
+/**
 * hwspin_lock_timeout_irqsave() - lock hwspinlock, with timeout, disable irqs
 * @hwlock: the hwspinlock to be locked
 * @to: timeout value in msecs
@@ -310,4 +332,23 @@ static inline void hwspin_unlock(struct hwspinlock *hwlock)
 	__hwspin_unlock(hwlock, 0, NULL);
 }
 
+/**
+ * hwspin_unlock_raw() - unlock hwspinlock
+ * @hwlock: a previously acquired hwspinlock which we want to unlock
+ *
+ * This function will unlock a specific hwspinlock that was acquired using the
+ * hwspin_trylock_raw() call.
+ *
+ * The function can only be used on a hwlock that has been initialized with
+ * raw capability by the platform drv.
+ *
+ * @hwlock must be already locked (e.g. by hwspin_trylock_raw()) before calling
+ * this function: it is a bug to call unlock on a @hwlock that is already
+ * unlocked.
+ */
+static inline void hwspin_unlock_raw(struct hwspinlock *hwlock)
+{
+	__hwspin_unlock(hwlock, HWLOCK_NOLOCK, NULL);
+}
+
 #endif /* __LINUX_HWSPINLOCK_H */