From patchwork Fri Jan 8 12:15:01 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 359924
Date: Fri, 8 Jan 2021 12:15:01 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
References: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-4-qperret@google.com>
Subject: [RFC PATCH v2 03/26]
 arm64: kvm: Add standalone ticket spinlock implementation for use at hyp
From: Quentin Perret
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Julien Thierry,
    Suzuki K Poulose, Rob Herring, Frank Rowand
Cc: devicetree@vger.kernel.org, android-kvm@google.com,
    linux-kernel@vger.kernel.org, kernel-team@android.com,
    kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
    Fuad Tabba, Mark Rutland, David Brazdil

From: Will Deacon

We will soon need to synchronise multiple CPUs in the hyp text at EL2.
The qspinlock-based locking used by the host is overkill for this
purpose and relies on the kernel's "percpu" implementation for the MCS
nodes.

Implement a simple ticket locking scheme based heavily on the code
removed by commit c11090474d70 ("arm64: locking: Replace ticket lock
implementation with qspinlock").

Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/spinlock.h | 92 ++++++++++++++++++++++
 1 file changed, 92 insertions(+)
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/spinlock.h

diff --git a/arch/arm64/kvm/hyp/include/nvhe/spinlock.h b/arch/arm64/kvm/hyp/include/nvhe/spinlock.h
new file mode 100644
index 000000000000..7584c397bbac
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/spinlock.h
@@ -0,0 +1,92 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * A stand-alone ticket spinlock implementation for use by the non-VHE
+ * KVM hypervisor code running at EL2.
+ *
+ * Copyright (C) 2020 Google LLC
+ * Author: Will Deacon
+ *
+ * Heavily based on the implementation removed by c11090474d70 which was:
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#ifndef __ARM64_KVM_NVHE_SPINLOCK_H__
+#define __ARM64_KVM_NVHE_SPINLOCK_H__
+
+#include <asm/alternative-macros.h>
+#include <asm/lse.h>
+
+typedef union hyp_spinlock {
+	u32	__val;
+	struct {
+#ifdef __AARCH64EB__
+		u16 next, owner;
+#else
+		u16 owner, next;
+#endif
+	};
+} hyp_spinlock_t;
+
+#define hyp_spin_lock_init(l)						\
+do {									\
+	*(l) = (hyp_spinlock_t){ .__val = 0 };				\
+} while (0)
+
+static inline void hyp_spin_lock(hyp_spinlock_t *lock)
+{
+	u32 tmp;
+	hyp_spinlock_t lockval, newval;
+
+	asm volatile(
+	/* Atomically increment the next ticket. */
+	ARM64_LSE_ATOMIC_INSN(
+	/* LL/SC */
+"	prfm	pstl1strm, %3\n"
+"1:	ldaxr	%w0, %3\n"
+"	add	%w1, %w0, #(1 << 16)\n"
+"	stxr	%w2, %w1, %3\n"
+"	cbnz	%w2, 1b\n",
+	/* LSE atomics */
+"	mov	%w2, #(1 << 16)\n"
+"	ldadda	%w2, %w0, %3\n"
+	__nops(3))
+
+	/* Did we get the lock? */
+"	eor	%w1, %w0, %w0, ror #16\n"
+"	cbz	%w1, 3f\n"
+
+	/*
+	 * No: spin on the owner. Send a local event to avoid missing an
+	 * unlock before the exclusive load.
+	 */
+"	sevl\n"
+"2:	wfe\n"
+"	ldaxrh	%w2, %4\n"
+"	eor	%w1, %w2, %w0, lsr #16\n"
+"	cbnz	%w1, 2b\n"
+	/* We got the lock. Critical section starts here. */
+"3:"
+	: "=&r" (lockval), "=&r" (newval), "=&r" (tmp), "+Q" (*lock)
+	: "Q" (lock->owner)
+	: "memory");
+}
+
+static inline void hyp_spin_unlock(hyp_spinlock_t *lock)
+{
+	u64 tmp;
+
+	asm volatile(
+	ARM64_LSE_ATOMIC_INSN(
+	/* LL/SC */
+	"	ldrh	%w1, %0\n"
+	"	add	%w1, %w1, #1\n"
+	"	stlrh	%w1, %0",
+	/* LSE atomics */
+	"	mov	%w1, #1\n"
+	"	staddlh	%w1, %0\n"
+	__nops(1))
+	: "=Q" (lock->owner), "=&r" (tmp)
+	:
+	: "memory");
+}
+
+#endif /* __ARM64_KVM_NVHE_SPINLOCK_H__ */
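
For readers unfamiliar with ticket locks, the scheme the inline assembly
above implements can be summarised in portable C. The following sketch is
not part of the patch: the names (demo_lock_t, demo_lock, demo_unlock) are
made up for illustration, it keeps next and owner as separate atomics
rather than packing them into one 32-bit word, and it busy-spins instead
of using the WFE/SEVL wait and the LL/SC vs LSE alternatives that the real
code relies on.

	/*
	 * Illustrative sketch only; hypothetical names, C11 atomics.
	 */
	#include <stdatomic.h>
	#include <stdint.h>

	typedef struct {
		_Atomic uint16_t next;	/* next ticket to hand out */
		_Atomic uint16_t owner;	/* ticket currently being served */
	} demo_lock_t;

	static void demo_lock(demo_lock_t *l)
	{
		/* Take a ticket: atomically bump 'next', keep the old value. */
		uint16_t ticket = atomic_fetch_add_explicit(&l->next, 1,
							    memory_order_relaxed);

		/* Wait until 'owner' reaches our ticket; acquire pairs with unlock. */
		while (atomic_load_explicit(&l->owner, memory_order_acquire) != ticket)
			;	/* the real code waits with WFE instead of spinning */
	}

	static void demo_unlock(demo_lock_t *l)
	{
		/* Serve the next waiter; release publishes the critical section. */
		atomic_fetch_add_explicit(&l->owner, 1, memory_order_release);
	}

The arm64 version packs next and owner into a single 32-bit word so that
taking a ticket and sampling the current owner happen in one atomic access,
and parks waiters with WFE (woken by the event generated when the owner
field is updated) rather than burning cycles in a polling loop.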