From patchwork Tue Nov 17 18:15:59 2020
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 326277
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Date: Tue, 17 Nov 2020 18:15:59 +0000
In-Reply-To: <20201117181607.1761516-1-qperret@google.com>
Message-Id: <20201117181607.1761516-20-qperret@google.com>
References: <20201117181607.1761516-1-qperret@google.com>
X-Mailer: git-send-email 2.29.2.299.gdc1121823c-goog
Subject: [RFC PATCH 19/27] KVM: arm64: Use kvm_arch in kvm_s2_mmu
From: Quentin Perret
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse,
    Julien Thierry, Suzuki K Poulose, Rob Herring, Frank Rowand
Cc: "moderated list:ARM64 PORT (AARCH64 ARCHITECTURE)", open list,
    "open list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)",
    "open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE",
    kernel-team@android.com, android-kvm@google.com, Quentin Perret
X-Mailing-List: devicetree@vger.kernel.org

In order to make use of the stage 2 pgtable code for the host stage 2,
change kvm_s2_mmu to use a kvm_arch pointer in lieu of the kvm pointer,
as the host will have the former but not the latter.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_host.h | 2 +-
 arch/arm64/include/asm/kvm_mmu.h  | 7 ++++++-
 arch/arm64/kvm/mmu.c              | 8 ++++----
 3 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index ee8bb8021637..53b01d25e7d9 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -86,7 +86,7 @@ struct kvm_s2_mmu {
 	/* The last vcpu id that ran on each physical CPU */
 	int __percpu *last_vcpu_ran;
 
-	struct kvm *kvm;
+	struct kvm_arch *arch;
 };
 
 struct kvm_arch {
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index bb756757b51c..714357ebd278 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -275,7 +275,7 @@ static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu *mmu)
  */
 static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu)
 {
-	write_sysreg(kern_hyp_va(mmu->kvm)->arch.vtcr, vtcr_el2);
+	write_sysreg(kern_hyp_va(mmu->arch)->vtcr, vtcr_el2);
 	write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);
 
 	/*
@@ -285,5 +285,10 @@ static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu)
 	 */
 	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
 }
+
+static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
+{
+	return container_of(mmu->arch, struct kvm, arch);
+}
 #endif /* __ASSEMBLY__ */
 #endif /* __ARM64_KVM_MMU_H__ */
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 384f2acc0115..3b1c53e754ee 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -169,7 +169,7 @@ static void *kvm_host_va(phys_addr_t phys)
 static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size,
 				 bool may_block)
 {
-	struct kvm *kvm = mmu->kvm;
+	struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
 	phys_addr_t end = start + size;
 
 	assert_spin_locked(&kvm->mmu_lock);
@@ -474,7 +474,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu)
 	for_each_possible_cpu(cpu)
 		*per_cpu_ptr(mmu->last_vcpu_ran, cpu) = -1;
 
-	mmu->kvm = kvm;
+	mmu->arch = &kvm->arch;
 	mmu->pgt = pgt;
 	mmu->pgd_phys = __pa(pgt->pgd);
 	mmu->vmid.vmid_gen = 0;
@@ -556,7 +556,7 @@ void stage2_unmap_vm(struct kvm *kvm)
 
 void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
 {
-	struct kvm *kvm = mmu->kvm;
+	struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
 	struct kvm_pgtable *pgt = NULL;
 
 	spin_lock(&kvm->mmu_lock);
@@ -625,7 +625,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
  */
 static void stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
 {
-	struct kvm *kvm = mmu->kvm;
+	struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
 
 	stage2_apply_range_resched(kvm, addr, end, kvm_pgtable_stage2_wrprotect);
 }
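
As an aside (not part of the patch): the round trip the new kvm_s2_mmu_to_kvm()
helper relies on is the usual container_of() trick, i.e. recovering the enclosing
struct kvm from a pointer to its embedded kvm_arch member. The commit message
notes the host will have a kvm_arch but no struct kvm, so the helper only makes
sense for guest MMUs whose kvm_arch really is embedded in a struct kvm. Below is
a minimal, self-contained userspace sketch of that pattern; the struct
definitions are heavily simplified stand-ins (only the fields needed here), and
container_of() is re-implemented in userspace rather than taken from the kernel:

/* Userspace stand-in for the kernel's container_of() helper. */
#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Simplified stand-ins for the kernel structures touched by the patch. */
struct kvm_arch {
	unsigned long vtcr;
};

struct kvm_s2_mmu {
	struct kvm_arch *arch;	/* was: struct kvm *kvm */
};

struct kvm {
	struct kvm_arch arch;	/* kvm_arch is embedded in struct kvm */
};

/* Mirrors kvm_s2_mmu_to_kvm(): recover the enclosing struct kvm. */
static struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
{
	return container_of(mmu->arch, struct kvm, arch);
}

int main(void)
{
	struct kvm vm = { .arch = { .vtcr = 0x42 } };
	struct kvm_s2_mmu mmu = { .arch = &vm.arch };

	/* The round trip only works because mmu.arch points into a struct kvm. */
	printf("recovered kvm: %p (expected %p), vtcr=%#lx\n",
	       (void *)kvm_s2_mmu_to_kvm(&mmu), (void *)&vm,
	       kvm_s2_mmu_to_kvm(&mmu)->arch.vtcr);
	return 0;
}

The design choice this illustrates: by storing only the kvm_arch pointer in
kvm_s2_mmu, the same MMU structure can describe a stage 2 that has no struct kvm
behind it (the host stage 2), while existing guest paths still reach struct kvm
via container_of() where they need the mmu_lock or other VM-wide state.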