From patchwork Fri Jan  8 12:15:17 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 359914
Date: Fri, 8 Jan 2021 12:15:17 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-20-qperret@google.com>
References: <20210108121524.656872-1-qperret@google.com>
Subject: [RFC PATCH v2 19/26] KVM: arm64: Use kvm_arch in kvm_s2_mmu
From: Quentin Perret
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Julien Thierry,
 Suzuki K Poulose, Rob Herring, Frank Rowand
Cc: devicetree@vger.kernel.org, android-kvm@google.com,
 linux-kernel@vger.kernel.org, kernel-team@android.com,
 kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
 Fuad Tabba, Mark Rutland, David Brazdil

In order to make use of the stage 2 pgtable code for the host stage 2,
change kvm_s2_mmu to use a kvm_arch pointer in lieu of the kvm pointer,
as the host will have the former but not the latter.

Signed-off-by: Quentin Perret
Acked-by: Will Deacon
---
 arch/arm64/include/asm/kvm_host.h | 2 +-
 arch/arm64/include/asm/kvm_mmu.h  | 7 ++++++-
 arch/arm64/kvm/mmu.c              | 8 ++++----
 3 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9a2feb83eea0..9d59bebcc5ef 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -95,7 +95,7 @@ struct kvm_s2_mmu {
 	/* The last vcpu id that ran on each physical CPU */
 	int __percpu *last_vcpu_ran;
 
-	struct kvm *kvm;
+	struct kvm_arch *arch;
 };
 
 struct kvm_arch_memory_slot {
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 6c8466a042a9..662f0415344e 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -299,7 +299,7 @@ static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu *mmu)
  */
 static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu)
 {
-	write_sysreg(kern_hyp_va(mmu->kvm)->arch.vtcr, vtcr_el2);
+	write_sysreg(kern_hyp_va(mmu->arch)->vtcr, vtcr_el2);
 	write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);
 
 	/*
@@ -309,5 +309,10 @@ static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu)
 	 */
 	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
 }
+
+static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
+{
+	return container_of(mmu->arch, struct kvm, arch);
+}
 #endif /* __ASSEMBLY__ */
 #endif /* __ARM64_KVM_MMU_H__ */
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 7e6263103943..6f9bf71722bd 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -169,7 +169,7 @@ static void *kvm_host_va(phys_addr_t phys)
 static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size,
 				 bool may_block)
 {
-	struct kvm *kvm = mmu->kvm;
+	struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
 	phys_addr_t end = start + size;
 
 	assert_spin_locked(&kvm->mmu_lock);
@@ -474,7 +474,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu)
 	for_each_possible_cpu(cpu)
 		*per_cpu_ptr(mmu->last_vcpu_ran, cpu) = -1;
 
-	mmu->kvm = kvm;
+	mmu->arch = &kvm->arch;
 	mmu->pgt = pgt;
 	mmu->pgd_phys = __pa(pgt->pgd);
 	mmu->vmid.vmid_gen = 0;
@@ -556,7 +556,7 @@ void stage2_unmap_vm(struct kvm *kvm)
 
 void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
 {
-	struct kvm *kvm = mmu->kvm;
+	struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
 	struct kvm_pgtable *pgt = NULL;
 
 	spin_lock(&kvm->mmu_lock);
@@ -625,7 +625,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
  */
 static void stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
 {
-	struct kvm *kvm = mmu->kvm;
+	struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
 
 	stage2_apply_range_resched(kvm, addr, end, kvm_pgtable_stage2_wrprotect);
 }
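
As a side note for readers less familiar with the idiom: the new
kvm_s2_mmu_to_kvm() helper works because struct kvm_arch is embedded in
struct kvm, so container_of() can walk back from the embedded member to the
enclosing object. The standalone userspace sketch below (not part of the
patch; the structures are simplified stand-ins and container_of is a local
re-definition) illustrates the pattern:

/*
 * Standalone sketch of the container_of idiom used by kvm_s2_mmu_to_kvm().
 * The struct definitions are simplified stand-ins, not the real KVM types.
 */
#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct kvm_arch {
	unsigned long vtcr;		/* stand-in field */
};

struct kvm_s2_mmu {
	struct kvm_arch *arch;		/* what the patch stores */
};

struct kvm {
	struct kvm_arch arch;		/* kvm_arch is embedded in struct kvm */
};

/* Mirrors the new helper: recover the enclosing struct kvm. */
static struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
{
	return container_of(mmu->arch, struct kvm, arch);
}

int main(void)
{
	struct kvm vm = { .arch = { .vtcr = 0x80000000UL } };
	struct kvm_s2_mmu mmu = { .arch = &vm.arch };

	/* Guest path: the arch pointer leads back to the enclosing kvm. */
	printf("recovered kvm: %p (expected %p)\n",
	       (void *)kvm_s2_mmu_to_kvm(&mmu), (void *)&vm);

	/*
	 * Host path (the point of the patch): the host stage 2 can own a bare
	 * struct kvm_arch with no enclosing struct kvm, so code paths that
	 * need the kvm pointer must not be applied to it.
	 */
	return 0;
}

The design point is that only the paths which genuinely need struct kvm (for
example, to take kvm->mmu_lock) pay the container_of conversion, while code
shared with the host stage 2 can work purely in terms of kvm_arch.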