From patchwork Fri Jan 8 12:14:59 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 359224
Date: Fri, 8 Jan 2021 12:14:59 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-2-qperret@google.com>
References: <20210108121524.656872-1-qperret@google.com>
Subject: [RFC PATCH v2 01/26] arm64: lib: Annotate {clear, copy}_page() as position-independent
From: Quentin Perret
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse,
 Julien Thierry, Suzuki K Poulose, Rob Herring, Frank Rowand
Cc: devicetree@vger.kernel.org, android-kvm@google.com,
 linux-kernel@vger.kernel.org, kernel-team@android.com,
 kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
 Fuad Tabba, Mark Rutland, David Brazdil

From: Will Deacon

clear_page() and copy_page() are suitable for use outside of the kernel
address space, so annotate them as position-independent code.

Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/lib/clear_page.S | 4 ++--
 arch/arm64/lib/copy_page.S  | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/lib/clear_page.S b/arch/arm64/lib/clear_page.S
index 073acbf02a7c..b84b179edba3 100644
--- a/arch/arm64/lib/clear_page.S
+++ b/arch/arm64/lib/clear_page.S
@@ -14,7 +14,7 @@
  * Parameters:
  *	x0 - dest
  */
-SYM_FUNC_START(clear_page)
+SYM_FUNC_START_PI(clear_page)
 	mrs	x1, dczid_el0
 	and	w1, w1, #0xf
 	mov	x2, #4
@@ -25,5 +25,5 @@ SYM_FUNC_START(clear_page)
 	tst	x0, #(PAGE_SIZE - 1)
 	b.ne	1b
 	ret
-SYM_FUNC_END(clear_page)
+SYM_FUNC_END_PI(clear_page)
 EXPORT_SYMBOL(clear_page)
diff --git a/arch/arm64/lib/copy_page.S b/arch/arm64/lib/copy_page.S
index e7a793961408..29144f4cd449 100644
--- a/arch/arm64/lib/copy_page.S
+++ b/arch/arm64/lib/copy_page.S
@@ -17,7 +17,7 @@
  *	x0 - dest
  *	x1 - src
  */
-SYM_FUNC_START(copy_page)
+SYM_FUNC_START_PI(copy_page)
 alternative_if ARM64_HAS_NO_HW_PREFETCH
 	// Prefetch three cache lines ahead.
 	prfm	pldl1strm, [x1, #128]
@@ -75,5 +75,5 @@ alternative_else_nop_endif
 	stnp	x16, x17, [x0, #112 - 256]

 	ret
-SYM_FUNC_END(copy_page)
+SYM_FUNC_END_PI(copy_page)
 EXPORT_SYMBOL(copy_page)
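For reference, the _PI variants of these annotations work by emitting an
additional "__pi_"-prefixed alias for the same code. A rough sketch of
the expansion (an approximation for illustration, not a verbatim copy of
the kernel's linkage headers):

    /* Approximate expansion of the _PI annotation macros. */
    #define SYM_FUNC_START_PI(name)			\
    	SYM_FUNC_START_ALIAS(__pi_##name)	\
    	SYM_FUNC_START(name)

    #define SYM_FUNC_END_PI(name)			\
    	SYM_FUNC_END(name)			\
    	SYM_FUNC_END_ALIAS(__pi_##name)

Callers that must reach the position-independent entry point can then
link against __pi_clear_page/__pi_copy_page explicitly, which is what
the next patch relies on.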
From patchwork Fri Jan 8 12:15:00 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 359925
Date: Fri, 8 Jan 2021 12:15:00 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-3-qperret@google.com>
Subject: [RFC PATCH v2 02/26] KVM: arm64: Link position-independent string routines into .hyp.text
From: Quentin Perret

From: Will Deacon

Pull clear_page(), copy_page(), memcpy() and memset() into the nVHE hyp
code and ensure that we always execute the '__pi_' entry point on the
off-chance that it changes in future.
[ qperret: Commit title nits ]

Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/hyp_image.h |  3 +++
 arch/arm64/kernel/image-vars.h     | 11 +++++++++++
 arch/arm64/kvm/hyp/nvhe/Makefile   |  4 ++++
 3 files changed, 18 insertions(+)

diff --git a/arch/arm64/include/asm/hyp_image.h b/arch/arm64/include/asm/hyp_image.h
index daa1a1da539e..e06842756051 100644
--- a/arch/arm64/include/asm/hyp_image.h
+++ b/arch/arm64/include/asm/hyp_image.h
@@ -31,6 +31,9 @@
  */
 #define KVM_NVHE_ALIAS(sym)	kvm_nvhe_sym(sym) = sym;

+/* Defines a linker script alias for KVM nVHE hyp symbols */
+#define KVM_NVHE_ALIAS_HYP(first, sec)	kvm_nvhe_sym(first) = kvm_nvhe_sym(sec);
+
 #endif /* LINKER_SCRIPT */

 #endif /* __ARM64_HYP_IMAGE_H__ */
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 39289d75118d..43f3a1d6e92d 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -102,6 +102,17 @@ KVM_NVHE_ALIAS(__stop___kvm_ex_table);
 /* Array containing bases of nVHE per-CPU memory regions. */
 KVM_NVHE_ALIAS(kvm_arm_hyp_percpu_base);

+/* Position-independent library routines */
+KVM_NVHE_ALIAS_HYP(clear_page, __pi_clear_page);
+KVM_NVHE_ALIAS_HYP(copy_page, __pi_copy_page);
+KVM_NVHE_ALIAS_HYP(memcpy, __pi_memcpy);
+KVM_NVHE_ALIAS_HYP(memset, __pi_memset);
+
+#ifdef CONFIG_KASAN
+KVM_NVHE_ALIAS_HYP(__memcpy, __pi_memcpy);
+KVM_NVHE_ALIAS_HYP(__memset, __pi_memset);
+#endif
+
 #endif /* CONFIG_KVM */

 #endif /* __ARM64_KERNEL_IMAGE_VARS_H */
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 1f1e351c5fe2..590fdefb42dd 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -6,10 +6,14 @@
 asflags-y := -D__KVM_NVHE_HYPERVISOR__
 ccflags-y := -D__KVM_NVHE_HYPERVISOR__

+lib-objs := clear_page.o copy_page.o memcpy.o memset.o
+lib-objs := $(addprefix ../../../lib/, $(lib-objs))
+
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
	 hyp-main.o hyp-smp.o psci-relay.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
	 ../fpsimd.o ../hyp-entry.o ../exception.o
+obj-y += $(lib-objs)

 ##
 ## Build rules for compiling nVHE hyp code
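To see why this binds an unprefixed call in hyp code to the PI entry
point, consider the illustrative expansion, assuming kvm_nvhe_sym()
prefixes symbols with "__kvm_nvhe_" as elsewhere in hyp_image.h:

    /*
     * KVM_NVHE_ALIAS_HYP(memcpy, __pi_memcpy) expands to the
     * linker-script assignment:
     */
    __kvm_nvhe_memcpy = __kvm_nvhe___pi_memcpy;

A bare memcpy() call compiled under __KVM_NVHE_HYPERVISOR__ (and hence
namespaced to __kvm_nvhe_memcpy) is therefore resolved to the hyp copy
of __pi_memcpy at link time.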
From patchwork Fri Jan 8 12:15:01 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 359924
Date: Fri, 8 Jan 2021 12:15:01 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-4-qperret@google.com>
Subject: [RFC PATCH v2 03/26] arm64: kvm: Add standalone ticket spinlock implementation for use at hyp
From: Quentin Perret

From: Will Deacon

We will soon need to synchronise multiple CPUs in the hyp text at EL2.
The qspinlock-based locking used by the host is overkill for this
purpose and relies on the kernel's "percpu" implementation for the MCS
nodes.

Implement a simple ticket locking scheme based heavily on the code
removed by commit c11090474d70 ("arm64: locking: Replace ticket lock
implementation with qspinlock").
Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/spinlock.h | 92 ++++++++++++++++++++++
 1 file changed, 92 insertions(+)
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/spinlock.h

diff --git a/arch/arm64/kvm/hyp/include/nvhe/spinlock.h b/arch/arm64/kvm/hyp/include/nvhe/spinlock.h
new file mode 100644
index 000000000000..7584c397bbac
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/spinlock.h
@@ -0,0 +1,92 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * A stand-alone ticket spinlock implementation for use by the non-VHE
+ * KVM hypervisor code running at EL2.
+ *
+ * Copyright (C) 2020 Google LLC
+ * Author: Will Deacon
+ *
+ * Heavily based on the implementation removed by c11090474d70 which was:
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#ifndef __ARM64_KVM_NVHE_SPINLOCK_H__
+#define __ARM64_KVM_NVHE_SPINLOCK_H__
+
+#include <asm/alternative.h>
+#include <asm/lse.h>
+
+typedef union hyp_spinlock {
+	u32	__val;
+	struct {
+#ifdef __AARCH64EB__
+		u16 next, owner;
+#else
+		u16 owner, next;
+#endif
+	};
+} hyp_spinlock_t;
+
+#define hyp_spin_lock_init(l)						\
+do {									\
+	*(l) = (hyp_spinlock_t){ .__val = 0 };				\
+} while (0)
+
+static inline void hyp_spin_lock(hyp_spinlock_t *lock)
+{
+	u32 tmp;
+	hyp_spinlock_t lockval, newval;
+
+	asm volatile(
+	/* Atomically increment the next ticket. */
+	ARM64_LSE_ATOMIC_INSN(
+	/* LL/SC */
+"	prfm	pstl1strm, %3\n"
+"1:	ldaxr	%w0, %3\n"
+"	add	%w1, %w0, #(1 << 16)\n"
+"	stxr	%w2, %w1, %3\n"
+"	cbnz	%w2, 1b\n",
+	/* LSE atomics */
+"	mov	%w2, #(1 << 16)\n"
+"	ldadda	%w2, %w0, %3\n"
+	__nops(3))
+
+	/* Did we get the lock? */
+"	eor	%w1, %w0, %w0, ror #16\n"
+"	cbz	%w1, 3f\n"
+	/*
+	 * No: spin on the owner. Send a local event to avoid missing an
+	 * unlock before the exclusive load.
+	 */
+"	sevl\n"
+"2:	wfe\n"
+"	ldaxrh	%w2, %4\n"
+"	eor	%w1, %w2, %w0, lsr #16\n"
+"	cbnz	%w1, 2b\n"
+	/* We got the lock. Critical section starts here. */
+"3:"
+	: "=&r" (lockval), "=&r" (newval), "=&r" (tmp), "+Q" (*lock)
+	: "Q" (lock->owner)
+	: "memory");
+}
+
+static inline void hyp_spin_unlock(hyp_spinlock_t *lock)
+{
+	u64 tmp;
+
+	asm volatile(
+	ARM64_LSE_ATOMIC_INSN(
+	/* LL/SC */
+	"	ldrh	%w1, %0\n"
+	"	add	%w1, %w1, #1\n"
+	"	stlrh	%w1, %0",
+	/* LSE atomics */
+	"	mov	%w1, #1\n"
+	"	staddlh	%w1, %0\n"
+	__nops(1))
+	: "=Q" (lock->owner), "=&r" (tmp)
+	:
+	: "memory");
+}
+
+#endif /* __ARM64_KVM_NVHE_SPINLOCK_H__ */
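For readers unfamiliar with ticket locks, a compact C sketch of the
algorithm the assembly above implements (illustrative only, using C11
atomics rather than the kernel's primitives; the real lock packs both
halves into a single 32-bit word so that taking a ticket is one atomic
add on the whole lock):

    #include <stdatomic.h>
    #include <stdint.h>

    struct ticket_lock {
    	_Atomic uint16_t next;	/* next ticket to hand out */
    	_Atomic uint16_t owner;	/* ticket currently holding the lock */
    };

    static void ticket_lock(struct ticket_lock *l)
    {
    	/* Take a ticket: atomically increment "next". */
    	uint16_t ticket = atomic_fetch_add_explicit(&l->next, 1,
    						    memory_order_relaxed);

    	/* Spin until "owner" reaches our ticket number. */
    	while (atomic_load_explicit(&l->owner,
    				    memory_order_acquire) != ticket)
    		; /* hyp_spin_lock() waits with sevl/wfe instead */
    }

    static void ticket_unlock(struct ticket_lock *l)
    {
    	/* Pass the lock on by incrementing "owner". */
    	atomic_fetch_add_explicit(&l->owner, 1, memory_order_release);
    }

Because tickets are granted in FIFO order the lock is fair, and the only
state required is the lock word itself: no per-CPU MCS nodes as with
qspinlock, which is what makes it usable from the hyp text.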
From patchwork Fri Jan 8 12:15:02 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 359223
Date: Fri, 8 Jan 2021 12:15:02 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-5-qperret@google.com>
Subject: [RFC PATCH v2 04/26] KVM: arm64: Initialize kvm_nvhe_init_params early
From: Quentin Perret

Move the initialization of kvm_nvhe_init_params into a dedicated
function that is run early, and only once during KVM init, rather than
every time the KVM vectors are set and reset.

This also opens the opportunity for the hypervisor to change the init
structs during boot, hence simplifying the replacement of host-provided
page-tables and stacks with the ones the hypervisor will create for
itself.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/arm.c | 28 ++++++++++++++++++++--------
 1 file changed, 20 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 04c44853b103..3ac0f3425833 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1383,21 +1383,17 @@ static int kvm_init_vector_slots(void)
 	return 0;
 }

-static void cpu_init_hyp_mode(void)
+static void cpu_prepare_hyp_mode(int cpu)
 {
-	struct kvm_nvhe_init_params *params = this_cpu_ptr_nvhe_sym(kvm_init_params);
-	struct arm_smccc_res res;
+	struct kvm_nvhe_init_params *params = per_cpu_ptr_nvhe_sym(kvm_init_params, cpu);
 	unsigned long tcr;

-	/* Switch from the HYP stub to our own HYP init vector */
-	__hyp_set_vectors(kvm_get_idmap_vector());
-
 	/*
 	 * Calculate the raw per-cpu offset without a translation from the
 	 * kernel's mapping to the linear mapping, and store it in tpidr_el2
 	 * so that we can use adr_l to access per-cpu variables in EL2.
 	 */
-	params->tpidr_el2 = (unsigned long)this_cpu_ptr_nvhe_sym(__per_cpu_start) -
+	params->tpidr_el2 = (unsigned long)per_cpu_ptr_nvhe_sym(__per_cpu_start, cpu) -
			    (unsigned long)kvm_ksym_ref(CHOOSE_NVHE_SYM(__per_cpu_start));

 	params->mair_el2 = read_sysreg(mair_el1);
@@ -1421,7 +1417,7 @@ static void cpu_init_hyp_mode(void)
 	tcr |= (idmap_t0sz & GENMASK(TCR_TxSZ_WIDTH - 1, 0)) << TCR_T0SZ_OFFSET;
 	params->tcr_el2 = tcr;

-	params->stack_hyp_va = kern_hyp_va(__this_cpu_read(kvm_arm_hyp_stack_page) + PAGE_SIZE);
+	params->stack_hyp_va = kern_hyp_va(per_cpu(kvm_arm_hyp_stack_page, cpu) + PAGE_SIZE);
 	params->pgd_pa = kvm_mmu_get_httbr();

 	/*
@@ -1429,6 +1425,15 @@ static void cpu_init_hyp_mode(void)
 	 * be read while the MMU is off.
 	 */
 	kvm_flush_dcache_to_poc(params, sizeof(*params));
+}
+
+static void cpu_init_hyp_mode(void)
+{
+	struct kvm_nvhe_init_params *params;
+	struct arm_smccc_res res;
+
+	/* Switch from the HYP stub to our own HYP init vector */
+	__hyp_set_vectors(kvm_get_idmap_vector());

 	/*
 	 * Call initialization code, and switch to the full blown HYP code.
@@ -1437,6 +1442,7 @@ static void cpu_init_hyp_mode(void)
 	 * cpus_have_const_cap() wrapper.
 	 */
 	BUG_ON(!system_capabilities_finalized());
+	params = this_cpu_ptr_nvhe_sym(kvm_init_params);
 	arm_smccc_1_1_hvc(KVM_HOST_SMCCC_FUNC(__kvm_hyp_init), virt_to_phys(params), &res);
 	WARN_ON(res.a0 != SMCCC_RET_SUCCESS);

@@ -1807,6 +1813,12 @@ static int init_hyp_mode(void)
 		goto out_err;
 	}

+	/*
+	 * Prepare the CPU initialization parameters
+	 */
+	for_each_possible_cpu(cpu)
+		cpu_prepare_hyp_mode(cpu);
+
 	return 0;

 out_err:
From patchwork Fri Jan 8 12:15:03 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 359222
Date: Fri, 8 Jan 2021 12:15:03 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-6-qperret@google.com>
Subject: [RFC PATCH v2 05/26] KVM: arm64: Avoid free_page() in page-table allocator
From: Quentin Perret

Currently, the KVM page-table allocator uses a mix of put_page() and
free_page() calls depending on the context even though page-allocation
is always achieved using variants of __get_free_page().

Make the code consistent by using put_page() throughout, and reduce the
memory management API surface used by the page-table code. This will
ease factoring out page-allocation from pgtable.c, which is a
pre-requisite to creating page-tables at EL2.

Signed-off-by: Quentin Perret
Acked-by: Will Deacon
---
 arch/arm64/kvm/hyp/pgtable.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 0271b4a3b9fe..d7122c5eac24 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -410,7 +410,7 @@ int kvm_pgtable_hyp_init(struct kvm_pgtable *pgt, u32 va_bits)
 static int hyp_free_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
			   enum kvm_pgtable_walk_flags flag, void * const arg)
 {
-	free_page((unsigned long)kvm_pte_follow(*ptep));
+	put_page(virt_to_page(kvm_pte_follow(*ptep)));
 	return 0;
 }

@@ -422,7 +422,7 @@ void kvm_pgtable_hyp_destroy(struct kvm_pgtable *pgt)
 	};

 	WARN_ON(kvm_pgtable_walk(pgt, 0, BIT(pgt->ia_bits), &walker));
-	free_page((unsigned long)pgt->pgd);
+	put_page(virt_to_page(pgt->pgd));
 	pgt->pgd = NULL;
 }

@@ -551,7 +551,7 @@ static int stage2_map_walk_table_post(u64 addr, u64 end, u32 level,
 	if (!data->anchor)
 		return 0;

-	free_page((unsigned long)kvm_pte_follow(*ptep));
+	put_page(virt_to_page(kvm_pte_follow(*ptep)));
 	put_page(virt_to_page(ptep));

 	if (data->anchor == ptep) {
@@ -674,7 +674,7 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	}

 	if (childp)
-		free_page((unsigned long)childp);
+		put_page(virt_to_page(childp));

 	return 0;
 }
@@ -871,7 +871,7 @@ static int stage2_free_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	put_page(virt_to_page(ptep));

 	if (kvm_pte_table(pte, level))
-		free_page((unsigned long)kvm_pte_follow(pte));
+		put_page(virt_to_page(kvm_pte_follow(pte)));

 	return 0;
 }
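The substitution relies on a standard property of the page allocator:
pages returned by __get_free_page() and friends start out with a
refcount of one, so dropping the last reference frees the page. An
illustrative equivalence (not part of the patch):

    /*
     * For an order-0 page obtained via __get_free_page() or
     * get_zeroed_page() whose refcount is still 1, these two calls
     * behave the same:
     */
    free_page((unsigned long)addr);
    put_page(virt_to_page(addr));

The put_page() form additionally does the right thing when the
page-table code has taken extra references on a page, which is exactly
the property the following patches build on.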
From patchwork Fri Jan 8 12:15:04 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 359922
Date: Fri, 8 Jan 2021 12:15:04 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-7-qperret@google.com>
Subject: [RFC PATCH v2 06/26] KVM: arm64: Factor memory allocation out of pgtable.c
From: Quentin Perret

In preparation for enabling the creation of
page-tables at EL2, factor all memory allocation out of the page-table
code, hence making it re-usable with any compatible memory allocator.

No functional changes intended.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_pgtable.h | 32 +++++++++-
 arch/arm64/kvm/hyp/pgtable.c         | 90 +++++++++++++++-----------
 arch/arm64/kvm/mmu.c                 | 70 +++++++++++++++++++-
 3 files changed, 154 insertions(+), 38 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 52ab38db04c7..45acc9dc6c45 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -13,17 +13,41 @@

 typedef u64 kvm_pte_t;

+/**
+ * struct kvm_pgtable_mm_ops - Memory management callbacks.
+ * @zalloc_page:	Allocate a zeroed memory page.
+ * @zalloc_pages_exact:	Allocate an exact number of zeroed memory pages.
+ * @free_pages_exact:	Free an exact number of memory pages.
+ * @get_page:		Increment the refcount on a page.
+ * @put_page:		Decrement the refcount on a page.
+ * @page_count:		Returns the refcount of a page.
+ * @phys_to_virt:	Convert a physical address into a virtual address.
+ * @virt_to_phys:	Convert a virtual address into a physical address.
+ */
+struct kvm_pgtable_mm_ops {
+	void*		(*zalloc_page)(void *arg);
+	void*		(*zalloc_pages_exact)(size_t size);
+	void		(*free_pages_exact)(void *addr, size_t size);
+	void		(*get_page)(void *addr);
+	void		(*put_page)(void *addr);
+	int		(*page_count)(void *addr);
+	void*		(*phys_to_virt)(phys_addr_t phys);
+	phys_addr_t	(*virt_to_phys)(void *addr);
+};
+
 /**
  * struct kvm_pgtable - KVM page-table.
  * @ia_bits:		Maximum input address size, in bits.
  * @start_level:	Level at which the page-table walk starts.
  * @pgd:		Pointer to the first top-level entry of the page-table.
+ * @mm_ops:		Memory management callbacks.
  * @mmu:		Stage-2 KVM MMU struct. Unused for stage-1 page-tables.
  */
 struct kvm_pgtable {
 	u32					ia_bits;
 	u32					start_level;
 	kvm_pte_t				*pgd;
+	struct kvm_pgtable_mm_ops		*mm_ops;

 	/* Stage-2 only */
 	struct kvm_s2_mmu			*mmu;
@@ -86,10 +110,12 @@ struct kvm_pgtable_walker {
 * kvm_pgtable_hyp_init() - Initialise a hypervisor stage-1 page-table.
 * @pgt:	Uninitialised page-table structure to initialise.
 * @va_bits:	Maximum virtual address bits.
+ * @mm_ops:	Memory management callbacks.
 *
 * Return: 0 on success, negative error code on failure.
 */
-int kvm_pgtable_hyp_init(struct kvm_pgtable *pgt, u32 va_bits);
+int kvm_pgtable_hyp_init(struct kvm_pgtable *pgt, u32 va_bits,
+			 struct kvm_pgtable_mm_ops *mm_ops);

 /**
  * kvm_pgtable_hyp_destroy() - Destroy an unused hypervisor stage-1 page-table.
@@ -126,10 +152,12 @@ int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 addr, u64 size, u64 phys,
 * kvm_pgtable_stage2_init() - Initialise a guest stage-2 page-table.
 * @pgt:	Uninitialised page-table structure to initialise.
 * @kvm:	KVM structure representing the guest virtual machine.
+ * @mm_ops:	Memory management callbacks.
 *
 * Return: 0 on success, negative error code on failure.
 */
-int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm *kvm);
+int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm *kvm,
+			    struct kvm_pgtable_mm_ops *mm_ops);

 /**
  * kvm_pgtable_stage2_destroy() - Destroy an unused guest stage-2 page-table.
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index d7122c5eac24..61a8a34ddfdb 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -148,9 +148,9 @@ static kvm_pte_t kvm_phys_to_pte(u64 pa)
 	return pte;
 }

-static kvm_pte_t *kvm_pte_follow(kvm_pte_t pte)
+static kvm_pte_t *kvm_pte_follow(kvm_pte_t pte, struct kvm_pgtable_mm_ops *mm_ops)
 {
-	return __va(kvm_pte_to_phys(pte));
+	return mm_ops->phys_to_virt(kvm_pte_to_phys(pte));
 }

 static void kvm_set_invalid_pte(kvm_pte_t *ptep)
@@ -159,9 +159,10 @@ static void kvm_set_invalid_pte(kvm_pte_t *ptep)
 	WRITE_ONCE(*ptep, pte & ~KVM_PTE_VALID);
 }

-static void kvm_set_table_pte(kvm_pte_t *ptep, kvm_pte_t *childp)
+static void kvm_set_table_pte(kvm_pte_t *ptep, kvm_pte_t *childp,
+			      struct kvm_pgtable_mm_ops *mm_ops)
 {
-	kvm_pte_t old = *ptep, pte = kvm_phys_to_pte(__pa(childp));
+	kvm_pte_t old = *ptep, pte = kvm_phys_to_pte(mm_ops->virt_to_phys(childp));

 	pte |= FIELD_PREP(KVM_PTE_TYPE, KVM_PTE_TYPE_TABLE);
 	pte |= KVM_PTE_VALID;
@@ -229,7 +230,7 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
 		goto out;
 	}

-	childp = kvm_pte_follow(pte);
+	childp = kvm_pte_follow(pte, data->pgt->mm_ops);
 	ret = __kvm_pgtable_walk(data, childp, level + 1);
 	if (ret)
 		goto out;
@@ -304,8 +305,9 @@ int kvm_pgtable_walk(struct kvm_pgtable *pgt, u64 addr, u64 size,
 }

 struct hyp_map_data {
-	u64		phys;
-	kvm_pte_t	attr;
+	u64				phys;
+	kvm_pte_t			attr;
+	struct kvm_pgtable_mm_ops	*mm_ops;
 };

 static int hyp_map_set_prot_attr(enum kvm_pgtable_prot prot,
@@ -355,6 +357,8 @@ static int hyp_map_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
			  enum kvm_pgtable_walk_flags flag, void * const arg)
 {
 	kvm_pte_t *childp;
+	struct hyp_map_data *data = arg;
+	struct kvm_pgtable_mm_ops *mm_ops = data->mm_ops;

 	if (hyp_map_walker_try_leaf(addr, end, level, ptep, arg))
 		return 0;
@@ -362,11 +366,11 @@ static int hyp_map_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	if (WARN_ON(level == KVM_PGTABLE_MAX_LEVELS - 1))
 		return -EINVAL;

-	childp = (kvm_pte_t *)get_zeroed_page(GFP_KERNEL);
+	childp = (kvm_pte_t *)mm_ops->zalloc_page(NULL);
 	if (!childp)
 		return -ENOMEM;

-	kvm_set_table_pte(ptep, childp);
+	kvm_set_table_pte(ptep, childp, mm_ops);
 	return 0;
 }

@@ -376,6 +380,7 @@ int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 addr, u64 size, u64 phys,
 	int ret;
 	struct hyp_map_data map_data = {
 		.phys	= ALIGN_DOWN(phys, PAGE_SIZE),
+		.mm_ops	= pgt->mm_ops,
 	};
 	struct kvm_pgtable_walker walker = {
 		.cb	= hyp_map_walker,
@@ -393,16 +398,18 @@ int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 addr, u64 size, u64 phys,
 	return ret;
 }

-int kvm_pgtable_hyp_init(struct kvm_pgtable *pgt, u32 va_bits)
+int kvm_pgtable_hyp_init(struct kvm_pgtable *pgt, u32 va_bits,
+			 struct kvm_pgtable_mm_ops *mm_ops)
 {
 	u64 levels = ARM64_HW_PGTABLE_LEVELS(va_bits);

-	pgt->pgd = (kvm_pte_t *)get_zeroed_page(GFP_KERNEL);
+	pgt->pgd = (kvm_pte_t *)mm_ops->zalloc_page(NULL);
 	if (!pgt->pgd)
 		return -ENOMEM;

 	pgt->ia_bits		= va_bits;
 	pgt->start_level	= KVM_PGTABLE_MAX_LEVELS - levels;
+	pgt->mm_ops		= mm_ops;
 	pgt->mmu		= NULL;
 	return 0;
 }
@@ -410,7 +417,9 @@ int kvm_pgtable_hyp_init(struct kvm_pgtable *pgt, u32 va_bits)
 static int hyp_free_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
			   enum kvm_pgtable_walk_flags flag, void * const arg)
 {
-	put_page(virt_to_page(kvm_pte_follow(*ptep)));
+	struct kvm_pgtable_mm_ops *mm_ops = arg;
+
+	mm_ops->put_page((void *)kvm_pte_follow(*ptep, mm_ops));
 	return 0;
 }

@@ -419,10 +428,11 @@ void
kvm_pgtable_hyp_destroy(struct kvm_pgtable *pgt)
 	struct kvm_pgtable_walker walker = {
 		.cb	= hyp_free_walker,
 		.flags	= KVM_PGTABLE_WALK_TABLE_POST,
+		.arg	= pgt->mm_ops,
 	};

 	WARN_ON(kvm_pgtable_walk(pgt, 0, BIT(pgt->ia_bits), &walker));
-	put_page(virt_to_page(pgt->pgd));
+	pgt->mm_ops->put_page(pgt->pgd);
 	pgt->pgd = NULL;
 }

@@ -434,6 +444,8 @@ struct stage2_map_data {

 	struct kvm_s2_mmu		*mmu;
 	struct kvm_mmu_memory_cache	*memcache;
+
+	struct kvm_pgtable_mm_ops	*mm_ops;
 };

 static int stage2_map_set_prot_attr(enum kvm_pgtable_prot prot,
@@ -501,12 +513,12 @@ static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 level,
 static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
				struct stage2_map_data *data)
 {
+	struct kvm_pgtable_mm_ops *mm_ops = data->mm_ops;
 	kvm_pte_t *childp, pte = *ptep;
-	struct page *page = virt_to_page(ptep);

 	if (data->anchor) {
 		if (kvm_pte_valid(pte))
-			put_page(page);
+			mm_ops->put_page(ptep);

 		return 0;
 	}
@@ -520,7 +532,7 @@ static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	if (!data->memcache)
 		return -ENOMEM;

-	childp = kvm_mmu_memory_cache_alloc(data->memcache);
+	childp = mm_ops->zalloc_page(data->memcache);
 	if (!childp)
 		return -ENOMEM;

@@ -532,13 +544,13 @@ static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	if (kvm_pte_valid(pte)) {
 		kvm_set_invalid_pte(ptep);
 		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, level);
-		put_page(page);
+		mm_ops->put_page(ptep);
 	}

-	kvm_set_table_pte(ptep, childp);
+	kvm_set_table_pte(ptep, childp, mm_ops);

 out_get_page:
-	get_page(page);
+	mm_ops->get_page(ptep);
 	return 0;
 }

@@ -546,13 +558,14 @@ static int stage2_map_walk_table_post(u64 addr, u64 end, u32 level,
				      kvm_pte_t *ptep,
				      struct stage2_map_data *data)
 {
+	struct kvm_pgtable_mm_ops *mm_ops = data->mm_ops;
 	int ret = 0;

 	if (!data->anchor)
 		return 0;

-	put_page(virt_to_page(kvm_pte_follow(*ptep)));
-	put_page(virt_to_page(ptep));
+	mm_ops->put_page(kvm_pte_follow(*ptep, mm_ops));
+	mm_ops->put_page(ptep);

 	if (data->anchor == ptep) {
 		data->anchor = NULL;
@@ -607,6 +620,7 @@ int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 		.phys		= ALIGN_DOWN(phys, PAGE_SIZE),
 		.mmu		= pgt->mmu,
 		.memcache	= mc,
+		.mm_ops		= pgt->mm_ops,
 	};
 	struct kvm_pgtable_walker walker = {
 		.cb		= stage2_map_walker,
@@ -643,7 +657,9 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
			       enum kvm_pgtable_walk_flags flag,
			       void * const arg)
 {
-	struct kvm_s2_mmu *mmu = arg;
+	struct kvm_pgtable *pgt = arg;
+	struct kvm_s2_mmu *mmu = pgt->mmu;
+	struct kvm_pgtable_mm_ops *mm_ops = pgt->mm_ops;
 	kvm_pte_t pte = *ptep, *childp = NULL;
 	bool need_flush = false;

@@ -651,9 +667,9 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 		return 0;

 	if (kvm_pte_table(pte, level)) {
-		childp = kvm_pte_follow(pte);
+		childp = kvm_pte_follow(pte, mm_ops);

-		if (page_count(virt_to_page(childp)) != 1)
+		if (mm_ops->page_count(childp) != 1)
 			return 0;
 	} else if (stage2_pte_cacheable(pte)) {
 		need_flush = true;
@@ -666,15 +682,15 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	 */
 	kvm_set_invalid_pte(ptep);
 	kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, addr, level);
-	put_page(virt_to_page(ptep));
+	mm_ops->put_page(ptep);

 	if (need_flush) {
-		stage2_flush_dcache(kvm_pte_follow(pte),
+		stage2_flush_dcache(kvm_pte_follow(pte, mm_ops),
				    kvm_granule_size(level));
 	}

 	if (childp)
-		put_page(virt_to_page(childp));
+		mm_ops->put_page(childp);

 	return 0;
 }
@@ -683,7 +699,7 @@ int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 {
 	struct kvm_pgtable_walker walker = {
 		.cb	= stage2_unmap_walker,
-		.arg	= pgt->mmu,
+		.arg	= pgt,
 		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
 	};

@@ -815,12 +831,13 @@ static int stage2_flush_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
			       enum kvm_pgtable_walk_flags flag,
			       void * const arg)
 {
+	struct kvm_pgtable_mm_ops *mm_ops = arg;
 	kvm_pte_t pte = *ptep;

 	if (!kvm_pte_valid(pte) || !stage2_pte_cacheable(pte))
 		return 0;

-	stage2_flush_dcache(kvm_pte_follow(pte), kvm_granule_size(level));
+	stage2_flush_dcache(kvm_pte_follow(pte, mm_ops), kvm_granule_size(level));
 	return 0;
 }

@@ -829,6 +846,7 @@ int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)
 	struct kvm_pgtable_walker walker = {
 		.cb	= stage2_flush_walker,
 		.flags	= KVM_PGTABLE_WALK_LEAF,
+		.arg	= pgt->mm_ops,
 	};

 	if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))
@@ -837,7 +855,8 @@ int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)
 	return kvm_pgtable_walk(pgt, addr, size, &walker);
 }

-int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm *kvm)
+int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm *kvm,
+			    struct kvm_pgtable_mm_ops *mm_ops)
 {
 	size_t pgd_sz;
 	u64 vtcr = kvm->arch.vtcr;
@@ -846,12 +865,13 @@ int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm *kvm)
 	u32 start_level = VTCR_EL2_TGRAN_SL0_BASE - sl0;

 	pgd_sz = kvm_pgd_pages(ia_bits, start_level) * PAGE_SIZE;
-	pgt->pgd = alloc_pages_exact(pgd_sz, GFP_KERNEL_ACCOUNT | __GFP_ZERO);
+	pgt->pgd = mm_ops->zalloc_pages_exact(pgd_sz);
 	if (!pgt->pgd)
 		return -ENOMEM;

 	pgt->ia_bits		= ia_bits;
 	pgt->start_level	= start_level;
+	pgt->mm_ops		= mm_ops;
 	pgt->mmu		= &kvm->arch.mmu;

 	/* Ensure zeroed PGD pages are visible to the hardware walker */
@@ -863,15 +883,16 @@ static int stage2_free_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
			      enum kvm_pgtable_walk_flags flag,
			      void * const arg)
 {
+	struct kvm_pgtable_mm_ops *mm_ops = arg;
 	kvm_pte_t pte = *ptep;

 	if (!kvm_pte_valid(pte))
 		return 0;

-	put_page(virt_to_page(ptep));
+	mm_ops->put_page(ptep);

 	if (kvm_pte_table(pte, level))
-		put_page(virt_to_page(kvm_pte_follow(pte)));
+		mm_ops->put_page(kvm_pte_follow(pte, mm_ops));

 	return 0;
 }

@@ -883,10 +904,11 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
 		.cb	= stage2_free_walker,
 		.flags	= KVM_PGTABLE_WALK_LEAF |
			  KVM_PGTABLE_WALK_TABLE_POST,
+		.arg	= pgt->mm_ops,
 	};

 	WARN_ON(kvm_pgtable_walk(pgt, 0, BIT(pgt->ia_bits), &walker));
 	pgd_sz = kvm_pgd_pages(pgt->ia_bits, pgt->start_level) * PAGE_SIZE;
-	free_pages_exact(pgt->pgd, pgd_sz);
+	pgt->mm_ops->free_pages_exact(pgt->pgd, pgd_sz);
 	pgt->pgd = NULL;
 }
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 1f41173e6149..278e163beda4 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -88,6 +88,48 @@ static bool kvm_is_device_pfn(unsigned long pfn)
 	return !pfn_valid(pfn);
 }

+static void *stage2_memcache_alloc_page(void *arg)
+{
+	struct kvm_mmu_memory_cache *mc = arg;
+	kvm_pte_t *ptep = NULL;
+
+	/* Allocated with GFP_KERNEL_ACCOUNT, so no need to zero */
+	if (mc && mc->nobjs)
+		ptep = mc->objects[--mc->nobjs];
+
+	return ptep;
+}
+
+static void *kvm_host_zalloc_pages_exact(size_t size)
+{
+	return alloc_pages_exact(size, GFP_KERNEL_ACCOUNT | __GFP_ZERO);
+}
+
+static void kvm_host_get_page(void *addr)
+{
+	get_page(virt_to_page(addr));
+}
+
+static void kvm_host_put_page(void *addr)
+{
+	put_page(virt_to_page(addr));
+}
+
+static int kvm_host_page_count(void *addr)
+{
+	return page_count(virt_to_page(addr));
+}
+
+static phys_addr_t kvm_host_pa(void *addr)
+{
+	return __pa(addr);
+}
+
+static void *kvm_host_va(phys_addr_t phys)
+{
+	return __va(phys);
+}
+
 /*
  * Unmapping vs dcache management:
  *
@@ -351,6 +393,17 @@ int create_hyp_exec_mappings(phys_addr_t phys_addr, size_t size,
 	return 0;
 }

+static struct kvm_pgtable_mm_ops kvm_s2_mm_ops = {
+	.zalloc_page		= stage2_memcache_alloc_page,
+	.zalloc_pages_exact	= kvm_host_zalloc_pages_exact,
+	.free_pages_exact	= free_pages_exact,
+	.get_page		= kvm_host_get_page,
+	.put_page		= kvm_host_put_page,
+	.page_count		= kvm_host_page_count,
+	.phys_to_virt		= kvm_host_va,
+	.virt_to_phys		= kvm_host_pa,
+};
+
 /**
  * kvm_init_stage2_mmu - Initialise a S2 MMU strucrure
  * @kvm:	The pointer to the KVM structure
@@ -374,7 +427,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu)
 	if (!pgt)
 		return -ENOMEM;

-	err = kvm_pgtable_stage2_init(pgt, kvm);
+	err = kvm_pgtable_stage2_init(pgt, kvm, &kvm_s2_mm_ops);
 	if (err)
 		goto out_free_pgtable;

@@ -1198,6 +1251,19 @@ static int kvm_map_idmap_text(void)
 	return err;
 }

+static void *kvm_hyp_zalloc_page(void *arg)
+{
+	return (void *)get_zeroed_page(GFP_KERNEL);
+}
+
+static struct kvm_pgtable_mm_ops kvm_hyp_mm_ops = {
+	.zalloc_page	= kvm_hyp_zalloc_page,
+	.get_page	= kvm_host_get_page,
+	.put_page	= kvm_host_put_page,
+	.phys_to_virt	= kvm_host_va,
+	.virt_to_phys	= kvm_host_pa,
+};
+
 int kvm_mmu_init(void)
 {
 	int err;
@@ -1241,7 +1307,7 @@ int kvm_mmu_init(void)
 		goto out;
 	}

-	err = kvm_pgtable_hyp_init(hyp_pgtable, hyp_va_bits);
+	err = kvm_pgtable_hyp_init(hyp_pgtable, hyp_va_bits, &kvm_hyp_mm_ops);
 	if (err)
 		goto out_free_pgtable;
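With the allocator abstracted behind kvm_pgtable_mm_ops, hooking up a
different backend is just a matter of providing another ops structure.
A minimal sketch of what an EL2-side user might look like (hypothetical
helper names: hyp_alloc_page(), hyp_get_page(), hyp_put_page(), hyp_va()
and hyp_pa() are assumptions for illustration, not functions introduced
by this patch):

    static void *hyp_zalloc_page(void *arg)
    {
    	return hyp_alloc_page();	/* assumed EL2 page allocator */
    }

    static struct kvm_pgtable_mm_ops hyp_early_mm_ops = {
    	.zalloc_page	= hyp_zalloc_page,
    	.get_page	= hyp_get_page,	/* assumed refcount helpers */
    	.put_page	= hyp_put_page,
    	.phys_to_virt	= hyp_va,	/* assumed EL2 phys<->virt */
    	.virt_to_phys	= hyp_pa,
    };

    /* ...then: kvm_pgtable_hyp_init(&pgt, hyp_va_bits, &hyp_early_mm_ops); */

Because pgtable.c only ever goes through pgt->mm_ops, the same walker
code can run at EL1 with the host allocator or at EL2 with whatever the
hypervisor provides.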
From patchwork Fri Jan 8 12:15:05 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 359923
Date: Fri, 8 Jan 2021 12:15:05 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-8-qperret@google.com>
Subject: [RFC PATCH v2 07/26] KVM: arm64: Introduce a BSS section for use at Hyp
From: Quentin Perret

Currently, the hyp code cannot make full use of a bss, as the kernel
section is mapped read-only.

While this mapping could simply be changed to read-write, it would
intermingle the hyp and kernel state even more than they currently are.
Instead, introduce a __hyp_bss section, that uses reserved pages, and
create the appropriate RW hyp mappings during KVM init.
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/sections.h |  1 +
 arch/arm64/kernel/vmlinux.lds.S   |  7 +++++++
 arch/arm64/kvm/arm.c              | 11 +++++++++++
 arch/arm64/kvm/hyp/nvhe/hyp.lds.S |  1 +
 4 files changed, 20 insertions(+)

diff --git a/arch/arm64/include/asm/sections.h b/arch/arm64/include/asm/sections.h
index 8ff579361731..f58cf493de16 100644
--- a/arch/arm64/include/asm/sections.h
+++ b/arch/arm64/include/asm/sections.h
@@ -12,6 +12,7 @@ extern char __hibernate_exit_text_start[], __hibernate_exit_text_end[];
 extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[];
 extern char __hyp_text_start[], __hyp_text_end[];
 extern char __hyp_data_ro_after_init_start[], __hyp_data_ro_after_init_end[];
+extern char __hyp_bss_start[], __hyp_bss_end[];
 extern char __idmap_text_start[], __idmap_text_end[];
 extern char __initdata_begin[], __initdata_end[];
 extern char __inittext_begin[], __inittext_end[];
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 43af13968dfd..3eca35d5a7cf 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -8,6 +8,13 @@
 #define RO_EXCEPTION_TABLE_ALIGN	8
 #define RUNTIME_DISCARD_EXIT

+#define BSS_FIRST_SECTIONS			\
+	. = ALIGN(PAGE_SIZE);			\
+	__hyp_bss_start = .;			\
+	*(.hyp.bss)				\
+	. = ALIGN(PAGE_SIZE);			\
+	__hyp_bss_end = .;
+
 #include <asm-generic/vmlinux.lds.h>
 #include <asm/cache.h>
 #include <asm/hyp_image.h>
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 3ac0f3425833..51b53ca36dc5 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1770,7 +1770,18 @@ static int init_hyp_mode(void)
 		goto out_err;
 	}

+	/*
+	 * .hyp.bss is placed at the beginning of the .bss section, so map that
+	 * part RW, and the rest RO as the hyp shouldn't be touching it.
+	 */
 	err = create_hyp_mappings(kvm_ksym_ref(__bss_start),
+				  kvm_ksym_ref(__hyp_bss_end), PAGE_HYP);
+	if (err) {
+		kvm_err("Cannot map hyp bss section: %d\n", err);
+		goto out_err;
+	}
+
+	err = create_hyp_mappings(kvm_ksym_ref(__hyp_bss_end),
				  kvm_ksym_ref(__bss_stop), PAGE_HYP_RO);
 	if (err) {
 		kvm_err("Cannot map bss section\n");
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp.lds.S b/arch/arm64/kvm/hyp/nvhe/hyp.lds.S
index 5d76ff2ba63e..dc281d90063e 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp.lds.S
+++ b/arch/arm64/kvm/hyp/nvhe/hyp.lds.S
@@ -17,4 +17,5 @@ SECTIONS {
 		PERCPU_INPUT(L1_CACHE_BYTES)
 	}
 	HYP_SECTION(.data..ro_after_init)
+	HYP_SECTION(.bss)
 }
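To illustrate the effect (not part of the patch): with the hyp linker
script collecting the nVHE objects' .bss into a ".hyp"-prefixed output
section via the HYP_SECTION(.bss) line above, a zero-initialized
variable defined in EL2 code, e.g.

    /* hypothetical example of EL2-private state landing in .hyp.bss */
    static unsigned long nr_host_faults;

ends up in .hyp.bss, which the BSS_FIRST_SECTIONS hook pins, page
aligned, at the very start of the kernel's .bss. KVM init can then map
[__hyp_bss_start, __hyp_bss_end) read-write at EL2 while keeping the
remainder of .bss read-only, as done in the arm.c hunk.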
lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727364AbhAHMRD (ORCPT ); Fri, 8 Jan 2021 07:17:03 -0500 Received: from mail-qk1-x749.google.com (mail-qk1-x749.google.com [IPv6:2607:f8b0:4864:20::749]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2E16AC0612A2 for ; Fri, 8 Jan 2021 04:15:44 -0800 (PST) Received: by mail-qk1-x749.google.com with SMTP id x74so9138377qkb.12 for ; Fri, 08 Jan 2021 04:15:44 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=FNjMcGMBepA+DKYzWBZdafn9Pfvz5tgi5bFjDXzrpjw=; b=MnwPPAzoXNwaOiIaAGjdxyYKgCpjZp6Y/8k70XQrFxP2o4aKAID1+AZSD9MNSZDBu7 C1KdtY+o82o9hD3Jv3i9pN11yqVnvTEMzLJECVqtKuxjPOhr4G+fCEVPEA1slzI1JowR CSXnnwarOjT0oN1R+ZhxnBm4V4rEHrNYwYu1fsoKb+lRL7HgwNvni2KmTT4h/RzLGhOO llmt1BjnI2tjj0RTLZJ49lVdW1/iNrO+a1O+zULzBx1EyMYKzRKJa7IR/UiOX2z6y3so 2Qt8L9hn3buycUtLVrwGXMuKpsy74lToWQTVDmDWMw+PhLbObGNxGzeYdWb4/SRx0AMn srfQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=FNjMcGMBepA+DKYzWBZdafn9Pfvz5tgi5bFjDXzrpjw=; b=qraXxIDlx5pv8eRkSrbF2T7ZU4OO/+d1c+jDXd8APwolDPuKPhI15rDQT2W7QUgo7J +d5AWOoSjWy2UtWTU/GWeSfZNauJNjwa8N323ZlQ0p0OpIzk+VAEzEWyUFOOipsGs2JT RKg78vL38PqF2tRQRLhkynSNOD/I3cmPqd/q35F2TFCEuBQPogSqLGCL+p+EnO9ya6vn X1w4z7T0U6dLhrjVHV6HiWRxRjsWvl59VXPfR20Os1jN+EbHTy5qUgwP91VqNhcL6eN8 atGnMExMSUchbmI9RfXTUNCqzvVlFYD53AAlsF7ttRi7z0lAyQCBSjYzBRNDx1sT7+RS WJ4g== X-Gm-Message-State: AOAM532vZEjDi9nuKwlSPa4LFK9m761n9WsJ8fAdYyj2zn+7FxVmGIl5 2s0RPsIJn7wE/xR1uNBzgVyBjtqqw+Es X-Google-Smtp-Source: ABdhPJzuZf85ALEgP2PMc7AcYtKZ4On8truit0/ITvMunEfbpbr7IAAERcJbv904cp6MdFKGk7/AvT6xTBf6 Sender: "qperret via sendgmr" X-Received: from r2d2-qp.c.googlers.com ([fda3:e722:ac3:10:28:9cb1:c0a8:1652]) (user=qperret job=sendgmr) by 2002:a05:6214:1754:: with SMTP id dc20mr3314023qvb.7.1610108143358; Fri, 08 Jan 2021 04:15:43 -0800 (PST) Date: Fri, 8 Jan 2021 12:15:06 +0000 In-Reply-To: <20210108121524.656872-1-qperret@google.com> Message-Id: <20210108121524.656872-9-qperret@google.com> Mime-Version: 1.0 References: <20210108121524.656872-1-qperret@google.com> X-Mailer: git-send-email 2.30.0.284.gd98b1dd5eaa7-goog Subject: [RFC PATCH v2 08/26] KVM: arm64: Make kvm_call_hyp() a function call at Hyp From: Quentin Perret To: Catalin Marinas , Will Deacon , Marc Zyngier , James Morse , Julien Thierry , Suzuki K Poulose , Rob Herring , Frank Rowand Cc: devicetree@vger.kernel.org, android-kvm@google.com, linux-kernel@vger.kernel.org, kernel-team@android.com, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Fuad Tabba , Mark Rutland , David Brazdil Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org kvm_call_hyp() has some logic to issue a function call or a hypercall depending the EL at which the kernel is running. However, all the code compiled under __KVM_NVHE_HYPERVISOR__ is guaranteed to run only at EL2, and in this case a simple function call is needed. Add ifdefery to kvm_host.h to symplify kvm_call_hyp() in .hyp.text. 
Signed-off-by: Quentin Perret Acked-by: Will Deacon --- arch/arm64/include/asm/kvm_host.h | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 8fcfab0c2567..81212958ef55 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -592,6 +592,7 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva); void kvm_arm_halt_guest(struct kvm *kvm); void kvm_arm_resume_guest(struct kvm *kvm); +#ifndef __KVM_NVHE_HYPERVISOR__ #define kvm_call_hyp_nvhe(f, ...) \ ({ \ struct arm_smccc_res res; \ @@ -631,6 +632,11 @@ void kvm_arm_resume_guest(struct kvm *kvm); \ ret; \ }) +#else /* __KVM_NVHE_HYPERVISOR__ */ +#define kvm_call_hyp(f, ...) f(__VA_ARGS__) +#define kvm_call_hyp_ret(f, ...) f(__VA_ARGS__) +#define kvm_call_hyp_nvhe(f, ...) f(__VA_ARGS__) +#endif /* __KVM_NVHE_HYPERVISOR__ */ void force_vm_exit(const cpumask_t *mask); void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot); From patchwork Fri Jan 8 12:15:07 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Quentin Perret X-Patchwork-Id: 359913 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.6 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT, USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 67729C433DB for ; Fri, 8 Jan 2021 12:19:31 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 247F722E02 for ; Fri, 8 Jan 2021 12:19:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727719AbhAHMRF (ORCPT ); Fri, 8 Jan 2021 07:17:05 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34268 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727489AbhAHMRE (ORCPT ); Fri, 8 Jan 2021 07:17:04 -0500 Received: from mail-wr1-x44a.google.com (mail-wr1-x44a.google.com [IPv6:2a00:1450:4864:20::44a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C52BEC0612A5 for ; Fri, 8 Jan 2021 04:15:46 -0800 (PST) Received: by mail-wr1-x44a.google.com with SMTP id 4so4083015wrb.16 for ; Fri, 08 Jan 2021 04:15:46 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=j2147gnSzl9I0ScvZHWmAEizoS1X3L4ZHr3IS+13AKY=; b=Ns/jsJnBKAEMgPThyuR5EpWbpEFuji6DabFH+5GBY0CvrWpmvNqu9R86B3G/uacWQS KmN2zpRmuvtgq4B7YACdY7SURg02CGduxtogRTfv3ogluubm5YiaSF9YIp7Z01d6HSs6 s580RGQ/bhzezrX1DLcpZQ1xub6tyD6US0n+U7l3f5mQr/Nng8tw58EvPQ+pOHn13Kbs m3e0q3j6/ETq9wkmQoFp8k0t3mxpKxsFtfPE+Iu6HMXj2287z9WV/eS3AJBTmYhm3rF1 K4g+HsMOnZqEbMKeNYSPpu3EAeAydUEzoDvGYM3uBjU/V0yKSJX+qcVx62rpCysU4UJj 8gCQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=j2147gnSzl9I0ScvZHWmAEizoS1X3L4ZHr3IS+13AKY=; b=kZC9lgFkUQyFjNfgymTvnQ2eHfmHIiudcV3BCEZY1EIhye58fCaI7XH9XpgU33+ECG 
FNwUSR72//WbsoRr7mD7pqjhf+Vf4r32uryvlPtPqRjjKIalD6PpDPjNxpwmqfKbYvrZ
 rl5JBK3Iels9ZswZOE0fGZ9vqm2/9cL/EbG6Re8bJUzdAYDEALl8onUsEVnxd6Na/V0z
 u+njRpTMYTX9BD8LEPeH64sPUTeIXHnFuI9YR91ZB9R/DUBruynNNOIBEqJ7xNuI7+fE
 DNthtIbiGDQ2bizUtC6n+MpBhlu6788L0V+6IW0y9ea+EWPUogfSpgzMtxQQMmNpqZgq
 F3Dg==
Date: Fri, 8 Jan 2021 12:15:07 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-10-qperret@google.com>
Subject: [RFC PATCH v2 09/26] KVM: arm64: Allow using kvm_nvhe_sym() in hyp code
From: Quentin Perret
List-ID: devicetree@vger.kernel.org

In order to allow code shared between the host and the hyp to be used in static inline library functions, allow the use of kvm_nvhe_sym() at EL2 by defaulting to the raw symbol name.

Signed-off-by: Quentin Perret
Acked-by: Will Deacon
---
 arch/arm64/include/asm/hyp_image.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/include/asm/hyp_image.h b/arch/arm64/include/asm/hyp_image.h
index e06842756051..fb16e1018ea9 100644
--- a/arch/arm64/include/asm/hyp_image.h
+++ b/arch/arm64/include/asm/hyp_image.h
@@ -7,11 +7,15 @@
 #ifndef __ARM64_HYP_IMAGE_H__
 #define __ARM64_HYP_IMAGE_H__
 
+#ifndef __KVM_NVHE_HYPERVISOR__
 /*
  * KVM nVHE code has its own symbol namespace prefixed with __kvm_nvhe_,
  * to separate it from the kernel proper.
*/ #define kvm_nvhe_sym(sym) __kvm_nvhe_##sym +#else +#define kvm_nvhe_sym(sym) sym +#endif #ifdef LINKER_SCRIPT From patchwork Fri Jan 8 12:15:08 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Quentin Perret X-Patchwork-Id: 359212 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.6 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT, USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7A7BCC433DB for ; Fri, 8 Jan 2021 12:19:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3D6AD2388B for ; Fri, 8 Jan 2021 12:19:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726566AbhAHMTc (ORCPT ); Fri, 8 Jan 2021 07:19:32 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34270 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727566AbhAHMRE (ORCPT ); Fri, 8 Jan 2021 07:17:04 -0500 Received: from mail-wm1-x34a.google.com (mail-wm1-x34a.google.com [IPv6:2a00:1450:4864:20::34a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DDD2BC0612A7 for ; Fri, 8 Jan 2021 04:15:48 -0800 (PST) Received: by mail-wm1-x34a.google.com with SMTP id r1so3145459wmn.8 for ; Fri, 08 Jan 2021 04:15:48 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=WL1fe5YDuGo6RqcNSK7jGefPRcmeHL6zQSRS7UoVl6U=; b=qTtNp0lrOUjUQnxa/3nRsEp99/L+dot8wUSGG26zbwh+ZKGUVXzXrfVC4z4sW5uPPn 5vHQcZS7ZbmICKBVtkmfR1Cgpobl9mgCuFhz40XVuE4e4Oavab9Vrmwur1ODKdCmUoMI vYgvfv5QqHruJghmynhNLsDUFkyzdVlaMNe2Rorezh2EZyxo/uIrFS77rLYLQybMaq4o NAhVIYi1r68N/5t8QF2rOCvuqqek7yfqOmCkBvUs2JdLg6L981du3Bfho/9mTzTDdUZi cxISsdg2MsZQKuuHYY5q+g3C/b9FLtKnkqMgZ31UfIencgXsyxxswnaC6yh4sHzx/G+a +mWA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=WL1fe5YDuGo6RqcNSK7jGefPRcmeHL6zQSRS7UoVl6U=; b=lz2gM17bofsp1kqwjNjK7cjw8TTtnkAhpEIMGDKB4XGJtt5d8/Xb4zBUctDnPm6IwL 2sZ4PbCPwN/frRJQevOAF7DVgNjdT+EREBJNgY18B4j5nBJfjTJRosI0UDl5QGwYU+9c HXyBN3f1oDmAK4s6P86Q4CIK9mzG4eLABLP+h6Y8mqcU07sgOrnVfJvvmxH3XuEuz9et KMw00+x4/g4e1jBSQjkEeNgSfnKrR6EncoejroDYB47DTk2pynY4puqk2+RSNQF/GCaZ bEwTdUe67dDuDlPlMvgXO/HPgYj55JFKlGHMmOUoNfpNH78IhckZGKXRcbekME3uP08d VmNA== X-Gm-Message-State: AOAM533DphfcvcO+vWXS1oz83nSMwNZrCV8X9X/O46Ptl3ChqxSzkuVv 8/uBwqt+FrvQdkNASrx3gAvbaDc3aXTA X-Google-Smtp-Source: ABdhPJzybUgooTtGbr0eZgxsj4aAsZC46QSb8W0QRoz6Za6WlhPrNCvvGpSAps5zeAC2/oLAwBbkwYtCpQ5S Sender: "qperret via sendgmr" X-Received: from r2d2-qp.c.googlers.com ([fda3:e722:ac3:10:28:9cb1:c0a8:1652]) (user=qperret job=sendgmr) by 2002:a05:600c:211:: with SMTP id 17mr2838410wmi.84.1610108147556; Fri, 08 Jan 2021 04:15:47 -0800 (PST) Date: Fri, 8 Jan 2021 12:15:08 +0000 In-Reply-To: <20210108121524.656872-1-qperret@google.com> Message-Id: <20210108121524.656872-11-qperret@google.com> Mime-Version: 1.0 References: 
<20210108121524.656872-1-qperret@google.com>
Subject: [RFC PATCH v2 10/26] KVM: arm64: Introduce an early Hyp page allocator
From: Quentin Perret
List-ID: devicetree@vger.kernel.org

With nVHE, the host currently creates all s1 hypervisor mappings at EL1 during boot, installs them at EL2, and extends them as required (e.g. when creating a new VM). But in a world where the host is no longer trusted, it cannot have full control over the code mapped in the hypervisor.

In preparation for enabling the hypervisor to create its own s1 mappings during boot, introduce an early page allocator with minimal functionality. This allocator is designed to be used only during the early bootstrap of the hyp code when memory protection is enabled; the hyp code will then switch to a full-fledged page allocator after init.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/early_alloc.h | 14 +++++
 arch/arm64/kvm/hyp/include/nvhe/memory.h      | 24 ++++++++
 arch/arm64/kvm/hyp/nvhe/Makefile              |  2 +-
 arch/arm64/kvm/hyp/nvhe/early_alloc.c         | 60 +++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/psci-relay.c          |  4 +-
 5 files changed, 100 insertions(+), 4 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/early_alloc.h
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/memory.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/early_alloc.c

diff --git a/arch/arm64/kvm/hyp/include/nvhe/early_alloc.h b/arch/arm64/kvm/hyp/include/nvhe/early_alloc.h
new file mode 100644
index 000000000000..68ce2bf9a718
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/early_alloc.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __KVM_HYP_EARLY_ALLOC_H
+#define __KVM_HYP_EARLY_ALLOC_H
+
+#include
+
+void hyp_early_alloc_init(unsigned long virt, unsigned long size);
+unsigned long hyp_early_alloc_nr_pages(void);
+void *hyp_early_alloc_page(void *arg);
+void *hyp_early_alloc_contig(unsigned int nr_pages);
+
+extern struct kvm_pgtable_mm_ops hyp_early_alloc_mm_ops;
+
+#endif /* __KVM_HYP_EARLY_ALLOC_H */

diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
new file mode 100644
index 000000000000..64c44c142c95
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __KVM_HYP_MEMORY_H
+#define __KVM_HYP_MEMORY_H
+
+#include
+
+#include
+
+extern s64 hyp_physvirt_offset;
+
+#define __hyp_pa(virt)	((phys_addr_t)(virt) + hyp_physvirt_offset)
+#define __hyp_va(virt)	((void *)((phys_addr_t)(virt) - hyp_physvirt_offset))
+
+static inline void *hyp_phys_to_virt(phys_addr_t phys)
+{
+	return __hyp_va(phys);
+}
+
+static inline phys_addr_t hyp_virt_to_phys(void *addr)
+{
+	return __hyp_pa(addr);
+}
+
+#endif /* __KVM_HYP_MEMORY_H */

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 590fdefb42dd..1fc0684a7678 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -10,7 +10,7 @@ lib-objs := clear_page.o copy_page.o memcpy.o memset.o
 lib-objs := $(addprefix ../../../lib/, $(lib-objs))
 
 obj-y := timer-sr.o sysreg-sr.o
debug-sr.o switch.o tlb.o hyp-init.o host.o \ - hyp-main.o hyp-smp.o psci-relay.o + hyp-main.o hyp-smp.o psci-relay.o early_alloc.o obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \ ../fpsimd.o ../hyp-entry.o ../exception.o obj-y += $(lib-objs) diff --git a/arch/arm64/kvm/hyp/nvhe/early_alloc.c b/arch/arm64/kvm/hyp/nvhe/early_alloc.c new file mode 100644 index 000000000000..de4c45662970 --- /dev/null +++ b/arch/arm64/kvm/hyp/nvhe/early_alloc.c @@ -0,0 +1,60 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2020 Google LLC + * Author: Quentin Perret + */ + +#include + +#include + +struct kvm_pgtable_mm_ops hyp_early_alloc_mm_ops; +s64 __ro_after_init hyp_physvirt_offset; + +static unsigned long base; +static unsigned long end; +static unsigned long cur; + +unsigned long hyp_early_alloc_nr_pages(void) +{ + return (cur - base) >> PAGE_SHIFT; +} + +extern void clear_page(void *to); + +void *hyp_early_alloc_contig(unsigned int nr_pages) +{ + unsigned long ret = cur, i, p; + + if (!nr_pages) + return NULL; + + cur += nr_pages << PAGE_SHIFT; + if (cur > end) { + cur = ret; + return NULL; + } + + for (i = 0; i < nr_pages; i++) { + p = ret + (i << PAGE_SHIFT); + clear_page((void *)(p)); + } + + return (void *)ret; +} + +void *hyp_early_alloc_page(void *arg) +{ + return hyp_early_alloc_contig(1); +} + +void hyp_early_alloc_init(unsigned long virt, unsigned long size) +{ + base = virt; + end = virt + size; + cur = virt; + + hyp_early_alloc_mm_ops.zalloc_page = hyp_early_alloc_page; + hyp_early_alloc_mm_ops.phys_to_virt = hyp_phys_to_virt; + hyp_early_alloc_mm_ops.virt_to_phys = hyp_virt_to_phys; +} diff --git a/arch/arm64/kvm/hyp/nvhe/psci-relay.c b/arch/arm64/kvm/hyp/nvhe/psci-relay.c index e3947846ffcb..bdd8054bce4c 100644 --- a/arch/arm64/kvm/hyp/nvhe/psci-relay.c +++ b/arch/arm64/kvm/hyp/nvhe/psci-relay.c @@ -11,6 +11,7 @@ #include #include +#include #include void kvm_hyp_cpu_entry(unsigned long r0); @@ -20,9 +21,6 @@ void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt); /* Config options set by the host. 
*/ struct kvm_host_psci_config __ro_after_init kvm_host_psci_config; -s64 __ro_after_init hyp_physvirt_offset; - -#define __hyp_pa(x) ((phys_addr_t)((x)) + hyp_physvirt_offset) #define INVALID_CPU_ID UINT_MAX From patchwork Fri Jan 8 12:15:09 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Quentin Perret X-Patchwork-Id: 359915 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.6 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT, USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0218EC433E6 for ; Fri, 8 Jan 2021 12:19:03 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id AB8A12388B for ; Fri, 8 Jan 2021 12:19:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726006AbhAHMS1 (ORCPT ); Fri, 8 Jan 2021 07:18:27 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34400 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728069AbhAHMR3 (ORCPT ); Fri, 8 Jan 2021 07:17:29 -0500 Received: from mail-qt1-x849.google.com (mail-qt1-x849.google.com [IPv6:2607:f8b0:4864:20::849]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7C2D8C0612A8 for ; Fri, 8 Jan 2021 04:15:50 -0800 (PST) Received: by mail-qt1-x849.google.com with SMTP id h18so8151688qtr.2 for ; Fri, 08 Jan 2021 04:15:50 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=1zm32OSU+zNrGKw9/hOuTdLQGjd9ykhvUXTPnKtB/pc=; b=hTdkSIrZA8jRY9d2Vo+3e32B5JaCAcEyOgWafLqU479GjRA+wuV1EKbj7dli8WSEot ov9tsSsHblKcShOicPx10jIo6EYN04HyhQaJb9LILOwxisunTQr4dSxejkLLjyZOPnd9 f6L3SjMyKWmc7X+Tic0/qLfiB38vmK1DbAjvbGnglkuYwluyDJytSrCJ8sJNWcSnUM8I cWNwQOWkEgvxWKdph19saLEvUhUINi8/UvsEm5hEvmkp916ThrFcWIuZk41DBJscv6Ri t5+yZ13n9RRVeYH6mIq994/ydC7sGQpNiEdXBVz0oVW5gSJ1zmdzv4Jd/hCwMZZ47CzG gPqQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=1zm32OSU+zNrGKw9/hOuTdLQGjd9ykhvUXTPnKtB/pc=; b=MUg1lVHN0ewVvcxZZ1JLFb4fQGF6WFQmsK+hLB5RRqKweF/ztLq6EaTydpjKdjPg7c 0H3Ys/DR/RXkhUJRT3pxtRFg9E7kSP5hRuVUgp60amLwbqAyB+aftjgyJSCQ5cUu6ByF jQiiTr2XTpyR7LYZSxiLth+MIEtNs59cdtEqWnzyveiY5XVIGHIxXZmkl5yKXl2pQghp 4byV+BOGLJKFn3Ef1GSA7lIyh9qQ+CYIjPuBBEL+13K6gbFqehmftQq7deSBTszzzZA3 Pv8cUE/lA/LzEVbvXEScw6eoWBem+2KZt7cxSg1eU4Mg7bYROVNLoPsTA7QZo2nFAgl3 egpw== X-Gm-Message-State: AOAM530Yxxt49QlyhlcWHe73dg92JtGxsuxle5zCm072tTRsztyJKaSw 1bnLJ3OmP/TztyLd3UjkYshLfuVz/Cy3 X-Google-Smtp-Source: ABdhPJwtoYexZEdaC0bQI+CZT27/P6ssHLkHC/kRjUA9Pq4S0VxlCZ8waLnUiDerC6tLIhWoI3hSf751jQxy Sender: "qperret via sendgmr" X-Received: from r2d2-qp.c.googlers.com ([fda3:e722:ac3:cc00:28:9cb1:c0a8:1652]) (user=qperret job=sendgmr) by 2002:ad4:56ab:: with SMTP id bd11mr6199101qvb.53.1610108149650; Fri, 08 Jan 2021 04:15:49 -0800 (PST) Date: Fri, 8 Jan 2021 12:15:09 +0000 In-Reply-To: <20210108121524.656872-1-qperret@google.com> 
Message-Id: <20210108121524.656872-12-qperret@google.com>
Subject: [RFC PATCH v2 11/26] KVM: arm64: Stub CONFIG_DEBUG_LIST at Hyp
From: Quentin Perret
List-ID: devicetree@vger.kernel.org

In order to use the kernel list library at EL2, introduce stubs for the CONFIG_DEBUG_LIST out-of-line calls.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/nvhe/Makefile |  2 +-
 arch/arm64/kvm/hyp/nvhe/stub.c   | 22 ++++++++++++++++++++
 2 files changed, 23 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/kvm/hyp/nvhe/stub.c

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 1fc0684a7678..33bd381d8f73 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -10,7 +10,7 @@ lib-objs := clear_page.o copy_page.o memcpy.o memset.o
 lib-objs := $(addprefix ../../../lib/, $(lib-objs))
 
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
-	 hyp-main.o hyp-smp.o psci-relay.o early_alloc.o
+	 hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
	 ../fpsimd.o ../hyp-entry.o ../exception.o
 obj-y += $(lib-objs)

diff --git a/arch/arm64/kvm/hyp/nvhe/stub.c b/arch/arm64/kvm/hyp/nvhe/stub.c
new file mode 100644
index 000000000000..c0aa6bbfd79d
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/stub.c
@@ -0,0 +1,22 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Stubs for out-of-line function calls caused by re-using kernel
+ * infrastructure at EL2.
+ * + * Copyright (C) 2020 - Google LLC + */ + +#include + +#ifdef CONFIG_DEBUG_LIST +bool __list_add_valid(struct list_head *new, struct list_head *prev, + struct list_head *next) +{ + return true; +} + +bool __list_del_entry_valid(struct list_head *entry) +{ + return true; +} +#endif From patchwork Fri Jan 8 12:15:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Quentin Perret X-Patchwork-Id: 359920 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.6 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT, USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 14168C433E9 for ; Fri, 8 Jan 2021 12:17:45 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id CBBCB238E4 for ; Fri, 8 Jan 2021 12:17:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727012AbhAHMRe (ORCPT ); Fri, 8 Jan 2021 07:17:34 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34406 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728079AbhAHMRc (ORCPT ); Fri, 8 Jan 2021 07:17:32 -0500 Received: from mail-qk1-x749.google.com (mail-qk1-x749.google.com [IPv6:2607:f8b0:4864:20::749]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B5655C0612AB for ; Fri, 8 Jan 2021 04:15:52 -0800 (PST) Received: by mail-qk1-x749.google.com with SMTP id 188so9113658qkh.7 for ; Fri, 08 Jan 2021 04:15:52 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=ceff8+9tLTRWb3Ib2eeejY1GjuxFu6p07DG9AQWeIgA=; b=o1HoA0mPFwZpw0DdHGRPHKx/T6rDflGmOszd0YIx7p2fKFiqlAW1ROvHpBj+onvC50 qzY9C/SkLL0uTt2VNDditVMD7t6C9KEql1rV3GtAcNq9znapL8tpohAHALK+2Ij5wgA/ tzYUMncZHqtGZ54jL66WjScg4hKdxqxNnNglO0QT3oi7VbdAuTqlCPSusUJmyo9htZ/T QAjg5JHq2MD5peMijqpHS4HhIMdGIX615K0Rti9mHiFzLC+5+27V3BrdxJqCpzp9g4HN KMNIIAQ9v6tSIfkbEa6sEDGL1JBwMLOoti3DUBBRGdeyEJLpif4t5Jpquvv2W0ydVH+s 1slw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=ceff8+9tLTRWb3Ib2eeejY1GjuxFu6p07DG9AQWeIgA=; b=KvC7ddq5g+JZkKJL4efHA8kGiH/URdSlQAe5MPg1m2dLPo0Cb9srH870dkiMWOAJzi iv1mSO96ixJ6hyx7PRyLDiA7ChyWmlL1Wzu70hT4gjZmSujiPG4cxdBajg6Ukcz3vuyW 3FN8XteE6Kwyb7Oa/g4FnlmTihlqtO7LLsZAAtYbnsJ219hy+gbvxq/mCNM9c3t3G0dh eT7DLlxOBJrkGHjB6nIpKhB5BdTeytqRsfgN6JNLDOeGJtwkSFggIiN5O2nHuyBCWFKq xhkbI6cIiL/x69LtpTsr7syN8/0gpM3h3q5BuG7l7hvZ9ggz9K2dNMboxJACH/VK9oTv aLzA== X-Gm-Message-State: AOAM530C/MrJ9Wg/eOPW7lj7X25GiRmhPWjZJ7GslGA1dbgCtMNUxbA4 8eMgkLwpYgMkTeAjlHpTiEt4uptbAcPJ X-Google-Smtp-Source: ABdhPJwkGRhporxOaFcuqjq5p6+PDcs5YIKF7gM3HMEKHCm/kO0804yGlAJfanmP6R2nmlALxy+tEsjGTRKE Sender: "qperret via sendgmr" X-Received: from r2d2-qp.c.googlers.com ([fda3:e722:ac3:cc00:28:9cb1:c0a8:1652]) (user=qperret job=sendgmr) by 2002:a0c:fd68:: with SMTP id k8mr6434357qvs.56.1610108151798; Fri, 08 Jan 2021 04:15:51 -0800 (PST) Date: Fri, 8 Jan 2021 
12:15:10 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-13-qperret@google.com>
Subject: [RFC PATCH v2 12/26] KVM: arm64: Introduce a Hyp buddy page allocator
From: Quentin Perret
List-ID: devicetree@vger.kernel.org

When memory protection is enabled, the hyp code will require a basic form of memory management in order to allocate and free memory pages at EL2. This is needed for various use-cases, including the creation of hyp mappings or the allocation of stage 2 page tables.

To address these use-cases, introduce a simple memory allocator in the hyp code. The allocator is designed as a conventional 'buddy allocator', working at page granularity. It allows allocating and freeing physically contiguous pages from memory 'pools', with a guaranteed order alignment in the PA space. Each page in a memory pool is associated with a struct hyp_page which holds the page's metadata, including its refcount, as well as its current order, hence mimicking the kernel's buddy system in the GFP infrastructure. The hyp_page metadata is made accessible through a hyp_vmemmap, following the concept of SPARSE_VMEMMAP in the kernel.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/gfp.h    |  32 ++++
 arch/arm64/kvm/hyp/include/nvhe/memory.h |  25 +++
 arch/arm64/kvm/hyp/nvhe/Makefile         |   2 +-
 arch/arm64/kvm/hyp/nvhe/page_alloc.c     | 185 +++++++++++++++++++++++
 4 files changed, 243 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/gfp.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/page_alloc.c

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
new file mode 100644
index 000000000000..95587faee171
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __KVM_HYP_GFP_H
+#define __KVM_HYP_GFP_H
+
+#include
+
+#include
+#include
+
+#define HYP_MAX_ORDER	11U
+#define HYP_NO_ORDER	UINT_MAX
+
+struct hyp_pool {
+	hyp_spinlock_t lock;
+	struct list_head free_area[HYP_MAX_ORDER + 1];
+	phys_addr_t range_start;
+	phys_addr_t range_end;
+};
+
+/* GFP flags */
+#define HYP_GFP_NONE	0
+#define HYP_GFP_ZERO	1
+
+/* Allocation */
+void *hyp_alloc_pages(struct hyp_pool *pool, gfp_t mask, unsigned int order);
+void hyp_get_page(void *addr);
+void hyp_put_page(void *addr);
+
+/* Used pages cannot be freed */
+int hyp_pool_init(struct hyp_pool *pool, phys_addr_t phys,
+		  unsigned int nr_pages, unsigned int used_pages);
+#endif /* __KVM_HYP_GFP_H */

diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 64c44c142c95..ed47674bc988 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -6,7 +6,17 @@
 
 #include
 
+struct hyp_pool;
+struct hyp_page {
+	unsigned int refcount;
+	unsigned int order;
+	struct hyp_pool *pool;
+	struct list_head node;
+};
+
 extern s64 hyp_physvirt_offset;
+extern u64 __hyp_vmemmap;
+#define hyp_vmemmap ((struct hyp_page *)__hyp_vmemmap)
 
 #define __hyp_pa(virt)	((phys_addr_t)(virt) + hyp_physvirt_offset)
 #define __hyp_va(virt)	((void *)((phys_addr_t)(virt) - hyp_physvirt_offset))
@@ -21,4 +31,19 @@ static inline phys_addr_t hyp_virt_to_phys(void *addr)
 	return __hyp_pa(addr);
 }
 
+#define hyp_phys_to_pfn(phys)	((phys) >> PAGE_SHIFT)
+#define hyp_phys_to_page(phys)	(&hyp_vmemmap[hyp_phys_to_pfn(phys)])
+#define hyp_virt_to_page(virt)	hyp_phys_to_page(__hyp_pa(virt))
+
+#define hyp_page_to_phys(page)	((phys_addr_t)((page) - hyp_vmemmap) << PAGE_SHIFT)
+#define hyp_page_to_virt(page)	__hyp_va(hyp_page_to_phys(page))
+#define hyp_page_to_pool(page)	(((struct hyp_page *)page)->pool)
+
+static inline int hyp_page_count(void *addr)
+{
+	struct hyp_page *p = hyp_virt_to_page(addr);
+
+	return p->refcount;
+}
+
 #endif /* __KVM_HYP_MEMORY_H */

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 33bd381d8f73..9e5eacfec6ec 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -10,7 +10,7 @@ lib-objs := clear_page.o copy_page.o memcpy.o memset.o
 lib-objs := $(addprefix ../../../lib/, $(lib-objs))
 
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
-	 hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o
+	 hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
	 ../fpsimd.o ../hyp-entry.o ../exception.o
 obj-y += $(lib-objs)

diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
new file mode 100644
index 000000000000..6de6515f0432
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -0,0 +1,185 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Google LLC
+ * Author: Quentin Perret
+ */
+
+#include
+#include
+
+u64 __hyp_vmemmap;
+
+/*
+ * Example buddy-tree for a 4-pages physically contiguous pool:
+ *
+ *                 o : Page 3
+ *                /
+ *               o-o : Page 2
+ *              /
+ *             /   o : Page 1
+ *            /   /
+ *           o---o-o : Page 0
+ *    Order  2   1   0
+ *
+ * Example of requests on this zone:
+ *   __find_buddy(pool, page 0, order 0) => page 1
+ *   __find_buddy(pool, page 0, order 1) => page 2
+ *   __find_buddy(pool, page 1, order 0) => page 0
+ *   __find_buddy(pool, page 2, order 0) => page 3
+ */
+static struct hyp_page *__find_buddy(struct hyp_pool *pool, struct hyp_page *p,
+				     unsigned int order)
+{
+	phys_addr_t addr = hyp_page_to_phys(p);
+
+	addr ^= (PAGE_SIZE << order);
+	if (addr < pool->range_start || addr >= pool->range_end)
+		return NULL;
+
+	return hyp_phys_to_page(addr);
+}
+
+static void __hyp_attach_page(struct hyp_pool *pool,
+			      struct hyp_page *p)
+{
+	unsigned int order = p->order;
+	struct hyp_page *buddy;
+
+	p->order = HYP_NO_ORDER;
+	for (; order < HYP_MAX_ORDER; order++) {
+		/* Nothing to do if the buddy isn't in a free-list */
+		buddy = __find_buddy(pool, p, order);
+		if (!buddy || list_empty(&buddy->node) || buddy->order != order)
+			break;
+
+		/* Otherwise, coalesce the buddies and go one level up */
+		list_del_init(&buddy->node);
+		buddy->order = HYP_NO_ORDER;
+		p = (p < buddy) ?
p : buddy; + } + + p->order = order; + list_add_tail(&p->node, &pool->free_area[order]); +} + +void hyp_put_page(void *addr) +{ + struct hyp_page *p = hyp_virt_to_page(addr); + struct hyp_pool *pool = hyp_page_to_pool(p); + + hyp_spin_lock(&pool->lock); + if (!p->refcount) + hyp_panic(); + p->refcount--; + if (!p->refcount) + __hyp_attach_page(pool, p); + hyp_spin_unlock(&pool->lock); +} + +void hyp_get_page(void *addr) +{ + struct hyp_page *p = hyp_virt_to_page(addr); + struct hyp_pool *pool = hyp_page_to_pool(p); + + hyp_spin_lock(&pool->lock); + p->refcount++; + hyp_spin_unlock(&pool->lock); +} + +/* Extract a page from the buddy tree, at a specific order */ +static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool, + struct hyp_page *p, + unsigned int order) +{ + struct hyp_page *buddy; + + if (p->order == HYP_NO_ORDER || p->order < order) + return NULL; + + list_del_init(&p->node); + + /* Split the page in two until reaching the requested order */ + while (p->order > order) { + p->order--; + buddy = __find_buddy(pool, p, p->order); + buddy->order = p->order; + list_add_tail(&buddy->node, &pool->free_area[buddy->order]); + } + + p->refcount = 1; + + return p; +} + +static void clear_hyp_page(struct hyp_page *p) +{ + unsigned long i; + + for (i = 0; i < (1 << p->order); i++) + clear_page(hyp_page_to_virt(p) + (i << PAGE_SHIFT)); +} + +static void *__hyp_alloc_pages(struct hyp_pool *pool, gfp_t mask, + unsigned int order) +{ + unsigned int i = order; + struct hyp_page *p; + + /* Look for a high-enough-order page */ + while (i <= HYP_MAX_ORDER && list_empty(&pool->free_area[i])) + i++; + if (i > HYP_MAX_ORDER) + return NULL; + + /* Extract it from the tree at the right order */ + p = list_first_entry(&pool->free_area[i], struct hyp_page, node); + p = __hyp_extract_page(pool, p, order); + + if (mask & HYP_GFP_ZERO) + clear_hyp_page(p); + + return p; +} + +void *hyp_alloc_pages(struct hyp_pool *pool, gfp_t mask, unsigned int order) +{ + struct hyp_page *p; + + hyp_spin_lock(&pool->lock); + p = __hyp_alloc_pages(pool, mask, order); + hyp_spin_unlock(&pool->lock); + + return p ? 
hyp_page_to_virt(p) : NULL; +} + +/* hyp_vmemmap must be backed beforehand */ +int hyp_pool_init(struct hyp_pool *pool, phys_addr_t phys, + unsigned int nr_pages, unsigned int used_pages) +{ + struct hyp_page *p; + int i; + + if (phys % PAGE_SIZE) + return -EINVAL; + + hyp_spin_lock_init(&pool->lock); + for (i = 0; i <= HYP_MAX_ORDER; i++) + INIT_LIST_HEAD(&pool->free_area[i]); + pool->range_start = phys; + pool->range_end = phys + (nr_pages << PAGE_SHIFT); + + /* Init the vmemmap portion */ + p = hyp_phys_to_page(phys); + memset(p, 0, sizeof(*p) * nr_pages); + for (i = 0; i < nr_pages; i++, p++) { + p->pool = pool; + INIT_LIST_HEAD(&p->node); + } + + /* Attach the unused pages to the buddy tree */ + p = hyp_phys_to_page(phys + (used_pages << PAGE_SHIFT)); + for (i = used_pages; i < nr_pages; i++, p++) + __hyp_attach_page(pool, p); + + return 0; +} From patchwork Fri Jan 8 12:15:11 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Quentin Perret X-Patchwork-Id: 359916 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.6 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_NONE, USER_AGENT_GIT, USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AC9D5C4332B for ; Fri, 8 Jan 2021 12:18:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 73BBA2388B for ; Fri, 8 Jan 2021 12:18:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727042AbhAHMSS (ORCPT ); Fri, 8 Jan 2021 07:18:18 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34412 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728082AbhAHMRc (ORCPT ); Fri, 8 Jan 2021 07:17:32 -0500 Received: from mail-wr1-x449.google.com (mail-wr1-x449.google.com [IPv6:2a00:1450:4864:20::449]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 52FADC0612B0 for ; Fri, 8 Jan 2021 04:15:55 -0800 (PST) Received: by mail-wr1-x449.google.com with SMTP id q18so4078553wrc.20 for ; Fri, 08 Jan 2021 04:15:55 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=XzM1R3G7iPuwWNhb7rvYlamX+8/2PUNQToNsyQEZsII=; b=DcKmxLORRq1M0kuovDWg/xP8OMXxTwrOF1DgVrJ/oZz0DDarrYz/XCNtKeRq0PM4mL 7mQ905PXF/2WcWqR2bVtKvBb9Q+EMqd5Hq+Lg6bsWrbGaqf3uU/GlBoNOe8+C/LLcvVa npkwZ24gaoPrrTbndrnH41KXYZ6dDPFfxalkWli3+0tT2E0KNcXHnF4bczzCYIqCpy62 jVveL/voAZz6EHnXr8YMtycQPIBQL2Vyp5SRGeNs0mDC8Cjy/wrExpKmHyt7ZvQxbPGk FygyqJW6IE3D/K9yDrMsA67cOEunpV3gtHa2n+TpA3pBPU7IO6whLEj9Bu2wJJVCkQo0 isCQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=XzM1R3G7iPuwWNhb7rvYlamX+8/2PUNQToNsyQEZsII=; b=lgBqkfmINnkNQmLNMkfi5QSAkGZ7VyZ9nxPtXHfMpzozeJcf6hYxkqUCvKdBK2Ybgg t6ZoLVgBZpwg4gQh6NB5F3zpxTGcXo3cdXwDeebHdwCSVpd+7YzqyrmCBOUb1sOZZsvH 47qOrK58gaZqJMxDNLBVmX7Wb4bbmIk51bIej/U+eGBf6v5jeoFXIFnBgg2cq/sujTd2 
RjeCbQywFZGO2e5svXFw7eexHCm9CuknMytYb1XD8hfkrKehzdpcYI/zJG7w5QTdFqL4
 A0yFmp0ot3Hg3U8M6OaX/upic7TqZi91UJ9NOUTplBhbswWfpg70WNSQ2cj6ZvpiSpnS
 ptgA==
Date: Fri, 8 Jan 2021 12:15:11 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-14-qperret@google.com>
Subject: [RFC PATCH v2 13/26] KVM: arm64: Enable access to sanitized CPU features at EL2
From: Quentin Perret
List-ID: devicetree@vger.kernel.org

Introduce the infrastructure in KVM to enable copying CPU feature registers into EL2-owned data structures, so that sanitised values can be read directly at EL2 in nVHE. Given that the hypervisor reads only a subset of these registers, the ones that need to be copied are to be listed under <asm/kvm_cpufeature.h> together with the name of the nVHE variable that will hold the copy.

While at it, introduce the first user of this infrastructure by implementing __flush_dcache_area at EL2, which needs arm64_ftr_reg_ctrel0.
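A minimal sketch of a hyp-side consumer, assuming the host has performed the copy during init (the accessor name is made up): because KVM_HYP_CPU_FTR_REG expands to an extern declaration under __KVM_NVHE_HYPERVISOR__, EL2 code can read the sanitised value from its private copy instead of the kernel's arm64_ftr_reg array, which is not mapped at EL2:

#include <asm/kvm_cpufeature.h>

static inline u64 hyp_sanitised_ctr_el0(void)
{
	return arm64_ftr_reg_ctrel0.sys_val;	/* EL2-local copy */
}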
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/cpufeature.h     |  1 +
 arch/arm64/include/asm/kvm_cpufeature.h | 17 ++++++++++++++
 arch/arm64/kernel/cpufeature.c          | 12 ++++++++++
 arch/arm64/kvm/arm.c                    | 31 +++++++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/Makefile        |  3 ++-
 arch/arm64/kvm/hyp/nvhe/cache.S         | 13 +++++++++++
 arch/arm64/kvm/hyp/nvhe/cpufeature.c    |  8 +++++++
 7 files changed, 84 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/include/asm/kvm_cpufeature.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/cache.S
 create mode 100644 arch/arm64/kvm/hyp/nvhe/cpufeature.c

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 16063c813dcd..742e9bcc051b 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -600,6 +600,7 @@ void __init setup_cpu_features(void);
 void check_local_cpu_capabilities(void);
 
 u64 read_sanitised_ftr_reg(u32 id);
+int copy_ftr_reg(u32 id, struct arm64_ftr_reg *dst);
 
 static inline bool cpu_supports_mixed_endian_el0(void)
 {

diff --git a/arch/arm64/include/asm/kvm_cpufeature.h b/arch/arm64/include/asm/kvm_cpufeature.h
new file mode 100644
index 000000000000..d34f85cba358
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_cpufeature.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2020 - Google LLC
+ * Author: Quentin Perret
+ */
+
+#include
+
+#ifndef KVM_HYP_CPU_FTR_REG
+#if defined(__KVM_NVHE_HYPERVISOR__)
+#define KVM_HYP_CPU_FTR_REG(id, name) extern struct arm64_ftr_reg name;
+#else
+#define KVM_HYP_CPU_FTR_REG(id, name) DECLARE_KVM_NVHE_SYM(name);
+#endif
+#endif
+
+KVM_HYP_CPU_FTR_REG(SYS_CTR_EL0, arm64_ftr_reg_ctrel0)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index bc3549663957..c2019aaaadc3 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1113,6 +1113,18 @@ u64 read_sanitised_ftr_reg(u32 id)
 }
 EXPORT_SYMBOL_GPL(read_sanitised_ftr_reg);
 
+int copy_ftr_reg(u32 id, struct arm64_ftr_reg *dst)
+{
+	struct arm64_ftr_reg *regp = get_arm64_ftr_reg(id);
+
+	if (!regp)
+		return -EINVAL;
+
+	memcpy(dst, regp, sizeof(*regp));
+
+	return 0;
+}
+
 #define read_sysreg_case(r)	\
	case r:		return read_sysreg_s(r)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 51b53ca36dc5..9fd769349e9e 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -34,6 +34,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1697,6 +1698,29 @@ static void teardown_hyp_mode(void)
	}
 }
 
+#undef KVM_HYP_CPU_FTR_REG
+#define KVM_HYP_CPU_FTR_REG(id, name) \
+	{ .sys_id = id, .dst = (struct arm64_ftr_reg *)&kvm_nvhe_sym(name) },
+static const struct __ftr_reg_copy_entry {
+	u32			sys_id;
+	struct arm64_ftr_reg	*dst;
+} hyp_ftr_regs[] = {
+	#include
+};
+
+static int copy_cpu_ftr_regs(void)
+{
+	int i, ret;
+
+	for (i = 0; i < ARRAY_SIZE(hyp_ftr_regs); i++) {
+		ret = copy_ftr_reg(hyp_ftr_regs[i].sys_id, hyp_ftr_regs[i].dst);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
 /**
  * Inits Hyp-mode on all online CPUs
  */
@@ -1705,6 +1729,13 @@ static int init_hyp_mode(void)
	int cpu;
	int err = 0;
 
+	/*
+	 * Copy the required CPU feature registers into their EL2 counterparts.
+	 */
+	err = copy_cpu_ftr_regs();
+	if (err)
+		return err;
+
	/*
	 * Allocate Hyp PGD and setup Hyp identity mapping
	 */

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 9e5eacfec6ec..72cfe53f106f 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -10,7 +10,8
@@ lib-objs := clear_page.o copy_page.o memcpy.o memset.o lib-objs := $(addprefix ../../../lib/, $(lib-objs)) obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \ - hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o + hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o \ + cache.o cpufeature.o obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \ ../fpsimd.o ../hyp-entry.o ../exception.o obj-y += $(lib-objs) diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S new file mode 100644 index 000000000000..36cef6915428 --- /dev/null +++ b/arch/arm64/kvm/hyp/nvhe/cache.S @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Code copied from arch/arm64/mm/cache.S. + */ + +#include +#include +#include + +SYM_FUNC_START_PI(__flush_dcache_area) + dcache_by_line_op civac, sy, x0, x1, x2, x3 + ret +SYM_FUNC_END_PI(__flush_dcache_area) diff --git a/arch/arm64/kvm/hyp/nvhe/cpufeature.c b/arch/arm64/kvm/hyp/nvhe/cpufeature.c new file mode 100644 index 000000000000..a887508f996f --- /dev/null +++ b/arch/arm64/kvm/hyp/nvhe/cpufeature.c @@ -0,0 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2020 - Google LLC + * Author: Quentin Perret + */ + +#define KVM_HYP_CPU_FTR_REG(id, name) struct arm64_ftr_reg name; +#include From patchwork Fri Jan 8 12:15:12 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Quentin Perret X-Patchwork-Id: 359214 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.6 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT, USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B597FC433E0 for ; Fri, 8 Jan 2021 12:19:28 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6285D22E02 for ; Fri, 8 Jan 2021 12:19:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727895AbhAHMRU (ORCPT ); Fri, 8 Jan 2021 07:17:20 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34284 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727760AbhAHMRH (ORCPT ); Fri, 8 Jan 2021 07:17:07 -0500 Received: from mail-qk1-x749.google.com (mail-qk1-x749.google.com [IPv6:2607:f8b0:4864:20::749]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 11C3DC061240 for ; Fri, 8 Jan 2021 04:15:57 -0800 (PST) Received: by mail-qk1-x749.google.com with SMTP id e25so9162955qka.3 for ; Fri, 08 Jan 2021 04:15:57 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=Qr6fo9oQ1pXE+2ggRpUPc27A93CFNZ1q8V/bSIAa7vQ=; b=Lgsjd8FEXR/txZoyV8WuF4f0d4YJhY9e1uIwY2ZKBxuqtEnQJg33daGu1ILE6X1yVm 3bmI+xMaWKSUxl2SagC/g4n5Md9L7G3vxikeo+mdAU7BfccT5W+kxzU5S7UG3feLKbme ERxijuVD+57qQr/KtwpIs8VkMV1mg6FWqyYzHjswMh2hCX94sLdVeK/2UtrPFh000pBo Wat+4kpdcMUp41zMURoOa/5m18zhBYzly85HHjcR5ZjnghiaHqmXLElzNX1WkR3dAuv3 zJfl09gvrPAlyBaybMTEqu+Yz+UH5M+Wv3AdrSOkrKLW83DqbiuFH6Ipyx6juV7+q36G 
PnNA==
Date: Fri, 8 Jan 2021 12:15:12 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-15-qperret@google.com>
Subject: [RFC PATCH v2 14/26] KVM: arm64: Factor out vector address calculation
From: Quentin Perret
List-ID: devicetree@vger.kernel.org

In order to re-map the guest vectors at EL2 when pKVM is enabled, refactor __kvm_vector_slot2idx() and kvm_init_vector_slot() to move all the address calculation logic into a static inline function.
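As a worked example of the arithmetic (slot numbering assumed from the arm64_hyp_spectre_vector enum ordering, with HYP_VECTOR_DIRECT first), idx = slot - (slot != HYP_VECTOR_DIRECT) collapses the two direct slots onto the same 2K-aligned entry:

/*
 * Assumed slot values, for illustration only:
 *   slot 0 (HYP_VECTOR_DIRECT)           -> idx 0 -> base + 0
 *   slot 1 (HYP_VECTOR_SPECTRE_DIRECT)   -> idx 0 -> base + 0
 *   slot 2 (HYP_VECTOR_INDIRECT)         -> idx 1 -> base + SZ_2K
 *   slot 3 (HYP_VECTOR_SPECTRE_INDIRECT) -> idx 2 -> base + 2 * SZ_2K
 */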
Signed-off-by: Quentin Perret Acked-by: Will Deacon --- arch/arm64/include/asm/kvm_mmu.h | 8 ++++++++ arch/arm64/kvm/arm.c | 9 +-------- 2 files changed, 9 insertions(+), 8 deletions(-) diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h index e52d82aeadca..d7ebd73ec86f 100644 --- a/arch/arm64/include/asm/kvm_mmu.h +++ b/arch/arm64/include/asm/kvm_mmu.h @@ -195,6 +195,14 @@ phys_addr_t kvm_mmu_get_httbr(void); phys_addr_t kvm_get_idmap_vector(void); int kvm_mmu_init(void); +static inline void *__kvm_vector_slot2addr(void *base, + enum arm64_hyp_spectre_vector slot) +{ + int idx = slot - (slot != HYP_VECTOR_DIRECT); + + return base + (idx * SZ_2K); +} + struct kvm; #define kvm_flush_dcache_to_poc(a,l) __flush_dcache_area((a), (l)) diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 9fd769349e9e..6af9204bcd5b 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -1346,16 +1346,9 @@ static unsigned long nvhe_percpu_order(void) /* A lookup table holding the hypervisor VA for each vector slot */ static void *hyp_spectre_vector_selector[BP_HARDEN_EL2_SLOTS]; -static int __kvm_vector_slot2idx(enum arm64_hyp_spectre_vector slot) -{ - return slot - (slot != HYP_VECTOR_DIRECT); -} - static void kvm_init_vector_slot(void *base, enum arm64_hyp_spectre_vector slot) { - int idx = __kvm_vector_slot2idx(slot); - - hyp_spectre_vector_selector[slot] = base + (idx * SZ_2K); + hyp_spectre_vector_selector[slot] = __kvm_vector_slot2addr(base, slot); } static int kvm_init_vector_slots(void) From patchwork Fri Jan 8 12:15:13 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Quentin Perret X-Patchwork-Id: 359917 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.6 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT, USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 67D42C43381 for ; Fri, 8 Jan 2021 12:18:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 176B7238E4 for ; Fri, 8 Jan 2021 12:18:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728082AbhAHMST (ORCPT ); Fri, 8 Jan 2021 07:18:19 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34416 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728086AbhAHMRc (ORCPT ); Fri, 8 Jan 2021 07:17:32 -0500 Received: from mail-wr1-x44a.google.com (mail-wr1-x44a.google.com [IPv6:2a00:1450:4864:20::44a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 07B0FC061242 for ; Fri, 8 Jan 2021 04:16:00 -0800 (PST) Received: by mail-wr1-x44a.google.com with SMTP id n11so4070910wro.7 for ; Fri, 08 Jan 2021 04:15:59 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=7wQIE5XecZh/pAi+A9ahJ0JiP2BIW9kKktEGdKjk9KE=; b=A8PxRTnMwBpbyy3mjy7HcZVKT4lcOkB6tCxDtqR0LOEWgzLXSW1lJKcuTnVfX5uu1/ jEpCOtIwnUA6Uui3Zv+q3w+TGLuWznDsE+qg8r9wx5p29uGmwnaWocLUmC/y6c+VyPQR 
JXTLKVSCPBeri30E44X4gWfFGk6vhc8hUAWuFi91Y/XmSgxFa9zzwfapa5aRNyF0pqcX
 W+FbQGjkGPLb6qITvlNZvexo7Qlg1gJ/bFXZdVD0DPrawILRu3CDrLNQmjmLAQBP+SE7
 aXSaZX+KY+NRDYk5QN4FYmPthdCftGMPdExRyClSUnhfXCNXEl/a/ZGyv9b8Z98mpNJ/
 y2aA==
Date: Fri, 8 Jan 2021 12:15:13 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-16-qperret@google.com>
Subject: [RFC PATCH v2 15/26] of/fdt: Introduce early_init_dt_add_memory_hyp()
From: Quentin Perret
List-ID: devicetree@vger.kernel.org

Introduce early_init_dt_add_memory_hyp() to allow KVM to keep a copy of the memory regions parsed from DT. This will be needed in the context of the protected nVHE feature of KVM/arm64, where the code running at EL2 will be cleanly separated from the host kernel during boot and will need its own representation of memory.
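For illustration only, an arch-side override of this weak hook could record each region in a static table for later donation to EL2 (hyp_memory and hyp_memblock_nr are hypothetical names; something along these lines appears later in the series):

#include <linux/memblock.h>

static struct memblock_region hyp_memory[64];	/* hypothetical storage */
static unsigned int hyp_memblock_nr;

void __init early_init_dt_add_memory_hyp(u64 base, u64 size)
{
	if (hyp_memblock_nr >= ARRAY_SIZE(hyp_memory))
		return;	/* table full: stop recording */

	hyp_memory[hyp_memblock_nr].base = base;
	hyp_memory[hyp_memblock_nr].size = size;
	hyp_memblock_nr++;
}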
Signed-off-by: Quentin Perret --- drivers/of/fdt.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c index 4602e467ca8b..af2b5a09c5b4 100644 --- a/drivers/of/fdt.c +++ b/drivers/of/fdt.c @@ -1099,6 +1099,10 @@ int __init early_init_dt_scan_chosen(unsigned long node, const char *uname, #define MAX_MEMBLOCK_ADDR ((phys_addr_t)~0) #endif +void __init __weak early_init_dt_add_memory_hyp(u64 base, u64 size) +{ +} + void __init __weak early_init_dt_add_memory_arch(u64 base, u64 size) { const u64 phys_offset = MIN_MEMBLOCK_ADDR; @@ -1139,6 +1143,7 @@ void __init __weak early_init_dt_add_memory_arch(u64 base, u64 size) base = phys_offset; } memblock_add(base, size); + early_init_dt_add_memory_hyp(base, size); } int __init __weak early_init_dt_mark_hotplug_memory_arch(u64 base, u64 size) From patchwork Fri Jan 8 12:15:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Quentin Perret X-Patchwork-Id: 359921 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.6 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT, USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 79F70C433DB for ; Fri, 8 Jan 2021 12:17:23 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4324A23A53 for ; Fri, 8 Jan 2021 12:17:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727889AbhAHMRT (ORCPT ); Fri, 8 Jan 2021 07:17:19 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34292 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727762AbhAHMRI (ORCPT ); Fri, 8 Jan 2021 07:17:08 -0500 Received: from mail-qk1-x749.google.com (mail-qk1-x749.google.com [IPv6:2607:f8b0:4864:20::749]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8B48AC061245 for ; Fri, 8 Jan 2021 04:16:01 -0800 (PST) Received: by mail-qk1-x749.google.com with SMTP id n13so9132846qkn.2 for ; Fri, 08 Jan 2021 04:16:01 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=2NTBmWUDdfaI5FYWekeVxtUkIS2pQYyvzIpsHMc8tx8=; b=SLo2Ym9dUAnQBQZZJ6obID7GgWiczJLUF5jVibi60f7Xqyfh2qWH0WsAjpmZcSzFbd fGXSlB/EbgW9bjgKgn6BzWnvAJu2XwCSQJSqpwT9PMjdk8+uibk3WYnLXosTPNh57UkD Y76zFy2r/azHUTA05zmEa/Qkb0lZ/nYFEQOm65k1itI1aCivvTzBPptNd6mfXFl+PfXA VfvStenVHUJqmq/peHxbem7pmtQACjJl5hmmnmA43Caqc9Z/TzEXL//Olx/fqKBfqjXZ 6aeEvEeYqYMEDNmG71X1vwiWRz6q3b7p6xHAX/UE4/9mtIHNhFr7KwS0NNeVNlg5ic6n MfzA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=2NTBmWUDdfaI5FYWekeVxtUkIS2pQYyvzIpsHMc8tx8=; b=MSo0YnMKslMjjEYIRgAvy6Vs1nf8hqCtvm6PTvOZ3mXN+UyVL3S5M+u+xWNPaGJUFP G1nlf3NgD+75opBUL7JyGtklxaU6iO66IH1whd9lXxbNTLNNgdvo/YjlV7RUk15VNOXE TvyEBrTaJozqahxcTvZ1gTwHCRHPNfQqDQ2MtRRkhjNoghIuabaLe0sx9A7QkApqxs1H hVEjLwEVjKx2keQ+oELVhtg/73h6DfmahDMxIfFOW9anXCpRMcDYtCPTUR79LVNaOfDf 
From patchwork Fri Jan 8 12:15:14 2021
Date: Fri, 8 Jan 2021 12:15:14 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-17-qperret@google.com>
Subject: [RFC PATCH v2 16/26] KVM: arm64: Prepare Hyp memory protection
From: Quentin Perret
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Julien Thierry, Suzuki K Poulose, Rob Herring, Frank Rowand
Cc: devicetree@vger.kernel.org, android-kvm@google.com, linux-kernel@vger.kernel.org, kernel-team@android.com, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Fuad Tabba, Mark Rutland, David Brazdil

When memory protection is enabled, the Hyp code needs the ability to create and manage its own page-table. To do so, introduce a new set of hypercalls to initialize Hyp memory protection.

During the init hcall, the hypervisor runs with the host-provided page-table and uses the trivial early page allocator to create its own set of page-tables, using a memory pool that was donated by the host. Specifically, the hypervisor creates its own mappings for __hyp_text, the Hyp memory pool, the __hyp_bss, and the portion of hyp_vmemmap corresponding to the Hyp pool, among other things. It then jumps back into the idmap page, switches to the newly-created pgd (instead of the temporary one provided by the host) and installs the full-fledged buddy allocator, which will be the only one in use from then on.

Note that, for the sake of simplifying the review, this only introduces the code doing this operation, without it actually being called by anything yet. This will be done in a subsequent patch, which will introduce the necessary host kernel changes.

Credits to Will for __pkvm_init_switch_pgd.
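[Editor's illustration: a rough sketch of how the host side drives the new init hcall. The call shape below matches kvm_hyp_enable_protection() introduced in the next patch of this series; the function name enable_hyp_protection_sketch and the stripped-down error handling are illustrative, not a drop-in implementation.]

static int enable_hyp_protection_sketch(void)
{
	void *per_cpu_base = kvm_ksym_ref(kvm_arm_hyp_percpu_base);

	/*
	 * One-shot init hcall: EL2 rebuilds its own page-table from the
	 * donated [hyp_mem_base, hyp_mem_base + hyp_mem_size) pool, then
	 * returns to the host running on its new pgd. After this, hyp
	 * mappings can only be created via the __pkvm_create_mappings
	 * and __pkvm_create_private_mapping hypercalls.
	 */
	return kvm_call_hyp_nvhe(__pkvm_init, hyp_mem_base, hyp_mem_size,
				 num_possible_cpus(),
				 kern_hyp_va(per_cpu_base));
}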
Co-authored-by: Will Deacon
Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_asm.h         |   4 +
 arch/arm64/include/asm/kvm_host.h        |   8 +
 arch/arm64/include/asm/kvm_hyp.h         |   8 +
 arch/arm64/kernel/image-vars.h           |  19 +++
 arch/arm64/kvm/hyp/Makefile              |   2 +-
 arch/arm64/kvm/hyp/include/nvhe/memory.h |   6 +
 arch/arm64/kvm/hyp/include/nvhe/mm.h     |  79 +++++++++
 arch/arm64/kvm/hyp/nvhe/Makefile         |   4 +-
 arch/arm64/kvm/hyp/nvhe/hyp-init.S       |  31 ++++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c       |  42 +++++
 arch/arm64/kvm/hyp/nvhe/mm.c             | 174 ++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/setup.c          | 196 +++++++++++++++++++++++
 arch/arm64/kvm/hyp/reserved_mem.c        | 102 ++++++++++++
 arch/arm64/kvm/mmu.c                     |   2 +-
 arch/arm64/mm/init.c                     |   3 +
 15 files changed, 676 insertions(+), 4 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/mm.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/mm.c
 create mode 100644 arch/arm64/kvm/hyp/nvhe/setup.c
 create mode 100644 arch/arm64/kvm/hyp/reserved_mem.c

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 7ccf770c53d9..4fc27ac08836 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -57,6 +57,10 @@
 #define __KVM_HOST_SMCCC_FUNC___kvm_get_mdcr_el2		12
 #define __KVM_HOST_SMCCC_FUNC___vgic_v3_save_aprs		13
 #define __KVM_HOST_SMCCC_FUNC___vgic_v3_restore_aprs		14
+#define __KVM_HOST_SMCCC_FUNC___pkvm_init			15
+#define __KVM_HOST_SMCCC_FUNC___pkvm_create_mappings		16
+#define __KVM_HOST_SMCCC_FUNC___pkvm_create_private_mapping	17
+#define __KVM_HOST_SMCCC_FUNC___pkvm_cpu_set_vector		18

 #ifndef __ASSEMBLY__

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 81212958ef55..9a2feb83eea0 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -777,4 +777,12 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
 #define kvm_vcpu_has_pmu(vcpu)				\
	(test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features))

+#ifdef CONFIG_KVM
+extern phys_addr_t hyp_mem_base;
+extern phys_addr_t hyp_mem_size;
+void __init kvm_hyp_reserve(void);
+#else
+static inline void kvm_hyp_reserve(void) { }
+#endif
+
 #endif /* __ARM64_KVM_HOST_H__ */

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index c0450828378b..a0e113734b20 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -100,4 +100,12 @@ void __noreturn hyp_panic(void);
 void __noreturn __hyp_do_panic(bool restore_host, u64 spsr, u64 elr, u64 par);
 #endif

+#ifdef __KVM_NVHE_HYPERVISOR__
+void __pkvm_init_switch_pgd(phys_addr_t phys, unsigned long size,
+			    phys_addr_t pgd, void *sp, void *cont_fn);
+int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
+		unsigned long *per_cpu_base);
+void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt);
+#endif
+
 #endif /* __ARM64_KVM_HYP_H__ */

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 43f3a1d6e92d..366d837f0d39 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -113,6 +113,25 @@ KVM_NVHE_ALIAS_HYP(__memcpy, __pi_memcpy);
 KVM_NVHE_ALIAS_HYP(__memset, __pi_memset);
 #endif

+/* Hypervisor VA size */
+KVM_NVHE_ALIAS(hyp_va_bits);
+
+/* Kernel memory sections */
+KVM_NVHE_ALIAS(__start_rodata);
+KVM_NVHE_ALIAS(__end_rodata);
+KVM_NVHE_ALIAS(__bss_start);
+KVM_NVHE_ALIAS(__bss_stop);
+
+/* Hyp memory sections */
+KVM_NVHE_ALIAS(__hyp_idmap_text_start);
+KVM_NVHE_ALIAS(__hyp_idmap_text_end);
+KVM_NVHE_ALIAS(__hyp_text_start);
+KVM_NVHE_ALIAS(__hyp_text_end);
+KVM_NVHE_ALIAS(__hyp_data_ro_after_init_start);
+KVM_NVHE_ALIAS(__hyp_data_ro_after_init_end);
+KVM_NVHE_ALIAS(__hyp_bss_start);
+KVM_NVHE_ALIAS(__hyp_bss_end);
+
 #endif /* CONFIG_KVM */

 #endif /* __ARM64_KERNEL_IMAGE_VARS_H */

diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
index 687598e41b21..b726332eec49 100644
--- a/arch/arm64/kvm/hyp/Makefile
+++ b/arch/arm64/kvm/hyp/Makefile
@@ -10,4 +10,4 @@ subdir-ccflags-y := -I$(incdir)				\
		    -DDISABLE_BRANCH_PROFILING		\
		    $(DISABLE_STACKLEAK_PLUGIN)

-obj-$(CONFIG_KVM) += vhe/ nvhe/ pgtable.o
+obj-$(CONFIG_KVM) += vhe/ nvhe/ pgtable.o reserved_mem.o

diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index ed47674bc988..c8af6fe87bfb 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -6,6 +6,12 @@

 #include

+#define HYP_MEMBLOCK_REGIONS 128
+struct hyp_memblock_region {
+	phys_addr_t start;
+	phys_addr_t end;
+};
+
 struct hyp_pool;
 struct hyp_page {
	unsigned int refcount;

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h
new file mode 100644
index 000000000000..f0cc09b127a5
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
@@ -0,0 +1,79 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __KVM_HYP_MM_H
+#define __KVM_HYP_MM_H
+
+#include
+#include
+#include
+
+#include
+#include
+
+extern struct hyp_memblock_region kvm_nvhe_sym(hyp_memory)[];
+extern int kvm_nvhe_sym(hyp_memblock_nr);
+extern struct kvm_pgtable pkvm_pgtable;
+extern hyp_spinlock_t pkvm_pgd_lock;
+extern struct hyp_pool hpool;
+extern u64 __io_map_base;
+extern u32 hyp_va_bits;
+
+int hyp_create_idmap(void);
+int hyp_map_vectors(void);
+int hyp_back_vmemmap(phys_addr_t phys, unsigned long size, phys_addr_t back);
+int pkvm_cpu_set_vector(enum arm64_hyp_spectre_vector slot);
+int pkvm_create_mappings(void *from, void *to, enum kvm_pgtable_prot prot);
+int __pkvm_create_mappings(unsigned long start, unsigned long size,
+			   unsigned long phys, unsigned long prot);
+unsigned long __pkvm_create_private_mapping(phys_addr_t phys, size_t size,
+					    unsigned long prot);
+
+static inline void hyp_vmemmap_range(phys_addr_t phys, unsigned long size,
+				     unsigned long *start, unsigned long *end)
+{
+	unsigned long nr_pages = size >> PAGE_SHIFT;
+	struct hyp_page *p = hyp_phys_to_page(phys);
+
+	*start = (unsigned long)p;
+	*end = *start + nr_pages * sizeof(struct hyp_page);
+	*start = ALIGN_DOWN(*start, PAGE_SIZE);
+	*end = ALIGN(*end, PAGE_SIZE);
+}
+
+static inline unsigned long __hyp_pgtable_max_pages(unsigned long nr_pages)
+{
+	unsigned long total = 0, i;
+
+	/* Provision the worst case scenario with 4 levels of page-table */
+	for (i = 0; i < 4; i++) {
+		nr_pages = DIV_ROUND_UP(nr_pages, PTRS_PER_PTE);
+		total += nr_pages;
+	}
+
+	return total;
+}
+
+static inline unsigned long hyp_s1_pgtable_size(void)
+{
+	struct hyp_memblock_region *reg;
+	unsigned long nr_pages, res = 0;
+	int i;
+
+	if (kvm_nvhe_sym(hyp_memblock_nr) <= 0)
+		return 0;
+
+	for (i = 0; i < kvm_nvhe_sym(hyp_memblock_nr); i++) {
+		reg = &kvm_nvhe_sym(hyp_memory)[i];
+		nr_pages = (reg->end - reg->start) >> PAGE_SHIFT;
+		nr_pages = __hyp_pgtable_max_pages(nr_pages);
+		res += nr_pages << PAGE_SHIFT;
+	}
+
+	/* Allow 1 GiB for private mappings */
+	nr_pages = (1 << 30) >> PAGE_SHIFT;
+	nr_pages = __hyp_pgtable_max_pages(nr_pages);
+	res += nr_pages << PAGE_SHIFT;
+
+	return res;
+}
+
+#endif /* __KVM_HYP_MM_H */
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 72cfe53f106f..d7381a503182 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -11,9 +11,9 @@ lib-objs := $(addprefix ../../../lib/, $(lib-objs))

 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
	 hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o \
-	 cache.o cpufeature.o
+	 cache.o cpufeature.o setup.o mm.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
-	 ../fpsimd.o ../hyp-entry.o ../exception.o
+	 ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o
 obj-y += $(lib-objs)

 ##

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-init.S b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
index 31b060a44045..ad943966c39f 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-init.S
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
@@ -251,4 +251,35 @@ alternative_else_nop_endif

 SYM_CODE_END(__kvm_handle_stub_hvc)

+SYM_FUNC_START(__pkvm_init_switch_pgd)
+	/* Turn the MMU off */
+	pre_disable_mmu_workaround
+	mrs	x2, sctlr_el2
+	bic	x3, x2, #SCTLR_ELx_M
+	msr	sctlr_el2, x3
+	isb
+
+	tlbi	alle2
+
+	/* Install the new pgtables */
+	ldr	x3, [x0, #NVHE_INIT_PGD_PA]
+	phys_to_ttbr x4, x3
+alternative_if ARM64_HAS_CNP
+	orr	x4, x4, #TTBR_CNP_BIT
+alternative_else_nop_endif
+	msr	ttbr0_el2, x4
+
+	/* Set the new stack pointer */
+	ldr	x0, [x0, #NVHE_INIT_STACK_HYP_VA]
+	mov	sp, x0
+
+	/* And turn the MMU back on! */
+	dsb	nsh
+	isb
+	msr	sctlr_el2, x2
+	ic	iallu
+	isb
+	ret	x1
+SYM_FUNC_END(__pkvm_init_switch_pgd)
+
	.popsection

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index a906f9e2ff34..3075f117651c 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -6,12 +6,14 @@

 #include

+#include
 #include
 #include
 #include
 #include
 #include

+#include
 #include

 DEFINE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
@@ -106,6 +108,42 @@ static void handle___vgic_v3_restore_aprs(struct kvm_cpu_context *host_ctxt)
	__vgic_v3_restore_aprs(kern_hyp_va(cpu_if));
 }

+static void handle___pkvm_init(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(phys_addr_t, phys, host_ctxt, 1);
+	DECLARE_REG(unsigned long, size, host_ctxt, 2);
+	DECLARE_REG(unsigned long, nr_cpus, host_ctxt, 3);
+	DECLARE_REG(unsigned long *, per_cpu_base, host_ctxt, 4);
+
+	cpu_reg(host_ctxt, 1) = __pkvm_init(phys, size, nr_cpus, per_cpu_base);
+}
+
+static void handle___pkvm_cpu_set_vector(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(enum arm64_hyp_spectre_vector, slot, host_ctxt, 1);
+
+	cpu_reg(host_ctxt, 1) = pkvm_cpu_set_vector(slot);
+}
+
+static void handle___pkvm_create_mappings(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(unsigned long, start, host_ctxt, 1);
+	DECLARE_REG(unsigned long, size, host_ctxt, 2);
+	DECLARE_REG(unsigned long, phys, host_ctxt, 3);
+	DECLARE_REG(unsigned long, prot, host_ctxt, 4);
+
+	cpu_reg(host_ctxt, 1) = __pkvm_create_mappings(start, size, phys, prot);
+}
+
+static void handle___pkvm_create_private_mapping(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(phys_addr_t, phys, host_ctxt, 1);
+	DECLARE_REG(size_t, size, host_ctxt, 2);
+	DECLARE_REG(unsigned long, prot, host_ctxt, 3);
+
+	cpu_reg(host_ctxt, 1) = __pkvm_create_private_mapping(phys, size, prot);
+}
+
 typedef void (*hcall_t)(struct kvm_cpu_context *);

 #define HANDLE_FUNC(x)	[__KVM_HOST_SMCCC_FUNC_##x] = kimg_fn_ptr(handle_##x)

@@ -125,6 +163,10 @@ static const hcall_t *host_hcall[] = {
	HANDLE_FUNC(__kvm_get_mdcr_el2),
	HANDLE_FUNC(__vgic_v3_save_aprs),
	HANDLE_FUNC(__vgic_v3_restore_aprs),
+	HANDLE_FUNC(__pkvm_init),
+	HANDLE_FUNC(__pkvm_cpu_set_vector),
+	HANDLE_FUNC(__pkvm_create_mappings),
+	HANDLE_FUNC(__pkvm_create_private_mapping),
 };

 static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)

diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
new file mode 100644
index 000000000000..f3481646a94e
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/mm.c
@@ -0,0 +1,174 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Google LLC
+ * Author: Quentin Perret
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+
+struct kvm_pgtable pkvm_pgtable;
+hyp_spinlock_t pkvm_pgd_lock;
+u64 __io_map_base;
+
+struct hyp_memblock_region hyp_memory[HYP_MEMBLOCK_REGIONS];
+int hyp_memblock_nr;
+
+int __pkvm_create_mappings(unsigned long start, unsigned long size,
+			   unsigned long phys, unsigned long prot)
+{
+	int err;
+
+	hyp_spin_lock(&pkvm_pgd_lock);
+	err = kvm_pgtable_hyp_map(&pkvm_pgtable, start, size, phys, prot);
+	hyp_spin_unlock(&pkvm_pgd_lock);
+
+	return err;
+}
+
+unsigned long __pkvm_create_private_mapping(phys_addr_t phys, size_t size,
+					    unsigned long prot)
+{
+	unsigned long addr;
+	int ret;
+
+	hyp_spin_lock(&pkvm_pgd_lock);
+
+	size = PAGE_ALIGN(size + offset_in_page(phys));
+	addr = __io_map_base;
+	__io_map_base += size;
+
+	/* Are we overflowing on the vmemmap ? */
+	if (__io_map_base > __hyp_vmemmap) {
+		__io_map_base -= size;
+		addr = 0;
+		goto out;
+	}
+
+	ret = kvm_pgtable_hyp_map(&pkvm_pgtable, addr, size, phys, prot);
+	if (ret) {
+		addr = 0;
+		goto out;
+	}
+
+	addr = addr + offset_in_page(phys);
+out:
+	hyp_spin_unlock(&pkvm_pgd_lock);
+
+	return addr;
+}
+
+int pkvm_create_mappings(void *from, void *to, enum kvm_pgtable_prot prot)
+{
+	unsigned long start = (unsigned long)from;
+	unsigned long end = (unsigned long)to;
+	unsigned long virt_addr;
+	phys_addr_t phys;
+
+	start = start & PAGE_MASK;
+	end = PAGE_ALIGN(end);
+
+	for (virt_addr = start; virt_addr < end; virt_addr += PAGE_SIZE) {
+		int err;
+
+		phys = hyp_virt_to_phys((void *)virt_addr);
+		err = __pkvm_create_mappings(virt_addr, PAGE_SIZE, phys, prot);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+int hyp_back_vmemmap(phys_addr_t phys, unsigned long size, phys_addr_t back)
+{
+	unsigned long start, end;
+
+	hyp_vmemmap_range(phys, size, &start, &end);
+
+	return __pkvm_create_mappings(start, end - start, back, PAGE_HYP);
+}
+
+static void *__hyp_bp_vect_base;
+int pkvm_cpu_set_vector(enum arm64_hyp_spectre_vector slot)
+{
+	void *vector;
+
+	switch (slot) {
+	case HYP_VECTOR_DIRECT: {
+		vector = hyp_symbol_addr(__kvm_hyp_vector);
+		break;
+	}
+	case HYP_VECTOR_SPECTRE_DIRECT: {
+		vector = hyp_symbol_addr(__bp_harden_hyp_vecs);
+		break;
+	}
+	case HYP_VECTOR_INDIRECT:
+	case HYP_VECTOR_SPECTRE_INDIRECT: {
+		vector = (void *)__hyp_bp_vect_base;
+		break;
+	}
+	default:
+		return -EINVAL;
+	}
+
+	vector = __kvm_vector_slot2addr(vector, slot);
+	*this_cpu_ptr(&kvm_hyp_vector) = (unsigned long)vector;
+
+	return 0;
+}
+
+int hyp_map_vectors(void)
+{
+	unsigned long bp_base;
+
+	if (!cpus_have_const_cap(ARM64_SPECTRE_V3A))
+		return 0;
+
+	bp_base = (unsigned long)hyp_symbol_addr(__bp_harden_hyp_vecs);
+	bp_base = __hyp_pa(bp_base);
+	bp_base = __pkvm_create_private_mapping(bp_base, __BP_HARDEN_HYP_VECS_SZ,
+						PAGE_HYP_EXEC);
+	if (!bp_base)
+		return -1;
+
+	__hyp_bp_vect_base = (void *)bp_base;
+
+	return 0;
+}
+
+int hyp_create_idmap(void)
+{
+	unsigned long start, end;
+
+	start = (unsigned long)hyp_symbol_addr(__hyp_idmap_text_start);
+	start = hyp_virt_to_phys((void *)start);
+	start = ALIGN_DOWN(start, PAGE_SIZE);
+
+	end = (unsigned long)hyp_symbol_addr(__hyp_idmap_text_end);
+	end = hyp_virt_to_phys((void *)end);
+	end = ALIGN(end, PAGE_SIZE);
+
+	/*
+	 * One half of the VA space is reserved to linearly map portions of
+	 * memory -- see va_layout.c for more details. The other half of the VA
+	 * space contains the trampoline page, and needs some care. Split that
+	 * second half in two and find the quarter of VA space not conflicting
+	 * with the idmap to place the IOs and the vmemmap. IOs use the lower
+	 * half of the quarter and the vmemmap the upper half.
+	 */
+	__io_map_base = start & BIT(hyp_va_bits - 2);
+	__io_map_base ^= BIT(hyp_va_bits - 2);
+	__hyp_vmemmap = __io_map_base | BIT(hyp_va_bits - 3);
+
+	return __pkvm_create_mappings(start, end - start, start, PAGE_HYP_EXEC);
+}

diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
new file mode 100644
index 000000000000..6d1faede86ae
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -0,0 +1,196 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Google LLC
+ * Author: Quentin Perret
+ */
+
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+
+struct hyp_pool hpool;
+struct kvm_pgtable_mm_ops pkvm_pgtable_mm_ops;
+unsigned long hyp_nr_cpus;
+
+#define hyp_percpu_size ((unsigned long)__per_cpu_end - \
+			 (unsigned long)__per_cpu_start)
+
+static void *stacks_base;
+static void *vmemmap_base;
+static void *hyp_pgt_base;
+
+static int divide_memory_pool(void *virt, unsigned long size)
+{
+	unsigned long vstart, vend, nr_pages;
+
+	hyp_early_alloc_init(virt, size);
+
+	stacks_base = hyp_early_alloc_contig(hyp_nr_cpus);
+	if (!stacks_base)
+		return -ENOMEM;
+
+	hyp_vmemmap_range(__hyp_pa(virt), size, &vstart, &vend);
+	nr_pages = (vend - vstart) >> PAGE_SHIFT;
+	vmemmap_base = hyp_early_alloc_contig(nr_pages);
+	if (!vmemmap_base)
+		return -ENOMEM;
+
+	nr_pages = hyp_s1_pgtable_size() >> PAGE_SHIFT;
+	hyp_pgt_base = hyp_early_alloc_contig(nr_pages);
+	if (!hyp_pgt_base)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
+				 unsigned long *per_cpu_base)
+{
+	void *start, *end, *virt = hyp_phys_to_virt(phys);
+	int ret, i;
+
+	/* Recreate the hyp page-table using the early page allocator */
+	hyp_early_alloc_init(hyp_pgt_base, hyp_s1_pgtable_size());
+	ret = kvm_pgtable_hyp_init(&pkvm_pgtable, hyp_va_bits,
+				   &hyp_early_alloc_mm_ops);
+	if (ret)
+		return ret;
+
+	ret = hyp_create_idmap();
+	if (ret)
+		return ret;
+
+	ret = hyp_map_vectors();
+	if (ret)
+		return ret;
+
+	ret = hyp_back_vmemmap(phys, size, hyp_virt_to_phys(vmemmap_base));
+	if (ret)
+		return ret;
+
+	ret = pkvm_create_mappings(hyp_symbol_addr(__hyp_text_start),
+				   hyp_symbol_addr(__hyp_text_end),
+				   PAGE_HYP_EXEC);
+	if (ret)
+		return ret;
+
+	ret = pkvm_create_mappings(hyp_symbol_addr(__start_rodata),
+				   hyp_symbol_addr(__end_rodata), PAGE_HYP_RO);
+	if (ret)
+		return ret;
+
+	ret = pkvm_create_mappings(hyp_symbol_addr(__hyp_data_ro_after_init_start),
+				   hyp_symbol_addr(__hyp_data_ro_after_init_end),
+				   PAGE_HYP_RO);
+	if (ret)
+		return ret;
+
+	ret = pkvm_create_mappings(hyp_symbol_addr(__bss_start),
+				   hyp_symbol_addr(__hyp_bss_end), PAGE_HYP);
+	if (ret)
+		return ret;
+
+	ret = pkvm_create_mappings(hyp_symbol_addr(__hyp_bss_end),
+				   hyp_symbol_addr(__bss_stop), PAGE_HYP_RO);
+	if (ret)
+		return ret;
+
+	ret = pkvm_create_mappings(virt, virt + size - 1, PAGE_HYP);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < hyp_nr_cpus; i++) {
+		start = (void *)kern_hyp_va(per_cpu_base[i]);
+		end = start + PAGE_ALIGN(hyp_percpu_size);
+		ret = pkvm_create_mappings(start, end, PAGE_HYP);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static void update_nvhe_init_params(void)
+{
+	struct kvm_nvhe_init_params *params;
+	unsigned long i, stack;
+
+	for (i = 0; i < hyp_nr_cpus; i++) {
+		stack = (unsigned long)stacks_base + (i << PAGE_SHIFT);
+		params = per_cpu_ptr(&kvm_init_params, i);
+		params->stack_hyp_va = stack + PAGE_SIZE;
+		params->pgd_pa = __hyp_pa(pkvm_pgtable.pgd);
+		__flush_dcache_area(params, sizeof(*params));
+	}
+}
+
+static void *hyp_zalloc_hyp_page(void *arg)
+{
+	return hyp_alloc_pages(&hpool, HYP_GFP_ZERO, 0);
+}
+
+void __noreturn __pkvm_init_finalise(void)
+{
+	struct kvm_host_data *host_data = this_cpu_ptr(&kvm_host_data);
+	struct kvm_cpu_context *host_ctxt = &host_data->host_ctxt;
+	unsigned long nr_pages, used_pages;
+	int ret;
+
+	/* Now that the vmemmap is backed, install the full-fledged allocator */
+	nr_pages = hyp_s1_pgtable_size() >> PAGE_SHIFT;
+	used_pages = hyp_early_alloc_nr_pages();
+	ret = hyp_pool_init(&hpool, __hyp_pa(hyp_pgt_base), nr_pages, used_pages);
+	if (ret)
+		goto out;
+
+	pkvm_pgtable_mm_ops.zalloc_page = hyp_zalloc_hyp_page;
+	pkvm_pgtable_mm_ops.phys_to_virt = hyp_phys_to_virt;
+	pkvm_pgtable_mm_ops.virt_to_phys = hyp_virt_to_phys;
+	pkvm_pgtable_mm_ops.get_page = hyp_get_page;
+	pkvm_pgtable_mm_ops.put_page = hyp_put_page;
+	pkvm_pgtable.mm_ops = &pkvm_pgtable_mm_ops;
+
+out:
+	host_ctxt->regs.regs[0] = SMCCC_RET_SUCCESS;
+	host_ctxt->regs.regs[1] = ret;
+
+	__host_enter(host_ctxt);
+}
+
+int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
+		unsigned long *per_cpu_base)
+{
+	struct kvm_nvhe_init_params *params;
+	void *virt = hyp_phys_to_virt(phys);
+	void (*fn)(phys_addr_t params_pa, void *finalize_fn_va);
+	int ret;
+
+	if (phys % PAGE_SIZE || size % PAGE_SIZE || (u64)virt % PAGE_SIZE)
+		return -EINVAL;
+
+	hyp_spin_lock_init(&pkvm_pgd_lock);
+	hyp_nr_cpus = nr_cpus;
+
+	ret = divide_memory_pool(virt, size);
+	if (ret)
+		return ret;
+
+	ret = recreate_hyp_mappings(phys, size, per_cpu_base);
+	if (ret)
+		return ret;
+
+	update_nvhe_init_params();
+
+	/* Jump in the idmap page to switch to the new page-tables */
+	params = this_cpu_ptr(&kvm_init_params);
+	fn = (typeof(fn))__hyp_pa(hyp_symbol_addr(__pkvm_init_switch_pgd));
+	fn(__hyp_pa(params), hyp_symbol_addr(__pkvm_init_finalise));
+
+	unreachable();
+}

diff --git a/arch/arm64/kvm/hyp/reserved_mem.c b/arch/arm64/kvm/hyp/reserved_mem.c
new file mode 100644
index 000000000000..32f648992835
--- /dev/null
+++ b/arch/arm64/kvm/hyp/reserved_mem.c
@@ -0,0 +1,102 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020 - Google LLC
+ * Author: Quentin Perret
+ */
+
+#include
+#include
+#include
+
+#include
+
+#include
+#include
+
+phys_addr_t hyp_mem_base;
+phys_addr_t hyp_mem_size;
+
+int __init early_init_dt_add_memory_hyp(u64 base, u64 size)
+{
+	struct hyp_memblock_region *reg;
+
+	if (kvm_nvhe_sym(hyp_memblock_nr) >= HYP_MEMBLOCK_REGIONS)
+		kvm_nvhe_sym(hyp_memblock_nr) = -1;
+
+	if (kvm_nvhe_sym(hyp_memblock_nr) < 0)
+		return -ENOMEM;
+
+	reg = kvm_nvhe_sym(hyp_memory);
+	reg[kvm_nvhe_sym(hyp_memblock_nr)].start = base;
+	reg[kvm_nvhe_sym(hyp_memblock_nr)].end = base + size;
+	kvm_nvhe_sym(hyp_memblock_nr)++;
+
+	return 0;
+}
+
+static int cmp_hyp_memblock(const void *p1, const void *p2)
+{
+	const struct hyp_memblock_region *r1 = p1;
+	const struct hyp_memblock_region *r2 = p2;
+
+	return r1->start < r2->start ? -1 : (r1->start > r2->start);
+}
+
+static void __init sort_memblock_regions(void)
+{
+	sort(kvm_nvhe_sym(hyp_memory),
+	     kvm_nvhe_sym(hyp_memblock_nr),
+	     sizeof(struct hyp_memblock_region),
+	     cmp_hyp_memblock,
+	     NULL);
+}
+
+void __init kvm_hyp_reserve(void)
+{
+	u64 nr_pages, prev;
+
+	if (!is_hyp_mode_available() || is_kernel_in_hyp_mode())
+		return;
+
+	if (kvm_get_mode() != KVM_MODE_PROTECTED)
+		return;
+
+	if (kvm_nvhe_sym(hyp_memblock_nr) < 0) {
+		kvm_err("Failed to register hyp memblocks\n");
+		return;
+	}
+
+	sort_memblock_regions();
+
+	/*
+	 * We don't know the number of possible CPUs yet, so allocate for the
+	 * worst case.
+	 */
+	hyp_mem_size += NR_CPUS << PAGE_SHIFT;
+	hyp_mem_size += hyp_s1_pgtable_size();
+
+	/*
+	 * The hyp_vmemmap needs to be backed by pages, but these pages
+	 * themselves need to be present in the vmemmap, so compute the number
+	 * of pages needed by looking for a fixed point.
+	 */
+	nr_pages = 0;
+	do {
+		prev = nr_pages;
+		nr_pages = (hyp_mem_size >> PAGE_SHIFT) + prev;
+		nr_pages = DIV_ROUND_UP(nr_pages * sizeof(struct hyp_page), PAGE_SIZE);
+		nr_pages += __hyp_pgtable_max_pages(nr_pages);
+	} while (nr_pages != prev);
+	hyp_mem_size += nr_pages << PAGE_SHIFT;
+
+	hyp_mem_base = memblock_find_in_range(0, memblock_end_of_DRAM(),
+					      hyp_mem_size, SZ_2M);
+	if (!hyp_mem_base) {
+		kvm_err("Failed to reserve hyp memory\n");
+		return;
+	}
+	memblock_reserve(hyp_mem_base, hyp_mem_size);
+
+	kvm_info("Reserved %lld MiB at 0x%llx\n", hyp_mem_size >> 20,
+		 hyp_mem_base);
+}

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 278e163beda4..3cf9397dabdb 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1264,10 +1264,10 @@ static struct kvm_pgtable_mm_ops kvm_hyp_mm_ops = {
	.virt_to_phys		= kvm_host_pa,
 };

+u32 hyp_va_bits;
 int kvm_mmu_init(void)
 {
	int err;
-	u32 hyp_va_bits;

	hyp_idmap_start = __pa_symbol(__hyp_idmap_text_start);
	hyp_idmap_start = ALIGN_DOWN(hyp_idmap_start, PAGE_SIZE);

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 095540667f0f..903ad0b0476c 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -34,6 +34,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -420,6 +421,8 @@ void __init bootmem_init(void)

	dma_pernuma_cma_reserve();

+	kvm_hyp_reserve();
+
	/*
	 * sparse_init() tries to allocate memory from memblock, so must be
	 * done after the fixed reservations
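[Editor's illustration: the vmemmap sizing in kvm_hyp_reserve() above needs a fixed point because each page of the pool needs a struct hyp_page in the vmemmap, the vmemmap pages themselves need entries, and page-table pages are needed to map them all. Below is a small, self-contained C demo of the same iteration. The constants are assumptions for illustration only -- 4 KiB pages, a 16-byte struct hyp_page, a 1 GiB pool -- none of them taken from the series.]

#include <stdio.h>

/* Illustrative constants -- assumptions, not values from the series. */
#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PTRS_PER_PTE	512
#define HYP_PAGE_SZ	16	/* assumed sizeof(struct hyp_page) */

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Mirrors __hyp_pgtable_max_pages(): worst-case pages over 4 levels. */
static unsigned long pgtable_max_pages(unsigned long nr_pages)
{
	unsigned long total = 0;
	int i;

	for (i = 0; i < 4; i++) {
		nr_pages = DIV_ROUND_UP(nr_pages, PTRS_PER_PTE);
		total += nr_pages;
	}
	return total;
}

int main(void)
{
	unsigned long hyp_mem_size = 1UL << 30;	/* 1 GiB pool (example) */
	unsigned long nr_pages = 0, prev;

	/* Same fixed-point iteration as kvm_hyp_reserve() above. */
	do {
		prev = nr_pages;
		nr_pages = (hyp_mem_size >> PAGE_SHIFT) + prev;
		nr_pages = DIV_ROUND_UP(nr_pages * HYP_PAGE_SZ, PAGE_SIZE);
		nr_pages += pgtable_max_pages(nr_pages);
	} while (nr_pages != prev);

	/* Converges in a few iterations (1035 pages for these inputs). */
	printf("extra vmemmap+pgtable pages: %lu (%lu KiB)\n",
	       nr_pages, nr_pages * PAGE_SIZE / 1024);
	return 0;
}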
From patchwork Fri Jan 8 12:15:15 2021
Date: Fri, 8 Jan 2021 12:15:15 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-18-qperret@google.com>
Subject: [RFC PATCH v2 17/26] KVM: arm64: Elevate Hyp mappings creation at EL2
From: Quentin Perret
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Julien Thierry, Suzuki K Poulose, Rob Herring, Frank Rowand
Cc: devicetree@vger.kernel.org, android-kvm@google.com, linux-kernel@vger.kernel.org, kernel-team@android.com, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Fuad Tabba, Mark Rutland, David Brazdil

Previous commits have introduced infrastructure at EL2 to enable the Hyp code to manage its own memory, and more specifically its stage 1 page tables. However, this was preliminary work, and none of it is currently in use. Put all of this together by elevating the hyp mappings creation at EL2 when memory protection is enabled.
In this case, the host kernel running at EL1 still creates _temporary_ Hyp mappings, only used while initializing the hypervisor, but frees them right after. As such, all calls to create_hyp_mappings() after kvm init has finished turn into hypercalls, as the host now has no 'legal' way to modify the hypervisor page tables directly.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_mmu.h |  1 -
 arch/arm64/kvm/arm.c             | 62 +++++++++++++++++++++++++++++---
 arch/arm64/kvm/mmu.c             | 34 ++++++++++++++++++
 3 files changed, 92 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index d7ebd73ec86f..6c8466a042a9 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -309,6 +309,5 @@ static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu)
	 */
	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
 }
-
 #endif /* __ASSEMBLY__ */
 #endif /* __ARM64_KVM_MMU_H__ */

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 6af9204bcd5b..e524682c2ccf 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1421,7 +1421,7 @@ static void cpu_prepare_hyp_mode(int cpu)
	kvm_flush_dcache_to_poc(params, sizeof(*params));
 }

-static void cpu_init_hyp_mode(void)
+static void kvm_set_hyp_vector(void)
 {
	struct kvm_nvhe_init_params *params;
	struct arm_smccc_res res;
@@ -1439,6 +1439,11 @@ static void cpu_init_hyp_mode(void)
	params = this_cpu_ptr_nvhe_sym(kvm_init_params);
	arm_smccc_1_1_hvc(KVM_HOST_SMCCC_FUNC(__kvm_hyp_init), virt_to_phys(params), &res);
	WARN_ON(res.a0 != SMCCC_RET_SUCCESS);
+}
+
+static void cpu_init_hyp_mode(void)
+{
+	kvm_set_hyp_vector();

	/*
	 * Disabling SSBD on a non-VHE system requires us to enable SSBS
@@ -1481,7 +1486,10 @@ static void cpu_set_hyp_vector(void)
	struct bp_hardening_data *data = this_cpu_ptr(&bp_hardening_data);
	void *vector = hyp_spectre_vector_selector[data->slot];

-	*this_cpu_ptr_hyp_sym(kvm_hyp_vector) = (unsigned long)vector;
+	if (!is_protected_kvm_enabled())
+		*this_cpu_ptr_hyp_sym(kvm_hyp_vector) = (unsigned long)vector;
+	else
+		kvm_call_hyp_nvhe(__pkvm_cpu_set_vector, data->slot);
 }

 static void cpu_hyp_reinit(void)
@@ -1489,13 +1497,14 @@ static void cpu_hyp_reinit(void)
	kvm_init_host_cpu_context(&this_cpu_ptr_hyp_sym(kvm_host_data)->host_ctxt);

	cpu_hyp_reset();
-	cpu_set_hyp_vector();

	if (is_kernel_in_hyp_mode())
		kvm_timer_init_vhe();
	else
		cpu_init_hyp_mode();

+	cpu_set_hyp_vector();
+
	kvm_arm_init_debug();

	if (vgic_present)
@@ -1714,13 +1723,52 @@ static int copy_cpu_ftr_regs(void)
	return 0;
 }

+static int kvm_hyp_enable_protection(void)
+{
+	void *per_cpu_base = kvm_ksym_ref(kvm_arm_hyp_percpu_base);
+	int ret, cpu;
+	void *addr;
+
+	if (!is_protected_kvm_enabled())
+		return 0;
+
+	if (!hyp_mem_base)
+		return -ENOMEM;
+
+	addr = phys_to_virt(hyp_mem_base);
+	ret = create_hyp_mappings(addr, addr + hyp_mem_size - 1, PAGE_HYP);
+	if (ret)
+		return ret;
+
+	preempt_disable();
+	kvm_set_hyp_vector();
+	ret = kvm_call_hyp_nvhe(__pkvm_init, hyp_mem_base, hyp_mem_size,
+				num_possible_cpus(), kern_hyp_va(per_cpu_base));
+	preempt_enable();
+	if (ret)
+		return ret;
+
+	free_hyp_pgds();
+	for_each_possible_cpu(cpu)
+		free_page(per_cpu(kvm_arm_hyp_stack_page, cpu));
+
+	return 0;
+}
+
 /**
  * Inits Hyp-mode on all online CPUs
  */
 static int init_hyp_mode(void)
 {
	int cpu;
-	int err = 0;
+	int err = -ENOMEM;
+
+	/*
+	 * The protected Hyp-mode cannot be initialized if the memory pool
+	 * allocation has failed.
+	 */
+	if (is_protected_kvm_enabled() && !hyp_mem_base)
+		return err;

	/*
	 * Copy the required CPU feature register in their EL2 counterpart
@@ -1854,6 +1902,12 @@ static int init_hyp_mode(void)
	for_each_possible_cpu(cpu)
		cpu_prepare_hyp_mode(cpu);

+	err = kvm_hyp_enable_protection();
+	if (err) {
+		kvm_err("Failed to enable hyp memory protection: %d\n", err);
+		goto out_err;
+	}
+
	return 0;

 out_err:

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 3cf9397dabdb..9d4c9251208e 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -225,15 +225,39 @@ void free_hyp_pgds(void)
	if (hyp_pgtable) {
		kvm_pgtable_hyp_destroy(hyp_pgtable);
		kfree(hyp_pgtable);
+		hyp_pgtable = NULL;
	}
	mutex_unlock(&kvm_hyp_pgd_mutex);
 }

+static bool kvm_host_owns_hyp_mappings(void)
+{
+	if (static_branch_likely(&kvm_protected_mode_initialized))
+		return false;
+
+	/*
+	 * This can happen at boot time when __create_hyp_mappings() is called
+	 * after the hyp protection has been enabled, but the static key has
+	 * not been flipped yet.
+	 */
+	if (!hyp_pgtable && is_protected_kvm_enabled())
+		return false;
+
+	BUG_ON(!hyp_pgtable);
+
+	return true;
+}
+
 static int __create_hyp_mappings(unsigned long start, unsigned long size,
				 unsigned long phys, enum kvm_pgtable_prot prot)
 {
	int err;

+	if (!kvm_host_owns_hyp_mappings()) {
+		return kvm_call_hyp_nvhe(__pkvm_create_mappings,
+					 start, size, phys, prot);
+	}
+
	mutex_lock(&kvm_hyp_pgd_mutex);
	err = kvm_pgtable_hyp_map(hyp_pgtable, start, size, phys, prot);
	mutex_unlock(&kvm_hyp_pgd_mutex);
@@ -295,6 +319,16 @@ static int __create_hyp_private_mapping(phys_addr_t phys_addr, size_t size,
	unsigned long base;
	int ret = 0;

+	if (!kvm_host_owns_hyp_mappings()) {
+		base = kvm_call_hyp_nvhe(__pkvm_create_private_mapping,
+					 phys_addr, size, prot);
+		if (!base)
+			return -ENOMEM;
+		*haddr = base;
+
+		return 0;
+	}
+
	mutex_lock(&kvm_hyp_pgd_mutex);

	/*
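[Editor's illustration: a usage sketch of the private-mapping hypercall path just added to __create_hyp_private_mapping(). The hypervisor returns the new hyp VA, or 0 on failure (0 is never a valid private-range VA), which the host converts to -ENOMEM. The function name map_device_into_hyp and the choice of PAGE_HYP_DEVICE here are illustrative assumptions, not code from the series.]

static int map_device_into_hyp(phys_addr_t phys, size_t size,
			       unsigned long *hyp_va)
{
	unsigned long va;

	/* Ask EL2 to pick a slot in its private VA range and map it. */
	va = kvm_call_hyp_nvhe(__pkvm_create_private_mapping,
			       phys, size, PAGE_HYP_DEVICE);
	if (!va)
		return -ENOMEM;

	*hyp_va = va;
	return 0;
}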
From patchwork Fri Jan 8 12:15:16 2021
Date: Fri, 8 Jan 2021 12:15:16 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-19-qperret@google.com>
Subject: [RFC PATCH v2 18/26] KVM: arm64: Use kvm_arch for stage 2 pgtable
From: Quentin Perret
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Julien Thierry, Suzuki K Poulose, Rob Herring, Frank Rowand
Cc: devicetree@vger.kernel.org, android-kvm@google.com, linux-kernel@vger.kernel.org, kernel-team@android.com, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Fuad Tabba, Mark Rutland, David Brazdil

In order to make use of the stage 2 pgtable code for the host stage 2, use struct kvm_arch in lieu of struct kvm as the host will have the former but not the latter.

Signed-off-by: Quentin Perret
Acked-by: Will Deacon
---
 arch/arm64/include/asm/kvm_pgtable.h | 5 +++--
 arch/arm64/kvm/hyp/pgtable.c         | 6 +++---
 arch/arm64/kvm/mmu.c                 | 2 +-
 3 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 45acc9dc6c45..8e8f1d2c5e0e 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -151,12 +151,13 @@ int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 addr, u64 size, u64 phys,
 /**
  * kvm_pgtable_stage2_init() - Initialise a guest stage-2 page-table.
  * @pgt:	Uninitialised page-table structure to initialise.
- * @kvm:	KVM structure representing the guest virtual machine.
+ * @arch:	Arch-specific KVM structure representing the guest virtual
+ *		machine.
  * @mm_ops:	Memory management callbacks.
  *
  * Return: 0 on success, negative error code on failure.
  */
-int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm *kvm,
+int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_arch *arch,
			    struct kvm_pgtable_mm_ops *mm_ops);

 /**

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 61a8a34ddfdb..96a25d0b7b6e 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -855,11 +855,11 @@ int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)
	return kvm_pgtable_walk(pgt, addr, size, &walker);
 }

-int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm *kvm,
+int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_arch *arch,
			    struct kvm_pgtable_mm_ops *mm_ops)
 {
	size_t pgd_sz;
-	u64 vtcr = kvm->arch.vtcr;
+	u64 vtcr = arch->vtcr;
	u32 ia_bits = VTCR_EL2_IPA(vtcr);
	u32 sl0 = FIELD_GET(VTCR_EL2_SL0_MASK, vtcr);
	u32 start_level = VTCR_EL2_TGRAN_SL0_BASE - sl0;
@@ -872,7 +872,7 @@ int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm *kvm,
	pgt->ia_bits		= ia_bits;
	pgt->start_level	= start_level;
	pgt->mm_ops		= mm_ops;
-	pgt->mmu		= &kvm->arch.mmu;
+	pgt->mmu		= &arch->mmu;

	/* Ensure zeroed PGD pages are visible to the hardware walker */
	dsb(ishst);

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 9d4c9251208e..7e6263103943 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -461,7 +461,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu)
	if (!pgt)
		return -ENOMEM;

-	err = kvm_pgtable_stage2_init(pgt, kvm, &kvm_s2_mm_ops);
+	err = kvm_pgtable_stage2_init(pgt, &kvm->arch, &kvm_s2_mm_ops);
	if (err)
		goto out_free_pgtable;
From patchwork Fri Jan 8 12:15:17 2021
Date: Fri, 8 Jan 2021 12:15:17 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-20-qperret@google.com>
Subject: [RFC PATCH v2 19/26] KVM: arm64: Use kvm_arch in kvm_s2_mmu
From: Quentin Perret
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Julien Thierry, Suzuki K Poulose, Rob Herring, Frank Rowand
Cc: devicetree@vger.kernel.org, android-kvm@google.com, linux-kernel@vger.kernel.org, kernel-team@android.com, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Fuad Tabba, Mark Rutland, David Brazdil

In order to make use of the stage 2 pgtable code for the host stage 2, change kvm_s2_mmu to use a kvm_arch pointer in lieu of the kvm pointer, as the host will have the former but not the latter.
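[Editor's illustration: callers that still need the enclosing struct kvm can recover it from the kvm_arch pointer because kvm_arch is embedded in struct kvm -- that is exactly what the kvm_s2_mmu_to_kvm() helper added below does. Here is a minimal, standalone demonstration of the container_of pattern with toy stand-in types, not the kernel's definitions:]

#include <stddef.h>
#include <stdio.h>

/* Toy stand-ins for struct kvm / struct kvm_arch / struct kvm_s2_mmu. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct kvm_arch { unsigned long vtcr; };

struct kvm {
	int id;
	struct kvm_arch arch;	/* embedded, so container_of() can recover kvm */
};

struct kvm_s2_mmu { struct kvm_arch *arch; };

static struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
{
	return container_of(mmu->arch, struct kvm, arch);
}

int main(void)
{
	struct kvm vm = { .id = 42 };
	struct kvm_s2_mmu mmu = { .arch = &vm.arch };

	/*
	 * Works for guest MMUs, whose kvm_arch lives inside a struct kvm;
	 * the host's stage 2 will have a bare kvm_arch, so the helper must
	 * only ever be applied to guest mmus.
	 */
	printf("vm id = %d\n", kvm_s2_mmu_to_kvm(&mmu)->id);
	return 0;
}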
Signed-off-by: Quentin Perret
Acked-by: Will Deacon
---
 arch/arm64/include/asm/kvm_host.h | 2 +-
 arch/arm64/include/asm/kvm_mmu.h  | 7 ++++++-
 arch/arm64/kvm/mmu.c              | 8 ++++----
 3 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9a2feb83eea0..9d59bebcc5ef 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -95,7 +95,7 @@ struct kvm_s2_mmu {
	/* The last vcpu id that ran on each physical CPU */
	int __percpu *last_vcpu_ran;

-	struct kvm *kvm;
+	struct kvm_arch *arch;
 };

 struct kvm_arch_memory_slot {

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 6c8466a042a9..662f0415344e 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -299,7 +299,7 @@ static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu *mmu)
  */
 static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu)
 {
-	write_sysreg(kern_hyp_va(mmu->kvm)->arch.vtcr, vtcr_el2);
+	write_sysreg(kern_hyp_va(mmu->arch)->vtcr, vtcr_el2);
	write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);

	/*
@@ -309,5 +309,10 @@ static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu)
	 */
	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
 }
+
+static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
+{
+	return container_of(mmu->arch, struct kvm, arch);
+}
 #endif /* __ASSEMBLY__ */
 #endif /* __ARM64_KVM_MMU_H__ */

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 7e6263103943..6f9bf71722bd 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -169,7 +169,7 @@ static void *kvm_host_va(phys_addr_t phys)
 static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size,
				 bool may_block)
 {
-	struct kvm *kvm = mmu->kvm;
+	struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
	phys_addr_t end = start + size;

	assert_spin_locked(&kvm->mmu_lock);
@@ -474,7 +474,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu)
	for_each_possible_cpu(cpu)
		*per_cpu_ptr(mmu->last_vcpu_ran, cpu) = -1;

-	mmu->kvm = kvm;
+	mmu->arch = &kvm->arch;
	mmu->pgt = pgt;
	mmu->pgd_phys = __pa(pgt->pgd);
	mmu->vmid.vmid_gen = 0;
@@ -556,7 +556,7 @@ void stage2_unmap_vm(struct kvm *kvm)

 void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
 {
-	struct kvm *kvm = mmu->kvm;
+	struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
	struct kvm_pgtable *pgt = NULL;

	spin_lock(&kvm->mmu_lock);
@@ -625,7 +625,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
  */
 static void stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
 {
-	struct kvm *kvm = mmu->kvm;
+	struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);

	stage2_apply_range_resched(kvm, addr, end, kvm_pgtable_stage2_wrprotect);
 }
From patchwork Fri Jan 8 12:15:18 2021
Date: Fri, 8 Jan 2021 12:15:18 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-21-qperret@google.com>
Subject: [RFC PATCH v2 20/26] KVM: arm64: Set host stage 2 using kvm_nvhe_init_params
From: Quentin Perret
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Julien Thierry, Suzuki K Poulose, Rob Herring, Frank Rowand
Cc: devicetree@vger.kernel.org, android-kvm@google.com, linux-kernel@vger.kernel.org, kernel-team@android.com, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Fuad Tabba, Mark Rutland, David Brazdil

Move the registers relevant to host stage 2 enablement to kvm_nvhe_init_params to prepare the ground for enabling it in later patches.
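[Editor's illustration: the mechanism relied on here is that EL1 fills a per-CPU kvm_nvhe_init_params structure and cleans it to the point of coherency, and the EL2 init code loads each field by its asm-offsets constant ("ldr x1, [x0, #NVHE_INIT_*]") before its MMU and caches are up. Below is a hedged sketch of the producer side, paraphrasing cpu_prepare_hyp_mode() as extended by this patch; the function name prepare_params_sketch and the reduced field set are illustrative, and the exact accessor used by the real code may differ.]

static void prepare_params_sketch(int cpu, bool protected_mode)
{
	/* Assumption: this mirrors how cpu_prepare_hyp_mode() gets params. */
	struct kvm_nvhe_init_params *params =
		per_cpu_ptr_nvhe_sym(kvm_init_params, cpu);

	params->hcr_el2 = protected_mode ? HCR_HOST_NVHE_PROTECTED_FLAGS
					 : HCR_HOST_NVHE_FLAGS;
	/* Host stage 2 stays disabled for now, hence the zeroed registers. */
	params->vttbr = 0;
	params->vtcr = 0;

	/* EL2 reads this with the MMU off, so clean it to the PoC. */
	kvm_flush_dcache_to_poc(params, sizeof(*params));
}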
Signed-off-by: Quentin Perret
Acked-by: Will Deacon
---
 arch/arm64/include/asm/kvm_asm.h   | 3 +++
 arch/arm64/kernel/asm-offsets.c    | 3 +++
 arch/arm64/kvm/arm.c               | 5 +++++
 arch/arm64/kvm/hyp/nvhe/hyp-init.S | 9 +++++++++
 arch/arm64/kvm/hyp/nvhe/switch.c   | 5 +----
 5 files changed, 21 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 4fc27ac08836..5354b05eb9e2 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -158,6 +158,9 @@ struct kvm_nvhe_init_params {
	unsigned long tpidr_el2;
	unsigned long stack_hyp_va;
	phys_addr_t pgd_pa;
+	unsigned long hcr_el2;
+	unsigned long vttbr;
+	unsigned long vtcr;
 };

 /* Translate a kernel address @ptr into its equivalent linear mapping */

diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 5e82488f1b82..9cf7736e31db 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -114,6 +114,9 @@ int main(void)
   DEFINE(NVHE_INIT_TPIDR_EL2,	offsetof(struct kvm_nvhe_init_params, tpidr_el2));
   DEFINE(NVHE_INIT_STACK_HYP_VA,	offsetof(struct kvm_nvhe_init_params, stack_hyp_va));
   DEFINE(NVHE_INIT_PGD_PA,	offsetof(struct kvm_nvhe_init_params, pgd_pa));
+  DEFINE(NVHE_INIT_HCR_EL2,	offsetof(struct kvm_nvhe_init_params, hcr_el2));
+  DEFINE(NVHE_INIT_VTTBR,	offsetof(struct kvm_nvhe_init_params, vttbr));
+  DEFINE(NVHE_INIT_VTCR,	offsetof(struct kvm_nvhe_init_params, vtcr));
 #endif
 #ifdef CONFIG_CPU_PM
   DEFINE(CPU_CTX_SP,		offsetof(struct cpu_suspend_ctx, sp));

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e524682c2ccf..00cee4489cd7 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1413,6 +1413,11 @@ static void cpu_prepare_hyp_mode(int cpu)
	params->stack_hyp_va = kern_hyp_va(per_cpu(kvm_arm_hyp_stack_page, cpu) + PAGE_SIZE);
	params->pgd_pa = kvm_mmu_get_httbr();
+	if (is_protected_kvm_enabled())
+		params->hcr_el2 = HCR_HOST_NVHE_PROTECTED_FLAGS;
+	else
+		params->hcr_el2 = HCR_HOST_NVHE_FLAGS;
+	params->vttbr = params->vtcr = 0;

	/*
	 * Flush the init params from the data cache because the struct will

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-init.S b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
index ad943966c39f..b1341bb4b453 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-init.S
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
@@ -102,6 +102,15 @@ alternative_else_nop_endif
	ldr	x1, [x0, #NVHE_INIT_MAIR_EL2]
	msr	mair_el2, x1

+	ldr	x1, [x0, #NVHE_INIT_HCR_EL2]
+	msr	hcr_el2, x1
+
+	ldr	x1, [x0, #NVHE_INIT_VTTBR]
+	msr	vttbr_el2, x1
+
+	ldr	x1, [x0, #NVHE_INIT_VTCR]
+	msr	vtcr_el2, x1
+
	ldr	x1, [x0, #NVHE_INIT_PGD_PA]
	phys_to_ttbr x2, x1
 alternative_if ARM64_HAS_CNP

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index f3d0e9eca56c..979a76cdf9fb 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -97,10 +97,7 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;

	write_sysreg(mdcr_el2, mdcr_el2);
-	if (is_protected_kvm_enabled())
-		write_sysreg(HCR_HOST_NVHE_PROTECTED_FLAGS, hcr_el2);
-	else
-		write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2);
+	write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2);
	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
	write_sysreg(__kvm_hyp_host_vector, vbar_el2);
 }
From patchwork Fri Jan 8 12:15:19 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 359220
Date: Fri, 8 Jan 2021 12:15:19 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-22-qperret@google.com>
References: <20210108121524.656872-1-qperret@google.com>
Subject: [RFC PATCH v2 21/26] KVM: arm64: Refactor kvm_arm_setup_stage2()
From: Quentin Perret
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Julien Thierry,
    Suzuki K Poulose, Rob Herring, Frank Rowand
Cc: devicetree@vger.kernel.org, android-kvm@google.com,
    linux-kernel@vger.kernel.org, kernel-team@android.com,
    kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
    Fuad Tabba, Mark Rutland, David Brazdil
X-Mailing-List: devicetree@vger.kernel.org

In order to re-use some of the stage 2 setup at EL2, factor parts of
kvm_arm_setup_stage2() out into static inline functions.

No functional change intended.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_mmu.h | 48 ++++++++++++++++++++++++++++++++
 arch/arm64/kvm/reset.c           | 42 +++-------------------------
 2 files changed, 52 insertions(+), 38 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 662f0415344e..83b4c5cf4768 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -280,6 +280,54 @@ static inline int kvm_write_guest_lock(struct kvm *kvm, gpa_t gpa,
 	return ret;
 }
 
+static inline u64 kvm_get_parange(u64 mmfr0)
+{
+	u64 parange = cpuid_feature_extract_unsigned_field(mmfr0,
+				ID_AA64MMFR0_PARANGE_SHIFT);
+	if (parange > ID_AA64MMFR0_PARANGE_MAX)
+		parange = ID_AA64MMFR0_PARANGE_MAX;
+
+	return parange;
+}
+
+/*
+ * The VTCR value is common across all the physical CPUs on the system.
+ * We use system wide sanitised values to fill in different fields,
+ * except for Hardware Management of Access Flags. HA Flag is set
+ * unconditionally on all CPUs, as it is safe to run with or without
+ * the feature and the bit is RES0 on CPUs that don't support it.
+ */
+static inline u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift)
+{
+	u64 vtcr = VTCR_EL2_FLAGS;
+	u8 lvls;
+
+	vtcr |= kvm_get_parange(mmfr0) << VTCR_EL2_PS_SHIFT;
+	vtcr |= VTCR_EL2_T0SZ(phys_shift);
+	/*
+	 * Use a minimum 2 level page table to prevent splitting
+	 * host PMD huge pages at stage2.
+	 */
+	lvls = stage2_pgtable_levels(phys_shift);
+	if (lvls < 2)
+		lvls = 2;
+	vtcr |= VTCR_EL2_LVLS_TO_SL0(lvls);
+
+	/*
+	 * Enable the Hardware Access Flag management, unconditionally
+	 * on all CPUs. The features is RES0 on CPUs without the support
+	 * and must be ignored by the CPUs.
+	 */
+	vtcr |= VTCR_EL2_HA;
+
+	/* Set the vmid bits */
+	vtcr |= (get_vmid_bits(mmfr1) == 16) ?
+		VTCR_EL2_VS_16BIT :
+		VTCR_EL2_VS_8BIT;
+
+	return vtcr;
+}
+
 #define kvm_phys_to_vttbr(addr)		phys_to_ttbr(addr)
 
 static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu *mmu)
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 47f3f035f3ea..6aae118c960a 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -332,19 +332,10 @@ int kvm_set_ipa_limit(void)
 	return 0;
 }
 
-/*
- * Configure the VTCR_EL2 for this VM. The VTCR value is common
- * across all the physical CPUs on the system. We use system wide
- * sanitised values to fill in different fields, except for Hardware
- * Management of Access Flags. HA Flag is set unconditionally on
- * all CPUs, as it is safe to run with or without the feature and
- * the bit is RES0 on CPUs that don't support it.
- */
 int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
 {
-	u64 vtcr = VTCR_EL2_FLAGS, mmfr0;
-	u32 parange, phys_shift;
-	u8 lvls;
+	u64 mmfr0, mmfr1;
+	u32 phys_shift;
 
 	if (type & ~KVM_VM_TYPE_ARM_IPA_SIZE_MASK)
 		return -EINVAL;
@@ -359,33 +350,8 @@ int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
 	}
 
 	mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
-	parange = cpuid_feature_extract_unsigned_field(mmfr0,
-				ID_AA64MMFR0_PARANGE_SHIFT);
-	if (parange > ID_AA64MMFR0_PARANGE_MAX)
-		parange = ID_AA64MMFR0_PARANGE_MAX;
-	vtcr |= parange << VTCR_EL2_PS_SHIFT;
-
-	vtcr |= VTCR_EL2_T0SZ(phys_shift);
-	/*
-	 * Use a minimum 2 level page table to prevent splitting
-	 * host PMD huge pages at stage2.
-	 */
-	lvls = stage2_pgtable_levels(phys_shift);
-	if (lvls < 2)
-		lvls = 2;
-	vtcr |= VTCR_EL2_LVLS_TO_SL0(lvls);
-
-	/*
-	 * Enable the Hardware Access Flag management, unconditionally
-	 * on all CPUs. The features is RES0 on CPUs without the support
-	 * and must be ignored by the CPUs.
-	 */
-	vtcr |= VTCR_EL2_HA;
+	mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
+	kvm->arch.vtcr = kvm_get_vtcr(mmfr0, mmfr1, phys_shift);
 
-	/* Set the vmid bits */
-	vtcr |= (kvm_get_vmid_bits() == 16) ?
-		VTCR_EL2_VS_16BIT :
-		VTCR_EL2_VS_8BIT;
-	kvm->arch.vtcr = vtcr;
 	return 0;
 }
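A usage note, not part of the patch: once kvm_get_vtcr() is a pure function
of the sanitised feature registers, any context that can read those
registers can compute a VTCR without going through kvm_arm_setup_stage2().
A minimal sketch (caller and IPA size are made up):

	u64 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
	u64 mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
	u64 vtcr  = kvm_get_vtcr(mmfr0, mmfr1, 40);	/* 40-bit IPA space */

This is what lets patch 26 below build the host VTCR at EL2, where the
sanitised values are reachable but a struct kvm is not.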
From patchwork Fri Jan 8 12:15:20 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 359219
Date: Fri, 8 Jan 2021 12:15:20 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-23-qperret@google.com>
References: <20210108121524.656872-1-qperret@google.com>
Subject: [RFC PATCH v2 22/26] KVM: arm64: Refactor __load_guest_stage2()
From: Quentin Perret
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Julien Thierry,
    Suzuki K Poulose, Rob Herring, Frank Rowand
Cc: devicetree@vger.kernel.org, android-kvm@google.com,
    linux-kernel@vger.kernel.org, kernel-team@android.com,
    kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
    Fuad Tabba, Mark Rutland, David Brazdil
X-Mailing-List: devicetree@vger.kernel.org

Refactor __load_guest_stage2() to introduce __load_stage2() which will
be re-used when loading the host stage 2.

Signed-off-by: Quentin Perret
Acked-by: Will Deacon
---
 arch/arm64/include/asm/kvm_mmu.h | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 83b4c5cf4768..8d37d6d1ed29 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -345,9 +345,9 @@ static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu *mmu)
  * Must be called from hyp code running at EL2 with an updated VTTBR
  * and interrupts disabled.
  */
-static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu)
+static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu, unsigned long vtcr)
 {
-	write_sysreg(kern_hyp_va(mmu->arch)->vtcr, vtcr_el2);
+	write_sysreg(vtcr, vtcr_el2);
 	write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);
 
 	/*
@@ -358,6 +358,11 @@ static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu)
 	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
 }
 
+static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu)
+{
+	__load_stage2(mmu, kern_hyp_va(mmu->arch)->vtcr);
+}
+
 static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
 {
 	return container_of(mmu->arch, struct kvm, arch);
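The point of the split, as a two-line sketch (the host call site is the one
patch 26 below adds; it is shown here for illustration only):

	__load_guest_stage2(mmu);				/* guest: VTCR fetched via mmu->arch */
	__load_stage2(&host_kvm.arch.mmu, host_kvm.arch.vtcr);	/* host: VTCR passed in explicitly */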
From patchwork Fri Jan 8 12:15:21 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 359216
Date: Fri, 8 Jan 2021 12:15:21 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-24-qperret@google.com>
References: <20210108121524.656872-1-qperret@google.com>
Subject: [RFC PATCH v2 23/26] KVM: arm64: Refactor __populate_fault_info()
From: Quentin Perret
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Julien Thierry,
    Suzuki K Poulose, Rob Herring, Frank Rowand
Cc: devicetree@vger.kernel.org, android-kvm@google.com,
    linux-kernel@vger.kernel.org, kernel-team@android.com,
    kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
    Fuad Tabba, Mark Rutland, David Brazdil
X-Mailing-List: devicetree@vger.kernel.org

Refactor __populate_fault_info() to introduce __get_fault_info() which
will be used once the host is wrapped in a stage 2.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/hyp/switch.h | 36 +++++++++++++++----------
 1 file changed, 22 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 84473574c2e7..e9005255d639 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -157,19 +157,9 @@ static inline bool __translate_far_to_hpfar(u64 far, u64 *hpfar)
 	return true;
 }
 
-static inline bool __populate_fault_info(struct kvm_vcpu *vcpu)
+static inline bool __get_fault_info(u64 esr, u64 *far, u64 *hpfar)
 {
-	u8 ec;
-	u64 esr;
-	u64 hpfar, far;
-
-	esr = vcpu->arch.fault.esr_el2;
-	ec = ESR_ELx_EC(esr);
-
-	if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW)
-		return true;
-
-	far = read_sysreg_el2(SYS_FAR);
+	*far = read_sysreg_el2(SYS_FAR);
 
 	/*
 	 * The HPFAR can be invalid if the stage 2 fault did not
@@ -185,12 +175,30 @@ static inline bool __populate_fault_info(struct kvm_vcpu *vcpu)
 	if (!(esr & ESR_ELx_S1PTW) &&
 	    (cpus_have_final_cap(ARM64_WORKAROUND_834220) ||
 	     (esr & ESR_ELx_FSC_TYPE) == FSC_PERM)) {
-		if (!__translate_far_to_hpfar(far, &hpfar))
+		if (!__translate_far_to_hpfar(*far, hpfar))
 			return false;
 	} else {
-		hpfar = read_sysreg(hpfar_el2);
+		*hpfar = read_sysreg(hpfar_el2);
 	}
 
+	return true;
+}
+
+static inline bool __populate_fault_info(struct kvm_vcpu *vcpu)
+{
+	u8 ec;
+	u64 esr;
+	u64 hpfar, far;
+
+	esr = vcpu->arch.fault.esr_el2;
+	ec = ESR_ELx_EC(esr);
+
+	if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW)
+		return true;
+
+	if (!__get_fault_info(esr, &far, &hpfar))
+		return false;
+
 	vcpu->arch.fault.far_el2 = far;
 	vcpu->arch.fault.hpfar_el2 = hpfar;
 	return true;
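For context, this is the second consumer the split prepares for: a
condensed sketch of the host fault path added by patch 26 below (panicking
on failure is that patch's policy, not an invention here):

	u64 esr = read_sysreg_el2(SYS_ESR);
	u64 far, hpfar;

	if (!__get_fault_info(esr, &far, &hpfar))
		hyp_panic();	/* no vcpu to populate: the fault is the host's */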
From patchwork Fri Jan 8 12:15:22 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 359218
Date: Fri, 8 Jan 2021 12:15:22 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-25-qperret@google.com>
References: <20210108121524.656872-1-qperret@google.com>
Subject: [RFC PATCH v2 24/26] KVM: arm64: Make memcache anonymous in pgtable allocator
From: Quentin Perret
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Julien Thierry,
    Suzuki K Poulose, Rob Herring, Frank Rowand
Cc: devicetree@vger.kernel.org, android-kvm@google.com,
    linux-kernel@vger.kernel.org, kernel-team@android.com,
    kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
    Fuad Tabba, Mark Rutland, David Brazdil
X-Mailing-List: devicetree@vger.kernel.org

The current stage2 page-table allocator uses a memcache to get
pre-allocated pages when it needs any. To allow re-using this code at
EL2, which uses a concept of memory pools, make the memcache argument
to kvm_pgtable_stage2_map() anonymous, and let the mm_ops zalloc_page()
callbacks use it the way they need to.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_pgtable.h | 6 +++---
 arch/arm64/kvm/hyp/pgtable.c         | 4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 8e8f1d2c5e0e..d846bc3d3b77 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -176,8 +176,8 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt);
  * @size:	Size of the mapping.
  * @phys:	Physical address of the memory to map.
  * @prot:	Permissions and attributes for the mapping.
- * @mc:		Cache of pre-allocated GFP_PGTABLE_USER memory from which to
- *		allocate page-table pages.
+ * @mc:		Cache of pre-allocated memory from which to allocate page-table
+ *		pages.
  *
  * The offset of @addr within a page is ignored, @size is rounded-up to
  * the next page boundary and @phys is rounded-down to the previous page
@@ -194,7 +194,7 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt);
  */
 int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 			   u64 phys, enum kvm_pgtable_prot prot,
-			   struct kvm_mmu_memory_cache *mc);
+			   void *mc);
 
 /**
  * kvm_pgtable_stage2_unmap() - Remove a mapping from a guest stage-2 page-table.
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 96a25d0b7b6e..5dd1b4978fe8 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -443,7 +443,7 @@ struct stage2_map_data {
 	kvm_pte_t			*anchor;
 
 	struct kvm_s2_mmu		*mmu;
-	struct kvm_mmu_memory_cache	*memcache;
+	void				*memcache;
 
 	struct kvm_pgtable_mm_ops	*mm_ops;
 };
@@ -613,7 +613,7 @@ static int stage2_map_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 
 int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 			   u64 phys, enum kvm_pgtable_prot prot,
-			   struct kvm_mmu_memory_cache *mc)
+			   void *mc)
 {
 	int ret;
 	struct stage2_map_data map_data = {
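A sketch of why the opaque pointer is enough: each kvm_pgtable_mm_ops
instance casts the anonymous memcache back to whatever its allocator
expects. The EL2 callback below is the one patch 26 installs; the EL1 one
is hypothetical shorthand for the existing memcache-based allocation:

	static void *host_s2_zalloc_page(void *pool)	/* EL2: pool is a struct hyp_pool * */
	{
		return hyp_alloc_pages(pool, HYP_GFP_ZERO, 0);
	}

	static void *memcache_zalloc_page(void *mc)	/* EL1 (sketch): mc is a kvm_mmu_memory_cache * */
	{
		return kvm_mmu_memory_cache_alloc(mc);
	}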
From patchwork Fri Jan 8 12:15:23 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 359919
Date: Fri, 8 Jan 2021 12:15:23 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-26-qperret@google.com>
References: <20210108121524.656872-1-qperret@google.com>
Subject: [RFC PATCH v2 25/26] KVM: arm64: Reserve memory for host stage 2
From: Quentin Perret
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Julien Thierry,
    Suzuki K Poulose, Rob Herring, Frank Rowand
Cc: devicetree@vger.kernel.org, android-kvm@google.com,
    linux-kernel@vger.kernel.org, kernel-team@android.com,
    kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
    Fuad Tabba, Mark Rutland, David Brazdil
X-Mailing-List: devicetree@vger.kernel.org

Extend the memory pool allocated for the hypervisor to include enough
pages to map all of memory at page granularity for the host stage 2.
While at it, also reserve some memory for device mappings.
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/mm.h | 36 ++++++++++++++++++++++++----
 arch/arm64/kvm/hyp/nvhe/setup.c      | 12 ++++++++++
 arch/arm64/kvm/hyp/reserved_mem.c    |  2 ++
 3 files changed, 46 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h
index f0cc09b127a5..cdf2e3447b2a 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
@@ -52,15 +52,12 @@ static inline unsigned long __hyp_pgtable_max_pages(unsigned long nr_pages)
 	return total;
 }
 
-static inline unsigned long hyp_s1_pgtable_size(void)
+static inline unsigned long __hyp_pgtable_total_size(void)
 {
 	struct hyp_memblock_region *reg;
 	unsigned long nr_pages, res = 0;
 	int i;
 
-	if (kvm_nvhe_sym(hyp_memblock_nr) <= 0)
-		return 0;
-
 	for (i = 0; i < kvm_nvhe_sym(hyp_memblock_nr); i++) {
 		reg = &kvm_nvhe_sym(hyp_memory)[i];
 		nr_pages = (reg->end - reg->start) >> PAGE_SHIFT;
@@ -68,6 +65,18 @@ static inline unsigned long hyp_s1_pgtable_size(void)
 		res += nr_pages << PAGE_SHIFT;
 	}
 
+	return res;
+}
+
+static inline unsigned long hyp_s1_pgtable_size(void)
+{
+	unsigned long res, nr_pages;
+
+	if (kvm_nvhe_sym(hyp_memblock_nr) <= 0)
+		return 0;
+
+	res = __hyp_pgtable_total_size();
+
 	/* Allow 1 GiB for private mappings */
 	nr_pages = (1 << 30) >> PAGE_SHIFT;
 	nr_pages = __hyp_pgtable_max_pages(nr_pages);
@@ -76,4 +85,23 @@ static inline unsigned long hyp_s1_pgtable_size(void)
 	return res;
 }
 
+static inline unsigned long host_s2_mem_pgtable_size(void)
+{
+	unsigned long max_pgd_sz = 16 << PAGE_SHIFT;
+
+	if (kvm_nvhe_sym(hyp_memblock_nr) <= 0)
+		return 0;
+
+	return __hyp_pgtable_total_size() + max_pgd_sz;
+}
+
+static inline unsigned long host_s2_dev_pgtable_size(void)
+{
+	if (kvm_nvhe_sym(hyp_memblock_nr) <= 0)
+		return 0;
+
+	/* Allow 1 GiB for private mappings */
+	return __hyp_pgtable_max_pages((1 << 30) >> PAGE_SHIFT) << PAGE_SHIFT;
+}
+
 #endif /* __KVM_HYP_MM_H */
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 6d1faede86ae..79b697df01e2 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -24,6 +24,8 @@ unsigned long hyp_nr_cpus;
 static void *stacks_base;
 static void *vmemmap_base;
 static void *hyp_pgt_base;
+static void *host_s2_mem_pgt_base;
+static void *host_s2_dev_pgt_base;
 
 static int divide_memory_pool(void *virt, unsigned long size)
 {
@@ -46,6 +48,16 @@ static int divide_memory_pool(void *virt, unsigned long size)
 	if (!hyp_pgt_base)
 		return -ENOMEM;
 
+	nr_pages = host_s2_mem_pgtable_size() >> PAGE_SHIFT;
+	host_s2_mem_pgt_base = hyp_early_alloc_contig(nr_pages);
+	if (!host_s2_mem_pgt_base)
+		return -ENOMEM;
+
+	nr_pages = host_s2_dev_pgtable_size() >> PAGE_SHIFT;
+	host_s2_dev_pgt_base = hyp_early_alloc_contig(nr_pages);
+	if (!host_s2_dev_pgt_base)
+		return -ENOMEM;
+
 	return 0;
 }
diff --git a/arch/arm64/kvm/hyp/reserved_mem.c b/arch/arm64/kvm/hyp/reserved_mem.c
index 32f648992835..ee97e55e3c59 100644
--- a/arch/arm64/kvm/hyp/reserved_mem.c
+++ b/arch/arm64/kvm/hyp/reserved_mem.c
@@ -74,6 +74,8 @@ void __init kvm_hyp_reserve(void)
 	 */
 	hyp_mem_size += NR_CPUS << PAGE_SHIFT;
 	hyp_mem_size += hyp_s1_pgtable_size();
+	hyp_mem_size += host_s2_mem_pgtable_size();
+	hyp_mem_size += host_s2_dev_pgtable_size();
 
 	/*
 	 * The hyp_vmemmap needs to be backed by pages, but these pages
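Back-of-envelope sizing, to make the reservation concrete (editorial
arithmetic, not from the patch; __hyp_pgtable_max_pages() above does the
real computation):

	/* Rough cost with 4 KiB pages:
	 *   level-3: one 4 KiB table (512 PTEs) maps 2 MiB
	 *            -> 512 tables per GiB of memory = 2 MiB of tables
	 *   level-2 and up: a handful of extra pages per GiB
	 *   PGD: at most 16 pages (max_pgd_sz) = 64 KiB, once
	 * so host_s2_mem_pgtable_size() is roughly 0.2% of RAM, and the
	 * 1 GiB device window adds about 2 MiB more via
	 * host_s2_dev_pgtable_size().
	 */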
From patchwork Fri Jan 8 12:15:24 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 359217
Date: Fri, 8 Jan 2021 12:15:24 +0000
In-Reply-To: <20210108121524.656872-1-qperret@google.com>
Message-Id: <20210108121524.656872-27-qperret@google.com>
References: <20210108121524.656872-1-qperret@google.com>
Subject: [RFC PATCH v2 26/26] KVM: arm64: Wrap the host with a stage 2
From: Quentin Perret
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Julien Thierry,
    Suzuki K Poulose, Rob Herring, Frank Rowand
Cc: devicetree@vger.kernel.org, android-kvm@google.com,
    linux-kernel@vger.kernel.org, kernel-team@android.com,
    kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
    Fuad Tabba, Mark Rutland, David Brazdil
X-Mailing-List: devicetree@vger.kernel.org

When KVM runs in protected nVHE mode, make use of a stage 2 page-table
to give the hypervisor some control over the host memory accesses. For
now, all memory aborts from the host are handled lazily by idmapping
the faulting address RWX at stage 2. Later patches will use this
infrastructure to implement access control restrictions, e.g. to
protect guest memory from the host.
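In outline, the fault path this patch adds looks as follows; the
mem_protect.c hunk in the diff below is the real implementation, this is
just a condensed sketch of its shape:

	/* Host stage-2 abort handling, condensed from mem_protect.c below. */
	esr = read_sysreg_el2(SYS_ESR);
	if (!__get_fault_info(esr, &far, &hpfar))
		hyp_panic();

	ipa = (hpfar & HPFAR_MASK) << 8;	/* faulting IPA */
	ret = host_stage2_map(ipa, PAGE_SIZE,
			      KVM_PGTABLE_PROT_R | KVM_PGTABLE_PROT_W | KVM_PGTABLE_PROT_X);
	if (ret)
		hyp_panic();	/* e.g. page-table pool exhausted */

host_stage2_map() picks the memory or device pool by binary-searching the
sorted memblock list, and recycles device mappings on -ENOMEM before
giving up.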
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_cpufeature.h       |   2 +
 arch/arm64/kernel/image-vars.h                |   3 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  33 +++
 arch/arm64/kvm/hyp/nvhe/Makefile              |   2 +-
 arch/arm64/kvm/hyp/nvhe/hyp-init.S            |   1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            |   6 +
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 191 ++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/setup.c               |   6 +
 arch/arm64/kvm/hyp/nvhe/switch.c              |   7 +-
 arch/arm64/kvm/hyp/nvhe/tlb.c                 |   4 +-
 10 files changed, 248 insertions(+), 7 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/mem_protect.c

diff --git a/arch/arm64/include/asm/kvm_cpufeature.h b/arch/arm64/include/asm/kvm_cpufeature.h
index d34f85cba358..74043a149322 100644
--- a/arch/arm64/include/asm/kvm_cpufeature.h
+++ b/arch/arm64/include/asm/kvm_cpufeature.h
@@ -15,3 +15,5 @@
 #endif
 
 KVM_HYP_CPU_FTR_REG(SYS_CTR_EL0, arm64_ftr_reg_ctrel0)
+KVM_HYP_CPU_FTR_REG(SYS_ID_AA64MMFR0_EL1, arm64_ftr_reg_id_aa64mmfr0_el1)
+KVM_HYP_CPU_FTR_REG(SYS_ID_AA64MMFR1_EL1, arm64_ftr_reg_id_aa64mmfr1_el1)
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 366d837f0d39..e4e4f30ac251 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -132,6 +132,9 @@ KVM_NVHE_ALIAS(__hyp_data_ro_after_init_end);
 KVM_NVHE_ALIAS(__hyp_bss_start);
 KVM_NVHE_ALIAS(__hyp_bss_end);
 
+/* pKVM static key */
+KVM_NVHE_ALIAS(kvm_protected_mode_initialized);
+
 #endif /* CONFIG_KVM */
 
 #endif /* __ARM64_KERNEL_IMAGE_VARS_H */
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
new file mode 100644
index 000000000000..a22ef118a610
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2020 Google LLC
+ * Author: Quentin Perret
+ */
+
+#ifndef __KVM_NVHE_MEM_PROTECT__
+#define __KVM_NVHE_MEM_PROTECT__
+#include
+#include
+#include
+#include
+#include
+
+struct host_kvm {
+	struct kvm_arch arch;
+	struct kvm_pgtable pgt;
+	struct kvm_pgtable_mm_ops mm_ops;
+	hyp_spinlock_t lock;
+};
+extern struct host_kvm host_kvm;
+
+int kvm_host_prepare_stage2(void *mem_pgt_pool, void *dev_pgt_pool);
+void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
+
+static __always_inline void __load_host_stage2(void)
+{
+	if (static_branch_likely(&kvm_protected_mode_initialized))
+		__load_stage2(&host_kvm.arch.mmu, host_kvm.arch.vtcr);
+	else
+		write_sysreg(0, vttbr_el2);
+}
+#endif /* __KVM_NVHE_MEM_PROTECT__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index d7381a503182..c3e2f98555c4 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -11,7 +11,7 @@ lib-objs := $(addprefix ../../../lib/, $(lib-objs))
 
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
 	 hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o \
-	 cache.o cpufeature.o setup.o mm.o
+	 cache.o cpufeature.o setup.o mm.o mem_protect.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
 	 ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o
 obj-y += $(lib-objs)
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-init.S b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
index b1341bb4b453..32591db76c75 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-init.S
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
@@ -129,6 +129,7 @@ alternative_else_nop_endif
 
 	/* Invalidate the stale TLBs from Bootloader */
 	tlbi	alle2
+	tlbi	vmalls12e1
 	dsb	sy
 
 	/*
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 3075f117651c..93699600bc22 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -13,6 +13,7 @@
 #include
 #include
+#include
 #include
 #include
@@ -222,6 +223,11 @@ void handle_trap(struct kvm_cpu_context *host_ctxt)
 	case ESR_ELx_EC_SMC64:
 		handle_host_smc(host_ctxt);
 		break;
+	case ESR_ELx_EC_IABT_LOW:
+		fallthrough;
+	case ESR_ELx_EC_DABT_LOW:
+		handle_host_mem_abort(host_ctxt);
+		break;
 	default:
 		hyp_panic();
 	}
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
new file mode 100644
index 000000000000..0cd3eb178f3b
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -0,0 +1,191 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Google LLC
+ * Author: Quentin Perret
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+
+#include
+#include
+#include
+#include
+
+extern unsigned long hyp_nr_cpus;
+struct host_kvm host_kvm;
+
+struct hyp_pool host_s2_mem;
+struct hyp_pool host_s2_dev;
+
+static void *host_s2_zalloc_pages_exact(size_t size)
+{
+	return hyp_alloc_pages(&host_s2_mem, HYP_GFP_ZERO, get_order(size));
+}
+
+static void *host_s2_zalloc_page(void *pool)
+{
+	return hyp_alloc_pages(pool, HYP_GFP_ZERO, 0);
+}
+
+static int prepare_s2_pools(void *mem_pgt_pool, void *dev_pgt_pool)
+{
+	unsigned long nr_pages;
+	int ret;
+
+	nr_pages = host_s2_mem_pgtable_size() >> PAGE_SHIFT;
+	ret = hyp_pool_init(&host_s2_mem, __hyp_pa(mem_pgt_pool), nr_pages, 0);
+	if (ret)
+		return ret;
+
+	nr_pages = host_s2_dev_pgtable_size() >> PAGE_SHIFT;
+	ret = hyp_pool_init(&host_s2_dev, __hyp_pa(dev_pgt_pool), nr_pages, 0);
+	if (ret)
+		return ret;
+
+	host_kvm.mm_ops.zalloc_pages_exact = host_s2_zalloc_pages_exact;
+	host_kvm.mm_ops.zalloc_page = host_s2_zalloc_page;
+	host_kvm.mm_ops.phys_to_virt = hyp_phys_to_virt;
+	host_kvm.mm_ops.virt_to_phys = hyp_virt_to_phys;
+	host_kvm.mm_ops.page_count = hyp_page_count;
+	host_kvm.mm_ops.get_page = hyp_get_page;
+	host_kvm.mm_ops.put_page = hyp_put_page;
+
+	return 0;
+}
+
+static void prepare_host_vtcr(void)
+{
+	u32 parange, phys_shift;
+	u64 mmfr0, mmfr1;
+
+	mmfr0 = arm64_ftr_reg_id_aa64mmfr0_el1.sys_val;
+	mmfr1 = arm64_ftr_reg_id_aa64mmfr1_el1.sys_val;
+
+	/* The host stage 2 is id-mapped, so use parange for T0SZ */
+	parange = kvm_get_parange(mmfr0);
+	phys_shift = id_aa64mmfr0_parange_to_phys_shift(parange);
+
+	host_kvm.arch.vtcr = kvm_get_vtcr(mmfr0, mmfr1, phys_shift);
+}
+
+int kvm_host_prepare_stage2(void *mem_pgt_pool, void *dev_pgt_pool)
+{
+	struct kvm_s2_mmu *mmu = &host_kvm.arch.mmu;
+	struct kvm_nvhe_init_params *params;
+	int ret, i;
+
+	prepare_host_vtcr();
+	hyp_spin_lock_init(&host_kvm.lock);
+
+	ret = prepare_s2_pools(mem_pgt_pool, dev_pgt_pool);
+	if (ret)
+		return ret;
+
+	ret = kvm_pgtable_stage2_init(&host_kvm.pgt, &host_kvm.arch,
+				      &host_kvm.mm_ops);
+	if (ret)
+		return ret;
+
+	mmu->pgd_phys = __hyp_pa(host_kvm.pgt.pgd);
+	mmu->arch = &host_kvm.arch;
+	mmu->pgt = &host_kvm.pgt;
+	mmu->vmid.vmid_gen = 0;
+	mmu->vmid.vmid = 0;
+
+	for (i = 0; i < hyp_nr_cpus; i++) {
+		params = per_cpu_ptr(&kvm_init_params, i);
+		params->vttbr = kvm_get_vttbr(mmu);
+		params->vtcr = host_kvm.arch.vtcr;
+		params->hcr_el2 |= HCR_VM;
+		__flush_dcache_area(params, sizeof(*params));
+	}
+
+	write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2);
+	__load_stage2(&host_kvm.arch.mmu, host_kvm.arch.vtcr);
+
+	return 0;
+}
+
+static void host_stage2_unmap_dev_all(void)
+{
+	struct kvm_pgtable *pgt = &host_kvm.pgt;
+	struct hyp_memblock_region *reg;
+	u64 addr = 0;
+	int i;
+
+	/* Unmap all non-memory regions to recycle the pages */
+	for (i = 0; i < hyp_memblock_nr; i++, addr = reg->end) {
+		reg = &hyp_memory[i];
+		kvm_pgtable_stage2_unmap(pgt, addr, reg->start - addr);
+	}
+	kvm_pgtable_stage2_unmap(pgt, addr, ULONG_MAX);
+}
+
+static bool ipa_is_memory(u64 ipa)
+{
+	int cur, left = 0, right = hyp_memblock_nr;
+	struct hyp_memblock_region *reg;
+
+	/* The list of memblock regions is sorted, binary search it */
+	while (left < right) {
+		cur = (left + right) >> 1;
+		reg = &hyp_memory[cur];
+		if (ipa < reg->start)
+			right = cur;
+		else if (ipa >= reg->end)
+			left = cur + 1;
+		else
+			return true;
+	}
+
+	return false;
+}
+
+static int __host_stage2_map(u64 ipa, u64 size, enum kvm_pgtable_prot prot,
+			     struct hyp_pool *p)
+{
+	return kvm_pgtable_stage2_map(&host_kvm.pgt, ipa, size, ipa, prot, p);
+}
+
+static int host_stage2_map(u64 ipa, u64 size, enum kvm_pgtable_prot prot)
+{
+	int ret, is_memory = ipa_is_memory(ipa);
+	struct hyp_pool *pool;
+
+	pool = is_memory ? &host_s2_mem : &host_s2_dev;
+
+	hyp_spin_lock(&host_kvm.lock);
+	ret = __host_stage2_map(ipa, size, prot, pool);
+	if (ret == -ENOMEM && !is_memory) {
+		host_stage2_unmap_dev_all();
+		ret = __host_stage2_map(ipa, size, prot, pool);
+	}
+	hyp_spin_unlock(&host_kvm.lock);
+
+	return ret;
+}
+
+void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt)
+{
+	enum kvm_pgtable_prot prot;
+	u64 far, hpfar, esr, ipa;
+	int ret;
+
+	esr = read_sysreg_el2(SYS_ESR);
+	if (!__get_fault_info(esr, &far, &hpfar))
+		hyp_panic();
+
+	prot = KVM_PGTABLE_PROT_R | KVM_PGTABLE_PROT_W | KVM_PGTABLE_PROT_X;
+	ipa = (hpfar & HPFAR_MASK) << 8;
+	ret = host_stage2_map(ipa, PAGE_SIZE, prot);
+	if (ret)
+		hyp_panic();
+}
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 79b697df01e2..f6d3318e92fa 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 #include
 
 struct hyp_pool hpool;
@@ -161,6 +162,11 @@ void __noreturn __pkvm_init_finalise(void)
 	if (ret)
 		goto out;
 
+	/* Wrap the host with a stage 2 */
+	ret = kvm_host_prepare_stage2(host_s2_mem_pgt_base, host_s2_dev_pgt_base);
+	if (ret)
+		goto out;
+
 	pkvm_pgtable_mm_ops.zalloc_page = hyp_zalloc_hyp_page;
 	pkvm_pgtable_mm_ops.phys_to_virt = hyp_phys_to_virt;
 	pkvm_pgtable_mm_ops.virt_to_phys = hyp_virt_to_phys;
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 979a76cdf9fb..31bc1a843bf8 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -28,6 +28,8 @@
 #include
 #include
 
+#include
+
 /* Non-VHE specific context */
 DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);
 DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
@@ -102,11 +104,6 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
 	write_sysreg(__kvm_hyp_host_vector, vbar_el2);
 }
 
-static void __load_host_stage2(void)
-{
-	write_sysreg(0, vttbr_el2);
-}
-
 /* Save VGICv3 state on non-VHE systems */
 static void __hyp_vgic_save_state(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index fbde89a2c6e8..255a23a1b2db 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -8,6 +8,8 @@
 #include
 #include
 
+#include
+
 struct tlb_inv_context {
 	u64 tcr;
 };
@@ -43,7 +45,7 @@ static void __tlb_switch_to_guest(struct kvm_s2_mmu *mmu,
 
 static void __tlb_switch_to_host(struct tlb_inv_context *cxt)
 {
-	write_sysreg(0, vttbr_el2);
+	__load_host_stage2();
 
 	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		/* Ensure write of the host VMID */