From patchwork Tue Nov 17 18:15:41 2020
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 326289
Date: Tue, 17 Nov 2020 18:15:41 +0000
In-Reply-To: <20201117181607.1761516-1-qperret@google.com>
Message-Id: <20201117181607.1761516-2-qperret@google.com>
Subject: [RFC PATCH 01/27] arm64: lib: Annotate {clear,copy}_page() as position-independent
From: Quentin Perret <qperret@google.com>

From: Will Deacon

clear_page() and copy_page() are suitable for use outside of the kernel
address space, so annotate them as position-independent code.

Signed-off-by: Will Deacon
---
 arch/arm64/lib/clear_page.S | 4 ++--
 arch/arm64/lib/copy_page.S  | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/lib/clear_page.S b/arch/arm64/lib/clear_page.S
index 073acbf02a7c..b84b179edba3 100644
--- a/arch/arm64/lib/clear_page.S
+++ b/arch/arm64/lib/clear_page.S
@@ -14,7 +14,7 @@
  * Parameters:
  *	x0 - dest
  */
-SYM_FUNC_START(clear_page)
+SYM_FUNC_START_PI(clear_page)
	mrs	x1, dczid_el0
	and	w1, w1, #0xf
	mov	x2, #4
@@ -25,5 +25,5 @@ SYM_FUNC_START(clear_page)
	tst	x0, #(PAGE_SIZE - 1)
	b.ne	1b
	ret
-SYM_FUNC_END(clear_page)
+SYM_FUNC_END_PI(clear_page)
 EXPORT_SYMBOL(clear_page)

diff --git a/arch/arm64/lib/copy_page.S b/arch/arm64/lib/copy_page.S
index e7a793961408..29144f4cd449 100644
--- a/arch/arm64/lib/copy_page.S
+++ b/arch/arm64/lib/copy_page.S
@@ -17,7 +17,7 @@
  *	x0 - dest
  *	x1 - src
  */
-SYM_FUNC_START(copy_page)
+SYM_FUNC_START_PI(copy_page)
 alternative_if ARM64_HAS_NO_HW_PREFETCH
	// Prefetch three cache lines ahead.
	prfm	pldl1strm, [x1, #128]
@@ -75,5 +75,5 @@ alternative_else_nop_endif
	stnp	x16, x17, [x0, #112 - 256]

	ret
-SYM_FUNC_END(copy_page)
+SYM_FUNC_END_PI(copy_page)
 EXPORT_SYMBOL(copy_page)
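
For reference, the _PI variants create a '__pi_'-prefixed alias alongside
the normal symbol, so callers that cannot rely on the kernel's runtime
address (or that run at EL2) can branch to the routine by its
position-independent name. A minimal sketch of the macros, assuming the
arch/arm64/include/asm/linkage.h definitions of this era:

	/*
	 * Annotate a function as position independent, i.e. safe to be
	 * called before the kernel virtual mapping is activated.
	 */
	#define SYM_FUNC_START_PI(x)			\
			SYM_FUNC_START_ALIAS(__pi_##x)	\
			SYM_FUNC_START(x)

	#define SYM_FUNC_END_PI(x)			\
			SYM_FUNC_END(x)			\
			SYM_FUNC_END_ALIAS(__pi_##x)

With this annotation, clear_page() is also reachable as __pi_clear_page,
which is the entry point the hyp code links against in the next patch.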
From patchwork Tue Nov 17 18:15:42 2020
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 326287
Date: Tue, 17 Nov 2020 18:15:42 +0000
In-Reply-To: <20201117181607.1761516-1-qperret@google.com>
Message-Id: <20201117181607.1761516-3-qperret@google.com>
Subject: [RFC PATCH 02/27] KVM: arm64: Link position-independent string routines into .hyp.text
From: Quentin Perret <qperret@google.com>

From: Will Deacon

Pull clear_page(), copy_page(), memcpy() and memset() into the nVHE hyp
code and ensure that we always execute the '__pi_' entry point on the
off chance that it changes in future.

[ qperret: Commit title nits ]

Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/kernel/image-vars.h   | 11 +++++++++++
 arch/arm64/kvm/hyp/nvhe/Makefile |  4 ++++
 2 files changed, 15 insertions(+)

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 8539f34d7538..dd8ccc9efb6a 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -105,6 +105,17 @@ KVM_NVHE_ALIAS(__stop___kvm_ex_table);
 /* Array containing bases of nVHE per-CPU memory regions. */
 KVM_NVHE_ALIAS(kvm_arm_hyp_percpu_base);
 
+/* Position-independent library routines */
+__kvm_nvhe_clear_page = __kvm_nvhe___pi_clear_page;
+__kvm_nvhe_copy_page = __kvm_nvhe___pi_copy_page;
+__kvm_nvhe_memcpy = __kvm_nvhe___pi_memcpy;
+__kvm_nvhe_memset = __kvm_nvhe___pi_memset;
+
+#ifdef CONFIG_KASAN
+__kvm_nvhe___memcpy = __kvm_nvhe___pi_memcpy;
+__kvm_nvhe___memset = __kvm_nvhe___pi_memset;
+#endif
+
 #endif /* CONFIG_KVM */
 
 #endif /* __ARM64_KERNEL_IMAGE_VARS_H */

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 1f1e351c5fe2..590fdefb42dd 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -6,10 +6,14 @@
 asflags-y := -D__KVM_NVHE_HYPERVISOR__
 ccflags-y := -D__KVM_NVHE_HYPERVISOR__
 
+lib-objs := clear_page.o copy_page.o memcpy.o memset.o
+lib-objs := $(addprefix ../../../lib/, $(lib-objs))
+
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
	 hyp-main.o hyp-smp.o psci-relay.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
	 ../fpsimd.o ../hyp-entry.o ../exception.o
+obj-y += $(lib-objs)
 
 ##
 ## Build rules for compiling nVHE hyp code
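
The nVHE build renames every symbol it defines with a __kvm_nvhe_ prefix,
so a memcpy() call compiled under __KVM_NVHE_HYPERVISOR__ resolves to
__kvm_nvhe_memcpy. The hand-written aliases above bind that name to the
hyp-local '__pi_' entry point, in contrast with KVM_NVHE_ALIAS(), which
binds a prefixed name back to the kernel-proper symbol. A sketch, assuming
the image-vars.h definition of this era:

	/* KVM_NVHE_ALIAS(): make a kernel symbol visible to the hyp object. */
	#define KVM_NVHE_ALIAS(sym)	__kvm_nvhe_##sym = sym;

	/*
	 * The lines added above instead keep resolution hyp-internal, e.g.:
	 *
	 *	__kvm_nvhe_memcpy = __kvm_nvhe___pi_memcpy;
	 *
	 * so hyp code calling memcpy() lands on the copy of memcpy linked
	 * into .hyp.text, via its position-independent alias, never on the
	 * kernel's copy.
	 */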
From patchwork Tue Nov 17 18:15:44 2020
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 326286
Date: Tue, 17 Nov 2020 18:15:44 +0000
In-Reply-To: <20201117181607.1761516-1-qperret@google.com>
Message-Id: <20201117181607.1761516-5-qperret@google.com>
Subject: [RFC PATCH 04/27] KVM: arm64: Initialize kvm_nvhe_init_params early
From: Quentin Perret <qperret@google.com>

Move the initialization of kvm_nvhe_init_params into a dedicated function
that is run early, and only once during KVM init, rather than every time
the KVM vectors are set and reset. This also allows the hypervisor to
change the init structs during boot, which simplifies the later
replacement of the host-provided page-tables and stacks with the ones
the hypervisor will create for itself.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/arm.c | 28 ++++++++++++++++++++--------
 1 file changed, 20 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index d6d5211653b7..7335eb4fb0bd 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1355,24 +1355,20 @@ static int kvm_init_vector_slots(void)
 	return 0;
 }
 
-static void cpu_init_hyp_mode(void)
+static void cpu_prepare_hyp_mode(int cpu)
 {
-	struct kvm_nvhe_init_params *params = this_cpu_ptr_nvhe_sym(kvm_init_params);
-	struct arm_smccc_res res;
-
-	/* Switch from the HYP stub to our own HYP init vector */
-	__hyp_set_vectors(kvm_get_idmap_vector());
+	struct kvm_nvhe_init_params *params = per_cpu_ptr_nvhe_sym(kvm_init_params, cpu);
 
 	/*
 	 * Calculate the raw per-cpu offset without a translation from the
 	 * kernel's mapping to the linear mapping, and store it in tpidr_el2
 	 * so that we can use adr_l to access per-cpu variables in EL2.
 	 */
-	params->tpidr_el2 = (unsigned long)this_cpu_ptr_nvhe_sym(__per_cpu_start) -
+	params->tpidr_el2 = (unsigned long)per_cpu_ptr_nvhe_sym(__per_cpu_start, cpu) -
			    (unsigned long)kvm_ksym_ref(CHOOSE_NVHE_SYM(__per_cpu_start));
 
 	params->vector_hyp_va = kern_hyp_va((unsigned long)kvm_ksym_ref(__kvm_hyp_host_vector));
-	params->stack_hyp_va = kern_hyp_va(__this_cpu_read(kvm_arm_hyp_stack_page) + PAGE_SIZE);
+	params->stack_hyp_va = kern_hyp_va(per_cpu(kvm_arm_hyp_stack_page, cpu) + PAGE_SIZE);
 	params->entry_hyp_va = kern_hyp_va((unsigned long)kvm_ksym_ref(__kvm_hyp_psci_cpu_entry));
 	params->pgd_pa = kvm_mmu_get_httbr();
 
@@ -1381,6 +1377,15 @@
 	 * be read while the MMU is off.
 	 */
 	__flush_dcache_area(params, sizeof(*params));
+}
+
+static void cpu_init_hyp_mode(void)
+{
+	struct kvm_nvhe_init_params *params;
+	struct arm_smccc_res res;
+
+	/* Switch from the HYP stub to our own HYP init vector */
+	__hyp_set_vectors(kvm_get_idmap_vector());
 
 	/*
 	 * Call initialization code, and switch to the full blown HYP code.
@@ -1389,6 +1394,7 @@
 	 * cpus_have_const_cap() wrapper.
 	 */
 	BUG_ON(!system_capabilities_finalized());
+	params = this_cpu_ptr_nvhe_sym(kvm_init_params);
 	arm_smccc_1_1_hvc(KVM_HOST_SMCCC_FUNC(__kvm_hyp_init),
			  virt_to_phys(params), &res);
 	WARN_ON(res.a0 != SMCCC_RET_SUCCESS);
 
@@ -1742,6 +1748,12 @@ static int init_hyp_mode(void)
 	init_cpu_logical_map();
 	init_psci_relay();
 
+	/*
+	 * Prepare the CPU initialization parameters
+	 */
+	for_each_possible_cpu(cpu)
+		cpu_prepare_hyp_mode(cpu);
+
 	return 0;
 
 out_err:
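
Condensed, the resulting split looks as follows (a sketch of the two
paths above, error handling elided):

	/* init_hyp_mode(), once at boot: compute and clean each CPU's
	 * parameters, so EL2 can later read them with the MMU off. */
	for_each_possible_cpu(cpu)
		cpu_prepare_hyp_mode(cpu);

	/* cpu_init_hyp_mode(), on each CPU bring-up: only install the
	 * idmap vector and pass the precomputed params to EL2. */
	__hyp_set_vectors(kvm_get_idmap_vector());
	arm_smccc_1_1_hvc(KVM_HOST_SMCCC_FUNC(__kvm_hyp_init),
			  virt_to_phys(this_cpu_ptr_nvhe_sym(kvm_init_params)),
			  &res);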
From patchwork Tue Nov 17 18:15:48 2020
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 326282
Date: Tue, 17 Nov 2020 18:15:48 +0000
In-Reply-To: <20201117181607.1761516-1-qperret@google.com>
Message-Id: <20201117181607.1761516-9-qperret@google.com>
Subject: [RFC PATCH 08/27] KVM: arm64: Make kvm_call_hyp() a function call at Hyp
From: Quentin Perret <qperret@google.com>

kvm_call_hyp() has some logic to issue a function call or a hypercall
depending on the EL at which the kernel is running. However, all the
code compiled under __KVM_NVHE_HYPERVISOR__ is guaranteed to run only at
EL2, and in this case a simple function call is needed. Add ifdefery to
kvm_host.h to simplify kvm_call_hyp() in .hyp.text.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_host.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index ac11adab6602..7a5d5f4b3351 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -557,6 +557,7 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 void kvm_arm_halt_guest(struct kvm *kvm);
 void kvm_arm_resume_guest(struct kvm *kvm);
 
+#ifndef __KVM_NVHE_HYPERVISOR__
 #define kvm_call_hyp_nvhe(f, ...)					\
	({								\
		struct arm_smccc_res res;				\
@@ -596,6 +597,11 @@ void kvm_arm_resume_guest(struct kvm *kvm);
									\
		ret;							\
	})
+#else /* __KVM_NVHE_HYPERVISOR__ */
+#define kvm_call_hyp(f, ...) f(__VA_ARGS__)
+#define kvm_call_hyp_ret(f, ...) f(__VA_ARGS__)
+#define kvm_call_hyp_nvhe(f, ...) f(__VA_ARGS__)
+#endif /* __KVM_NVHE_HYPERVISOR__ */
 
 void force_vm_exit(const cpumask_t *mask);
 void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
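
The effect at a call site, illustrated with one of the existing
hypercalls (a sketch; the wrapper function is hypothetical):

	static void flush_vmid_example(struct kvm_s2_mmu *mmu)
	{
		/*
		 * Kernel build (EL1): expands to an SMCCC hypercall,
		 * arm_smccc_1_1_hvc(KVM_HOST_SMCCC_FUNC(__kvm_tlb_flush_vmid), ...).
		 *
		 * nVHE hyp build (__KVM_NVHE_HYPERVISOR__, EL2): expands
		 * to a direct call, __kvm_tlb_flush_vmid(mmu).
		 */
		kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
	}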
From patchwork Tue Nov 17 18:15:50 2020
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 326284
Date: Tue, 17 Nov 2020 18:15:50 +0000
In-Reply-To: <20201117181607.1761516-1-qperret@google.com>
Message-Id: <20201117181607.1761516-11-qperret@google.com>
Subject: [RFC PATCH 10/27] KVM: arm64: Introduce an early Hyp page allocator
From: Quentin Perret <qperret@google.com>

With nVHE, the host currently creates all s1 hypervisor mappings at EL1
during boot, installs them at EL2, and extends them as required (e.g.
when creating a new VM). But in a world where the host is no longer
trusted, it cannot have full control over the code mapped in the
hypervisor.

In preparation for enabling the hypervisor to create its own s1 mappings
during boot, introduce an early page allocator with minimal
functionality. This allocator is designed to be used only during early
bootstrap of the hyp code when memory protection is enabled; the hyp
code will then switch to a full-fledged page allocator after init.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/early_alloc.h | 14 +++++
 arch/arm64/kvm/hyp/include/nvhe/memory.h      | 24 ++++++++
 arch/arm64/kvm/hyp/nvhe/Makefile              |  2 +-
 arch/arm64/kvm/hyp/nvhe/early_alloc.c         | 60 +++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/psci-relay.c          |  5 +-
 5 files changed, 101 insertions(+), 4 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/early_alloc.h
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/memory.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/early_alloc.c

diff --git a/arch/arm64/kvm/hyp/include/nvhe/early_alloc.h b/arch/arm64/kvm/hyp/include/nvhe/early_alloc.h
new file mode 100644
index 000000000000..68ce2bf9a718
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/early_alloc.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __KVM_HYP_EARLY_ALLOC_H
+#define __KVM_HYP_EARLY_ALLOC_H
+
+#include
+
+void hyp_early_alloc_init(void *virt, unsigned long size);
+unsigned long hyp_early_alloc_nr_pages(void);
+void *hyp_early_alloc_page(void *arg);
+void *hyp_early_alloc_contig(unsigned int nr_pages);
+
+extern struct kvm_pgtable_mm_ops hyp_early_alloc_mm_ops;
+
+#endif /* __KVM_HYP_EARLY_ALLOC_H */

diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
new file mode 100644
index 000000000000..64c44c142c95
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __KVM_HYP_MEMORY_H
+#define __KVM_HYP_MEMORY_H
+
+#include
+
+#include
+
+extern s64 hyp_physvirt_offset;
+
+#define __hyp_pa(virt)	((phys_addr_t)(virt) + hyp_physvirt_offset)
+#define __hyp_va(virt)	((void *)((phys_addr_t)(virt) - hyp_physvirt_offset))
+
+static inline void *hyp_phys_to_virt(phys_addr_t phys)
+{
+	return __hyp_va(phys);
+}
+
+static inline phys_addr_t hyp_virt_to_phys(void *addr)
+{
+	return __hyp_pa(addr);
+}
+
+#endif /* __KVM_HYP_MEMORY_H */

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 590fdefb42dd..1fc0684a7678 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -10,7 +10,7 @@ lib-objs := clear_page.o copy_page.o memcpy.o memset.o
 lib-objs := $(addprefix ../../../lib/, $(lib-objs))
 
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
-	 hyp-main.o hyp-smp.o psci-relay.o
+	 hyp-main.o hyp-smp.o psci-relay.o early_alloc.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
	 ../fpsimd.o ../hyp-entry.o ../exception.o
 obj-y += $(lib-objs)

diff --git a/arch/arm64/kvm/hyp/nvhe/early_alloc.c b/arch/arm64/kvm/hyp/nvhe/early_alloc.c
new file mode 100644
index 000000000000..de4c45662970
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/early_alloc.c
@@ -0,0 +1,60 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Google LLC
+ * Author: Quentin Perret
+ */
+
+#include
+
+#include
+
+struct kvm_pgtable_mm_ops hyp_early_alloc_mm_ops;
+s64 __ro_after_init hyp_physvirt_offset;
+
+static unsigned long base;
+static unsigned long end;
+static unsigned long cur;
+
+unsigned long hyp_early_alloc_nr_pages(void)
+{
+	return (cur - base) >> PAGE_SHIFT;
+}
+
+extern void clear_page(void *to);
+
+void *hyp_early_alloc_contig(unsigned int nr_pages)
+{
+	unsigned long ret = cur, i, p;
+
+	if (!nr_pages)
+		return NULL;
+
+	cur += nr_pages << PAGE_SHIFT;
+	if (cur > end) {
+		cur = ret;
+		return NULL;
+	}
+
+	for (i = 0; i < nr_pages; i++) {
+		p = ret + (i << PAGE_SHIFT);
+		clear_page((void *)(p));
+	}
+
+	return (void *)ret;
+}
+
+void *hyp_early_alloc_page(void *arg)
+{
+	return hyp_early_alloc_contig(1);
+}
+
+void hyp_early_alloc_init(unsigned long virt, unsigned long size)
+{
+	base = virt;
+	end = virt + size;
+	cur = virt;
+
+	hyp_early_alloc_mm_ops.zalloc_page = hyp_early_alloc_page;
+	hyp_early_alloc_mm_ops.phys_to_virt = hyp_phys_to_virt;
+	hyp_early_alloc_mm_ops.virt_to_phys = hyp_virt_to_phys;
+}

diff --git a/arch/arm64/kvm/hyp/nvhe/psci-relay.c b/arch/arm64/kvm/hyp/nvhe/psci-relay.c
index 313ef42f0eab..dbe57ae84a0c 100644
--- a/arch/arm64/kvm/hyp/nvhe/psci-relay.c
+++ b/arch/arm64/kvm/hyp/nvhe/psci-relay.c
@@ -14,6 +14,8 @@
 #include
 #include
 
+#include <nvhe/memory.h>
+
 #define INVALID_CPU_ID	UINT_MAX
 
 extern char __kvm_hyp_cpu_entry[];
@@ -21,9 +23,6 @@ extern char __kvm_hyp_cpu_entry[];
 /* Config options set by the host. */
 u32 __ro_after_init kvm_host_psci_version = PSCI_VERSION(0, 0);
 u32 __ro_after_init kvm_host_psci_function_id[PSCI_FN_MAX];
-s64 __ro_after_init hyp_physvirt_offset;
-
-#define __hyp_pa(x) ((phys_addr_t)((x)) + hyp_physvirt_offset)
 
 struct kvm_host_psci_state {
	atomic_t pending_on;
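
A usage sketch of the allocator (the caller and the donated range are
hypothetical; note that the prototype in early_alloc.h takes a void *
while the definition takes an unsigned long):

	/* Hand the allocator a host-donated, hyp-mapped range, once. */
	hyp_early_alloc_init(donated_va, donated_size);

	/* Then pull zeroed pages while bootstrapping the hyp page-table. */
	void *pgd   = hyp_early_alloc_page(NULL);	/* one zeroed page */
	void *stack = hyp_early_alloc_contig(4);	/* four contiguous pages */

	/* hyp_early_alloc_mm_ops can now back the hyp page-table code,
	 * until the full-fledged allocator takes over after init. */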
From patchwork Tue Nov 17 18:15:53 2020
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 326262
Date: Tue, 17 Nov 2020 18:15:53 +0000
In-Reply-To: <20201117181607.1761516-1-qperret@google.com>
Message-Id: <20201117181607.1761516-14-qperret@google.com>
Subject: [RFC PATCH 13/27] KVM: arm64: Enable access to sanitized CPU features at EL2
From: Quentin Perret <qperret@google.com>

Introduce infrastructure in KVM to copy CPU feature registers into
EL2-owned data structures, so that sanitised values can be read directly
at EL2 in nVHE. Given that only a subset of these features is read by
the hypervisor, the registers that need to be copied are listed in
<asm/kvm_cpufeature.h>, together with the name of the nVHE variable that
will hold the copy.

While at it, introduce the first user of this infrastructure by
implementing __flush_dcache_area at EL2, which needs
arm64_ftr_reg_ctrel0.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/cpufeature.h     |  1 +
 arch/arm64/include/asm/kvm_cpufeature.h | 17 ++++++++++++++
 arch/arm64/kernel/cpufeature.c          | 12 ++++++++++
 arch/arm64/kernel/image-vars.h          |  2 ++
 arch/arm64/kvm/arm.c                    | 31 +++++++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/Makefile        |  3 ++-
 arch/arm64/kvm/hyp/nvhe/cache.S         | 13 +++++++++++
 arch/arm64/kvm/hyp/nvhe/cpufeature.c    |  8 +++++++
 8 files changed, 86 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/include/asm/kvm_cpufeature.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/cache.S
 create mode 100644 arch/arm64/kvm/hyp/nvhe/cpufeature.c

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index da250e4741bd..3dfbd76fb647 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -600,6 +600,7 @@ void __init setup_cpu_features(void);
 void check_local_cpu_capabilities(void);
 
 u64 read_sanitised_ftr_reg(u32 id);
+int copy_ftr_reg(u32 id, struct arm64_ftr_reg *dst);
 
 static inline bool cpu_supports_mixed_endian_el0(void)
 {

diff --git a/arch/arm64/include/asm/kvm_cpufeature.h b/arch/arm64/include/asm/kvm_cpufeature.h
new file mode 100644
index 000000000000..d34f85cba358
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_cpufeature.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2020 - Google LLC
+ * Author: Quentin Perret
+ */
+
+#include <asm/cpufeature.h>
+
+#ifndef KVM_HYP_CPU_FTR_REG
+#if defined(__KVM_NVHE_HYPERVISOR__)
+#define KVM_HYP_CPU_FTR_REG(id, name) extern struct arm64_ftr_reg name;
+#else
+#define KVM_HYP_CPU_FTR_REG(id, name) DECLARE_KVM_NVHE_SYM(name);
+#endif
+#endif
+
+KVM_HYP_CPU_FTR_REG(SYS_CTR_EL0, arm64_ftr_reg_ctrel0)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index dd5bc0f0cf0d..3bc86d1423f8 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1116,6 +1116,18 @@ u64 read_sanitised_ftr_reg(u32 id)
 }
 EXPORT_SYMBOL_GPL(read_sanitised_ftr_reg);
 
+int copy_ftr_reg(u32 id, struct arm64_ftr_reg *dst)
+{
+	struct arm64_ftr_reg *regp = get_arm64_ftr_reg(id);
+
+	if (!regp)
+		return -EINVAL;
+
+	memcpy(dst, regp, sizeof(*regp));
+
+	return 0;
+}
+
 #define read_sysreg_case(r)	\
	case r: return read_sysreg_s(r)

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index dd8ccc9efb6a..c35d768672eb 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -116,6 +116,8 @@ __kvm_nvhe___memcpy = __kvm_nvhe___pi_memcpy;
 __kvm_nvhe___memset = __kvm_nvhe___pi_memset;
 #endif
 
+__kvm_nvhe___flush_dcache_area = __kvm_nvhe___pi___flush_dcache_area;
+
 #endif /* CONFIG_KVM */
 
 #endif /* __ARM64_KERNEL_IMAGE_VARS_H */

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 391cf6753a13..c7f8fca97202 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -34,6 +34,7 @@
 #include
 #include
 #include
+#include <asm/kvm_cpufeature.h>
 #include
 #include
 #include
@@ -1636,6 +1637,29 @@ static void teardown_hyp_mode(void)
	}
 }
 
+#undef KVM_HYP_CPU_FTR_REG
+#define KVM_HYP_CPU_FTR_REG(id, name) \
+	{ .sys_id = id, .dst = (struct arm64_ftr_reg *)&kvm_nvhe_sym(name) },
+static const struct __ftr_reg_copy_entry {
+	u32			sys_id;
+	struct arm64_ftr_reg	*dst;
+} hyp_ftr_regs[] = {
+	#include <asm/kvm_cpufeature.h>
+};
+
+static int copy_cpu_ftr_regs(void)
+{
+	int i, ret;
+
+	for (i = 0; i < ARRAY_SIZE(hyp_ftr_regs); i++) {
+		ret = copy_ftr_reg(hyp_ftr_regs[i].sys_id, hyp_ftr_regs[i].dst);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
 /**
  * Inits Hyp-mode on all online CPUs
  */
@@ -1644,6 +1668,13 @@ static int init_hyp_mode(void)
	int cpu;
	int err = 0;
 
+	/*
+	 * Copy the required CPU feature registers into their EL2
+	 * counterparts
+	 */
+	err = copy_cpu_ftr_regs();
+	if (err)
+		return err;
+
	/*
	 * Allocate Hyp PGD and setup Hyp identity mapping
	 */

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 9e5eacfec6ec..72cfe53f106f 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -10,7 +10,8 @@ lib-objs := clear_page.o copy_page.o memcpy.o memset.o
 lib-objs := $(addprefix ../../../lib/, $(lib-objs))
 
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
-	 hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o
+	 hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o \
+	 cache.o cpufeature.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
	 ../fpsimd.o ../hyp-entry.o ../exception.o
 obj-y += $(lib-objs)

diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S
new file mode 100644
index 000000000000..36cef6915428
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/cache.S
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Code copied from arch/arm64/mm/cache.S.
+ */
+
+#include
+#include
+#include
+
+SYM_FUNC_START_PI(__flush_dcache_area)
+	dcache_by_line_op civac, sy, x0, x1, x2, x3
+	ret
+SYM_FUNC_END_PI(__flush_dcache_area)

diff --git a/arch/arm64/kvm/hyp/nvhe/cpufeature.c b/arch/arm64/kvm/hyp/nvhe/cpufeature.c
new file mode 100644
index 000000000000..a887508f996f
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/cpufeature.c
@@ -0,0 +1,8 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 - Google LLC
+ * Author: Quentin Perret
+ */
+
+#define KVM_HYP_CPU_FTR_REG(id, name) struct arm64_ftr_reg name;
+#include <asm/kvm_cpufeature.h>
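
For clarity, here is how the single entry
KVM_HYP_CPU_FTR_REG(SYS_CTR_EL0, arm64_ftr_reg_ctrel0) expands in each
inclusion context (a sketch, written out by hand):

	/* 1. nVHE hyp code (__KVM_NVHE_HYPERVISOR__): a hyp-local variable,
	 *    defined exactly once by hyp/nvhe/cpufeature.c: */
	extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;

	/* 2. Kernel code: a declaration of the prefixed hyp symbol: */
	DECLARE_KVM_NVHE_SYM(arm64_ftr_reg_ctrel0);

	/* 3. arm.c, after the #undef/#define above: an initializer for the
	 *    copy table walked by copy_cpu_ftr_regs() at init time: */
	{ .sys_id = SYS_CTR_EL0,
	  .dst = (struct arm64_ftr_reg *)&kvm_nvhe_sym(arm64_ftr_reg_ctrel0) },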
From patchwork Tue Nov 17 18:15:54 2020
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 326264
Date: Tue, 17 Nov 2020 18:15:54 +0000
In-Reply-To: <20201117181607.1761516-1-qperret@google.com>
Message-Id: <20201117181607.1761516-15-qperret@google.com>
Subject: [RFC PATCH 14/27] KVM: arm64: Factor out vector address calculation
From: Quentin Perret <qperret@google.com>

In order to re-map the guest vectors at EL2 when pKVM is enabled,
refactor __kvm_vector_slot2idx() and kvm_init_vector_slot() to move all
the address calculation logic into a static inline function.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_mmu.h | 8 ++++++++
 arch/arm64/kvm/arm.c             | 9 +--------
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 5168a0c516ae..cb104443d8e4 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -171,6 +171,14 @@ phys_addr_t kvm_mmu_get_httbr(void);
 phys_addr_t kvm_get_idmap_vector(void);
 int kvm_mmu_init(void);
 
+static inline void *__kvm_vector_slot2addr(void *base,
+					   enum arm64_hyp_spectre_vector slot)
+{
+	int idx = slot - (slot != HYP_VECTOR_DIRECT);
+
+	return base + (idx * SZ_2K);
+}
+
 struct kvm;
 
 #define kvm_flush_dcache_to_poc(a,l)	__flush_dcache_area((a), (l))

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index c7f8fca97202..b1e1747e4bbf 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1318,16 +1318,9 @@ static unsigned long nvhe_percpu_order(void)
 /* A lookup table holding the hypervisor VA for each vector slot */
 static void *hyp_spectre_vector_selector[BP_HARDEN_EL2_SLOTS];
 
-static int __kvm_vector_slot2idx(enum arm64_hyp_spectre_vector slot)
-{
-	return slot - (slot != HYP_VECTOR_DIRECT);
-}
-
 static void kvm_init_vector_slot(void *base, enum arm64_hyp_spectre_vector slot)
 {
-	int idx = __kvm_vector_slot2idx(slot);
-
-	hyp_spectre_vector_selector[slot] = base + (idx * SZ_2K);
+	hyp_spectre_vector_selector[slot] = __kvm_vector_slot2addr(base, slot);
 }
 
 static int kvm_init_vector_slots(void)
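
A worked example of the helper (assuming the enum arm64_hyp_spectre_vector
ordering of this era, with 2K per slot and HYP_VECTOR_DIRECT folded onto
index 0):

	void *base;	/* start of a vectors page */

	__kvm_vector_slot2addr(base, HYP_VECTOR_DIRECT);		/* base + 0  */
	__kvm_vector_slot2addr(base, HYP_VECTOR_SPECTRE_DIRECT);	/* base + 0  */
	__kvm_vector_slot2addr(base, HYP_VECTOR_INDIRECT);		/* base + 2K */
	__kvm_vector_slot2addr(base, HYP_VECTOR_SPECTRE_INDIRECT);	/* base + 4K */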
From patchwork Tue Nov 17 18:15:56 2020
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 326273
Date: Tue, 17 Nov 2020 18:15:56 +0000
In-Reply-To: <20201117181607.1761516-1-qperret@google.com>
Message-Id: <20201117181607.1761516-17-qperret@google.com>
Subject: [RFC PATCH 16/27] KVM: arm64: Prepare Hyp memory protection
From: Quentin Perret <qperret@google.com>

When memory protection is enabled, the Hyp code needs the ability to
create and manage its own page-table. To do so, introduce a new set of
hypercalls to initialize Hyp memory protection.

During the init hcall, the hypervisor runs with the host-provided
page-table and uses the trivial early page allocator to create its own
set of page-tables, using a memory pool that was donated by the host.
Specifically, the hypervisor creates its own mappings for __hyp_text,
the Hyp memory pool, the __hyp_bss, and the portion of hyp_vmemmap
corresponding to the Hyp pool, among other things. It then jumps back
into the idmap page, switches to the newly-created pgd (instead of the
temporary one provided by the host) and installs the full-fledged buddy
allocator, which is the only one in use from then on.

Note that for the sake of simplifying the review, this only introduces
the code doing this operation, without it actually being called by
anything yet. This will be done in a subsequent patch, which will
introduce the necessary host kernel changes.

Credits to Will for __kvm_init_switch_pgd.

Co-authored-by: Will Deacon
Signed-off-by: Quentin Perret
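
The init sequence described above, condensed into a sketch (the host-side
call site is illustrative only; as the commit message notes, nothing
calls this yet in this patch):

	/* Host, at init: donate a pool and hand control to EL2 once. */
	ret = kvm_call_hyp_nvhe(__kvm_hyp_protect, hyp_mem_base, hyp_mem_size,
				num_possible_cpus(), kvm_arm_hyp_percpu_base);

	/*
	 * EL2, inside __kvm_hyp_protect() (condensed):
	 *  1. hyp_early_alloc_init() over the donated [phys, phys + size) pool;
	 *  2. create s1 mappings for the hyp text/bss, the pool, and the part
	 *     of hyp_vmemmap covering the pool;
	 *  3. __kvm_init_switch_pgd(): branch via the idmap, install the new
	 *     pgd and a hyp-owned stack, and continue in cont_fn;
	 *  4. hand the remaining pool pages to the buddy allocator, the only
	 *     allocator used from then on.
	 */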
---
 arch/arm64/include/asm/kvm_asm.h         |   6 +-
 arch/arm64/include/asm/kvm_host.h        |   8 +
 arch/arm64/include/asm/kvm_hyp.h         |   8 +
 arch/arm64/kernel/cpufeature.c           |   2 +-
 arch/arm64/kernel/image-vars.h           |  19 +++
 arch/arm64/kvm/hyp/Makefile              |   2 +-
 arch/arm64/kvm/hyp/include/nvhe/memory.h |   6 +
 arch/arm64/kvm/hyp/include/nvhe/mm.h     |  79 +++++++++
 arch/arm64/kvm/hyp/nvhe/Makefile         |   4 +-
 arch/arm64/kvm/hyp/nvhe/hyp-init.S       |  30 ++++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c       |  44 +++++
 arch/arm64/kvm/hyp/nvhe/mm.c             | 175 ++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/psci-relay.c     |   2 -
 arch/arm64/kvm/hyp/nvhe/setup.c          | 196 +++++++++++++++++++++++
 arch/arm64/kvm/hyp/reserved_mem.c        |  75 +++++++++
 arch/arm64/kvm/mmu.c                     |   2 +-
 arch/arm64/mm/init.c                     |   3 +
 17 files changed, 653 insertions(+), 8 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/mm.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/mm.c
 create mode 100644 arch/arm64/kvm/hyp/nvhe/setup.c
 create mode 100644 arch/arm64/kvm/hyp/reserved_mem.c

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index e4934f5e4234..9266b17f8ba9 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -57,6 +57,10 @@
 #define __KVM_HOST_SMCCC_FUNC___kvm_get_mdcr_el2		12
 #define __KVM_HOST_SMCCC_FUNC___vgic_v3_save_aprs		13
 #define __KVM_HOST_SMCCC_FUNC___vgic_v3_restore_aprs		14
+#define __KVM_HOST_SMCCC_FUNC___kvm_hyp_protect			15
+#define __KVM_HOST_SMCCC_FUNC___hyp_create_mappings		16
+#define __KVM_HOST_SMCCC_FUNC___hyp_create_private_mapping	17
+#define __KVM_HOST_SMCCC_FUNC___hyp_cpu_set_vector		18
 
 #ifndef __ASSEMBLY__
 
@@ -171,7 +175,7 @@ struct kvm_vcpu;
 struct kvm_s2_mmu;
 
 DECLARE_KVM_NVHE_SYM(__kvm_hyp_init);
-DECLARE_KVM_NVHE_SYM(__kvm_hyp_host_vector);
+DECLARE_KVM_HYP_SYM(__kvm_hyp_host_vector);
 DECLARE_KVM_HYP_SYM(__kvm_hyp_vector);
 #define __kvm_hyp_init		CHOOSE_NVHE_SYM(__kvm_hyp_init)
 #define __kvm_hyp_host_vector	CHOOSE_NVHE_SYM(__kvm_hyp_host_vector)
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index e4934f5e4234..9266b17f8ba9 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -57,6 +57,10 @@ #define __KVM_HOST_SMCCC_FUNC___kvm_get_mdcr_el2 12 #define __KVM_HOST_SMCCC_FUNC___vgic_v3_save_aprs 13 #define __KVM_HOST_SMCCC_FUNC___vgic_v3_restore_aprs 14 +#define __KVM_HOST_SMCCC_FUNC___kvm_hyp_protect 15 +#define __KVM_HOST_SMCCC_FUNC___hyp_create_mappings 16 +#define __KVM_HOST_SMCCC_FUNC___hyp_create_private_mapping 17 +#define __KVM_HOST_SMCCC_FUNC___hyp_cpu_set_vector 18 #ifndef __ASSEMBLY__ @@ -171,7 +175,7 @@ struct kvm_vcpu; struct kvm_s2_mmu; DECLARE_KVM_NVHE_SYM(__kvm_hyp_init); -DECLARE_KVM_NVHE_SYM(__kvm_hyp_host_vector); +DECLARE_KVM_HYP_SYM(__kvm_hyp_host_vector); DECLARE_KVM_HYP_SYM(__kvm_hyp_vector); #define __kvm_hyp_init CHOOSE_NVHE_SYM(__kvm_hyp_init) #define __kvm_hyp_host_vector CHOOSE_NVHE_SYM(__kvm_hyp_host_vector) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 7a5d5f4b3351..ee8bb8021637 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -742,4 +742,12 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu); #define kvm_vcpu_has_pmu(vcpu) \ (test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features)) +#ifdef CONFIG_KVM +extern phys_addr_t hyp_mem_base; +extern phys_addr_t hyp_mem_size; +void __init reserve_kvm_hyp(void); +#else +static inline void reserve_kvm_hyp(void) { } +#endif + #endif /* __ARM64_KVM_HOST_H__ */ diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index 95a2bbbcc7e1..dbd2ef86afa9 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -105,5 +105,13 @@ void __noreturn hyp_panic(void); void __noreturn __hyp_do_panic(bool restore_host, u64 spsr, u64 elr, u64 par); #endif +#ifdef __KVM_NVHE_HYPERVISOR__ +void __kvm_init_switch_pgd(phys_addr_t phys, unsigned long size, + phys_addr_t pgd, void *sp, void *cont_fn); +int __kvm_hyp_protect(phys_addr_t phys, unsigned long size, + unsigned long nr_cpus, unsigned long *per_cpu_base); +void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt); +#endif + #endif /* __ARM64_KVM_HYP_H__ */ diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 3bc86d1423f8..010458f6d799 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -1722,7 +1722,7 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap) #endif /* CONFIG_ARM64_MTE */ #ifdef CONFIG_KVM -static bool enable_protected_kvm; +bool enable_protected_kvm; static bool has_protected_kvm(const struct arm64_cpu_capabilities *entry, int __unused) { diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index c35d768672eb..f2d43e6cd86d 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -118,6 +118,25 @@ __kvm_nvhe___memset = __kvm_nvhe___pi_memset; __kvm_nvhe___flush_dcache_area = __kvm_nvhe___pi___flush_dcache_area; +/* Hypervisor VA size */ +KVM_NVHE_ALIAS(hyp_va_bits); + +/* Kernel memory sections */ +KVM_NVHE_ALIAS(__start_rodata); +KVM_NVHE_ALIAS(__end_rodata); +KVM_NVHE_ALIAS(__bss_start); +KVM_NVHE_ALIAS(__bss_stop); + +/* Hyp memory sections */ +KVM_NVHE_ALIAS(__hyp_idmap_text_start); +KVM_NVHE_ALIAS(__hyp_idmap_text_end); +KVM_NVHE_ALIAS(__hyp_text_start); +KVM_NVHE_ALIAS(__hyp_text_end); +KVM_NVHE_ALIAS(__hyp_data_ro_after_init_start); +KVM_NVHE_ALIAS(__hyp_data_ro_after_init_end); +KVM_NVHE_ALIAS(__hyp_bss_start); +KVM_NVHE_ALIAS(__hyp_bss_end); + #endif /* CONFIG_KVM */ #endif /* __ARM64_KERNEL_IMAGE_VARS_H */ diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile index 687598e41b21..b726332eec49 100644 --- a/arch/arm64/kvm/hyp/Makefile +++ b/arch/arm64/kvm/hyp/Makefile @@ -10,4 +10,4 @@ subdir-ccflags-y := -I$(incdir) \ -DDISABLE_BRANCH_PROFILING \ $(DISABLE_STACKLEAK_PLUGIN) -obj-$(CONFIG_KVM) += vhe/ nvhe/ pgtable.o +obj-$(CONFIG_KVM) += vhe/ nvhe/ pgtable.o reserved_mem.o diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h index ed47674bc988..c8af6fe87bfb 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/memory.h +++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h @@ -6,6 +6,12 @@ #include +#define HYP_MEMBLOCK_REGIONS 128 +struct hyp_memblock_region { + phys_addr_t start; + phys_addr_t end; +}; + struct hyp_pool; struct hyp_page { unsigned int refcount; diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h new file mode 100644 index 000000000000..5a3ad6f4e5bc --- /dev/null +++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h @@ -0,0 +1,79 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef __KVM_HYP_MM_H +#define __KVM_HYP_MM_H + +#include +#include +#include + +#include +#include
+ +extern struct hyp_memblock_region kvm_nvhe_sym(hyp_memory)[]; +extern int kvm_nvhe_sym(hyp_memblock_nr); +extern struct kvm_pgtable hyp_pgtable; +extern hyp_spinlock_t __hyp_pgd_lock; +extern struct hyp_pool hpool; +extern u64 __io_map_base; +extern u32 hyp_va_bits; + +int hyp_create_idmap(void); +int hyp_map_vectors(void); +int hyp_back_vmemmap(phys_addr_t phys, unsigned long size, phys_addr_t back); +int hyp_cpu_set_vector(enum arm64_hyp_spectre_vector slot); +int hyp_create_mappings(void *from, void *to, enum kvm_pgtable_prot prot); +int __hyp_create_mappings(unsigned long start, unsigned long size, + unsigned long phys, unsigned long prot); +unsigned long __hyp_create_private_mapping(phys_addr_t phys, size_t size, + unsigned long prot); + +static inline void hyp_vmemmap_range(phys_addr_t phys, unsigned long size, + unsigned long *start, unsigned long *end) +{ + unsigned long nr_pages = size >> PAGE_SHIFT; + struct hyp_page *p = hyp_phys_to_page(phys); + + *start = (unsigned long)p; + *end = *start + nr_pages * sizeof(struct hyp_page); + *start = ALIGN_DOWN(*start, PAGE_SIZE); + *end = ALIGN(*end, PAGE_SIZE); +} + +static inline unsigned long __hyp_pgtable_max_pages(unsigned long nr_pages) +{ + unsigned long total = 0, i; + + /* Provision the worst case scenario with 4 levels of page-table */ + for (i = 0; i < 4; i++) { + nr_pages = DIV_ROUND_UP(nr_pages, PTRS_PER_PTE); + total += nr_pages; + } + + return total; +} + +static inline unsigned long hyp_s1_pgtable_size(void) +{ + struct hyp_memblock_region *reg; + unsigned long nr_pages, res = 0; + int i; + + if (kvm_nvhe_sym(hyp_memblock_nr) <= 0) + return 0; + + for (i = 0; i < kvm_nvhe_sym(hyp_memblock_nr); i++) { + reg = &kvm_nvhe_sym(hyp_memory)[i]; + nr_pages = (reg->end - reg->start) >> PAGE_SHIFT; + nr_pages = __hyp_pgtable_max_pages(nr_pages); + res += nr_pages << PAGE_SHIFT; + } + + /* Allow 1 GiB for private mappings */ + nr_pages = (1 << 30) >> PAGE_SHIFT; + nr_pages = __hyp_pgtable_max_pages(nr_pages); + res += nr_pages << PAGE_SHIFT; + + return res; +} + +#endif /* __KVM_HYP_MM_H */ diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index 72cfe53f106f..d7381a503182 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -11,9 +11,9 @@ lib-objs := $(addprefix ../../../lib/, $(lib-objs)) obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \ hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o \ - cache.o cpufeature.o + cache.o cpufeature.o setup.o mm.o obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \ - ../fpsimd.o ../hyp-entry.o ../exception.o + ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o obj-y += $(lib-objs) ## diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-init.S b/arch/arm64/kvm/hyp/nvhe/hyp-init.S index 8f3602f320ac..e2d62297edfe 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-init.S +++ b/arch/arm64/kvm/hyp/nvhe/hyp-init.S @@ -247,4 +247,34 @@ alternative_else_nop_endif SYM_CODE_END(__kvm_handle_stub_hvc) +SYM_FUNC_START(__kvm_init_switch_pgd) + /* Turn the MMU off */ + pre_disable_mmu_workaround + mrs x2, sctlr_el2 + bic x3, x2, #SCTLR_ELx_M + msr sctlr_el2, x3 + isb + + tlbi alle2 + + /* Install the new pgtables */ + ldr x3, [x0, #NVHE_INIT_PGD_PA] + phys_to_ttbr x4, x3 +alternative_if ARM64_HAS_CNP + orr x4, x4, #TTBR_CNP_BIT +alternative_else_nop_endif + msr ttbr0_el2, x4 + + /* Set the new stack pointer */ + ldr x0, [x0, #NVHE_INIT_STACK_HYP_VA] + mov sp, x0 + + /* And turn the MMU back on! 
*/ + dsb nsh + isb + msr sctlr_el2, x2 + isb + ret x1 +SYM_FUNC_END(__kvm_init_switch_pgd) + .popsection diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 933329699425..a0bfe0d26da6 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -6,12 +6,15 @@ #include +#include #include #include #include #include #include +#include + DEFINE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params); #define cpu_reg(ctxt, r) (ctxt)->regs.regs[r] @@ -106,6 +109,43 @@ static void handle___vgic_v3_restore_aprs(struct kvm_cpu_context *host_ctxt) __vgic_v3_restore_aprs(kern_hyp_va(cpu_if)); } +static void handle___kvm_hyp_protect(struct kvm_cpu_context *host_ctxt) +{ + DECLARE_REG(phys_addr_t, phys, host_ctxt, 1); + DECLARE_REG(unsigned long, size, host_ctxt, 2); + DECLARE_REG(unsigned long, nr_cpus, host_ctxt, 3); + DECLARE_REG(unsigned long *, per_cpu_base, host_ctxt, 4); + + cpu_reg(host_ctxt, 1) = __kvm_hyp_protect(phys, size, nr_cpus, + per_cpu_base); +} + +static void handle___hyp_cpu_set_vector(struct kvm_cpu_context *host_ctxt) +{ + DECLARE_REG(enum arm64_hyp_spectre_vector, slot, host_ctxt, 1); + + cpu_reg(host_ctxt, 1) = hyp_cpu_set_vector(slot); +} + +static void handle___hyp_create_mappings(struct kvm_cpu_context *host_ctxt) +{ + DECLARE_REG(unsigned long, start, host_ctxt, 1); + DECLARE_REG(unsigned long, size, host_ctxt, 2); + DECLARE_REG(unsigned long, phys, host_ctxt, 3); + DECLARE_REG(unsigned long, prot, host_ctxt, 4); + + cpu_reg(host_ctxt, 1) = __hyp_create_mappings(start, size, phys, prot); +} + +static void handle___hyp_create_private_mapping(struct kvm_cpu_context *host_ctxt) +{ + DECLARE_REG(phys_addr_t, phys, host_ctxt, 1); + DECLARE_REG(size_t, size, host_ctxt, 2); + DECLARE_REG(unsigned long, prot, host_ctxt, 3); + + cpu_reg(host_ctxt, 1) = __hyp_create_private_mapping(phys, size, prot); +} + typedef void (*hcall_t)(struct kvm_cpu_context *); #define HANDLE_FUNC(x) [__KVM_HOST_SMCCC_FUNC_##x] = kimg_fn_ptr(handle_##x) @@ -125,6 +165,10 @@ static const hcall_t *host_hcall[] = { HANDLE_FUNC(__kvm_get_mdcr_el2), HANDLE_FUNC(__vgic_v3_save_aprs), HANDLE_FUNC(__vgic_v3_restore_aprs), + HANDLE_FUNC(__kvm_hyp_protect), + HANDLE_FUNC(__hyp_cpu_set_vector), + HANDLE_FUNC(__hyp_create_mappings), + HANDLE_FUNC(__hyp_create_private_mapping), }; static void handle_host_hcall(struct kvm_cpu_context *host_ctxt) diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c new file mode 100644 index 000000000000..cad5dae197c6 --- /dev/null +++ b/arch/arm64/kvm/hyp/nvhe/mm.c @@ -0,0 +1,175 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2020 Google LLC + * Author: Quentin Perret + */ + +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include + +struct kvm_pgtable hyp_pgtable; + +hyp_spinlock_t __hyp_pgd_lock; +u64 __io_map_base; + +struct hyp_memblock_region hyp_memory[HYP_MEMBLOCK_REGIONS]; +int hyp_memblock_nr; + +int __hyp_create_mappings(unsigned long start, unsigned long size, + unsigned long phys, unsigned long prot) +{ + int err; + + hyp_spin_lock(&__hyp_pgd_lock); + err = kvm_pgtable_hyp_map(&hyp_pgtable, start, size, phys, prot); + hyp_spin_unlock(&__hyp_pgd_lock); + + return err; +} + +unsigned long __hyp_create_private_mapping(phys_addr_t phys, size_t size, + unsigned long prot) +{ + unsigned long addr; + int ret; + + hyp_spin_lock(&__hyp_pgd_lock); + + size = PAGE_ALIGN(size + offset_in_page(phys)); + addr = __io_map_base; + 
__io_map_base += size; + + /* Are we overflowing on the vmemmap ? */ + if (__io_map_base > __hyp_vmemmap) { + __io_map_base -= size; + addr = 0; + goto out; + } + + ret = kvm_pgtable_hyp_map(&hyp_pgtable, addr, size, phys, prot); + if (ret) { + addr = 0; + goto out; + } + + addr = addr + offset_in_page(phys); +out: + hyp_spin_unlock(&__hyp_pgd_lock); + + return addr; +} + +int hyp_create_mappings(void *from, void *to, enum kvm_pgtable_prot prot) +{ + unsigned long start = (unsigned long)from; + unsigned long end = (unsigned long)to; + unsigned long virt_addr; + phys_addr_t phys; + + start = start & PAGE_MASK; + end = PAGE_ALIGN(end); + + for (virt_addr = start; virt_addr < end; virt_addr += PAGE_SIZE) { + int err; + + phys = hyp_virt_to_phys((void *)virt_addr); + err = __hyp_create_mappings(virt_addr, PAGE_SIZE, phys, prot); + if (err) + return err; + } + + return 0; +} + +int hyp_back_vmemmap(phys_addr_t phys, unsigned long size, phys_addr_t back) +{ + unsigned long start, end; + + hyp_vmemmap_range(phys, size, &start, &end); + + return __hyp_create_mappings(start, end - start, back, PAGE_HYP); +} + +static void *__hyp_bp_vect_base; +int hyp_cpu_set_vector(enum arm64_hyp_spectre_vector slot) +{ + void *vector; + + switch (slot) { + case HYP_VECTOR_DIRECT: { + vector = hyp_symbol_addr(__kvm_hyp_vector); + break; + } + case HYP_VECTOR_SPECTRE_DIRECT: { + vector = hyp_symbol_addr(__bp_harden_hyp_vecs); + break; + } + case HYP_VECTOR_INDIRECT: + case HYP_VECTOR_SPECTRE_INDIRECT: { + vector = (void *)__hyp_bp_vect_base; + break; + } + default: + return -EINVAL; + } + + vector = __kvm_vector_slot2addr(vector, slot); + *this_cpu_ptr(&kvm_hyp_vector) = (unsigned long)vector; + + return 0; +} + +int hyp_map_vectors(void) +{ + unsigned long bp_base; + + if (!cpus_have_const_cap(ARM64_SPECTRE_V3A)) + return 0; + + bp_base = (unsigned long)hyp_symbol_addr(__bp_harden_hyp_vecs); + bp_base = __hyp_pa(bp_base); + bp_base = __hyp_create_private_mapping(bp_base, __BP_HARDEN_HYP_VECS_SZ, + PAGE_HYP_EXEC); + if (!bp_base) + return -1; + + __hyp_bp_vect_base = (void *)bp_base; + + return 0; +} + +int hyp_create_idmap(void) +{ + unsigned long start, end; + + start = (unsigned long)hyp_symbol_addr(__hyp_idmap_text_start); + start = hyp_virt_to_phys((void *)start); + start = ALIGN_DOWN(start, PAGE_SIZE); + + end = (unsigned long)hyp_symbol_addr(__hyp_idmap_text_end); + end = hyp_virt_to_phys((void *)end); + end = ALIGN(end, PAGE_SIZE); + + /* + * One half of the VA space is reserved to linearly map portions of + * memory -- see va_layout.c for more details. The other half of the VA + * space contains the trampoline page, and needs some care. Split that + * second half in two and find the quarter of VA space not conflicting + * with the idmap to place the IOs and the vmemmap. IOs use the lower + * half of the quarter and the vmemmap the upper half. 
+ */ + __io_map_base = start & BIT(hyp_va_bits - 2); + __io_map_base ^= BIT(hyp_va_bits - 2); + __hyp_vmemmap = __io_map_base | BIT(hyp_va_bits - 3); + + return __hyp_create_mappings(start, end - start, start, PAGE_HYP_EXEC); +} diff --git a/arch/arm64/kvm/hyp/nvhe/psci-relay.c b/arch/arm64/kvm/hyp/nvhe/psci-relay.c index dbe57ae84a0c..cfc6dac0f0ac 100644 --- a/arch/arm64/kvm/hyp/nvhe/psci-relay.c +++ b/arch/arm64/kvm/hyp/nvhe/psci-relay.c @@ -193,8 +193,6 @@ static int psci_cpu_on(u64 func_id, struct kvm_cpu_context *host_ctxt) return ret; } -void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt); - asmlinkage void __noreturn __kvm_hyp_psci_cpu_entry(void) { struct kvm_host_psci_state *cpu_state = this_cpu_ptr(&kvm_host_psci_state); diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c new file mode 100644 index 000000000000..9679c97b875b --- /dev/null +++ b/arch/arm64/kvm/hyp/nvhe/setup.c @@ -0,0 +1,196 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2020 Google LLC + * Author: Quentin Perret + */ + +#include +#include +#include +#include + +#include +#include +#include +#include + +struct hyp_pool hpool; +struct kvm_pgtable_mm_ops hyp_pgtable_mm_ops; +unsigned long hyp_nr_cpus; + +#define hyp_percpu_size ((unsigned long)__per_cpu_end - \ + (unsigned long)__per_cpu_start) + +static void *stacks_base; +static void *vmemmap_base; +static void *hyp_pgt_base; + +static int divide_memory_pool(void *virt, unsigned long size) +{ + unsigned long vstart, vend, nr_pages; + + hyp_early_alloc_init(virt, size); + + stacks_base = hyp_early_alloc_contig(hyp_nr_cpus); + if (!stacks_base) + return -ENOMEM; + + hyp_vmemmap_range(__hyp_pa(virt), size, &vstart, &vend); + nr_pages = (vend - vstart) >> PAGE_SHIFT; + vmemmap_base = hyp_early_alloc_contig(nr_pages); + if (!vmemmap_base) + return -ENOMEM; + + nr_pages = hyp_s1_pgtable_size() >> PAGE_SHIFT; + hyp_pgt_base = hyp_early_alloc_contig(nr_pages); + if (!hyp_pgt_base) + return -ENOMEM; + + return 0; +} + +static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size, + unsigned long *per_cpu_base) +{ + void *start, *end, *virt = hyp_phys_to_virt(phys); + int ret, i; + + /* Recreate the hyp page-table using the early page allocator */ + hyp_early_alloc_init(hyp_pgt_base, hyp_s1_pgtable_size()); + ret = kvm_pgtable_hyp_init(&hyp_pgtable, hyp_va_bits, + &hyp_early_alloc_mm_ops); + if (ret) + return ret; + + ret = hyp_create_idmap(); + if (ret) + return ret; + + ret = hyp_map_vectors(); + if (ret) + return ret; + + ret = hyp_back_vmemmap(phys, size, hyp_virt_to_phys(vmemmap_base)); + if (ret) + return ret; + + ret = hyp_create_mappings(hyp_symbol_addr(__hyp_text_start), + hyp_symbol_addr(__hyp_text_end), + PAGE_HYP_EXEC); + if (ret) + return ret; + + ret = hyp_create_mappings(hyp_symbol_addr(__start_rodata), + hyp_symbol_addr(__end_rodata), PAGE_HYP_RO); + if (ret) + return ret; + + ret = hyp_create_mappings(hyp_symbol_addr(__hyp_data_ro_after_init_start), + hyp_symbol_addr(__hyp_data_ro_after_init_end), + PAGE_HYP_RO); + if (ret) + return ret; + + ret = hyp_create_mappings(hyp_symbol_addr(__bss_start), + hyp_symbol_addr(__hyp_bss_end), PAGE_HYP); + if (ret) + return ret; + + ret = hyp_create_mappings(hyp_symbol_addr(__hyp_bss_end), + hyp_symbol_addr(__bss_stop), PAGE_HYP_RO); + if (ret) + return ret; + + ret = hyp_create_mappings(virt, virt + size - 1, PAGE_HYP); + if (ret) + return ret; + + for (i = 0; i < hyp_nr_cpus; i++) { + start = (void *)kern_hyp_va(per_cpu_base[i]); + end = start + 
PAGE_ALIGN(hyp_percpu_size); + ret = hyp_create_mappings(start, end, PAGE_HYP); + if (ret) + return ret; + } + + return 0; +} + +static void update_nvhe_init_params(void) +{ + struct kvm_nvhe_init_params *params; + unsigned long i, stack; + + for (i = 0; i < hyp_nr_cpus; i++) { + stack = (unsigned long)stacks_base + (i << PAGE_SHIFT); + params = per_cpu_ptr(&kvm_init_params, i); + params->stack_hyp_va = stack + PAGE_SIZE; + params->pgd_pa = __hyp_pa(hyp_pgtable.pgd); + __flush_dcache_area(params, sizeof(*params)); + } +} + +static void *hyp_zalloc_hyp_page(void *arg) +{ + return hyp_alloc_pages(&hpool, HYP_GFP_ZERO, 0); +} + +void __noreturn __kvm_hyp_protect_finalise(void) +{ + struct kvm_host_data *host_data = this_cpu_ptr(&kvm_host_data); + struct kvm_cpu_context *host_ctxt = &host_data->host_ctxt; + unsigned long nr_pages, used_pages; + int ret; + + /* Now that the vmemmap is backed, install the full-fledged allocator */ + nr_pages = hyp_s1_pgtable_size() >> PAGE_SHIFT; + used_pages = hyp_early_alloc_nr_pages(); + ret = hyp_pool_init(&hpool, __hyp_pa(hyp_pgt_base), nr_pages, used_pages); + if (ret) + goto out; + + hyp_pgtable_mm_ops.zalloc_page = hyp_zalloc_hyp_page; + hyp_pgtable_mm_ops.phys_to_virt = hyp_phys_to_virt; + hyp_pgtable_mm_ops.virt_to_phys = hyp_virt_to_phys; + hyp_pgtable_mm_ops.get_page = hyp_get_page; + hyp_pgtable_mm_ops.put_page = hyp_put_page; + hyp_pgtable.mm_ops = &hyp_pgtable_mm_ops; + +out: + host_ctxt->regs.regs[0] = SMCCC_RET_SUCCESS; + host_ctxt->regs.regs[1] = ret; + + __host_enter(host_ctxt); +} + +int __kvm_hyp_protect(phys_addr_t phys, unsigned long size, + unsigned long nr_cpus, unsigned long *per_cpu_base) +{ + struct kvm_nvhe_init_params *params; + void *virt = hyp_phys_to_virt(phys); + void (*fn)(phys_addr_t params_pa, void *finalize_fn_va); + int ret; + + if (phys % PAGE_SIZE || size % PAGE_SIZE || (u64)virt % PAGE_SIZE) + return -EINVAL; + + hyp_spin_lock_init(&__hyp_pgd_lock); + hyp_nr_cpus = nr_cpus; + + ret = divide_memory_pool(virt, size); + if (ret) + return ret; + + ret = recreate_hyp_mappings(phys, size, per_cpu_base); + if (ret) + return ret; + + update_nvhe_init_params(); + + /* Jump in the idmap page to switch to the new page-tables */ + params = this_cpu_ptr(&kvm_init_params); + fn = (typeof(fn))__hyp_pa(hyp_symbol_addr(__kvm_init_switch_pgd)); + fn(__hyp_pa(params), hyp_symbol_addr(__kvm_hyp_protect_finalise)); + + unreachable(); +} diff --git a/arch/arm64/kvm/hyp/reserved_mem.c b/arch/arm64/kvm/hyp/reserved_mem.c new file mode 100644 index 000000000000..02b0b18006f5 --- /dev/null +++ b/arch/arm64/kvm/hyp/reserved_mem.c @@ -0,0 +1,75 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2020 - Google LLC + * Author: Quentin Perret + */ + +#include +#include + +#include + +#include +#include + +phys_addr_t hyp_mem_base; +phys_addr_t hyp_mem_size; + +void __init early_init_dt_add_memory_hyp(u64 base, u64 size) +{ + struct hyp_memblock_region *reg; + + if (kvm_nvhe_sym(hyp_memblock_nr) >= HYP_MEMBLOCK_REGIONS) + kvm_nvhe_sym(hyp_memblock_nr) = -1; + + if (kvm_nvhe_sym(hyp_memblock_nr) < 0) + return; + + reg = kvm_nvhe_sym(hyp_memory); + reg[kvm_nvhe_sym(hyp_memblock_nr)].start = base; + reg[kvm_nvhe_sym(hyp_memblock_nr)].end = base + size; + kvm_nvhe_sym(hyp_memblock_nr)++; +} + +extern bool enable_protected_kvm; +void __init reserve_kvm_hyp(void) +{ + u64 nr_pages, prev; + + if (!enable_protected_kvm) + return; + + if (!is_hyp_mode_available() || is_kernel_in_hyp_mode()) + return; + + if (kvm_nvhe_sym(hyp_memblock_nr) <= 0) + 
return; + hyp_mem_size += num_possible_cpus() << PAGE_SHIFT; + hyp_mem_size += hyp_s1_pgtable_size(); + + /* + * The hyp_vmemmap needs to be backed by pages, but these pages + * themselves need to be present in the vmemmap, so compute the number + * of pages needed by looking for a fixed point. + */ + nr_pages = 0; + do { + prev = nr_pages; + nr_pages = (hyp_mem_size >> PAGE_SHIFT) + prev; + nr_pages = DIV_ROUND_UP(nr_pages * sizeof(struct hyp_page), PAGE_SIZE); + nr_pages += __hyp_pgtable_max_pages(nr_pages); + } while (nr_pages != prev); + hyp_mem_size += nr_pages << PAGE_SHIFT; + + hyp_mem_base = memblock_find_in_range(0, memblock_end_of_DRAM(), + hyp_mem_size, SZ_2M); + if (!hyp_mem_base) { + kvm_err("Failed to reserve hyp memory\n"); + return; + } + memblock_reserve(hyp_mem_base, hyp_mem_size); + + kvm_info("Reserved %lld MiB at 0x%llx\n", hyp_mem_size >> 20, + hyp_mem_base); +} diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 278e163beda4..3cf9397dabdb 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1264,10 +1264,10 @@ static struct kvm_pgtable_mm_ops kvm_hyp_mm_ops = { .virt_to_phys = kvm_host_pa, }; +u32 hyp_va_bits; int kvm_mmu_init(void) { int err; - u32 hyp_va_bits; hyp_idmap_start = __pa_symbol(__hyp_idmap_text_start); hyp_idmap_start = ALIGN_DOWN(hyp_idmap_start, PAGE_SIZE); diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index 095540667f0f..f81da019b677 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -34,6 +34,7 @@ #include #include #include +#include #include #include #include @@ -390,6 +391,8 @@ void __init arm64_memblock_init(void) reserve_elfcorehdr(); + reserve_kvm_hyp(); + high_memory = __va(memblock_end_of_DRAM() - 1) + 1; dma_contiguous_reserve(arm64_dma32_phys_limit);
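The fixed-point loop in reserve_kvm_hyp() above is subtle: the hyp_vmemmap pages must themselves be covered by the vmemmap (and by page-tables), so the reservation is grown until it stops changing. The standalone program below reproduces the computation with illustrative constants (4 KiB pages, and an assumed 32-byte struct hyp_page); it is a sketch for intuition, not kernel code.

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PTRS_PER_PTE	512UL
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

/* Worst case of 4 page-table levels, mirroring __hyp_pgtable_max_pages() */
static unsigned long pgtable_max_pages(unsigned long nr_pages)
{
	unsigned long total = 0;
	int i;

	for (i = 0; i < 4; i++) {
		nr_pages = DIV_ROUND_UP(nr_pages, PTRS_PER_PTE);
		total += nr_pages;
	}
	return total;
}

int main(void)
{
	unsigned long hyp_mem_size = 64UL << 20;	/* assume a 64 MiB pool */
	unsigned long hyp_page_sz = 32;			/* assumed sizeof(struct hyp_page) */
	unsigned long nr_pages = 0, prev;

	do {
		prev = nr_pages;
		nr_pages = (hyp_mem_size >> PAGE_SHIFT) + prev;
		nr_pages = DIV_ROUND_UP(nr_pages * hyp_page_sz, PAGE_SIZE);
		nr_pages += pgtable_max_pages(nr_pages);
	} while (nr_pages != prev);

	/* Converges in three iterations for these numbers: 134 pages. */
	printf("vmemmap + backing page-tables: %lu pages\n", nr_pages);
	return 0;
}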
From patchwork Tue Nov 17 18:15:57 2020
From: Quentin Perret
Date: Tue, 17 Nov 2020 18:15:57 +0000
Subject: [RFC PATCH 17/27] KVM: arm64: Elevate Hyp mappings creation at EL2
Message-Id: <20201117181607.1761516-18-qperret@google.com>
In-Reply-To: <20201117181607.1761516-1-qperret@google.com>
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Julien Thierry, Suzuki K Poulose, Rob Herring, Frank Rowand
Cc: moderated list:ARM64 PORT (AARCH64 ARCHITECTURE), open list, open list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64), open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE, kernel-team@android.com, android-kvm@google.com, Quentin Perret

Previous commits have introduced infrastructure at EL2 to enable the Hyp code to manage its own memory, and more specifically its stage 1 page tables. However, this was preliminary work, and none of it is currently in use. Put all of this together by elevating the hyp mappings creation at EL2 when memory protection is enabled. In this case, the host kernel running at EL1 still creates _temporary_ Hyp mappings, only used while initializing the hypervisor, but frees them right after, and flips a static key marking the new 'protected' mode of operation. As such, all calls to create_hyp_mappings() after kvm init has finished turn into hypercalls, as the host now has no 'legal' way to modify the hypervisor page tables directly.
Signed-off-by: Quentin Perret --- arch/arm64/include/asm/kvm_mmu.h | 1 - arch/arm64/kvm/arm.c | 51 ++++++++++++++++++++++++++++++-- arch/arm64/kvm/mmu.c | 34 +++++++++++++++++++++ 3 files changed, 82 insertions(+), 4 deletions(-) diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h index cb104443d8e4..bb756757b51c 100644 --- a/arch/arm64/include/asm/kvm_mmu.h +++ b/arch/arm64/include/asm/kvm_mmu.h @@ -285,6 +285,5 @@ static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu) */ asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT)); } - #endif /* __ASSEMBLY__ */ #endif /* __ARM64_KVM_MMU_H__ */ diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index b1e1747e4bbf..cfe5cc55b425 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -1373,7 +1373,7 @@ static void cpu_prepare_hyp_mode(int cpu) __flush_dcache_area(params, sizeof(*params)); } -static void cpu_init_hyp_mode(void) +static void kvm_set_hyp_vector(void) { struct kvm_nvhe_init_params *params; struct arm_smccc_res res; @@ -1391,6 +1391,11 @@ static void cpu_init_hyp_mode(void) params = this_cpu_ptr_nvhe_sym(kvm_init_params); arm_smccc_1_1_hvc(KVM_HOST_SMCCC_FUNC(__kvm_hyp_init), virt_to_phys(params), &res); WARN_ON(res.a0 != SMCCC_RET_SUCCESS); +} + +static void cpu_init_hyp_mode(void) +{ + kvm_set_hyp_vector(); /* * Disabling SSBD on a non-VHE system requires us to enable SSBS @@ -1433,7 +1438,10 @@ static void cpu_set_hyp_vector(void) struct bp_hardening_data *data = this_cpu_ptr(&bp_hardening_data); void *vector = hyp_spectre_vector_selector[data->slot]; - *this_cpu_ptr_hyp_sym(kvm_hyp_vector) = (unsigned long)vector; + if (!is_protected_kvm_enabled()) + *this_cpu_ptr_hyp_sym(kvm_hyp_vector) = (unsigned long)vector; + else + kvm_call_hyp_nvhe(__hyp_cpu_set_vector, data->slot); } static void cpu_hyp_reinit(void) @@ -1441,13 +1449,14 @@ static void cpu_hyp_reinit(void) kvm_init_host_cpu_context(&this_cpu_ptr_hyp_sym(kvm_host_data)->host_ctxt); cpu_hyp_reset(); - cpu_set_hyp_vector(); if (is_kernel_in_hyp_mode()) kvm_timer_init_vhe(); else cpu_init_hyp_mode(); + cpu_set_hyp_vector(); + kvm_arm_init_debug(); if (vgic_present) @@ -1653,6 +1662,36 @@ static int copy_cpu_ftr_regs(void) return 0; } +static int kvm_hyp_enable_protection(void) +{ + void *per_cpu_base = kvm_ksym_ref(kvm_arm_hyp_percpu_base); + int ret, cpu; + void *addr; + + if (!is_protected_kvm_enabled()) + return 0; + + if (!hyp_mem_base) + return -ENOMEM; + + addr = phys_to_virt(hyp_mem_base); + ret = create_hyp_mappings(addr, addr + hyp_mem_size - 1, PAGE_HYP); + if (ret) + return ret; + + kvm_set_hyp_vector(); + ret = kvm_call_hyp_nvhe(__kvm_hyp_protect, hyp_mem_base, hyp_mem_size, + num_possible_cpus(), kern_hyp_va(per_cpu_base)); + if (ret) + return ret; + + free_hyp_pgds(); + for_each_possible_cpu(cpu) + free_page(per_cpu(kvm_arm_hyp_stack_page, cpu)); + + return 0; +} + /** * Inits Hyp-mode on all online CPUs */ @@ -1789,6 +1828,12 @@ static int init_hyp_mode(void) for_each_possible_cpu(cpu) cpu_prepare_hyp_mode(cpu); + err = kvm_hyp_enable_protection(); + if (err) { + kvm_err("Failed to enable hyp memory protection: %d\n", err); + goto out_err; + } + return 0; out_err: diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 3cf9397dabdb..5c2e0feb9689 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -225,15 +225,39 @@ void free_hyp_pgds(void) if (hyp_pgtable) { kvm_pgtable_hyp_destroy(hyp_pgtable); kfree(hyp_pgtable); + hyp_pgtable = NULL; } mutex_unlock(&kvm_hyp_pgd_mutex); } 
+static bool kvm_host_owns_hyp_mappings(void) +{ + if (static_branch_likely(&kvm_protected_mode_initialized)) + return false; + + /* + * This can happen at boot time when __create_hyp_mappings() is called + * after the hyp protection has been enabled, but the static key has + * not been flipped yet. + */ + if (!hyp_pgtable && is_protected_kvm_enabled()) + return false; + + BUG_ON(!hyp_pgtable); + + return true; +} + static int __create_hyp_mappings(unsigned long start, unsigned long size, unsigned long phys, enum kvm_pgtable_prot prot) { int err; + if (!kvm_host_owns_hyp_mappings()) { + return kvm_call_hyp_nvhe(__hyp_create_mappings, + start, size, phys, prot); + } + mutex_lock(&kvm_hyp_pgd_mutex); err = kvm_pgtable_hyp_map(hyp_pgtable, start, size, phys, prot); mutex_unlock(&kvm_hyp_pgd_mutex); @@ -295,6 +319,16 @@ static int __create_hyp_private_mapping(phys_addr_t phys_addr, size_t size, unsigned long base; int ret = 0; + if (!kvm_host_owns_hyp_mappings()) { + base = kvm_call_hyp_nvhe(__hyp_create_private_mapping, + phys_addr, size, prot); + if (!base) + return -ENOMEM; + *haddr = base; + + return 0; + } + mutex_lock(&kvm_hyp_pgd_mutex); /*
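Since the redirection logic of this patch is spread across several hunks, the call-graph comment below summarizes the end-to-end path a mapping request takes once the protected mode static key is flipped. The function names are the ones introduced by this series; the summary itself is a sketch, not a quote from the code.

/*
 * Forwarding path once the host no longer owns the hyp page-tables
 * (kvm_host_owns_hyp_mappings() == false):
 *
 *   create_hyp_mappings(from, to, prot)                 EL1, mmu.c
 *     __create_hyp_mappings(start, size, phys, prot)    host does not own hyp pgd
 *       kvm_call_hyp_nvhe(__hyp_create_mappings, ...)   HVC into the hypervisor
 *         handle___hyp_create_mappings(host_ctxt)       EL2, hyp-main.c
 *           __hyp_create_mappings(start, size, phys, prot)  EL2, mm.c, takes __hyp_pgd_lock
 *             kvm_pgtable_hyp_map(&hyp_pgtable, ...)    updates the EL2-owned stage 1
 */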
From patchwork Tue Nov 17 18:15:59 2020
From: Quentin Perret
Date: Tue, 17 Nov 2020 18:15:59 +0000
Subject: [RFC PATCH 19/27] KVM: arm64: Use kvm_arch in kvm_s2_mmu
Message-Id: <20201117181607.1761516-20-qperret@google.com>
In-Reply-To: <20201117181607.1761516-1-qperret@google.com>
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Julien Thierry, Suzuki K Poulose, Rob Herring, Frank Rowand
Cc: moderated list:ARM64 PORT (AARCH64 ARCHITECTURE), open list, open list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64), open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE, kernel-team@android.com, android-kvm@google.com, Quentin Perret

In order to make use of the stage 2 pgtable code for the host stage 2, change kvm_s2_mmu to use a kvm_arch pointer in lieu of the kvm pointer, as the host will have the former but not the latter.
Signed-off-by: Quentin Perret --- arch/arm64/include/asm/kvm_host.h | 2 +- arch/arm64/include/asm/kvm_mmu.h | 7 ++++++- arch/arm64/kvm/mmu.c | 8 ++++---- 3 files changed, 11 insertions(+), 6 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index ee8bb8021637..53b01d25e7d9 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -86,7 +86,7 @@ struct kvm_s2_mmu { /* The last vcpu id that ran on each physical CPU */ int __percpu *last_vcpu_ran; - struct kvm *kvm; + struct kvm_arch *arch; }; struct kvm_arch { diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h index bb756757b51c..714357ebd278 100644 --- a/arch/arm64/include/asm/kvm_mmu.h +++ b/arch/arm64/include/asm/kvm_mmu.h @@ -275,7 +275,7 @@ static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu *mmu) */ static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu) { - write_sysreg(kern_hyp_va(mmu->kvm)->arch.vtcr, vtcr_el2); + write_sysreg(kern_hyp_va(mmu->arch)->vtcr, vtcr_el2); write_sysreg(kvm_get_vttbr(mmu), vttbr_el2); /* @@ -285,5 +285,10 @@ static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu) */ asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT)); } + +static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu) +{ + return container_of(mmu->arch, struct kvm, arch); +} #endif /* __ASSEMBLY__ */ #endif /* __ARM64_KVM_MMU_H__ */ diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 384f2acc0115..3b1c53e754ee 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -169,7 +169,7 @@ static void *kvm_host_va(phys_addr_t phys) static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size, bool may_block) { - struct kvm *kvm = mmu->kvm; + struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu); phys_addr_t end = start + size; assert_spin_locked(&kvm->mmu_lock); @@ -474,7 +474,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu) for_each_possible_cpu(cpu) *per_cpu_ptr(mmu->last_vcpu_ran, cpu) = -1; - mmu->kvm = kvm; + mmu->arch = &kvm->arch; mmu->pgt = pgt; mmu->pgd_phys = __pa(pgt->pgd); mmu->vmid.vmid_gen = 0; @@ -556,7 +556,7 @@ void stage2_unmap_vm(struct kvm *kvm) void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu) { - struct kvm *kvm = mmu->kvm; + struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu); struct kvm_pgtable *pgt = NULL; spin_lock(&kvm->mmu_lock); @@ -625,7 +625,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa, */ static void stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end) { - struct kvm *kvm = mmu->kvm; + struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu); stage2_apply_range_resched(kvm, addr, end, kvm_pgtable_stage2_wrprotect); }
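The new kvm_s2_mmu_to_kvm() helper above relies on the classic container_of() pattern: because struct kvm embeds its struct kvm_arch, a pointer to the member is enough to recover the enclosing VM. The minimal user-space demo below illustrates the mechanics with stand-in types; these are not the kernel's definitions.

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Stand-in types for illustration only */
struct demo_arch { unsigned long vtcr; };
struct demo_kvm { int id; struct demo_arch arch; };

int main(void)
{
	struct demo_kvm vm = { .id = 42 };
	struct demo_arch *arch = &vm.arch;

	/* Recover the enclosing struct from a pointer to its member */
	struct demo_kvm *kvm = container_of(arch, struct demo_kvm, arch);

	printf("%d\n", kvm->id);	/* prints 42 */
	return 0;
}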
From patchwork Tue Nov 17 18:16:00 2020
From: Quentin Perret
Date: Tue, 17 Nov 2020 18:16:00 +0000
Subject: [RFC PATCH 20/27] KVM: arm64: Set host stage 2 using kvm_nvhe_init_params
Message-Id: <20201117181607.1761516-21-qperret@google.com>
In-Reply-To: <20201117181607.1761516-1-qperret@google.com>
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Julien Thierry, Suzuki K Poulose, Rob Herring, Frank Rowand
Cc: moderated list:ARM64 PORT (AARCH64 ARCHITECTURE), open list, open list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64), open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE, kernel-team@android.com, android-kvm@google.com, Quentin Perret

Move the registers relevant to host stage 2 enablement to kvm_nvhe_init_params to prepare the ground for enabling it in later patches.
Signed-off-by: Quentin Perret --- arch/arm64/include/asm/kvm_asm.h | 3 +++ arch/arm64/kernel/asm-offsets.c | 3 +++ arch/arm64/kvm/arm.c | 5 +++++ arch/arm64/kvm/hyp/nvhe/hyp-init.S | 9 +++++++++ arch/arm64/kvm/hyp/nvhe/switch.c | 5 +---- 5 files changed, 21 insertions(+), 4 deletions(-) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index 9266b17f8ba9..089eea6e54fc 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -158,6 +158,9 @@ struct kvm_nvhe_init_params { unsigned long stack_hyp_va; unsigned long entry_hyp_va; phys_addr_t pgd_pa; + unsigned long hcr_el2; + unsigned long vttbr; + unsigned long vtcr; }; /* Translate a kernel address @ptr into its equivalent linear mapping */ diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index 9752100bf01f..2c3813bff6ea 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -115,6 +115,9 @@ int main(void) DEFINE(NVHE_INIT_STACK_HYP_VA, offsetof(struct kvm_nvhe_init_params, stack_hyp_va)); DEFINE(NVHE_INIT_ENTRY_HYP_VA, offsetof(struct kvm_nvhe_init_params, entry_hyp_va)); DEFINE(NVHE_INIT_PGD_PA, offsetof(struct kvm_nvhe_init_params, pgd_pa)); + DEFINE(NVHE_INIT_HCR_EL2, offsetof(struct kvm_nvhe_init_params, hcr_el2)); + DEFINE(NVHE_INIT_VTTBR, offsetof(struct kvm_nvhe_init_params, vttbr)); + DEFINE(NVHE_INIT_VTCR, offsetof(struct kvm_nvhe_init_params, vtcr)); #endif #ifdef CONFIG_CPU_PM DEFINE(CPU_CTX_SP, offsetof(struct cpu_suspend_ctx, sp)); diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index cfe5cc55b425..e06c95a10dba 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -1365,6 +1365,11 @@ static void cpu_prepare_hyp_mode(int cpu) params->stack_hyp_va = kern_hyp_va(per_cpu(kvm_arm_hyp_stack_page, cpu) + PAGE_SIZE); params->entry_hyp_va = kern_hyp_va((unsigned long)kvm_ksym_ref_nvhe(__kvm_hyp_psci_cpu_entry)); params->pgd_pa = kvm_mmu_get_httbr(); + if (is_protected_kvm_enabled()) + params->hcr_el2 = HCR_HOST_NVHE_PROTECTED_FLAGS; + else + params->hcr_el2 = HCR_HOST_NVHE_FLAGS; + params->vttbr = params->vtcr = 0; /* * Flush the init params from the data cache because the struct will diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-init.S b/arch/arm64/kvm/hyp/nvhe/hyp-init.S index e2d62297edfe..9f3f3098670a 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-init.S +++ b/arch/arm64/kvm/hyp/nvhe/hyp-init.S @@ -103,6 +103,15 @@ alternative_else_nop_endif ldr x1, [x0, #NVHE_INIT_STACK_HYP_VA] mov sp, x1 + ldr x1, [x0, #NVHE_INIT_HCR_EL2] + msr hcr_el2, x1 + + ldr x1, [x0, #NVHE_INIT_VTTBR] + msr vttbr_el2, x1 + + ldr x1, [x0, #NVHE_INIT_VTCR] + msr vtcr_el2, x1 + ldr x1, [x0, #NVHE_INIT_PGD_PA] phys_to_ttbr x0, x1 alternative_if ARM64_HAS_CNP diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index f3d0e9eca56c..979a76cdf9fb 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -97,10 +97,7 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu) mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT; write_sysreg(mdcr_el2, mdcr_el2); - if (is_protected_kvm_enabled()) - write_sysreg(HCR_HOST_NVHE_PROTECTED_FLAGS, hcr_el2); - else - write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2); + write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2); write_sysreg(CPTR_EL2_DEFAULT, cptr_el2); write_sysreg(__kvm_hyp_host_vector, vbar_el2); }
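To summarize the host-side half of this patch, the condensed sketch below shows how one CPU's EL2 init parameters are now assembled. It compresses the cpu_prepare_hyp_mode() hunk above into a single function for readability; it is not a verbatim quote from the series.

/* Condensed restatement of the cpu_prepare_hyp_mode() change above */
static void example_prepare_nvhe_params(struct kvm_nvhe_init_params *params)
{
	if (is_protected_kvm_enabled())
		params->hcr_el2 = HCR_HOST_NVHE_PROTECTED_FLAGS;
	else
		params->hcr_el2 = HCR_HOST_NVHE_FLAGS;

	/* Host stage 2 is left disabled for now; later patches set these. */
	params->vttbr = 0;
	params->vtcr = 0;

	/* EL2 reads this struct with the MMU off, so clean it to PoC. */
	__flush_dcache_area(params, sizeof(*params));
}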
From patchwork Tue Nov 17 18:16:03 2020
From: Quentin Perret
Date: Tue, 17 Nov 2020 18:16:03 +0000
Message-Id: <20201117181607.1761516-24-qperret@google.com>
In-Reply-To: <20201117181607.1761516-1-qperret@google.com>
Subject: [RFC PATCH 23/27] KVM: arm64: Refactor __populate_fault_info()
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Julien Thierry, Suzuki K Poulose, Rob Herring, Frank Rowand
Cc: moderated list:ARM64 PORT (AARCH64 ARCHITECTURE), open list, open list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64), open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE, kernel-team@android.com, android-kvm@google.com, Quentin Perret

Refactor __populate_fault_info() to introduce __get_fault_info() which will be used once the host is wrapped in a stage 2.

Signed-off-by: Quentin Perret --- arch/arm64/kvm/hyp/include/hyp/switch.h | 36 +++++++++++++++---------- 1 file changed, 22 insertions(+), 14 deletions(-) diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 84473574c2e7..e9005255d639 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -157,19 +157,9 @@ static inline bool __translate_far_to_hpfar(u64 far, u64 *hpfar) return true; } -static inline bool __populate_fault_info(struct kvm_vcpu *vcpu) +static inline bool __get_fault_info(u64 esr, u64 *far, u64 *hpfar) { - u8 ec; - u64 esr; - u64 hpfar, far; - - esr = vcpu->arch.fault.esr_el2; - ec = ESR_ELx_EC(esr); - - if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW) - return true; - - far = read_sysreg_el2(SYS_FAR); + *far = read_sysreg_el2(SYS_FAR); /* * The HPFAR can be invalid if the stage 2 fault did not @@ -185,12 +175,30 @@ static inline bool __populate_fault_info(struct kvm_vcpu *vcpu) if (!(esr & ESR_ELx_S1PTW) && (cpus_have_final_cap(ARM64_WORKAROUND_834220) || (esr & ESR_ELx_FSC_TYPE) == FSC_PERM)) { - if (!__translate_far_to_hpfar(far, &hpfar)) + if (!__translate_far_to_hpfar(*far, hpfar)) return false; } else { - hpfar = read_sysreg(hpfar_el2); + *hpfar = read_sysreg(hpfar_el2); } + return true; +} + +static inline bool __populate_fault_info(struct kvm_vcpu *vcpu) +{ + u8 ec; + u64 esr; + u64 hpfar, far; + + esr = vcpu->arch.fault.esr_el2; + ec = ESR_ELx_EC(esr); + + if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW) + return true; + + if (!__get_fault_info(esr, &far, &hpfar)) + return false; + vcpu->arch.fault.far_el2 = far; vcpu->arch.fault.hpfar_el2 = hpfar; return true;
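The point of this split is that __get_fault_info() takes no vcpu, so a later patch can decode faults taken from the *host* at EL2. A caller might look like the sketch below; the function name and surrounding logic are invented for illustration and do not appear in this series as-is.

/* Hypothetical EL2 caller, for illustration only */
static void example_handle_host_mem_abort(void)
{
	u64 esr = read_sysreg_el2(SYS_ESR);
	u64 far, hpfar;

	if (!__get_fault_info(esr, &far, &hpfar))
		return;	/* the FAR could not be translated to an HPFAR */

	/* A host stage 2 handler would decode the faulting IPA from hpfar here. */
}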
From patchwork Tue Nov 17 18:16:05 2020
From: Quentin Perret
Date: Tue, 17 Nov 2020 18:16:05 +0000
Subject: [RFC PATCH 25/27] KVM: arm64: Reserve memory for host stage 2
Message-Id: <20201117181607.1761516-26-qperret@google.com>
In-Reply-To: <20201117181607.1761516-1-qperret@google.com>
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Julien Thierry, Suzuki K Poulose, Rob Herring, Frank Rowand
Cc: moderated list:ARM64 PORT (AARCH64 ARCHITECTURE), open list, open list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64), open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE, kernel-team@android.com, android-kvm@google.com, Quentin Perret

Extend the memory pool allocated for the hypervisor to include enough pages to map all of memory at page granularity for the host stage 2. While at it, also reserve some memory for device mappings.
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/mm.h | 36 ++++++++++++++++++++++++----
 arch/arm64/kvm/hyp/nvhe/setup.c      | 12 ++++++++++
 arch/arm64/kvm/hyp/reserved_mem.c    |  2 ++
 3 files changed, 46 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h
index 5a3ad6f4e5bc..b79be2580164 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
@@ -52,15 +52,12 @@ static inline unsigned long __hyp_pgtable_max_pages(unsigned long nr_pages)
 	return total;
 }
 
-static inline unsigned long hyp_s1_pgtable_size(void)
+static inline unsigned long __hyp_pgtable_total_size(void)
 {
 	struct hyp_memblock_region *reg;
 	unsigned long nr_pages, res = 0;
 	int i;
 
-	if (kvm_nvhe_sym(hyp_memblock_nr) <= 0)
-		return 0;
-
 	for (i = 0; i < kvm_nvhe_sym(hyp_memblock_nr); i++) {
 		reg = &kvm_nvhe_sym(hyp_memory)[i];
 		nr_pages = (reg->end - reg->start) >> PAGE_SHIFT;
@@ -68,6 +65,18 @@ static inline unsigned long hyp_s1_pgtable_size(void)
 		res += nr_pages << PAGE_SHIFT;
 	}
 
+	return res;
+}
+
+static inline unsigned long hyp_s1_pgtable_size(void)
+{
+	unsigned long res, nr_pages;
+
+	if (kvm_nvhe_sym(hyp_memblock_nr) <= 0)
+		return 0;
+
+	res = __hyp_pgtable_total_size();
+
 	/* Allow 1 GiB for private mappings */
 	nr_pages = (1 << 30) >> PAGE_SHIFT;
 	nr_pages = __hyp_pgtable_max_pages(nr_pages);
@@ -76,4 +85,23 @@ static inline unsigned long hyp_s1_pgtable_size(void)
 	return res;
 }
 
+static inline unsigned long host_s2_mem_pgtable_size(void)
+{
+	unsigned long max_pgd_sz = 16 << PAGE_SHIFT;
+
+	if (kvm_nvhe_sym(hyp_memblock_nr) <= 0)
+		return 0;
+
+	return __hyp_pgtable_total_size() + max_pgd_sz;
+}
+
+static inline unsigned long host_s2_dev_pgtable_size(void)
+{
+	if (kvm_nvhe_sym(hyp_memblock_nr) <= 0)
+		return 0;
+
+	/* Allow 1 GiB for private mappings */
+	return __hyp_pgtable_max_pages((1 << 30) >> PAGE_SHIFT) << PAGE_SHIFT;
+}
+
 #endif /* __KVM_HYP_MM_H */

diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 9679c97b875b..b73e6b08cfba 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -24,6 +24,8 @@ unsigned long hyp_nr_cpus;
 static void *stacks_base;
 static void *vmemmap_base;
 static void *hyp_pgt_base;
+static void *host_s2_mem_pgt_base;
+static void *host_s2_dev_pgt_base;
 
 static int divide_memory_pool(void *virt, unsigned long size)
 {
@@ -46,6 +48,16 @@ static int divide_memory_pool(void *virt, unsigned long size)
 	if (!hyp_pgt_base)
 		return -ENOMEM;
 
+	nr_pages = host_s2_mem_pgtable_size() >> PAGE_SHIFT;
+	host_s2_mem_pgt_base = hyp_early_alloc_contig(nr_pages);
+	if (!host_s2_mem_pgt_base)
+		return -ENOMEM;
+
+	nr_pages = host_s2_dev_pgtable_size() >> PAGE_SHIFT;
+	host_s2_dev_pgt_base = hyp_early_alloc_contig(nr_pages);
+	if (!host_s2_dev_pgt_base)
+		return -ENOMEM;
+
 	return 0;
 }

diff --git a/arch/arm64/kvm/hyp/reserved_mem.c b/arch/arm64/kvm/hyp/reserved_mem.c
index 02b0b18006f5..c2c0484b6211 100644
--- a/arch/arm64/kvm/hyp/reserved_mem.c
+++ b/arch/arm64/kvm/hyp/reserved_mem.c
@@ -47,6 +47,8 @@ void __init reserve_kvm_hyp(void)
 	hyp_mem_size += num_possible_cpus() << PAGE_SHIFT;
 	hyp_mem_size += hyp_s1_pgtable_size();
+	hyp_mem_size += host_s2_mem_pgtable_size();
+	hyp_mem_size += host_s2_dev_pgtable_size();
 
 	/*
 	 * The hyp_vmemmap needs to be backed by pages, but these pages
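
To put rough numbers on the reservation: with a 4 KiB granule, each page-table page holds 512 entries, so mapping 4 GiB of memory (2^20 pages) at page granularity needs 2048 leaf tables, 4 tables at the next level and 1 above that, i.e. about 8 MiB of stage 2 tables. A standalone model of that arithmetic (the body of __hyp_pgtable_max_pages() is not shown here, so the exact rounding is an assumption):

/*
 * Standalone model of the page-table sizing arithmetic; assumes a 4 KiB
 * granule and may differ in detail from the in-kernel helper.
 */
#include <stdio.h>

#define PAGE_SHIFT	12
#define PTRS_PER_PTE	512

static unsigned long pgtable_pages(unsigned long nr_pages)
{
	unsigned long total = 0;

	/* Each level needs one entry per page (or table) of the level below. */
	while (nr_pages > 1) {
		nr_pages = (nr_pages + PTRS_PER_PTE - 1) / PTRS_PER_PTE;
		total += nr_pages;
	}
	return total;
}

int main(void)
{
	unsigned long mem_pages = (4UL << 30) >> PAGE_SHIFT; /* 4 GiB of RAM */

	printf("%lu table pages (%lu KiB)\n", pgtable_pages(mem_pages),
	       pgtable_pages(mem_pages) << (PAGE_SHIFT - 10));
	return 0;
}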
From patchwork Tue Nov 17 18:16:06 2020
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 326271
Date: Tue, 17 Nov 2020 18:16:06 +0000
In-Reply-To: <20201117181607.1761516-1-qperret@google.com>
Message-Id: <20201117181607.1761516-27-qperret@google.com>
Subject: [RFC PATCH 26/27] KVM: arm64: Sort the memblock regions list
From: Quentin Perret
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Julien Thierry, Suzuki K Poulose, Rob Herring, Frank Rowand
Cc: "moderated list:ARM64 PORT (AARCH64 ARCHITECTURE)", open list, "open list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)", "open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE", kernel-team@android.com, android-kvm@google.com, Quentin Perret
Precedence: bulk
X-Mailing-List: devicetree@vger.kernel.org

The hypervisor will need the list of memblock regions sorted by increasing start address to make look-ups more efficient. Make the host do the hard work early, while it is still trusted, to avoid the need for a sorting library at EL2.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_host.h |  1 +
 arch/arm64/kvm/arm.c              |  1 +
 arch/arm64/kvm/hyp/reserved_mem.c | 18 ++++++++++++++++++
 3 files changed, 20 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 53b01d25e7d9..ec304a5c728b 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -746,6 +746,7 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
 extern phys_addr_t hyp_mem_base;
 extern phys_addr_t hyp_mem_size;
 void __init reserve_kvm_hyp(void);
+void kvm_sort_memblock_regions(void);
 #else
 static inline void reserve_kvm_hyp(void) { }
 #endif

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e06c95a10dba..8160a0d12a58 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1685,6 +1685,7 @@ static int kvm_hyp_enable_protection(void)
 		return ret;
 
 	kvm_set_hyp_vector();
+	kvm_sort_memblock_regions();
 	ret = kvm_call_hyp_nvhe(__kvm_hyp_protect, hyp_mem_base, hyp_mem_size,
 				num_possible_cpus(), kern_hyp_va(per_cpu_base));
 	if (ret)

diff --git a/arch/arm64/kvm/hyp/reserved_mem.c b/arch/arm64/kvm/hyp/reserved_mem.c
index c2c0484b6211..7da8e2915c1c 100644
--- a/arch/arm64/kvm/hyp/reserved_mem.c
+++ b/arch/arm64/kvm/hyp/reserved_mem.c
@@ -6,6 +6,7 @@
 #include <linux/kvm_host.h>
 #include <linux/memblock.h>
+#include <linux/sort.h>
 
 #include <asm/kvm_host.h>
 
@@ -31,6 +32,23 @@ void __init early_init_dt_add_memory_hyp(u64 base, u64 size)
 	kvm_nvhe_sym(hyp_memblock_nr)++;
 }
 
+static int cmp_hyp_memblock(const void *p1, const void *p2)
+{
+	const struct hyp_memblock_region *r1 = p1;
+	const struct hyp_memblock_region *r2 = p2;
+
+	return r1->start < r2->start ? -1 : (r1->start > r2->start);
+}
+
+void kvm_sort_memblock_regions(void)
+{
+	sort(kvm_nvhe_sym(hyp_memory),
+	     kvm_nvhe_sym(hyp_memblock_nr),
+	     sizeof(struct hyp_memblock_region),
+	     cmp_hyp_memblock,
+	     NULL);
+}
+
 extern bool enable_protected_kvm;
 void __init reserve_kvm_hyp(void)
 {
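
The payoff comes on the EL2 side, where a look-up over the sorted array can be a binary search instead of a linear scan. A sketch of what such a look-up could look like, assuming the regions do not overlap; find_mem_range() is a hypothetical name, not something this patch adds:

/*
 * Illustrative EL2-side look-up enabled by the sort above; assumes the
 * regions are sorted by start address and do not overlap.
 */
static bool find_mem_range(u64 addr, struct hyp_memblock_region *out)
{
	int left = 0, right = kvm_nvhe_sym(hyp_memblock_nr);

	while (left < right) {
		int mid = left + (right - left) / 2;
		struct hyp_memblock_region *reg = &kvm_nvhe_sym(hyp_memory)[mid];

		if (addr < reg->start)
			right = mid;		/* addr is below this region */
		else if (addr >= reg->end)
			left = mid + 1;		/* addr is above this region */
		else {
			*out = *reg;		/* reg->start <= addr < reg->end */
			return true;
		}
	}
	return false;
}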