From patchwork Fri Dec 14 11:58:51 2018
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 153829
From: Julien Grall
To: xen-devel@lists.xenproject.org
Date: Fri, 14 Dec 2018 11:58:51 +0000
Message-Id: <20181214115855.6713-2-julien.grall@arm.com>
In-Reply-To: <20181214115855.6713-1-julien.grall@arm.com>
References: <20181214115855.6713-1-julien.grall@arm.com>
Subject: [Xen-devel] [PATCH for-4.12 v3 1/5] xen/arm: vcpreg: Add
wrappers to handle co-proc access trapped by HCR_EL2.TVM
Cc: Julien Grall, sstabellini@kernel.org

A follow-up patch will require emulating some accesses to co-processor
registers trapped by HCR_EL2.TVM. When the bit is set, all NS EL1 writes
to the virtual memory control registers are trapped to the hypervisor.

This patch adds the infrastructure to pass such accesses through to the
host registers. For convenience, a set of macros is added to generate
the different helpers.

Note that HCR_EL2.TVM will be set dynamically in a follow-up patch.

Signed-off-by: Julien Grall
Reviewed-by: Stefano Stabellini

---
Changes in v3:
    - Add Stefano's reviewed-by

Changes in v2:
    - Add missing include vreg.h
    - Fixup mask TMV_REG32_COMBINED
    - Update comments
---
 xen/arch/arm/vcpreg.c        | 149 +++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/cpregs.h |   1 +
 2 files changed, 150 insertions(+)

diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
index 7b783e4bcc..550c25ec3f 100644
--- a/xen/arch/arm/vcpreg.c
+++ b/xen/arch/arm/vcpreg.c
@@ -23,8 +23,129 @@
 #include
 #include
 #include
+#include
 #include

+/*
+ * Macros to help generate helpers for registers trapped when
+ * HCR_EL2.TVM is set.
+ *
+ * Note that it only traps NS write access from EL1.
+ *
+ * - TVM_REG() should not be used outside of these macros. It is there
+ *   to help define TVM_REG32() and TVM_REG64().
+ * - TVM_REG32(regname, xreg) and TVM_REG64(regname, xreg) generate a
+ *   helper accessing a 32-bit and a 64-bit register respectively.
+ *   "regname" is the Arm32 name and "xreg" the Arm64 name.
+ * - TVM_REG32_COMBINED(lowreg, hireg, xreg) generates helpers for a
+ *   pair of registers sharing the same Arm64 register but that are two
+ *   distinct Arm32 registers. "lowreg" and "hireg" contain the names of
+ *   the Arm32 registers, "xreg" the name of the combined Arm64
+ *   register. The definitions of "lowreg" and "hireg" match the Armv8
+ *   specification: "lowreg" is an alias to xreg[31:0] and "hireg" an
+ *   alias to xreg[63:32].
+ */
+
+/* The name is passed from the upper macro to workaround macro expansion. */
+#define TVM_REG(sz, func, reg...)                                           \
+static bool func(struct cpu_user_regs *regs, uint##sz##_t *r, bool read)    \
+{                                                                           \
+    GUEST_BUG_ON(read);                                                     \
+    WRITE_SYSREG##sz(*r, reg);                                              \
+                                                                            \
+    return true;                                                            \
+}
+
+#define TVM_REG32(regname, xreg) TVM_REG(32, vreg_emulate_##regname, xreg)
+#define TVM_REG64(regname, xreg) TVM_REG(64, vreg_emulate_##regname, xreg)
+
+#ifdef CONFIG_ARM_32
+#define TVM_REG32_COMBINED(lowreg, hireg, xreg)                             \
+    /* Use TVM_REG directly to workaround macro expansion. */               \
+    TVM_REG(32, vreg_emulate_##lowreg, lowreg)                              \
+    TVM_REG(32, vreg_emulate_##hireg, hireg)
+
+#else /* CONFIG_ARM_64 */
+#define TVM_REG32_COMBINED(lowreg, hireg, xreg)                             \
+static bool vreg_emulate_##xreg(struct cpu_user_regs *regs, uint32_t *r,    \
+                                bool read, bool hi)                         \
+{                                                                           \
+    register_t reg = READ_SYSREG(xreg);                                     \
+                                                                            \
+    GUEST_BUG_ON(read);                                                     \
+    if ( hi ) /* reg[63:32] is AArch32 register hireg */                    \
+    {                                                                       \
+        reg &= GENMASK(31, 0);                                              \
+        reg |= ((uint64_t)*r) << 32;                                        \
+    }                                                                       \
+    else /* reg[31:0] is AArch32 register lowreg. */                        \
+    {                                                                       \
+        reg &= GENMASK(63, 32);                                             \
+        reg |= *r;                                                          \
+    }                                                                       \
+    WRITE_SYSREG(reg, xreg);                                                \
+                                                                            \
+    return true;                                                            \
+}                                                                           \
+                                                                            \
+static bool vreg_emulate_##lowreg(struct cpu_user_regs *regs, uint32_t *r,  \
+                                  bool read)                                \
+{                                                                           \
+    return vreg_emulate_##xreg(regs, r, read, false);                       \
+}                                                                           \
+                                                                            \
+static bool vreg_emulate_##hireg(struct cpu_user_regs *regs, uint32_t *r,   \
+                                 bool read)                                 \
+{                                                                           \
+    return vreg_emulate_##xreg(regs, r, read, true);                        \
+}
+#endif
+
+/* Defining helpers for emulating co-processor registers. */
+TVM_REG32(SCTLR, SCTLR_EL1)
+/*
+ * AArch32 provides two ways to access TTBR* depending on the access
+ * size, whilst AArch64 provides only one.
+ *
+ * When using AArch32, for simplicity, use the same access size as the
+ * guest.
+ */
+#ifdef CONFIG_ARM_32
+TVM_REG32(TTBR0_32, TTBR0_32)
+TVM_REG32(TTBR1_32, TTBR1_32)
+#else
+TVM_REG32(TTBR0_32, TTBR0_EL1)
+TVM_REG32(TTBR1_32, TTBR1_EL1)
+#endif
+TVM_REG64(TTBR0, TTBR0_EL1)
+TVM_REG64(TTBR1, TTBR1_EL1)
+/* AArch32 registers TTBCR and TTBCR2 share AArch64 register TCR_EL1. */
+TVM_REG32_COMBINED(TTBCR, TTBCR2, TCR_EL1)
+TVM_REG32(DACR, DACR32_EL2)
+TVM_REG32(DFSR, ESR_EL1)
+TVM_REG32(IFSR, IFSR32_EL2)
+/* AArch32 registers DFAR and IFAR share AArch64 register FAR_EL1. */
+TVM_REG32_COMBINED(DFAR, IFAR, FAR_EL1)
+TVM_REG32(ADFSR, AFSR0_EL1)
+TVM_REG32(AIFSR, AFSR1_EL1)
+/* AArch32 registers MAIR0 and MAIR1 share AArch64 register MAIR_EL1. */
+TVM_REG32_COMBINED(MAIR0, MAIR1, MAIR_EL1)
+/* AArch32 registers AMAIR0 and AMAIR1 share AArch64 register AMAIR_EL1. */
+TVM_REG32_COMBINED(AMAIR0, AMAIR1, AMAIR_EL1)
+TVM_REG32(CONTEXTIDR, CONTEXTIDR_EL1)
+
+/* Macro to easily generate cases for co-processor emulation. */
+#define GENERATE_CASE(reg, sz)                                      \
+    case HSR_CPREG##sz(reg):                                        \
+    {                                                               \
+        bool res;                                                   \
+                                                                    \
+        res = vreg_emulate_cp##sz(regs, hsr, vreg_emulate_##reg);   \
+        ASSERT(res);                                                \
+        break;                                                      \
+    }
+
 void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
 {
     const struct hsr_cp32 cp32 = hsr.cp32;
@@ -65,6 +186,31 @@ void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
         break;

     /*
+     * HCR_EL2.TVM
+     *
+     * ARMv8 (DDI 0487D.a): Table D1-38
+     */
+    GENERATE_CASE(SCTLR, 32)
+    GENERATE_CASE(TTBR0_32, 32)
+    GENERATE_CASE(TTBR1_32, 32)
+    GENERATE_CASE(TTBCR, 32)
+    GENERATE_CASE(TTBCR2, 32)
+    GENERATE_CASE(DACR, 32)
+    GENERATE_CASE(DFSR, 32)
+    GENERATE_CASE(IFSR, 32)
+    GENERATE_CASE(DFAR, 32)
+    GENERATE_CASE(IFAR, 32)
+    GENERATE_CASE(ADFSR, 32)
+    GENERATE_CASE(AIFSR, 32)
+    /* AKA PRRR */
+    GENERATE_CASE(MAIR0, 32)
+    /* AKA NMRR */
+    GENERATE_CASE(MAIR1, 32)
+    GENERATE_CASE(AMAIR0, 32)
+    GENERATE_CASE(AMAIR1, 32)
+    GENERATE_CASE(CONTEXTIDR, 32)
+
+    /*
      * MDCR_EL2.TPM
      *
      * ARMv7 (DDI 0406C.b): B1.14.17
@@ -193,6 +339,9 @@ void do_cp15_64(struct cpu_user_regs *regs, const union hsr hsr)
             return inject_undef_exception(regs, hsr);
         break;

+    GENERATE_CASE(TTBR0, 64)
+    GENERATE_CASE(TTBR1, 64)
+
     /*
      * CPTR_EL2.T{0..9,12..13}
      *
diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
index 97a3c6f1c1..8fd344146e 100644
--- a/xen/include/asm-arm/cpregs.h
+++ b/xen/include/asm-arm/cpregs.h
@@ -140,6 +140,7 @@
 /* CP15 CR2: Translation Table Base and Control Registers */
 #define TTBCR           p15,0,c2,c0,2   /* Translation Table Base Control Register */
+#define TTBCR2          p15,0,c2,c0,3   /* Translation Table Base Control Register 2 */
 #define TTBR0           p15,0,c2        /* Translation Table Base Reg. 0 */
 #define TTBR1           p15,1,c2        /* Translation Table Base Reg. 1 */
 #define HTTBR           p15,4,c2        /* Hyp.
Translation Table Base Register */

From patchwork Fri Dec 14 11:58:52 2018
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 153830
From: Julien Grall
To: xen-devel@lists.xenproject.org
Date: Fri, 14 Dec 2018 11:58:52 +0000
Message-Id: <20181214115855.6713-3-julien.grall@arm.com>
In-Reply-To: <20181214115855.6713-1-julien.grall@arm.com>
References: <20181214115855.6713-1-julien.grall@arm.com>
Subject: [Xen-devel] [PATCH for-4.12 v3 2/5] xen/arm: vsysreg: Add
wrapper to handle sysreg access trapped by HCR_EL2.TVM
Cc: Julien Grall, sstabellini@kernel.org

A follow-up patch will require emulating some accesses to system
registers trapped by HCR_EL2.TVM. When the bit is set, all NS EL1 writes
to the virtual memory control registers are trapped to the hypervisor.

This patch adds the infrastructure to pass such accesses through to the
host registers.

Note that HCR_EL2.TVM will be set dynamically in a follow-up patch.

Signed-off-by: Julien Grall
Reviewed-by: Stefano Stabellini

---
Changes in v2:
    - Add missing include vreg.h
    - Update documentation reference to the latest one
---
 xen/arch/arm/arm64/vsysreg.c | 58 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 58 insertions(+)

diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
index 6e60824572..16ac9c344a 100644
--- a/xen/arch/arm/arm64/vsysreg.c
+++ b/xen/arch/arm/arm64/vsysreg.c
@@ -21,8 +21,49 @@
 #include
 #include
 #include
+#include
 #include

+/*
+ * Macro to help generate helpers for registers trapped when
+ * HCR_EL2.TVM is set.
+ *
+ * Note that it only traps NS write access from EL1.
+ */
+#define TVM_REG(reg)                                                \
+static bool vreg_emulate_##reg(struct cpu_user_regs *regs,          \
+                               uint64_t *r, bool read)              \
+{                                                                   \
+    GUEST_BUG_ON(read);                                             \
+    WRITE_SYSREG64(*r, reg);                                        \
+                                                                    \
+    return true;                                                    \
+}
+
+/* Defining helpers for emulating sysreg registers. */
+TVM_REG(SCTLR_EL1)
+TVM_REG(TTBR0_EL1)
+TVM_REG(TTBR1_EL1)
+TVM_REG(TCR_EL1)
+TVM_REG(ESR_EL1)
+TVM_REG(FAR_EL1)
+TVM_REG(AFSR0_EL1)
+TVM_REG(AFSR1_EL1)
+TVM_REG(MAIR_EL1)
+TVM_REG(AMAIR_EL1)
+TVM_REG(CONTEXTIDR_EL1)
+
+/* Macro to easily generate cases for sysreg emulation. */
+#define GENERATE_CASE(reg)                                          \
+    case HSR_SYSREG_##reg:                                          \
+    {                                                               \
+        bool res;                                                   \
+                                                                    \
+        res = vreg_emulate_sysreg64(regs, hsr, vreg_emulate_##reg); \
+        ASSERT(res);                                                \
+        break;                                                      \
+    }
+
 void do_sysreg(struct cpu_user_regs *regs,
@@ -44,6 +85,23 @@ void do_sysreg(struct cpu_user_regs *regs,
         break;

     /*
+     * HCR_EL2.TVM
+     *
+     * ARMv8 (DDI 0487D.a): Table D1-38
+     */
+    GENERATE_CASE(SCTLR_EL1)
+    GENERATE_CASE(TTBR0_EL1)
+    GENERATE_CASE(TTBR1_EL1)
+    GENERATE_CASE(TCR_EL1)
+    GENERATE_CASE(ESR_EL1)
+    GENERATE_CASE(FAR_EL1)
+    GENERATE_CASE(AFSR0_EL1)
+    GENERATE_CASE(AFSR1_EL1)
+    GENERATE_CASE(MAIR_EL1)
+    GENERATE_CASE(AMAIR_EL1)
+    GENERATE_CASE(CONTEXTIDR_EL1)
+
+    /*
      * MDCR_EL2.TDRA
      *
      * ARMv8 (DDI 0487A.d): D1-1508 Table D1-57

From patchwork Fri Dec 14 11:58:53 2018
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 153827
From: Julien Grall
To: xen-devel@lists.xenproject.org
Date: Fri, 14 Dec 2018 11:58:53 +0000
Message-Id: <20181214115855.6713-4-julien.grall@arm.com>
In-Reply-To: <20181214115855.6713-1-julien.grall@arm.com>
References: <20181214115855.6713-1-julien.grall@arm.com>
Subject: [Xen-devel] [PATCH for-4.12 v3 3/5] xen/arm: p2m: Add support for preemption in
p2m_cache_flush_range
Cc: Julien Grall, sstabellini@kernel.org

p2m_cache_flush_range does not yet support preemption. This may be an
issue as cleaning the cache can take a long time. While the current
caller (XEN_DOMCTL_cacheflush) does not strictly require preemption, it
will be necessary for a new caller in a follow-up patch.

The preemption implemented is quite simple: a counter is incremented by:
    - 1 for each region skipped
    - 10 for each page requiring a flush

When the counter reaches 512 or above, we check whether preemption is
needed. If not, the counter is reset to 0. If yes, the function stops,
updates start (to allow resuming later on) and returns -ERESTART. This
lets the caller decide how the preemption is done.

For now, XEN_DOMCTL_cacheflush will continue to ignore the preemption.
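The counting policy described above can be exercised outside Xen. Below is a minimal, self-contained C sketch of the same accounting; the names `cache_flush_range_sim` and `softirq_pending_sim` are hypothetical stand-ins (not Xen code), -1 plays the role of -ERESTART, and the even/odd split is an arbitrary way to simulate skipped regions vs. pages needing a flush:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for softirq_pending(): pretend work shows up
 * once a call has been running for a while. */
static bool softirq_pending_sim(unsigned long iteration)
{
    return iteration > 100000;
}

/* Sketch of the accounting above: +1 per skipped region, +10 per page
 * flushed, and a preemption check once the counter reaches 512.
 * Returns -1 (standing in for -ERESTART) after updating *start so the
 * caller can resume, or 0 on completion. */
static int cache_flush_range_sim(unsigned long *start, unsigned long end)
{
    unsigned long count = 0;
    unsigned long iteration = 0;

    while ( *start < end )
    {
        if ( count >= 512 )
        {
            if ( softirq_pending_sim(iteration) )
                return -1;      /* preempted: resume later from *start */
            count = 0;
        }

        /* Even frames simulate holes (cost 1), odd frames simulate RAM
         * requiring a flush (cost 10). */
        count += (*start & 1) ? 10 : 1;

        (*start)++;
        iteration++;
    }

    return 0;
}
```

A caller that does not care about preemption simply retries in a loop until the function returns 0, which mirrors what XEN_DOMCTL_cacheflush does here.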
Signed-off-by: Julien Grall
Reviewed-by: Stefano Stabellini

---
Changes in v2:
    - Patch added
---
 xen/arch/arm/domctl.c     |  8 +++++++-
 xen/arch/arm/p2m.c        | 35 ++++++++++++++++++++++++++++++++---
 xen/include/asm-arm/p2m.h |  4 +++-
 3 files changed, 42 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 20691528a6..9da88b8c64 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -54,6 +54,7 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
     {
         gfn_t s = _gfn(domctl->u.cacheflush.start_pfn);
         gfn_t e = gfn_add(s, domctl->u.cacheflush.nr_pfns);
+        int rc;

         if ( domctl->u.cacheflush.nr_pfns > (1U<= 512 )
+        {
+            if ( softirq_pending(smp_processor_id()) )
+            {
+                rc = -ERESTART;
+                break;
+            }
+            count = 0;
+        }
+
         /*
          * We want to flush page by page as:
          *  - it may not be possible to map the full block (can be up to 1GB)
@@ -1568,22 +1591,28 @@ int p2m_cache_flush_range(struct domain *d, gfn_t start, gfn_t end)
             if ( mfn_eq(mfn, INVALID_MFN) || !p2m_is_any_ram(t) )
             {
+                count++;
                 start = next_block_gfn;
                 continue;
             }
         }

+        count += 10;
+
         flush_page_to_ram(mfn_x(mfn), false);

         start = gfn_add(start, 1);
         mfn = mfn_add(mfn, 1);
     }

-    invalidate_icache();
+    if ( rc != -ERESTART )
+        invalidate_icache();

     p2m_read_unlock(p2m);

-    return 0;
+    *pstart = start;
+
+    return rc;
 }

 mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 7c1d930b1d..a633e27cc9 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -232,8 +232,10 @@ bool p2m_resolve_translation_fault(struct domain *d, gfn_t gfn);
 /*
  * Clean & invalidate caches corresponding to a region [start,end) of guest
  * address space.
+ *
+ * start will get updated if the function is preempted.
  */
-int p2m_cache_flush_range(struct domain *d, gfn_t start, gfn_t end);
+int p2m_cache_flush_range(struct domain *d, gfn_t *pstart, gfn_t end);

 /*
  * Map a region in the guest p2m with a specific p2m type.
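After this patch, p2m_cache_flush_range updates *pstart and returns -ERESTART when it gives up the CPU, so the retry policy belongs entirely to the caller. A minimal C sketch of that calling convention follows; every name here is a hypothetical stand-in, and -85 merely plays the role of -ERESTART:

```c
#include <assert.h>

#define ERESTART_SIM 85         /* stand-in value for the ERESTART errno */

/* Hypothetical flush that makes bounded progress per call: it advances
 * *start by at most `budget` frames, returning -ERESTART_SIM when the
 * budget runs out, mirroring the new p2m_cache_flush_range contract. */
static int flush_range_sim(unsigned long *start, unsigned long end,
                           unsigned long budget)
{
    unsigned long done = 0;

    while ( *start < end )
    {
        if ( done == budget )
            return -ERESTART_SIM;   /* caller decides how to preempt */
        (*start)++;
        done++;
    }

    return 0;
}

/* XEN_DOMCTL_cacheflush-style caller: ignore the preemption point and
 * simply retry until completion. Other callers could instead process
 * softirqs between attempts. */
static int flush_all_sim(unsigned long end)
{
    unsigned long start = 0;
    int rc;

    do
        rc = flush_range_sim(&start, end, 512);
    while ( rc == -ERESTART_SIM );

    return rc;
}
```

Because the resume point travels through the pointer argument, each retry picks up exactly where the previous attempt stopped.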
From patchwork Fri Dec 14 11:58:54 2018
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 153832
From: Julien Grall
To: xen-devel@lists.xenproject.org
Date: Fri, 14 Dec 2018 11:58:54 +0000
Message-Id: <20181214115855.6713-5-julien.grall@arm.com>
In-Reply-To: <20181214115855.6713-1-julien.grall@arm.com>
References: <20181214115855.6713-1-julien.grall@arm.com>
Subject: [Xen-devel] [PATCH for-4.12 v3 4/5] xen/arm: Implement
Set/Way operations
Cc: Julien Grall, sstabellini@kernel.org

Set/Way operations are used to perform maintenance on a given cache. At
the moment, Set/Way operations are not trapped and therefore a guest OS
will directly act on the local cache. However, a vCPU may migrate to
another pCPU in the middle of the process. This will result in caches
with stale data (Set/Way operations are not propagated), potentially
causing crashes. This may be the cause of the heisenbug noticed in
Osstest [1].

Furthermore, Set/Way operations are not available on system caches. This
means an OS, such as 32-bit Linux, relying on those operations to fully
clean the cache before disabling the MMU may break, because data may sit
in system caches and not in RAM.

For more details about Set/Way, see the talk "The Art of Virtualizing
Cache Maintenance" given at Xen Summit 2018 [2].

In the context of Xen, we need to trap Set/Way operations and emulate
them. From the Arm Arm (B1.14.4 in DDI 0406C.c), Set/Way operations are
difficult to virtualize. So we can assume that a guest OS using them
will suffer the consequences (i.e. slowness) until the developer removes
all usage of Set/Way.

As software is not allowed to infer the Set/Way to Physical Address
mapping, Xen will need to go through the guest P2M and clean &
invalidate all the entries mapped. Because Set/Way operations happen in
batches (a loop over all Sets/Ways of a cache), Xen would need to go
through the P2M for every instruction. This is quite expensive and would
severely impact the guest OS.

The implementation re-uses the KVM policy to limit the number of
flushes:
    - If we trap a Set/Way operation, we enable VM trapping (i.e.
      HCR_EL2.TVM) to detect caches being turned on/off, and do a full
      clean.
    - We clean the caches when turning them on and off.
    - Once the caches are enabled, we stop trapping VM instructions.

[1] https://lists.xenproject.org/archives/html/xen-devel/2017-09/msg03191.html
[2] https://fr.slideshare.net/xen_com_mgr/virtualizing-cache

Signed-off-by: Julien Grall

---
Changes in v2:
    - Fix emulation for Set/Way cache flush arm64 sysreg
    - Add support for preemption
    - Check cache status on every VM trap on Arm64
    - Remove spurious change
---
 xen/arch/arm/arm64/vsysreg.c | 17 ++++++++
 xen/arch/arm/p2m.c           | 92 ++++++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/traps.c         | 25 +++++++++++-
 xen/arch/arm/vcpreg.c        | 22 +++++++++++
 xen/include/asm-arm/domain.h |  8 ++++
 xen/include/asm-arm/p2m.h    | 20 ++++++++++
 6 files changed, 183 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
index 16ac9c344a..8a85507d9d 100644
--- a/xen/arch/arm/arm64/vsysreg.c
+++ b/xen/arch/arm/arm64/vsysreg.c
@@ -34,9 +34,14 @@
 static bool vreg_emulate_##reg(struct cpu_user_regs *regs,          \
                                uint64_t *r, bool read)              \
 {                                                                   \
+    struct vcpu *v = current;                                       \
+    bool cache_enabled = vcpu_has_cache_enabled(v);                 \
+                                                                    \
     GUEST_BUG_ON(read);                                             \
     WRITE_SYSREG64(*r, reg);                                        \
                                                                     \
+    p2m_toggle_cache(v, cache_enabled);                             \
+                                                                    \
     return true;                                                    \
 }

@@ -85,6 +90,18 @@ void do_sysreg(struct cpu_user_regs *regs,
         break;

     /*
+     * HCR_EL2.TSW
+     *
+     * ARMv8 (DDI 0487B.b): Table D1-42
+     */
+    case HSR_SYSREG_DCISW:
+    case HSR_SYSREG_DCCSW:
+    case HSR_SYSREG_DCCISW:
+        if ( !hsr.sysreg.read )
+            p2m_set_way_flush(current);
+        break;
+
+    /*
      * HCR_EL2.TVM
      *
      * ARMv8 (DDI 0487D.a): Table D1-38
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 5639e4b64c..125d858d02 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -3,6 +3,7 @@
 #include
 #include
 #include
+#include
 #include
 #include

@@ -1615,6 +1616,97 @@ int p2m_cache_flush_range(struct domain *d, gfn_t *pstart, gfn_t end)
     return rc;
 }

+/*
+ * Clean & invalidate RAM associated to the guest vCPU.
+ *
+ * The function can only work with the current vCPU and should be called
+ * with IRQs enabled as the vCPU could get preempted.
+ */
+void p2m_flush_vm(struct vcpu *v)
+{
+    int rc;
+    gfn_t start = _gfn(0);
+
+    ASSERT(v == current);
+    ASSERT(local_irq_is_enabled());
+    ASSERT(v->arch.need_flush_to_ram);
+
+    do
+    {
+        rc = p2m_cache_flush_range(v->domain, &start, _gfn(ULONG_MAX));
+        if ( rc == -ERESTART )
+            do_softirq();
+    } while ( rc == -ERESTART );
+
+    if ( rc != 0 )
+        gprintk(XENLOG_WARNING,
+                "P2M has not been correctly cleaned (rc = %d)\n",
+                rc);
+
+    v->arch.need_flush_to_ram = false;
+}
+
+/*
+ * See note at ARMv7 ARM B1.14.4 (DDI 0406C.c) (TL;DR: S/W ops are not
+ * easily virtualized).
+ *
+ * Main problems:
+ * - S/W ops are local to a CPU (not broadcast)
+ * - We have line migration behind our back (speculation)
+ * - System caches don't support S/W at all (damn!)
+ *
+ * In the face of the above, the best we can do is to try and convert
+ * S/W ops to VA ops. Because the guest is not allowed to infer the S/W
+ * to PA mapping, it can only use S/W to nuke the whole cache, which is
+ * rather a good thing for us.
+ *
+ * Also, it is only used when turning caches on/off ("The expected
+ * usage of the cache maintenance instructions that operate by set/way
+ * is associated with the powerdown and powerup of caches, if this is
+ * required by the implementation.").
+ *
+ * We use the following policy:
+ * - If we trap a S/W operation, we enable VM trapping to detect
+ *   caches being turned on/off, and do a full clean.
+ *
+ * - We flush the caches on both caches being turned on and off.
+ *
+ * - Once the caches are enabled, we stop trapping VM ops.
+ */
+void p2m_set_way_flush(struct vcpu *v)
+{
+    /* This function can only work with the current vCPU.
*/ + ASSERT(v == current); + + if ( !(v->arch.hcr_el2 & HCR_TVM) ) + { + v->arch.need_flush_to_ram = true; + vcpu_hcr_set_flags(v, HCR_TVM); + } +} + +void p2m_toggle_cache(struct vcpu *v, bool was_enabled) +{ + bool now_enabled = vcpu_has_cache_enabled(v); + + /* This function can only work with the current vCPU. */ + ASSERT(v == current); + + /* + * If switching the MMU+caches on, need to invalidate the caches. + * If switching it off, need to clean the caches. + * Clean + invalidate does the trick always. + */ + if ( was_enabled != now_enabled ) + { + v->arch.need_flush_to_ram = true; + } + + /* Caches are now on, stop trapping VM ops (until a S/W op) */ + if ( now_enabled ) + vcpu_hcr_clear_flags(v, HCR_TVM); +} + mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn) { return p2m_lookup(d, gfn, NULL); diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c index 02665cc7b4..221c762ada 100644 --- a/xen/arch/arm/traps.c +++ b/xen/arch/arm/traps.c @@ -97,7 +97,7 @@ register_t get_default_hcr_flags(void) { return (HCR_PTW|HCR_BSU_INNER|HCR_AMO|HCR_IMO|HCR_FMO|HCR_VM| (vwfi != NATIVE ? (HCR_TWI|HCR_TWE) : 0) | - HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB); + HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW); } static enum { @@ -2258,10 +2258,33 @@ static void check_for_pcpu_work(void) } } +/* + * Process pending work for the vCPU. Any call should be fast or + * implement preemption. + */ +static void check_for_vcpu_work(void) +{ + struct vcpu *v = current; + + if ( likely(!v->arch.need_flush_to_ram) ) + return; + + /* + * Give a chance for the pCPU to process work before handling the vCPU + * pending work. 
+ */ + check_for_pcpu_work(); + + local_irq_enable(); + p2m_flush_vm(v); + local_irq_disable(); +} + void leave_hypervisor_tail(void) { local_irq_disable(); + check_for_vcpu_work(); check_for_pcpu_work(); vgic_sync_to_lrs(); diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c index 550c25ec3f..cdc91cdf5b 100644 --- a/xen/arch/arm/vcpreg.c +++ b/xen/arch/arm/vcpreg.c @@ -51,9 +51,14 @@ #define TVM_REG(sz, func, reg...) \ static bool func(struct cpu_user_regs *regs, uint##sz##_t *r, bool read) \ { \ + struct vcpu *v = current; \ + bool cache_enabled = vcpu_has_cache_enabled(v); \ + \ GUEST_BUG_ON(read); \ WRITE_SYSREG##sz(*r, reg); \ \ + p2m_toggle_cache(v, cache_enabled); \ + \ return true; \ } @@ -71,6 +76,8 @@ static bool func(struct cpu_user_regs *regs, uint##sz##_t *r, bool read) \ static bool vreg_emulate_##xreg(struct cpu_user_regs *regs, uint32_t *r, \ bool read, bool hi) \ { \ + struct vcpu *v = current; \ + bool cache_enabled = vcpu_has_cache_enabled(v); \ register_t reg = READ_SYSREG(xreg); \ \ GUEST_BUG_ON(read); \ @@ -86,6 +93,8 @@ static bool vreg_emulate_##xreg(struct cpu_user_regs *regs, uint32_t *r, \ } \ WRITE_SYSREG(reg, xreg); \ \ + p2m_toggle_cache(v, cache_enabled); \ + \ return true; \ } \ \ @@ -186,6 +195,19 @@ void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr) break; /* + * HCR_EL2.TSW + * + * ARMv7 (DDI 0406C.b): B1.14.6 + * ARMv8 (DDI 0487B.b): Table D1-42 + */ + case HSR_CPREG32(DCISW): + case HSR_CPREG32(DCCSW): + case HSR_CPREG32(DCCISW): + if ( !cp32.read ) + p2m_set_way_flush(current); + break; + + /* * HCR_EL2.TVM * * ARMv8 (DDI 0487D.a): Table D1-38 diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h index 175de44927..f16b973e0d 100644 --- a/xen/include/asm-arm/domain.h +++ b/xen/include/asm-arm/domain.h @@ -202,6 +202,14 @@ struct arch_vcpu struct vtimer phys_timer; struct vtimer virt_timer; bool vtimer_initialized; + + /* + * The full P2M may require some cleaning (e.g when emulation + * 
set/way). As the action can take a long time, it requires + * preemption. So this is deferred until we return to the guest. + */ + bool need_flush_to_ram; + } __cacheline_aligned; void vcpu_show_execution_state(struct vcpu *); diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h index a633e27cc9..79abcb5a63 100644 --- a/xen/include/asm-arm/p2m.h +++ b/xen/include/asm-arm/p2m.h @@ -6,6 +6,8 @@ #include #include +#include + #define paddr_bits PADDR_BITS /* Holds the bit size of IPAs in p2m tables. */ @@ -237,6 +239,12 @@ bool p2m_resolve_translation_fault(struct domain *d, gfn_t gfn); */ int p2m_cache_flush_range(struct domain *d, gfn_t *pstart, gfn_t end); +void p2m_set_way_flush(struct vcpu *v); + +void p2m_toggle_cache(struct vcpu *v, bool was_enabled); + +void p2m_flush_vm(struct vcpu *v); + /* * Map a region in the guest p2m with a specific p2m type. * The memory attributes will be derived from the p2m type. @@ -364,6 +372,18 @@ static inline int set_foreign_p2m_entry(struct domain *d, unsigned long gfn, return -EOPNOTSUPP; } +/* + * A vCPU has cache enabled only when the MMU is enabled and data cache + * is enabled. 
+ */
+static inline bool vcpu_has_cache_enabled(struct vcpu *v)
+{
+    /* Only works with the current vCPU */
+    ASSERT(current == v);
+
+    return (READ_SYSREG32(SCTLR_EL1) & (SCTLR_C|SCTLR_M)) == (SCTLR_C|SCTLR_M);
+}
+
 #endif /* _XEN_P2M_H */

From patchwork Fri Dec 14 11:58:55 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 153831
Delivered-To: patch@linaro.org
From: Julien Grall
To: xen-devel@lists.xenproject.org
Date: Fri, 14 Dec 2018 11:58:55 +0000
Message-Id:
<20181214115855.6713-6-julien.grall@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20181214115855.6713-1-julien.grall@arm.com>
References: <20181214115855.6713-1-julien.grall@arm.com>
Subject: [Xen-devel] [PATCH for-4.12 v3 5/5] xen/arm: Track page accessed
 between batch of Set/Way operations
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.23
Precedence: list
List-Id: Xen developer discussion
List-Unsubscribe: ,
List-Post:
List-Help:
List-Subscribe: ,
Cc: sstabellini@kernel.org, Wei Liu , Konrad Rzeszutek Wilk ,
 George Dunlap , Andrew Cooper , Ian Jackson , Tim Deegan ,
 Julien Grall , Jan Beulich , Roger Pau Monné
MIME-Version: 1.0
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel"

At the moment, the implementation of Set/Way operations will go
through all the entries of the guest P2M and flush them. However, this
is very expensive and may render a guest OS using them unusable. For
instance, 32-bit Linux will use Set/Way operations during secondary
CPU bring-up. As the implementation is really expensive, it may be
possible to hit the CPU bring-up timeout.

To limit the Set/Way impact, we track which pages of the guest have
been accessed between batches of Set/Way operations. This is done
using bit[0] (aka the valid bit) of the P2M entry.

This patch introduces a new per-arch helper to perform actions just
before the guest is first unpaused. This will be used to invalidate
the P2M so accesses are tracked from the start of the guest.

Signed-off-by: Julien Grall
Reviewed-by: Jan Beulich
Reviewed-by: Stefano Stabellini

---

While we could spread d->creation_finished all over the code, a
per-arch helper to perform actions just before the guest is first
unpaused can bring a lot of benefit to both architectures. For
instance, on Arm, the flush to the instruction cache could be delayed
until the domain is first run. This would greatly improve the
performance of creating guests.
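[Editorial note] The valid-bit tracking scheme described in the commit message can be illustrated with a small standalone model (a sketch in plain C, not Xen code; all names here are invented for the example): entries are invalidated after each batch, re-validated on the first guest access, and only still-valid entries are walked by the next batch.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define NPAGES 8

/* Stands in for bit[0] (the valid bit) of each P2M entry. */
bool valid[NPAGES];

/*
 * A guest access to an invalidated entry faults; the fault handler
 * (p2m_resolve_translation_fault in the real code) re-validates it.
 */
void model_guest_access(size_t gfn)
{
    valid[gfn] = true;
}

/*
 * A batch of Set/Way operations: flush only the entries the guest
 * touched since the last batch, then invalidate everything again.
 */
size_t model_set_way_flush(void)
{
    size_t flushed = 0;

    for ( size_t i = 0; i < NPAGES; i++ )
    {
        if ( valid[i] )
            flushed++;          /* clean & invalidate this mapping */
        valid[i] = false;       /* clear bit[0] to catch the next access */
    }

    return flushed;
}
```

Clearing the valid bit after each batch is what makes back-to-back batches cheap: only pages the guest actually touched since the previous batch are flushed again.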
I am still benchmarking whether having a command line option for this
is worth it. I will provide numbers as soon as I have them.

Changes in v3:
    - Add Jan's reviewed-by for the non-ARM pieces

Cc: Stefano Stabellini
Cc: Julien Grall
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Ian Jackson
Cc: Jan Beulich
Cc: Konrad Rzeszutek Wilk
Cc: Tim Deegan
Cc: Wei Liu
---
 xen/arch/arm/domain.c     | 14 ++++++++++++++
 xen/arch/arm/p2m.c        | 29 +++++++++++++++++++++++++++--
 xen/arch/x86/domain.c     |  4 ++++
 xen/common/domain.c       |  5 ++++-
 xen/include/asm-arm/p2m.h |  2 ++
 xen/include/xen/domain.h  |  2 ++
 6 files changed, 53 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 1d926dcb29..41f101746e 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -767,6 +767,20 @@ int arch_domain_soft_reset(struct domain *d)
     return -ENOSYS;
 }

+void arch_domain_creation_finished(struct domain *d)
+{
+    /*
+     * To avoid flushing the whole guest RAM on the first Set/Way, we
+     * invalidate the P2M to track what has been accessed.
+     *
+     * This is only done when the IOMMU is not used or the page-tables
+     * are not shared, because an entry with bit[0] (i.e. the valid bit)
+     * unset would result in an IOMMU fault that could not be fixed up.
+     */
+    if ( !iommu_use_hap_pt(d) )
+        p2m_invalidate_root(p2m_get_hostp2m(d));
+}
+
 static int is_guest_pv32_psr(uint32_t psr)
 {
     switch (psr & PSR_MODE_MASK)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 125d858d02..347028c325 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1079,6 +1079,22 @@ static void p2m_invalidate_table(struct p2m_domain *p2m, mfn_t mfn)
 }

 /*
+ * Invalidate all entries in the root page-tables. This is
+ * useful to get a fault on entry and take an action.
+ */
+void p2m_invalidate_root(struct p2m_domain *p2m)
+{
+    unsigned int i;
+
+    p2m_write_lock(p2m);
+
+    for ( i = 0; i < P2M_ROOT_LEVEL; i++ )
+        p2m_invalidate_table(p2m, page_to_mfn(p2m->root + i));
+
+    p2m_write_unlock(p2m);
+}
+
+/*
  * Resolve any translation fault due to change in the p2m. This
  * includes break-before-make and valid bit cleared.
  */
@@ -1587,10 +1603,12 @@ int p2m_cache_flush_range(struct domain *d, gfn_t *pstart, gfn_t end)
          */
         if ( gfn_eq(start, next_block_gfn) )
         {
-            mfn = p2m_get_entry(p2m, start, &t, NULL, &order, NULL);
+            bool valid;
+
+            mfn = p2m_get_entry(p2m, start, &t, NULL, &order, &valid);
             next_block_gfn = gfn_next_boundary(start, order);

-            if ( mfn_eq(mfn, INVALID_MFN) || !p2m_is_any_ram(t) )
+            if ( mfn_eq(mfn, INVALID_MFN) || !p2m_is_any_ram(t) || !valid )
             {
                 count++;
                 start = next_block_gfn;
@@ -1624,6 +1642,7 @@ int p2m_cache_flush_range(struct domain *d, gfn_t *pstart, gfn_t end)
  */
 void p2m_flush_vm(struct vcpu *v)
 {
+    struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
     int rc;
     gfn_t start = _gfn(0);

@@ -1643,6 +1662,12 @@ void p2m_flush_vm(struct vcpu *v)
                 "P2M has not been correctly cleaned (rc = %d)\n",
                 rc);

+    /*
+     * Invalidate the p2m to track which pages were modified by the
+     * guest between calls of p2m_flush_vm().
+     */
+    p2m_invalidate_root(p2m);
+
     v->arch.need_flush_to_ram = false;
 }

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index f0e0cdbb0e..3729887d00 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -762,6 +762,10 @@ int arch_domain_soft_reset(struct domain *d)
     return ret;
 }

+void arch_domain_creation_finished(struct domain *d)
+{
+}
+
 /*
  * These are the masks of CR4 bits (subject to hardware availability) which a
  * PV guest may not legitimately attempt to modify.
diff --git a/xen/common/domain.c b/xen/common/domain.c index 78cc5249e8..c623daec56 100644 --- a/xen/common/domain.c +++ b/xen/common/domain.c @@ -1116,8 +1116,11 @@ int domain_unpause_by_systemcontroller(struct domain *d) * Creation is considered finished when the controller reference count * first drops to 0. */ - if ( new == 0 ) + if ( new == 0 && !d->creation_finished ) + { d->creation_finished = true; + arch_domain_creation_finished(d); + } domain_unpause(d); diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h index 79abcb5a63..01cd3ee4b5 100644 --- a/xen/include/asm-arm/p2m.h +++ b/xen/include/asm-arm/p2m.h @@ -231,6 +231,8 @@ int p2m_set_entry(struct p2m_domain *p2m, bool p2m_resolve_translation_fault(struct domain *d, gfn_t gfn); +void p2m_invalidate_root(struct p2m_domain *p2m); + /* * Clean & invalidate caches corresponding to a region [start,end) of guest * address space. diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h index 33e41486cb..d1bfc82f57 100644 --- a/xen/include/xen/domain.h +++ b/xen/include/xen/domain.h @@ -70,6 +70,8 @@ void arch_domain_unpause(struct domain *d); int arch_domain_soft_reset(struct domain *d); +void arch_domain_creation_finished(struct domain *d); + void arch_p2m_set_access_required(struct domain *d, bool access_required); int arch_set_info_guest(struct vcpu *, vcpu_guest_context_u);
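[Editorial note] Taken together with patch 1/5, the trapping policy the series implements can be summarised as a small state machine (a sketch in plain C, not Xen code; the struct and function names are invented for the example):

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal model of the per-vCPU state used by the policy. */
struct vcpu_model {
    bool cache_enabled;      /* guest's SCTLR_EL1.M and .C both set */
    bool tvm_trap;           /* HCR_EL2.TVM currently set? */
    bool need_flush_to_ram;  /* full clean & invalidate pending */
};

/* Guest executed DC ISW/CSW/CISW (cf. p2m_set_way_flush). */
void on_set_way(struct vcpu_model *v)
{
    if ( !v->tvm_trap )
    {
        v->need_flush_to_ram = true;
        v->tvm_trap = true;      /* start trapping SCTLR writes */
    }
}

/* Trapped guest write to SCTLR_EL1 (cf. p2m_toggle_cache). */
void on_sctlr_write(struct vcpu_model *v, bool now_enabled)
{
    bool was_enabled = v->cache_enabled;

    v->cache_enabled = now_enabled;

    /*
     * Turning caches on needs an invalidate, turning them off needs a
     * clean; clean + invalidate covers both directions.
     */
    if ( was_enabled != now_enabled )
        v->need_flush_to_ram = true;

    /* Caches are back on: stop trapping until the next S/W op. */
    if ( now_enabled )
        v->tvm_trap = false;
}
```

The deferred `need_flush_to_ram` flag mirrors how the real series postpones the expensive P2M walk to `leave_hypervisor_tail()`, where it can run with IRQs enabled and be preempted.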