From patchwork Thu Nov 9 13:59:17 2023
X-Patchwork-Submitter: Michael Tokarev
X-Patchwork-Id: 742582
From: Michael Tokarev
To: qemu-devel@nongnu.org
Cc: qemu-stable@nongnu.org, Peter Maydell, Richard Henderson, Michael Tokarev
Subject: [Stable-7.2.7 49/62] target/arm: Fix handling of SW and NSW bits for stage 2 walks
Date: Thu, 9 Nov 2023 16:59:17 +0300
Message-Id: <20231109135933.1462615-49-mjt@tls.msk.ru>
X-Mailer: git-send-email 2.39.2
In-Reply-To:
References:

From: Peter Maydell

We currently don't correctly handle the VSTCR_EL2.SW and VTCR_EL2.NSW
configuration bits.  These allow configuration of whether the stage 2
page table walks for Secure IPA and NonSecure IPA should do their
descriptor reads from Secure or NonSecure physical addresses.  (This is
separate from how the translation table base address and other
parameters are set: an NS IPA always uses VTTBR_EL2 and VTCR_EL2 for
its base address and walk parameters, regardless of the NSW bit, and
similarly for Secure.)

Provide a new function ptw_idx_for_stage_2() which returns the MMU index
to use for descriptor reads, and use it to set up the .in_ptw_idx
wherever we call get_phys_addr_lpae().
For a stage 2 walk, wherever we call get_phys_addr_lpae():
 * .in_ptw_idx should be ptw_idx_for_stage_2() of the .in_mmu_idx
 * .in_secure should be true if .in_mmu_idx is Stage2_S

This allows us to correct S1_ptw_translate() so that it consistently
always sets its (out_secure, out_phys) to the result it gets from the
S2 walk (either by calling get_phys_addr_lpae() or by TLB lookup).
This makes better conceptual sense because the S2 walk should return
us an (address space, address) tuple, not an address that we then
randomly assign to S or NS.

Our previous handling of SW and NSW was broken, so guest code trying
to use these bits to put the s2 page tables in the "other" address
space wouldn't work correctly.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1600
Signed-off-by: Peter Maydell
Reviewed-by: Richard Henderson
Message-id: 20230504135425.2748672-3-peter.maydell@linaro.org
(cherry picked from commit fcc0b0418fff655f20fd0cf86a1bbdc41fd2e7c6)
Signed-off-by: Michael Tokarev

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index e593bc339a..97c85f3c95 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -103,6 +103,37 @@ ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env)
     return stage_1_mmu_idx(arm_mmu_idx(env));
 }
 
+/*
+ * Return where we should do ptw loads from for a stage 2 walk.
+ * This depends on whether the address we are looking up is a
+ * Secure IPA or a NonSecure IPA, which we know from whether this is
+ * Stage2 or Stage2_S.
+ * If this is the Secure EL1&0 regime we need to check the NSW and SW bits.
+ */
+static ARMMMUIdx ptw_idx_for_stage_2(CPUARMState *env, ARMMMUIdx stage2idx)
+{
+    bool s2walk_secure;
+
+    /*
+     * We're OK to check the current state of the CPU here because
+     * (1) we always invalidate all TLBs when the SCR_EL3.NS bit changes
+     * (2) there's no way to do a lookup that cares about Stage 2 for a
+     * different security state to the current one for AArch64, and AArch32
+     * never has a secure EL2. (AArch32 ATS12NSO[UP][RW] allow EL3 to do
+     * an NS stage 1+2 lookup while the NS bit is 0.)
+     */
+    if (!arm_is_secure_below_el3(env) || !arm_el_is_aa64(env, 3)) {
+        return ARMMMUIdx_Phys_NS;
+    }
+    if (stage2idx == ARMMMUIdx_Stage2_S) {
+        s2walk_secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
+    } else {
+        s2walk_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
+    }
+    return s2walk_secure ? ARMMMUIdx_Phys_S : ARMMMUIdx_Phys_NS;
+
+}
+
 static bool regime_translation_big_endian(CPUARMState *env, ARMMMUIdx mmu_idx)
 {
     return (regime_sctlr(env, mmu_idx) & SCTLR_EE) != 0;
@@ -220,7 +251,6 @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
     ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
     ARMMMUIdx s2_mmu_idx = ptw->in_ptw_idx;
     uint8_t pte_attrs;
-    bool pte_secure;
 
     ptw->out_virt = addr;
 
@@ -232,8 +262,8 @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
         if (regime_is_stage2(s2_mmu_idx)) {
             S1Translate s2ptw = {
                 .in_mmu_idx = s2_mmu_idx,
-                .in_ptw_idx = is_secure ? ARMMMUIdx_Phys_S : ARMMMUIdx_Phys_NS,
-                .in_secure = is_secure,
+                .in_ptw_idx = ptw_idx_for_stage_2(env, s2_mmu_idx),
+                .in_secure = s2_mmu_idx == ARMMMUIdx_Stage2_S,
                 .in_debug = true,
             };
             GetPhysAddrResult s2 = { };
@@ -244,12 +274,12 @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
             }
             ptw->out_phys = s2.f.phys_addr;
             pte_attrs = s2.cacheattrs.attrs;
-            pte_secure = s2.f.attrs.secure;
+            ptw->out_secure = s2.f.attrs.secure;
         } else {
             /* Regime is physical. */
             ptw->out_phys = addr;
             pte_attrs = 0;
-            pte_secure = is_secure;
+            ptw->out_secure = s2_mmu_idx == ARMMMUIdx_Phys_S;
         }
         ptw->out_host = NULL;
         ptw->out_rw = false;
@@ -270,7 +300,7 @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
         ptw->out_phys = full->phys_addr | (addr & ~TARGET_PAGE_MASK);
         ptw->out_rw = full->prot & PAGE_WRITE;
         pte_attrs = full->pte_attrs;
-        pte_secure = full->attrs.secure;
+        ptw->out_secure = full->attrs.secure;
 #else
         g_assert_not_reached();
 #endif
@@ -293,11 +323,6 @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
         }
     }
 
-    /* Check if page table walk is to secure or non-secure PA space. */
-    ptw->out_secure = (is_secure
-                       && !(pte_secure
-                            ? env->cp15.vstcr_el2 & VSTCR_SW
-                            : env->cp15.vtcr_el2 & VTCR_NSW));
     ptw->out_be = regime_translation_big_endian(env, mmu_idx);
 
     return true;
@@ -2610,7 +2635,7 @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
     hwaddr ipa;
     int s1_prot, s1_lgpgsz;
     bool is_secure = ptw->in_secure;
-    bool ret, ipa_secure, s2walk_secure;
+    bool ret, ipa_secure;
     ARMCacheAttrs cacheattrs1;
     bool is_el0;
     uint64_t hcr;
@@ -2624,20 +2649,11 @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
 
     ipa = result->f.phys_addr;
     ipa_secure = result->f.attrs.secure;
-    if (is_secure) {
-        /* Select TCR based on the NS bit from the S1 walk. */
-        s2walk_secure = !(ipa_secure
-                          ? env->cp15.vstcr_el2 & VSTCR_SW
-                          : env->cp15.vtcr_el2 & VTCR_NSW);
-    } else {
-        assert(!ipa_secure);
-        s2walk_secure = false;
-    }
 
     is_el0 = ptw->in_mmu_idx == ARMMMUIdx_Stage1_E0;
-    ptw->in_mmu_idx = s2walk_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
-    ptw->in_ptw_idx = s2walk_secure ? ARMMMUIdx_Phys_S : ARMMMUIdx_Phys_NS;
-    ptw->in_secure = s2walk_secure;
+    ptw->in_mmu_idx = ipa_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
+    ptw->in_secure = ipa_secure;
+    ptw->in_ptw_idx = ptw_idx_for_stage_2(env, ptw->in_mmu_idx);
 
     /*
      * S1 is done, now do S2 translation.
@@ -2729,6 +2745,16 @@ static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
         ptw->in_ptw_idx = is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
         break;
 
+    case ARMMMUIdx_Stage2:
+    case ARMMMUIdx_Stage2_S:
+        /*
+         * Second stage lookup uses physical for ptw; whether this is S or
+         * NS may depend on the SW/NSW bits if this is a stage 2 lookup for
+         * the Secure EL2&0 regime.
+         */
+        ptw->in_ptw_idx = ptw_idx_for_stage_2(env, mmu_idx);
+        break;
+
     case ARMMMUIdx_E10_0:
         s1_mmu_idx = ARMMMUIdx_Stage1_E0;
        goto do_twostage;
@@ -2752,7 +2778,7 @@ static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
         /* fall through */
 
     default:
-        /* Single stage and second stage uses physical for ptw. */
+        /* Single stage uses physical for ptw. */
         ptw->in_ptw_idx = is_secure ? ARMMMUIdx_Phys_S : ARMMMUIdx_Phys_NS;
         break;
     }
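For readers unfamiliar with these controls: in the Secure EL1&0 regime,
VSTCR_EL2.SW and VTCR_EL2.NSW select whether stage 2 descriptor reads for a
Secure or NonSecure IPA go to the Secure or NonSecure PA space; as the new
ptw_idx_for_stage_2() shows, a clear bit selects the Secure space and a set
bit the NonSecure space. Below is a minimal standalone sketch of just that
selection, outside QEMU; the PASpace enum, stage2_descriptor_space() and its
parameter names are invented here for illustration and are not part of the
patch or of QEMU.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative only: which PA space a stage 2 descriptor read targets. */
typedef enum { PA_NONSECURE, PA_SECURE } PASpace;

/*
 * secure_regime: the walk is for the Secure EL1&0 translation regime
 * secure_ipa:    the input address is a Secure IPA (i.e. a Stage2_S walk)
 * sw, nsw:       current values of VSTCR_EL2.SW and VTCR_EL2.NSW
 */
static PASpace stage2_descriptor_space(bool secure_regime, bool secure_ipa,
                                       bool sw, bool nsw)
{
    if (!secure_regime) {
        /* NonSecure EL1&0: stage 2 descriptors always come from NS space. */
        return PA_NONSECURE;
    }
    /* SW/NSW clear means "walk in Secure space"; set means NonSecure. */
    bool walk_secure = secure_ipa ? !sw : !nsw;
    return walk_secure ? PA_SECURE : PA_NONSECURE;
}

int main(void)
{
    /* Secure IPA with VSTCR_EL2.SW = 1: descriptors are read from NS space. */
    printf("%s\n",
           stage2_descriptor_space(true, true, true, false) == PA_SECURE
               ? "Secure" : "NonSecure");
    return 0;
}

This mirrors the Phys_S/Phys_NS choice made by the new helper, without any of
the TLB or CPU-state plumbing the real function has to care about.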