From patchwork Sat Oct 5 20:06:00 2024
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 833012
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: deller@kernel.org, peter.maydell@linaro.org, alex.bennee@linaro.org,
	linux-parisc@vger.kernel.org, qemu-arm@nongnu.org
Subject: [PATCH v2 21/21] target/arm: Fix alignment fault priority in get_phys_addr_lpae
Date: Sat, 5 Oct 2024 13:06:00 -0700
Message-ID: <20241005200600.493604-22-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241005200600.493604-1-richard.henderson@linaro.org>
References: <20241005200600.493604-1-richard.henderson@linaro.org>
MIME-Version: 1.0

Now that we have the MemOp for the access, we can order the
alignment fault caused by memory type before the permission
fault for the page.

For subsequent page hits, permission and stage 2 checks are known
to pass, and so the TLB_CHECK_ALIGNED fault raised in generic code
is not mis-ordered.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 51 ++++++++++++++++++++++++++++--------------------
 1 file changed, 30 insertions(+), 21 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index 0a1a820362..dd40268397 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -2129,6 +2129,36 @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
         device = S1_attrs_are_device(result->cacheattrs.attrs);
     }
 
+    /*
+     * Enable alignment checks on Device memory.
+     *
+     * Per R_XCHFJ, the correct ordering for alignment, permission,
+     * and stage 2 faults is:
+     *  - Alignment fault caused by the memory type
+     *  - Permission fault
+     *  - A stage 2 fault on the memory access
+     * Perform the alignment check now, so that we recognize it in
+     * the correct order.  Set TLB_CHECK_ALIGNED so that any subsequent
+     * softmmu tlb hit will also check the alignment; clear along the
+     * non-device path so that tlb_fill_flags is consistent in the
+     * event of restart_atomic_update.
+     *
+     * In v7, for a CPU without the Virtualization Extensions this
+     * access is UNPREDICTABLE; we choose to make it take the alignment
+     * fault as is required for a v7VE CPU. (QEMU doesn't emulate any
+     * CPUs with ARM_FEATURE_LPAE but not ARM_FEATURE_V7VE anyway.)
+     */
+    if (device) {
+        unsigned a_bits = memop_atomicity_bits(memop);
+        if (address & ((1 << a_bits) - 1)) {
+            fi->type = ARMFault_Alignment;
+            goto do_fault;
+        }
+        result->f.tlb_fill_flags = TLB_CHECK_ALIGNED;
+    } else {
+        result->f.tlb_fill_flags = 0;
+    }
+
     if (!(result->f.prot & (1 << access_type))) {
         fi->type = ARMFault_Permission;
         goto do_fault;
@@ -2156,27 +2186,6 @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
     result->f.attrs.space = out_space;
     result->f.attrs.secure = arm_space_is_secure(out_space);
 
-    /*
-     * Enable alignment checks on Device memory.
-     *
-     * Per R_XCHFJ, this check is mis-ordered.  The correct ordering
-     * for alignment, permission, and stage 2 faults should be:
-     *  - Alignment fault caused by the memory type
-     *  - Permission fault
-     *  - A stage 2 fault on the memory access
-     * but due to the way the TCG softmmu TLB operates, we will have
-     * implicitly done the permission check and the stage2 lookup in
-     * finding the TLB entry, so the alignment check cannot be done sooner.
-     *
-     * In v7, for a CPU without the Virtualization Extensions this
-     * access is UNPREDICTABLE; we choose to make it take the alignment
-     * fault as is required for a v7VE CPU. (QEMU doesn't emulate any
-     * CPUs with ARM_FEATURE_LPAE but not ARM_FEATURE_V7VE anyway.)
-     */
-    if (device) {
-        result->f.tlb_fill_flags |= TLB_CHECK_ALIGNED;
-    }
-
     /*
      * For FEAT_LPA2 and effective DS, the SH field in the attributes
      * was re-purposed for output address bits.  The SH attribute in
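
Aside for readers (not part of the patch to apply): the new check is the usual
power-of-two alignment test; an access whose atomicity unit is 2^a_bits bytes
is aligned exactly when the low a_bits of the address are zero. Below is a
minimal standalone sketch of that idiom. The helper name is_aligned_for and
the example values are assumptions for illustration only; they are not QEMU
internals such as memop_atomicity_bits() or TLB_CHECK_ALIGNED.

/* Sketch of the power-of-two alignment test used in the hunk above. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* True if 'address' is aligned to a 2^a_bits byte boundary. */
static bool is_aligned_for(uint64_t address, unsigned a_bits)
{
    /* (1 << a_bits) - 1 selects the low bits that must all be zero. */
    return (address & ((1ULL << a_bits) - 1)) == 0;
}

int main(void)
{
    /* With a_bits = 2 (a 4-byte atomicity unit), 0x1002 would take the
     * alignment fault while 0x1004 would pass. */
    printf("0x1002: %s\n", is_aligned_for(0x1002, 2) ? "aligned" : "fault");
    printf("0x1004: %s\n", is_aligned_for(0x1004, 2) ? "aligned" : "fault");
    return 0;
}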