From patchwork Wed Oct 9 00:04:53 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Richard Henderson <richard.henderson@linaro.org>
X-Patchwork-Id: 833827
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: deller@kernel.org, peter.maydell@linaro.org, alex.bennee@linaro.org,
	linux-parisc@vger.kernel.org, qemu-arm@nongnu.org
Subject: [PATCH v3 20/20] target/arm: Fix alignment fault priority in get_phys_addr_lpae
Date: Tue, 8 Oct 2024 17:04:53 -0700
Message-ID: <20241009000453.315652-21-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241009000453.315652-1-richard.henderson@linaro.org>
References: <20241009000453.315652-1-richard.henderson@linaro.org>
MIME-Version: 1.0

Now that we have the MemOp for the access, we can order the alignment
fault caused by memory type before the permission fault for the page.

For subsequent page hits, permission and stage 2 checks are known to
pass, and so the TLB_CHECK_ALIGNED fault raised in generic code is not
mis-ordered.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/ptw.c | 51 ++++++++++++++++++++++++++++++---------------------
 1 file changed, 30 insertions(+), 21 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index 0a1a820362..dd40268397 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -2129,6 +2129,36 @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
         device = S1_attrs_are_device(result->cacheattrs.attrs);
     }
 
+    /*
+     * Enable alignment checks on Device memory.
+     *
+     * Per R_XCHFJ, the correct ordering for alignment, permission,
+     * and stage 2 faults is:
+     *  - Alignment fault caused by the memory type
+     *  - Permission fault
+     *  - A stage 2 fault on the memory access
+     * Perform the alignment check now, so that we recognize it in
+     * the correct order.  Set TLB_CHECK_ALIGNED so that any subsequent
+     * softmmu tlb hit will also check the alignment; clear along the
+     * non-device path so that tlb_fill_flags is consistent in the
+     * event of restart_atomic_update.
+     *
+     * In v7, for a CPU without the Virtualization Extensions this
+     * access is UNPREDICTABLE; we choose to make it take the alignment
+     * fault as is required for a v7VE CPU. (QEMU doesn't emulate any
+     * CPUs with ARM_FEATURE_LPAE but not ARM_FEATURE_V7VE anyway.)
+     */
+    if (device) {
+        unsigned a_bits = memop_atomicity_bits(memop);
+        if (address & ((1 << a_bits) - 1)) {
+            fi->type = ARMFault_Alignment;
+            goto do_fault;
+        }
+        result->f.tlb_fill_flags = TLB_CHECK_ALIGNED;
+    } else {
+        result->f.tlb_fill_flags = 0;
+    }
+
     if (!(result->f.prot & (1 << access_type))) {
         fi->type = ARMFault_Permission;
         goto do_fault;
@@ -2156,27 +2186,6 @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
     result->f.attrs.space = out_space;
     result->f.attrs.secure = arm_space_is_secure(out_space);
 
-    /*
-     * Enable alignment checks on Device memory.
-     *
-     * Per R_XCHFJ, this check is mis-ordered.  The correct ordering
-     * for alignment, permission, and stage 2 faults should be:
-     *  - Alignment fault caused by the memory type
-     *  - Permission fault
-     *  - A stage 2 fault on the memory access
-     * but due to the way the TCG softmmu TLB operates, we will have
-     * implicitly done the permission check and the stage2 lookup in
-     * finding the TLB entry, so the alignment check cannot be done sooner.
-     *
-     * In v7, for a CPU without the Virtualization Extensions this
-     * access is UNPREDICTABLE; we choose to make it take the alignment
-     * fault as is required for a v7VE CPU. (QEMU doesn't emulate any
-     * CPUs with ARM_FEATURE_LPAE but not ARM_FEATURE_V7VE anyway.)
-     */
-    if (device) {
-        result->f.tlb_fill_flags |= TLB_CHECK_ALIGNED;
-    }
-
     /*
      * For FEAT_LPA2 and effective DS, the SH field in the attributes
      * was re-purposed for output address bits.  The SH attribute in
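
A side note for readers of the new hunk: "address & ((1 << a_bits) - 1)"
is the usual power-of-two alignment test.  Below is a minimal standalone
sketch of that test (not QEMU code; the helper name and example values
are invented for illustration only):

    #include <stdint.h>
    #include <stdio.h>

    /* Return nonzero if ADDR is not aligned to (1 << a_bits) bytes. */
    static int is_misaligned(uint64_t addr, unsigned a_bits)
    {
        /*
         * (1 << a_bits) - 1 masks the low a_bits bits of the address;
         * if any of them are set, the address is not a multiple of the
         * required alignment.  In the patch, this condition is what
         * turns a misaligned access to Device memory into
         * ARMFault_Alignment before the permission check runs.
         */
        return (addr & (((uint64_t)1 << a_bits) - 1)) != 0;
    }

    int main(void)
    {
        /* Example: a 4-byte alignment requirement (a_bits == 2). */
        printf("%d\n", is_misaligned(0x1002, 2));   /* 1: would fault */
        printf("%d\n", is_misaligned(0x1004, 2));   /* 0: allowed */
        return 0;
    }

On the first translation this check runs in get_phys_addr_lpae itself,
in the architecturally required order; TLB_CHECK_ALIGNED then makes the
generic softmmu code repeat the alignment check on later TLB hits, where
the permission and stage 2 checks are already known to pass.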