From patchwork Sat Oct 5 20:05:46 2024
X-Patchwork-Submitter: Richard Henderson <richard.henderson@linaro.org>
X-Patchwork-Id: 833010
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: deller@kernel.org, peter.maydell@linaro.org, alex.bennee@linaro.org,
	linux-parisc@vger.kernel.org, qemu-arm@nongnu.org
Subject: [PATCH v2 07/21] accel/tcg: Use the tlb_fill_align hook
Date: Sat, 5 Oct 2024 13:05:46 -0700
Message-ID: <20241005200600.493604-8-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241005200600.493604-1-richard.henderson@linaro.org>
References: <20241005200600.493604-1-richard.henderson@linaro.org>

When we have a tlb miss, defer the alignment check to the new
tlb_fill_align hook.  Move the existing alignment check so that
we only perform it with a tlb hit.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Helge Deller
Reviewed-by: Peter Maydell
---
 accel/tcg/cputlb.c | 89 +++++++++++++++++++++++++---------------------
 1 file changed, 49 insertions(+), 40 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 4bc34c8a37..0e6ae65a39 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1616,14 +1616,14 @@ typedef struct MMULookupLocals {
  * tlb_fill will longjmp out.  Return true if the softmmu tlb for
  * @mmu_idx may have resized.
  */
-static bool mmu_lookup1(CPUState *cpu, MMULookupPageData *data,
+static bool mmu_lookup1(CPUState *cpu, MMULookupPageData *data, MemOp memop,
                         int mmu_idx, MMUAccessType access_type, uintptr_t ra)
 {
     vaddr addr = data->addr;
     uintptr_t index = tlb_index(cpu, mmu_idx, addr);
     CPUTLBEntry *entry = tlb_entry(cpu, mmu_idx, addr);
     uint64_t tlb_addr = tlb_read_idx(entry, access_type);
-    bool maybe_resized = false;
+    bool did_tlb_fill = false;
     CPUTLBEntryFull *full;
     int flags;
 
@@ -1631,17 +1631,26 @@ static bool mmu_lookup1(CPUState *cpu, MMULookupPageData *data,
     if (!tlb_hit(tlb_addr, addr)) {
         if (!victim_tlb_hit(cpu, mmu_idx, index, access_type,
                             addr & TARGET_PAGE_MASK)) {
-            bool ok = cpu->cc->tcg_ops->tlb_fill(cpu, addr, data->size,
-                                                 access_type, mmu_idx,
-                                                 false, ra);
+            bool ok = cpu->cc->tcg_ops->tlb_fill_align(cpu, addr, memop,
+                                                       data->size, access_type,
+                                                       mmu_idx, false, ra);
             assert(ok);
-            maybe_resized = true;
+            did_tlb_fill = true;
             index = tlb_index(cpu, mmu_idx, addr);
             entry = tlb_entry(cpu, mmu_idx, addr);
         }
         tlb_addr = tlb_read_idx(entry, access_type) & ~TLB_INVALID_MASK;
     }
 
+    if (!did_tlb_fill) {
+        /* We didn't use tlb_fill_align, so alignment not yet checked. */
+        unsigned a_bits = memop_alignment_bits(memop);
+
+        if (unlikely(addr & ((1 << a_bits) - 1))) {
+            cpu_unaligned_access(cpu, addr, access_type, mmu_idx, ra);
+        }
+    }
+
     full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index];
     flags = tlb_addr & (TLB_FLAGS_MASK & ~TLB_FORCE_SLOW);
     flags |= full->slow_flags[access_type];
@@ -1651,7 +1660,7 @@ static bool mmu_lookup1(CPUState *cpu, MMULookupPageData *data,
     /* Compute haddr speculatively; depending on flags it might be invalid. */
     data->haddr = (void *)((uintptr_t)addr + entry->addend);
 
-    return maybe_resized;
+    return did_tlb_fill;
 }
 
 /**
@@ -1702,7 +1711,6 @@ static void mmu_watch_or_dirty(CPUState *cpu, MMULookupPageData *data,
 static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi, uintptr_t ra,
                        MMUAccessType type, MMULookupLocals *l)
 {
-    unsigned a_bits;
     bool crosspage;
     int flags;
 
@@ -1711,12 +1719,6 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
 
     tcg_debug_assert(l->mmu_idx < NB_MMU_MODES);
 
-    /* Handle CPU specific unaligned behaviour */
-    a_bits = memop_alignment_bits(l->memop);
-    if (addr & ((1 << a_bits) - 1)) {
-        cpu_unaligned_access(cpu, addr, type, l->mmu_idx, ra);
-    }
-
     l->page[0].addr = addr;
     l->page[0].size = memop_size(l->memop);
     l->page[1].addr = (addr + l->page[0].size - 1) & TARGET_PAGE_MASK;
@@ -1724,7 +1726,7 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
     crosspage = (addr ^ l->page[1].addr) & TARGET_PAGE_MASK;
 
     if (likely(!crosspage)) {
-        mmu_lookup1(cpu, &l->page[0], l->mmu_idx, type, ra);
+        mmu_lookup1(cpu, &l->page[0], l->memop, l->mmu_idx, type, ra);
 
         flags = l->page[0].flags;
         if (unlikely(flags & (TLB_WATCHPOINT | TLB_NOTDIRTY))) {
@@ -1743,8 +1745,8 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
          * Lookup both pages, recognizing exceptions from either.  If the
          * second lookup potentially resized, refresh first CPUTLBEntryFull.
          */
-        mmu_lookup1(cpu, &l->page[0], l->mmu_idx, type, ra);
-        if (mmu_lookup1(cpu, &l->page[1], l->mmu_idx, type, ra)) {
+        mmu_lookup1(cpu, &l->page[0], l->memop, l->mmu_idx, type, ra);
+        if (mmu_lookup1(cpu, &l->page[1], 0, l->mmu_idx, type, ra)) {
             uintptr_t index = tlb_index(cpu, l->mmu_idx, addr);
             l->page[0].full = &cpu->neg.tlb.d[l->mmu_idx].fulltlb[index];
         }
@@ -1770,7 +1772,7 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
      * Device memory type require alignment.
      */
     if (unlikely(flags & TLB_CHECK_ALIGNED)) {
-        a_bits = memop_atomicity_bits(l->memop);
+        unsigned a_bits = memop_atomicity_bits(l->memop);
         if (addr & ((1 << a_bits) - 1)) {
             cpu_unaligned_access(cpu, addr, type, l->mmu_idx, ra);
         }
@@ -1788,34 +1790,18 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
 {
     uintptr_t mmu_idx = get_mmuidx(oi);
     MemOp mop = get_memop(oi);
-    int a_bits = memop_alignment_bits(mop);
     uintptr_t index;
     CPUTLBEntry *tlbe;
     vaddr tlb_addr;
     void *hostaddr;
     CPUTLBEntryFull *full;
+    bool did_tlb_fill = false;
 
     tcg_debug_assert(mmu_idx < NB_MMU_MODES);
 
     /* Adjust the given return address. */
     retaddr -= GETPC_ADJ;
 
-    /* Enforce guest required alignment. */
-    if (unlikely(a_bits > 0 && (addr & ((1 << a_bits) - 1)))) {
-        /* ??? Maybe indicate atomic op to cpu_unaligned_access */
-        cpu_unaligned_access(cpu, addr, MMU_DATA_STORE,
-                             mmu_idx, retaddr);
-    }
-
-    /* Enforce qemu required alignment.  */
-    if (unlikely(addr & (size - 1))) {
-        /* We get here if guest alignment was not requested,
-           or was not enforced by cpu_unaligned_access above.
-           We might widen the access and emulate, but for now
-           mark an exception and exit the cpu loop.  */
-        goto stop_the_world;
-    }
-
     index = tlb_index(cpu, mmu_idx, addr);
     tlbe = tlb_entry(cpu, mmu_idx, addr);
 
@@ -1824,10 +1810,11 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
     if (!tlb_hit(tlb_addr, addr)) {
         if (!victim_tlb_hit(cpu, mmu_idx, index, MMU_DATA_STORE,
                             addr & TARGET_PAGE_MASK)) {
-            bool ok = cpu->cc->tcg_ops->tlb_fill(cpu, addr, size,
-                                                 MMU_DATA_STORE, mmu_idx,
-                                                 false, retaddr);
+            bool ok = cpu->cc->tcg_ops->tlb_fill_align(cpu, addr, mop, size,
+                                                       MMU_DATA_STORE, mmu_idx,
+                                                       false, retaddr);
             assert(ok);
+            did_tlb_fill = true;
             index = tlb_index(cpu, mmu_idx, addr);
             tlbe = tlb_entry(cpu, mmu_idx, addr);
         }
@@ -1841,8 +1828,8 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
      * but addr_read will only be -1 if PAGE_READ was unset.
      */
     if (unlikely(tlbe->addr_read == -1)) {
-        cpu->cc->tcg_ops->tlb_fill(cpu, addr, size, MMU_DATA_LOAD,
-                                   mmu_idx, false, retaddr);
+        cpu->cc->tcg_ops->tlb_fill_align(cpu, addr, mop, size, MMU_DATA_LOAD,
+                                         mmu_idx, false, retaddr);
         /*
          * Since we don't support reads and writes to different
          * addresses, and we do have the proper page loaded for
@@ -1850,6 +1837,28 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
          */
         g_assert_not_reached();
     }
+
+    /* Enforce guest required alignment, if not handled by tlb_fill_align. */
+    if (!did_tlb_fill) {
+        int a_bits = memop_alignment_bits(mop);
+        if (unlikely(a_bits > 0 && (addr & ((1 << a_bits) - 1)))) {
+            /* ??? Maybe indicate atomic op to cpu_unaligned_access */
+            cpu_unaligned_access(cpu, addr, MMU_DATA_STORE,
+                                 mmu_idx, retaddr);
+        }
+    }
+
+    /* Enforce qemu required alignment.  */
+    if (unlikely(addr & (size - 1))) {
+        /*
+         * We get here if guest alignment was not requested,
+         * or was not enforced by cpu_unaligned_access above.
+         * We might widen the access and emulate, but for now
+         * mark an exception and exit the cpu loop.
+         */
+        goto stop_the_world;
+    }
+
     /* Collect tlb flags for read. */
     tlb_addr |= tlbe->addr_read;
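
As an aside for readers following the two checks being moved around here, the
snippet below is a minimal standalone sketch of the guest-alignment test
(mmu_lookup1 and atomic_mmu_lookup perform it only when tlb_fill_align was not
invoked) and the separate "qemu required alignment" test in atomic_mmu_lookup.
It assumes only that memop_alignment_bits() returns the log2 of the required
alignment; the helper names guest_misaligned and qemu_misaligned are
illustrative, not QEMU API.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Guest-required alignment: misaligned if any of the low a_bits of the
 * address are set, where a_bits is what memop_alignment_bits() would
 * return for the MemOp (e.g. 2 for a 4-byte-aligned access).
 */
static bool guest_misaligned(uint64_t addr, unsigned a_bits)
{
    return (addr & ((UINT64_C(1) << a_bits) - 1)) != 0;
}

/*
 * QEMU-required alignment for atomics: the access must not straddle its
 * own size, otherwise atomic_mmu_lookup goes to stop_the_world.
 */
static bool qemu_misaligned(uint64_t addr, unsigned size)
{
    return (addr & ((uint64_t)size - 1)) != 0;
}

int main(void)
{
    unsigned a_bits = 2;   /* models a MemOp requiring 4-byte alignment */

    printf("0x1000: guest=%d qemu=%d\n",
           guest_misaligned(0x1000, a_bits), qemu_misaligned(0x1000, 4));
    printf("0x1002: guest=%d qemu=%d\n",
           guest_misaligned(0x1002, a_bits), qemu_misaligned(0x1002, 4));
    return 0;
}

(Compiles on its own; 0x1000 passes both tests, 0x1002 fails both.)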