From patchwork Thu May 9 06:02:46 2019
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 163678
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Date: Wed, 8 May 2019 23:02:46 -0700
Message-Id: <20190509060246.4031-28-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190509060246.4031-1-richard.henderson@linaro.org>
References: <20190509060246.4031-1-richard.henderson@linaro.org>
Subject: [Qemu-devel] [PATCH v2 27/27] tcg: Use tlb_fill probe from tlb_vaddr_to_host

Most of the existing users of tlb_vaddr_to_host simply continue around
their loop on a NULL result, faulting the TLB entry in via a normal
load or store.  But for SVE we have a true non-faulting case, which
requires the new probing form of tlb_fill.

Reviewed-by: Peter Maydell
Reviewed-by: Philippe Mathieu-Daudé
Signed-off-by: Richard Henderson
---
v2: Update the function documentation comment.
---
 include/exec/cpu_ldst.h | 50 ++++++-----------------
 accel/tcg/cputlb.c      | 69 ++++++++++++++++++++++++++++++++++++-----
 target/arm/sve_helper.c |  6 +---
 3 files changed, 72 insertions(+), 53 deletions(-)
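As a reading aid only (not part of the patch), here is a minimal sketch
of the caller pattern the new contract enables, modelled on the
sve_helper.c hunk at the end of this mail.  load_one_element() and
record_fault() are illustrative placeholder names, not functions from
this series:

    /*
     * Non-faulting load: probe the softmmu TLB without raising a guest
     * exception.  A non-NULL result is a host pointer into guest RAM
     * that may be dereferenced directly; NULL means an I/O page, no
     * mapping, or a fill that would fault, so the caller marks the
     * element invalid instead of trapping.
     */
    void *host = tlb_vaddr_to_host(env, addr + mem_off, MMU_DATA_LOAD, mmu_idx);
    if (host) {
        load_one_element(vd, reg_off, host);   /* direct RAM access */
    } else {
        record_fault(env, reg_off, reg_max);   /* no exception raised */
    }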
diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
index d78041d7a0..7b28a839d2 100644
--- a/include/exec/cpu_ldst.h
+++ b/include/exec/cpu_ldst.h
@@ -433,50 +433,20 @@ static inline CPUTLBEntry *tlb_entry(CPUArchState *env, uintptr_t mmu_idx,
  * @mmu_idx: MMU index to use for lookup
  *
  * Look up the specified guest virtual index in the TCG softmmu TLB.
- * If the TLB contains a host virtual address suitable for direct RAM
- * access, then return it.  Otherwise (TLB miss, TLB entry is for an
- * I/O access, etc) return NULL.
- *
- * This is the equivalent of the initial fast-path code used by
- * TCG backends for guest load and store accesses.
+ * If we can translate a host virtual address suitable for direct RAM
+ * access, without causing a guest exception, then return it.
+ * Otherwise (TLB entry is for an I/O access, guest software
+ * TLB fill required, etc) return NULL.
  */
+#ifdef CONFIG_USER_ONLY
 static inline void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
-                                      int access_type, int mmu_idx)
+                                      MMUAccessType access_type, int mmu_idx)
 {
-#if defined(CONFIG_USER_ONLY)
     return g2h(addr);
-#else
-    CPUTLBEntry *tlbentry = tlb_entry(env, mmu_idx, addr);
-    abi_ptr tlb_addr;
-    uintptr_t haddr;
-
-    switch (access_type) {
-    case 0:
-        tlb_addr = tlbentry->addr_read;
-        break;
-    case 1:
-        tlb_addr = tlb_addr_write(tlbentry);
-        break;
-    case 2:
-        tlb_addr = tlbentry->addr_code;
-        break;
-    default:
-        g_assert_not_reached();
-    }
-
-    if (!tlb_hit(tlb_addr, addr)) {
-        /* TLB entry is for a different page */
-        return NULL;
-    }
-
-    if (tlb_addr & ~TARGET_PAGE_MASK) {
-        /* IO access */
-        return NULL;
-    }
-
-    haddr = addr + tlbentry->addend;
-    return (void *)haddr;
-#endif /* defined(CONFIG_USER_ONLY) */
 }
+#else
+void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
+                        MMUAccessType access_type, int mmu_idx);
+#endif
 
 #endif /* CPU_LDST_H */
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index dfcd9ae168..45a5c4e123 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1007,6 +1007,16 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     }
 }
 
+static inline target_ulong tlb_read_ofs(CPUTLBEntry *entry, size_t ofs)
+{
+#if TCG_OVERSIZED_GUEST
+    return *(target_ulong *)((uintptr_t)entry + ofs);
+#else
+    /* ofs might correspond to .addr_write, so use atomic_read */
+    return atomic_read((target_ulong *)((uintptr_t)entry + ofs));
+#endif
+}
+
 /* Return true if ADDR is present in the victim tlb, and has been copied
    back to the main tlb.  */
 static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index,
@@ -1017,14 +1027,7 @@ static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index,
     assert_cpu_is_self(ENV_GET_CPU(env));
     for (vidx = 0; vidx < CPU_VTLB_SIZE; ++vidx) {
         CPUTLBEntry *vtlb = &env->tlb_v_table[mmu_idx][vidx];
-        target_ulong cmp;
-
-        /* elt_ofs might correspond to .addr_write, so use atomic_read */
-#if TCG_OVERSIZED_GUEST
-        cmp = *(target_ulong *)((uintptr_t)vtlb + elt_ofs);
-#else
-        cmp = atomic_read((target_ulong *)((uintptr_t)vtlb + elt_ofs));
-#endif
+        target_ulong cmp = tlb_read_ofs(vtlb, elt_ofs);
 
         if (cmp == page) {
             /* Found entry in victim tlb, swap tlb and iotlb.  */
@@ -1108,6 +1111,56 @@ void probe_write(CPUArchState *env, target_ulong addr, int size, int mmu_idx,
     }
 }
 
+void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
+                        MMUAccessType access_type, int mmu_idx)
+{
+    CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
+    uintptr_t tlb_addr, page;
+    size_t elt_ofs;
+
+    switch (access_type) {
+    case MMU_DATA_LOAD:
+        elt_ofs = offsetof(CPUTLBEntry, addr_read);
+        break;
+    case MMU_DATA_STORE:
+        elt_ofs = offsetof(CPUTLBEntry, addr_write);
+        break;
+    case MMU_INST_FETCH:
+        elt_ofs = offsetof(CPUTLBEntry, addr_code);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+
+    page = addr & TARGET_PAGE_MASK;
+    tlb_addr = tlb_read_ofs(entry, elt_ofs);
+
+    if (!tlb_hit_page(tlb_addr, page)) {
+        uintptr_t index = tlb_index(env, mmu_idx, addr);
+
+        if (!victim_tlb_hit(env, mmu_idx, index, elt_ofs, page)) {
+            CPUState *cs = ENV_GET_CPU(env);
+            CPUClass *cc = CPU_GET_CLASS(cs);
+
+            if (!cc->tlb_fill(cs, addr, 0, access_type, mmu_idx, true, 0)) {
+                /* Non-faulting page table read failed.  */
+                return NULL;
+            }
+
+            /* TLB resize via tlb_fill may have moved the entry.  */
+            entry = tlb_entry(env, mmu_idx, addr);
+        }
+        tlb_addr = tlb_read_ofs(entry, elt_ofs);
+    }
+
+    if (tlb_addr & ~TARGET_PAGE_MASK) {
+        /* IO access */
+        return NULL;
+    }
+
+    return (void *)(addr + entry->addend);
+}
+
 /* Probe for a read-modify-write atomic operation.  Do not allow unaligned
  * operations, or io operations to proceed.  Return the host address.  */
 static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index bc847250dd..fd434c66ea 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -4598,11 +4598,7 @@ static void sve_ldnf1_r(CPUARMState *env, void *vg, const target_ulong addr,
      * in the real world, obviously.)
      *
      * Then there are the annoying special cases with watchpoints...
-     *
-     * TODO: Add a form of tlb_fill that does not raise an exception,
-     * with a form of tlb_vaddr_to_host and a set of loads to match.
-     * The non_fault_vaddr_to_host would handle everything, usually,
-     * and the loads would handle the iomem path for watchpoints.
+     * TODO: Add a form of non-faulting loads using cc->tlb_fill(probe=true).
      */
     host = tlb_vaddr_to_host(env, addr + mem_off, MMU_DATA_LOAD, mmu_idx);
     split = max_for_page(addr, mem_off, mem_max);
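For completeness, a sketch (again not part of the patch) of how a
target's tlb_fill hook is expected to honour the probe flag that
tlb_vaddr_to_host now passes as true.  mytarget_tlb_fill,
lookup_page_tables and raise_mmu_exception are illustrative names;
tlb_set_page is the existing API:

    static bool mytarget_tlb_fill(CPUState *cs, vaddr addr, int size,
                                  MMUAccessType access_type, int mmu_idx,
                                  bool probe, uintptr_t retaddr)
    {
        hwaddr phys;
        int prot;

        /* Target-specific guest page table walk. */
        if (lookup_page_tables(cs, addr, access_type, mmu_idx, &phys, &prot)) {
            tlb_set_page(cs, addr & TARGET_PAGE_MASK, phys & TARGET_PAGE_MASK,
                         prot, mmu_idx, TARGET_PAGE_SIZE);
            return true;
        }
        if (probe) {
            /* Probe form: report failure instead of raising an exception. */
            return false;
        }
        /* Normal form: deliver the guest exception; does not return. */
        raise_mmu_exception(cs, addr, access_type, retaddr);
    }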