From patchwork Fri Jun 29 21:37:03 2018
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 140657
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, alex.bennee@linaro.org, laurent@vivier.eu
Date: Fri, 29 Jun 2018 14:37:03 -0700
Message-Id: <20180629213703.13833-1-richard.henderson@linaro.org>
Subject: [Qemu-devel] [PATCH] accel/tcg: Avoid caching overwritten tlb entries

When installing a TLB entry, remove any cached version of the same
page in the VTLB.
If the existing TLB entry matches, do not copy
into the VTLB, but overwrite it.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
This may fix some problems with Q800 that Laurent reported.

On IRC, Peter suggested that regardless of the m68k ptest insn, we need
to be more careful with installed TLB entries.  I added some temporary
logging and concur.  This sort of overwrite happens often when writable
pages are marked read-only in order to track a dirty bit.  After the
dirty bit is set, we re-install the TLB entry as read-write.  I'm
mildly surprised we haven't run into problems before...

r~
---
 accel/tcg/cputlb.c | 60 +++++++++++++++++++++++++---------------------
 1 file changed, 33 insertions(+), 27 deletions(-)

--
2.17.1

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index cc90a5fe92..250b024c5d 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -235,17 +235,30 @@ void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
     async_safe_run_on_cpu(src_cpu, fn, RUN_ON_CPU_HOST_INT(idxmap));
 }
 
-
-
-static inline void tlb_flush_entry(CPUTLBEntry *tlb_entry, target_ulong addr)
+static inline bool tlb_hit_page_anyprot(CPUTLBEntry *tlb_entry,
+                                        target_ulong page)
 {
-    if (tlb_hit_page(tlb_entry->addr_read, addr) ||
-        tlb_hit_page(tlb_entry->addr_write, addr) ||
-        tlb_hit_page(tlb_entry->addr_code, addr)) {
+    return (tlb_hit_page(tlb_entry->addr_read, page) ||
+            tlb_hit_page(tlb_entry->addr_write, page) ||
+            tlb_hit_page(tlb_entry->addr_code, page));
+}
+
+static inline void tlb_flush_entry(CPUTLBEntry *tlb_entry, target_ulong page)
+{
+    if (tlb_hit_page_anyprot(tlb_entry, page)) {
         memset(tlb_entry, -1, sizeof(*tlb_entry));
     }
 }
 
+static inline void tlb_flush_vtlb_page(CPUArchState *env, int mmu_idx,
+                                       target_ulong page)
+{
+    int k;
+    for (k = 0; k < CPU_VTLB_SIZE; k++) {
+        tlb_flush_entry(&env->tlb_v_table[mmu_idx][k], page);
+    }
+}
+
 static void tlb_flush_page_async_work(CPUState *cpu, run_on_cpu_data data)
 {
     CPUArchState *env = cpu->env_ptr;
@@ -271,14 +284,7 @@ static void tlb_flush_page_async_work(CPUState *cpu, run_on_cpu_data data)
     i = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
         tlb_flush_entry(&env->tlb_table[mmu_idx][i], addr);
-    }
-
-    /* check whether there are entries that need to be flushed in the vtlb */
-    for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
-        int k;
-        for (k = 0; k < CPU_VTLB_SIZE; k++) {
-            tlb_flush_entry(&env->tlb_v_table[mmu_idx][k], addr);
-        }
+        tlb_flush_vtlb_page(env, mmu_idx, addr);
     }
 
     tb_flush_jmp_cache(cpu, addr);
@@ -310,7 +316,6 @@ static void tlb_flush_page_by_mmuidx_async_work(CPUState *cpu,
     unsigned long mmu_idx_bitmap = addr_and_mmuidx & ALL_MMUIDX_BITS;
     int page = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
     int mmu_idx;
-    int i;
 
     assert_cpu_is_self(cpu);
 
@@ -320,11 +325,7 @@ static void tlb_flush_page_by_mmuidx_async_work(CPUState *cpu,
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
         if (test_bit(mmu_idx, &mmu_idx_bitmap)) {
             tlb_flush_entry(&env->tlb_table[mmu_idx][page], addr);
-
-            /* check whether there are vltb entries that need to be flushed */
-            for (i = 0; i < CPU_VTLB_SIZE; i++) {
-                tlb_flush_entry(&env->tlb_v_table[mmu_idx][i], addr);
-            }
+            tlb_flush_vtlb_page(env, mmu_idx, addr);
         }
     }
 
@@ -609,10 +610,9 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
     target_ulong address;
     target_ulong code_address;
     uintptr_t addend;
-    CPUTLBEntry *te, *tv, tn;
+    CPUTLBEntry *te, tn;
     hwaddr iotlb, xlat, sz, paddr_page;
     target_ulong vaddr_page;
-    unsigned vidx = env->vtlb_index++ % CPU_VTLB_SIZE;
     int asidx = cpu_asidx_from_attrs(cpu, attrs);
 
     assert_cpu_is_self(cpu);
@@ -654,19 +654,25 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
         addend = (uintptr_t)memory_region_get_ram_ptr(section->mr) + xlat;
     }
 
+    /* Make sure there's no cached translation for the new page.  */
+    tlb_flush_vtlb_page(env, mmu_idx, vaddr_page);
+
     code_address = address;
     iotlb = memory_region_section_get_iotlb(cpu, section, vaddr_page,
                                             paddr_page, xlat, prot, &address);
 
     index = (vaddr_page >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
     te = &env->tlb_table[mmu_idx][index];
 
-    /* do not discard the translation in te, evict it into a victim tlb */
-    tv = &env->tlb_v_table[mmu_idx][vidx];
-    /* addr_write can race with tlb_reset_dirty_range */
-    copy_tlb_helper(tv, te, true);
+    /* If the old entry matches the new page, just overwrite TE.  */
+    if (!tlb_hit_page_anyprot(te, vaddr_page)) {
+        unsigned vidx = env->vtlb_index++ % CPU_VTLB_SIZE;
+        CPUTLBEntry *tv = &env->tlb_v_table[mmu_idx][vidx];
 
-    env->iotlb_v[mmu_idx][vidx] = env->iotlb[mmu_idx][index];
+        /* Evict the old entry into the victim tlb.  */
+        copy_tlb_helper(tv, te, true);
+        env->iotlb_v[mmu_idx][vidx] = env->iotlb[mmu_idx][index];
+    }
 
     /* refill the tlb */
     /*