From patchwork Wed Oct 31 12:21:10 2018
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 149785
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Date: Wed, 31 Oct 2018 12:21:10 +0000
Message-Id: <20181031122119.1669-2-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.17.2
In-Reply-To: <20181031122119.1669-1-richard.henderson@linaro.org>
References: <20181031122119.1669-1-richard.henderson@linaro.org>
Subject: [Qemu-devel] [PULL 01/10] cputlb: Move tlb_lock to CPUTLBCommon
Cc: peter.maydell@linaro.org

This is the first of several moves to reduce the size of the
CPU_COMMON_TLB macro and improve some locality of reference.

Tested-by: Emilio G. Cota
Reviewed-by: Emilio G. Cota
Signed-off-by: Richard Henderson
---
 include/exec/cpu-defs.h | 17 ++++++++++++---
 accel/tcg/cputlb.c      | 48 ++++++++++++++++++++---------------------
 2 files changed, 38 insertions(+), 27 deletions(-)

-- 
2.17.2

diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index 4ff62f32bf..9005923b4d 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -141,10 +141,21 @@ typedef struct CPUIOTLBEntry {
     MemTxAttrs attrs;
 } CPUIOTLBEntry;
 
+/*
+ * Data elements that are shared between all MMU modes.
+ */
+typedef struct CPUTLBCommon {
+    /* lock serializes updates to tlb_table and tlb_v_table */
+    QemuSpin lock;
+} CPUTLBCommon;
+
+/*
+ * The meaning of each of the MMU modes is defined in the target code.
+ * Note that NB_MMU_MODES is not yet defined; we can only reference it
+ * within preprocessor defines that will be expanded later.
+ */
 #define CPU_COMMON_TLB \
-    /* The meaning of the MMU modes is defined in the target code. */  \
-    /* tlb_lock serializes updates to tlb_table and tlb_v_table */     \
-    QemuSpin tlb_lock;                                                 \
+    CPUTLBCommon tlb_c;                                                \
     CPUTLBEntry tlb_table[NB_MMU_MODES][CPU_TLB_SIZE];                 \
     CPUTLBEntry tlb_v_table[NB_MMU_MODES][CPU_VTLB_SIZE];              \
     CPUIOTLBEntry iotlb[NB_MMU_MODES][CPU_TLB_SIZE];                   \
 
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index af57aca5e4..d4e07056be 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -78,7 +78,7 @@ void tlb_init(CPUState *cpu)
 {
     CPUArchState *env = cpu->env_ptr;
 
-    qemu_spin_init(&env->tlb_lock);
+    qemu_spin_init(&env->tlb_c.lock);
 }
 
 /* flush_all_helper: run fn across all cpus
@@ -134,15 +134,15 @@ static void tlb_flush_nocheck(CPUState *cpu)
     tlb_debug("(count: %zu)\n", tlb_flush_count());
 
     /*
-     * tlb_table/tlb_v_table updates from any thread must hold tlb_lock.
+     * tlb_table/tlb_v_table updates from any thread must hold tlb_c.lock.
      * However, updates from the owner thread (as is the case here; see the
      * above assert_cpu_is_self) do not need atomic_set because all reads
      * that do not hold the lock are performed by the same owner thread.
      */
-    qemu_spin_lock(&env->tlb_lock);
+    qemu_spin_lock(&env->tlb_c.lock);
     memset(env->tlb_table, -1, sizeof(env->tlb_table));
     memset(env->tlb_v_table, -1, sizeof(env->tlb_v_table));
-    qemu_spin_unlock(&env->tlb_lock);
+    qemu_spin_unlock(&env->tlb_c.lock);
 
     cpu_tb_jmp_cache_clear(cpu);
 
@@ -195,7 +195,7 @@ static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data data)
 
     tlb_debug("start: mmu_idx:0x%04lx\n", mmu_idx_bitmask);
 
-    qemu_spin_lock(&env->tlb_lock);
+    qemu_spin_lock(&env->tlb_c.lock);
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
 
         if (test_bit(mmu_idx, &mmu_idx_bitmask)) {
@@ -205,7 +205,7 @@ static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data data)
             memset(env->tlb_v_table[mmu_idx], -1, sizeof(env->tlb_v_table[0]));
         }
     }
-    qemu_spin_unlock(&env->tlb_lock);
+    qemu_spin_unlock(&env->tlb_c.lock);
 
     cpu_tb_jmp_cache_clear(cpu);
 
@@ -262,7 +262,7 @@ static inline bool tlb_hit_page_anyprot(CPUTLBEntry *tlb_entry,
            tlb_hit_page(tlb_entry->addr_code, page);
 }
 
-/* Called with tlb_lock held */
+/* Called with tlb_c.lock held */
 static inline void tlb_flush_entry_locked(CPUTLBEntry *tlb_entry,
                                           target_ulong page)
 {
@@ -271,7 +271,7 @@ static inline void tlb_flush_entry_locked(CPUTLBEntry *tlb_entry,
     }
 }
 
-/* Called with tlb_lock held */
+/* Called with tlb_c.lock held */
 static inline void tlb_flush_vtlb_page_locked(CPUArchState *env, int mmu_idx,
                                               target_ulong page)
 {
@@ -304,12 +304,12 @@ static void tlb_flush_page_async_work(CPUState *cpu, run_on_cpu_data data)
     }
 
     addr &= TARGET_PAGE_MASK;
-    qemu_spin_lock(&env->tlb_lock);
+    qemu_spin_lock(&env->tlb_c.lock);
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
         tlb_flush_entry_locked(tlb_entry(env, mmu_idx, addr), addr);
         tlb_flush_vtlb_page_locked(env, mmu_idx, addr);
     }
-    qemu_spin_unlock(&env->tlb_lock);
+    qemu_spin_unlock(&env->tlb_c.lock);
 
     tb_flush_jmp_cache(cpu, addr);
 }
@@ -345,14 +345,14 @@ static void tlb_flush_page_by_mmuidx_async_work(CPUState *cpu,
     tlb_debug("flush page addr:"TARGET_FMT_lx" mmu_idx:0x%lx\n",
               addr, mmu_idx_bitmap);
 
-    qemu_spin_lock(&env->tlb_lock);
+    qemu_spin_lock(&env->tlb_c.lock);
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
         if (test_bit(mmu_idx, &mmu_idx_bitmap)) {
             tlb_flush_entry_locked(tlb_entry(env, mmu_idx, addr), addr);
             tlb_flush_vtlb_page_locked(env, mmu_idx, addr);
         }
     }
-    qemu_spin_unlock(&env->tlb_lock);
+    qemu_spin_unlock(&env->tlb_c.lock);
 
     tb_flush_jmp_cache(cpu, addr);
 }
@@ -479,7 +479,7 @@ void tlb_unprotect_code(ram_addr_t ram_addr)
  * te->addr_write with atomic_set. We don't need to worry about this for
  * oversized guests as MTTCG is disabled for them.
  *
- * Called with tlb_lock held.
+ * Called with tlb_c.lock held.
  */
 static void tlb_reset_dirty_range_locked(CPUTLBEntry *tlb_entry,
                                          uintptr_t start, uintptr_t length)
@@ -501,7 +501,7 @@ static void tlb_reset_dirty_range_locked(CPUTLBEntry *tlb_entry,
 }
 
 /*
- * Called with tlb_lock held.
+ * Called with tlb_c.lock held.
  * Called only from the vCPU context, i.e. the TLB's owner thread.
  */
 static inline void copy_tlb_helper_locked(CPUTLBEntry *d, const CPUTLBEntry *s)
@@ -511,7 +511,7 @@ static inline void copy_tlb_helper_locked(CPUTLBEntry *d, const CPUTLBEntry *s)
 
 /* This is a cross vCPU call (i.e. another vCPU resetting the flags of
  * the target vCPU).
- * We must take tlb_lock to avoid racing with another vCPU update. The only
+ * We must take tlb_c.lock to avoid racing with another vCPU update. The only
  * thing actually updated is the target TLB entry ->addr_write flags.
  */
 void tlb_reset_dirty(CPUState *cpu, ram_addr_t start1, ram_addr_t length)
@@ -521,7 +521,7 @@ void tlb_reset_dirty(CPUState *cpu, ram_addr_t start1, ram_addr_t length)
     int mmu_idx;
 
     env = cpu->env_ptr;
-    qemu_spin_lock(&env->tlb_lock);
+    qemu_spin_lock(&env->tlb_c.lock);
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
         unsigned int i;
 
@@ -535,10 +535,10 @@ void tlb_reset_dirty(CPUState *cpu, ram_addr_t start1, ram_addr_t length)
                                          length);
         }
     }
-    qemu_spin_unlock(&env->tlb_lock);
+    qemu_spin_unlock(&env->tlb_c.lock);
 }
 
-/* Called with tlb_lock held */
+/* Called with tlb_c.lock held */
 static inline void tlb_set_dirty1_locked(CPUTLBEntry *tlb_entry,
                                          target_ulong vaddr)
 {
@@ -557,7 +557,7 @@ void tlb_set_dirty(CPUState *cpu, target_ulong vaddr)
     assert_cpu_is_self(cpu);
 
     vaddr &= TARGET_PAGE_MASK;
-    qemu_spin_lock(&env->tlb_lock);
+    qemu_spin_lock(&env->tlb_c.lock);
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
         tlb_set_dirty1_locked(tlb_entry(env, mmu_idx, vaddr), vaddr);
     }
@@ -568,7 +568,7 @@ void tlb_set_dirty(CPUState *cpu, target_ulong vaddr)
             tlb_set_dirty1_locked(&env->tlb_v_table[mmu_idx][k], vaddr);
         }
     }
-    qemu_spin_unlock(&env->tlb_lock);
+    qemu_spin_unlock(&env->tlb_c.lock);
 }
 
 /* Our TLB does not support large pages, so remember the area covered by
@@ -669,7 +669,7 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
      * a longer critical section, but this is not a concern since the TLB lock
      * is unlikely to be contended.
      */
-    qemu_spin_lock(&env->tlb_lock);
+    qemu_spin_lock(&env->tlb_c.lock);
 
     /* Make sure there's no cached translation for the new page. */
     tlb_flush_vtlb_page_locked(env, mmu_idx, vaddr_page);
@@ -736,7 +736,7 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
     }
 
     copy_tlb_helper_locked(te, &tn);
-    qemu_spin_unlock(&env->tlb_lock);
+    qemu_spin_unlock(&env->tlb_c.lock);
 }
 
 /* Add a new TLB entry, but without specifying the memory
@@ -917,11 +917,11 @@ static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index,
             /* Found entry in victim tlb, swap tlb and iotlb.  */
             CPUTLBEntry tmptlb, *tlb = &env->tlb_table[mmu_idx][index];
 
-            qemu_spin_lock(&env->tlb_lock);
+            qemu_spin_lock(&env->tlb_c.lock);
             copy_tlb_helper_locked(&tmptlb, tlb);
             copy_tlb_helper_locked(tlb, vtlb);
             copy_tlb_helper_locked(vtlb, &tmptlb);
-            qemu_spin_unlock(&env->tlb_lock);
+            qemu_spin_unlock(&env->tlb_c.lock);
 
             CPUIOTLBEntry tmpio, *io = &env->iotlb[mmu_idx][index];
             CPUIOTLBEntry *vio = &env->iotlb_v[mmu_idx][vidx];
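
A note on the deferred-expansion trick called out in the new cpu-defs.h
comment above: NB_MMU_MODES is not yet defined where CPU_COMMON_TLB is
defined, and only needs to exist where the macro is expanded. A minimal
standalone C sketch of the same pattern (the names Common, COMMON, SIZE,
and Env are illustrative, not QEMU's):

    /*
     * Sketch of defining a macro that references a size constant before
     * that constant exists.  SIZE is resolved when COMMON is expanded,
     * not when it is defined, mirroring NB_MMU_MODES in CPU_COMMON_TLB.
     */
    #include <stdio.h>

    typedef struct Common { int lock; } Common;

    #define COMMON \
        Common c;  \
        int table[SIZE];

    #define SIZE 8          /* defined later, but before any expansion */

    typedef struct Env {
        COMMON              /* expands here, where SIZE is known */
    } Env;

    int main(void)
    {
        Env env = { 0 };
        env.c.lock = 1;     /* access path becomes env.c.lock, like tlb_c.lock */
        printf("%d entries, lock=%d\n",
               (int)(sizeof env.table / sizeof env.table[0]), env.c.lock);
        return 0;
    }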
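
Similarly, the locking discipline the patch renames from tlb_lock to
tlb_c.lock — any thread updating tlb_table/tlb_v_table must hold the lock,
while the owner vCPU thread may read its own entries without it — can be
modeled outside QEMU. A minimal sketch, assuming plain pthreads and C11
atomics in place of QEMU's QemuSpin (Entry, cross_thread_clear, and
owner_read are hypothetical names, not QEMU's):

    /*
     * Standalone model of the tlb_c.lock discipline: writers from any
     * thread take the spinlock; cross-thread writers additionally use
     * atomic stores so the owner's lockless reads cannot observe torn
     * values.  Not QEMU code.
     */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        _Atomic uintptr_t addr_write;   /* stands in for a CPUTLBEntry field */
    } Entry;

    static pthread_spinlock_t lock;     /* stands in for tlb_c.lock */
    static Entry table[8];

    /* Cross-thread update: must hold the lock and store atomically. */
    static void cross_thread_clear(Entry *e)
    {
        pthread_spin_lock(&lock);
        atomic_store_explicit(&e->addr_write, (uintptr_t)-1,
                              memory_order_relaxed);
        pthread_spin_unlock(&lock);
    }

    /* Owner-thread read: lockless, since only the owner reads unlocked. */
    static uintptr_t owner_read(Entry *e)
    {
        return atomic_load_explicit(&e->addr_write, memory_order_relaxed);
    }

    int main(void)
    {
        pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
        cross_thread_clear(&table[0]);
        printf("entry 0: %#lx\n", (unsigned long)owner_read(&table[0]));
        pthread_spin_destroy(&lock);
        return 0;
    }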