From patchwork Thu Oct 27 15:10:25 2016
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 79746
Delivered-To: patch@linaro.org
Received: by 
mail-wm0-x233.google.com
From: Alex Bennée
To: pbonzini@redhat.com
Date: Thu, 27 Oct 2016 16:10:25 +0100
Message-Id: <20161027151030.20863-29-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.10.1
In-Reply-To: <20161027151030.20863-1-alex.bennee@linaro.org>
References:
 <20161027151030.20863-1-alex.bennee@linaro.org>
MIME-Version: 1.0
Subject: [Qemu-devel] [PATCH v5 28/33] cputlb: make tlb_flush_by_mmuidx safe
 for MTTCG
Cc: mttcg@listserver.greensocs.com, peter.maydell@linaro.org,
 claudio.fontana@huawei.com, nikunj@linux.vnet.ibm.com,
 Peter Crosthwaite, jan.kiszka@siemens.com, mark.burton@greensocs.com,
 a.rigo@virtualopensystems.com, qemu-devel@nongnu.org, cota@braap.org,
 serge.fdrv@gmail.com, bobby.prani@gmail.com, rth@twiddle.net,
 Alex Bennée, fred.konrad@greensocs.com

These flushes allow per-mmuidx granularity in TLB flushing and are
currently only used by the ARM model. As it is possible to hammer the
other vCPU threads with flushes (and build up long queues of identical
flushes), we extend the mechanism used for the global tlb_flush and set
a bitmap describing all the pending flushes. The updates are done
atomically to avoid corruption of the bitmap, but repeating a flush is
harmless.
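The deduplication idea described above — only queue an async flush for mmuidx bits that are not already pending — can be sketched outside QEMU with C11 atomics. This is a minimal illustration, not the patch itself: `FakeCPU` and `queue_mmuidx_flush` are invented names, and the real code uses QEMU's `atomic_mb_read()`/`atomic_or()` wrappers on a `CPUState` field.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical stand-in for the CPUState::pending_tlb_flush field. */
typedef struct {
    _Atomic uint16_t pending_tlb_flush;
} FakeCPU;

/* Return the subset of the requested mmuidx bits that were not already
 * pending, and mark them pending.  Only that subset needs a new async
 * flush queued; repeating a flush is harmless, so a racing caller that
 * sees a stale bitmap merely queues some redundant work. */
static uint16_t queue_mmuidx_flush(FakeCPU *cpu, uint16_t mmu_idx_bitmap)
{
    uint16_t pending = mmu_idx_bitmap & ~atomic_load(&cpu->pending_tlb_flush);

    if (pending) {
        atomic_fetch_or(&cpu->pending_tlb_flush, pending);
        /* ...here the real code would async_run_on_cpu() the flush... */
    }
    return pending;
}
```

Calling it twice with the same bitmap queues work only once; in the patch, the vCPU thread clears `pending_tlb_flush` once the flush actually runs.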
Signed-off-by: Alex Bennée

---
v5
  - fix tlb_flush_page_by_mmuidx to defer all checks to async work
  - convert to run_on_cpu_data
  - additional tlb_debugs

You can't be checking a cross cpu env-> variable:

WARNING: ThreadSanitizer: data race (pid=1962)
  Read of size 8 at 0x7dd00005e998 by thread T2:
    #0 tlb_flush_page_by_mmuidx /home/alex/lsrc/qemu/qemu.git/cputlb.c:285 (qemu-system-aarch64+0x0000004a1732)
    #1 tlbi_aa64_vae1is_write /home/alex/lsrc/qemu/qemu.git/target-arm/helper.c:3023 (qemu-system-aarch64+0x000000672a98)
    #2 helper_set_cp_reg64 /home/alex/lsrc/qemu/qemu.git/target-arm/op_helper.c:744 (qemu-system-aarch64+0x000000668699)
    #3 (0x000040029eb5)
    #4 cpu_loop_exec_tb /home/alex/lsrc/qemu/qemu.git/cpu-exec.c:558 (qemu-system-aarch64+0x000000430d00)
    #5 cpu_exec /home/alex/lsrc/qemu/qemu.git/cpu-exec.c:646 (qemu-system-aarch64+0x0000004310e5)
    #6 tcg_cpu_exec /home/alex/lsrc/qemu/qemu.git/cpus.c:1156 (qemu-system-aarch64+0x000000474d6f)
    #7 qemu_tcg_cpu_thread_fn /home/alex/lsrc/qemu/qemu.git/cpus.c:1345 (qemu-system-aarch64+0x000000475641)
    #8 (libtsan.so.0+0x0000000230d9)

  Previous write of size 8 at 0x7dd00005e998 by thread T4:
    #0 tlb_add_large_page /home/alex/lsrc/qemu/qemu.git/cputlb.c:459 (qemu-system-aarch64+0x0000004a1ebf)
    #1 tlb_set_page_with_attrs /home/alex/lsrc/qemu/qemu.git/cputlb.c:487 (qemu-system-aarch64+0x0000004a2002)
    #2 arm_tlb_fill /home/alex/lsrc/qemu/qemu.git/target-arm/helper.c:8116 (qemu-system-aarch64+0x0000006849de)
    #3 tlb_fill /home/alex/lsrc/qemu/qemu.git/target-arm/op_helper.c:127 (qemu-system-aarch64+0x000000666b4c)
    #4 helper_le_ldul_mmu /home/alex/lsrc/qemu/qemu.git/softmmu_template.h:127 (qemu-system-aarch64+0x0000004a4bba)
    #5 (0x000040017833)
    #6 cpu_loop_exec_tb /home/alex/lsrc/qemu/qemu.git/cpu-exec.c:558 (qemu-system-aarch64+0x000000430d00)
    #7 cpu_exec /home/alex/lsrc/qemu/qemu.git/cpu-exec.c:646 (qemu-system-aarch64+0x0000004310e5)
    #8 tcg_cpu_exec /home/alex/lsrc/qemu/qemu.git/cpus.c:1156 (qemu-system-aarch64+0x000000474d6f)
    #9 qemu_tcg_cpu_thread_fn /home/alex/lsrc/qemu/qemu.git/cpus.c:1345 (qemu-system-aarch64+0x000000475641)
    #10 (libtsan.so.0+0x0000000230d9)

  Location is heap block of size 125904 at 0x7dd000040000 allocated by main thread:
    #0 malloc (libtsan.so.0+0x0000000254a3)
    #1 g_malloc (libglib-2.0.so.0+0x00000004f728)
    #2 object_new qom/object.c:488 (qemu-system-aarch64+0x000000b157c3)
    #3 machvirt_init /home/alex/lsrc/qemu/qemu.git/hw/arm/virt.c:1289 (qemu-system-aarch64+0x0000005d733e)
    #4 main /home/alex/lsrc/qemu/qemu.git/vl.c:4573 (qemu-system-aarch64+0x00000070f2eb)

  Thread T2 'CPU 0/TCG' (tid=1965, running) created by main thread at:
    #0 pthread_create (libtsan.so.0+0x000000027577)
    #1 qemu_thread_create util/qemu-thread-posix.c:471 (qemu-system-aarch64+0x000000c710a6)
    #2 qemu_tcg_init_vcpu /home/alex/lsrc/qemu/qemu.git/cpus.c:1528 (qemu-system-aarch64+0x000000475f09)
    #3 qemu_init_vcpu /home/alex/lsrc/qemu/qemu.git/cpus.c:1605 (qemu-system-aarch64+0x00000047645e)
    #4 arm_cpu_realizefn /home/alex/lsrc/qemu/qemu.git/target-arm/cpu.c:708 (qemu-system-aarch64+0x00000068de38)
    #5 device_set_realized hw/core/qdev.c:918 (qemu-system-aarch64+0x00000080b429)
    #6 property_set_bool qom/object.c:1854 (qemu-system-aarch64+0x000000b19cb9)
    #7 object_property_set qom/object.c:1088 (qemu-system-aarch64+0x000000b177b5)
    #8 object_property_set_qobject qom/qom-qobject.c:27 (qemu-system-aarch64+0x000000b1b77a)
    #9 object_property_set_bool qom/object.c:1157 (qemu-system-aarch64+0x000000b17ac4)
    #10 machvirt_init /home/alex/lsrc/qemu/qemu.git/hw/arm/virt.c:1332 (qemu-system-aarch64+0x0000005d7576)
    #11 main /home/alex/lsrc/qemu/qemu.git/vl.c:4573 (qemu-system-aarch64+0x00000070f2eb)

  Thread T4 'CPU 2/TCG' (tid=1967, running) created by main thread at:
    #0 pthread_create (libtsan.so.0+0x000000027577)
    #1 qemu_thread_create util/qemu-thread-posix.c:471 (qemu-system-aarch64+0x000000c710a6)
    #2 qemu_tcg_init_vcpu /home/alex/lsrc/qemu/qemu.git/cpus.c:1528 (qemu-system-aarch64+0x000000475f09)
    #3 qemu_init_vcpu /home/alex/lsrc/qemu/qemu.git/cpus.c:1605 (qemu-system-aarch64+0x00000047645e)
    #4 arm_cpu_realizefn /home/alex/lsrc/qemu/qemu.git/target-arm/cpu.c:708 (qemu-system-aarch64+0x00000068de38)
    #5 device_set_realized hw/core/qdev.c:918 (qemu-system-aarch64+0x00000080b429)
    #6 property_set_bool qom/object.c:1854 (qemu-system-aarch64+0x000000b19cb9)
    #7 object_property_set qom/object.c:1088 (qemu-system-aarch64+0x000000b177b5)
    #8 object_property_set_qobject qom/qom-qobject.c:27 (qemu-system-aarch64+0x000000b1b77a)
    #9 object_property_set_bool qom/object.c:1157 (qemu-system-aarch64+0x000000b17ac4)
    #10 machvirt_init /home/alex/lsrc/qemu/qemu.git/hw/arm/virt.c:1332 (qemu-system-aarch64+0x0000005d7576)
    #11 main /home/alex/lsrc/qemu/qemu.git/vl.c:4573 (qemu-system-aarch64+0x00000070f2eb)

SUMMARY: ThreadSanitizer: data race /home/alex/lsrc/qemu/qemu.git/cputlb.c:285 tlb_flush_page_by_mmuidx
---
 cputlb.c          | 169 ++++++++++++++++++++++++++++++++++++++++-------------
 include/qom/cpu.h |  13 +++--
 2 files changed, 137 insertions(+), 45 deletions(-)

-- 
2.10.1

diff --git a/cputlb.c b/cputlb.c
index 981cb42..602cbb3 100644
--- a/cputlb.c
+++ b/cputlb.c
@@ -81,6 +81,22 @@ static inline run_on_cpu_data host_int(int hint)
     return d;
 }
 
+static inline run_on_cpu_data host_unsigned(unsigned hun)
+{
+    run_on_cpu_data d = { .host_unsigned = hun };
+    return d;
+}
+
+static inline run_on_cpu_data host_ulong(unsigned long hlong)
+{
+    run_on_cpu_data d = { .host_unsigned_long = hlong };
+    return d;
+}
+
+/* We currently can't handle more than 16 bits in the MMUIDX bitmask.
+ */
+QEMU_BUILD_BUG_ON(NB_MMU_MODES > 16);
+#define ALL_MMUIDX_BITS ((1 << NB_MMU_MODES) - 1)
 /* statistics */
 int tlb_flush_count;
 
@@ -105,7 +121,7 @@ static void tlb_flush_nocheck(CPUState *cpu, int flush_global)
 
     tb_unlock();
 
-    atomic_mb_set(&cpu->pending_tlb_flush, false);
+    atomic_mb_set(&cpu->pending_tlb_flush, 0);
 }
 
 static void tlb_flush_global_async_work(CPUState *cpu, run_on_cpu_data data)
@@ -128,7 +144,8 @@ static void tlb_flush_global_async_work(CPUState *cpu, run_on_cpu_data data)
 void tlb_flush(CPUState *cpu, int flush_global)
 {
     if (cpu->created && !qemu_cpu_is_self(cpu)) {
-        if (atomic_bool_cmpxchg(&cpu->pending_tlb_flush, false, true)) {
+        if (atomic_mb_read(&cpu->pending_tlb_flush) != ALL_MMUIDX_BITS) {
+            atomic_mb_set(&cpu->pending_tlb_flush, ALL_MMUIDX_BITS);
             async_run_on_cpu(cpu, tlb_flush_global_async_work,
                              host_int(flush_global));
         }
@@ -137,39 +154,77 @@ void tlb_flush(CPUState *cpu, int flush_global)
     }
 }
 
-static inline void v_tlb_flush_by_mmuidx(CPUState *cpu, va_list argp)
+static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data data)
 {
     CPUArchState *env = cpu->env_ptr;
+    unsigned long mmu_idx_bitmask = data.host_unsigned_long;
+    int mmu_idx;
 
     assert_cpu_is_self(cpu);
-    tlb_debug("start\n");
 
     tb_lock();
 
-    for (;;) {
-        int mmu_idx = va_arg(argp, int);
+    tlb_debug("start: mmu_idx:0x%04lx\n", mmu_idx_bitmask);
 
-        if (mmu_idx < 0) {
-            break;
-        }
+    for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
 
-        tlb_debug("%d\n", mmu_idx);
+        if (test_bit(mmu_idx, &mmu_idx_bitmask)) {
+            tlb_debug("%d\n", mmu_idx);
 
-        memset(env->tlb_table[mmu_idx], -1, sizeof(env->tlb_table[0]));
-        memset(env->tlb_v_table[mmu_idx], -1, sizeof(env->tlb_v_table[0]));
+            memset(env->tlb_table[mmu_idx], -1, sizeof(env->tlb_table[0]));
+            memset(env->tlb_v_table[mmu_idx], -1, sizeof(env->tlb_v_table[0]));
+        }
     }
 
     memset(cpu->tb_jmp_cache, 0, sizeof(cpu->tb_jmp_cache));
 
+    tlb_debug("done\n");
+
     tb_unlock();
 }
 
+/* Helper function to slurp va_args list into a bitmap
+ */
+static inline unsigned long make_mmu_index_bitmap(va_list args)
+{
+    unsigned long bitmap = 0;
+    int mmu_index = va_arg(args, int);
+
+    /* An empty va_list would be a bad call */
+    g_assert(mmu_index > 0);
+
+    do {
+        set_bit(mmu_index, &bitmap);
+        mmu_index = va_arg(args, int);
+    } while (mmu_index >= 0);
+
+    return bitmap;
+}
+
 void tlb_flush_by_mmuidx(CPUState *cpu, ...)
 {
     va_list argp;
+    unsigned long mmu_idx_bitmap;
+
     va_start(argp, cpu);
-    v_tlb_flush_by_mmuidx(cpu, argp);
+    mmu_idx_bitmap = make_mmu_index_bitmap(argp);
     va_end(argp);
+
+    tlb_debug("mmu_idx: 0x%04lx\n", mmu_idx_bitmap);
+
+    if (!qemu_cpu_is_self(cpu)) {
+        uint16_t pending_flushes =
+            mmu_idx_bitmap & ~atomic_mb_read(&cpu->pending_tlb_flush);
+        if (pending_flushes) {
+            tlb_debug("reduced mmu_idx: 0x%" PRIx16 "\n", pending_flushes);
+
+            atomic_or(&cpu->pending_tlb_flush, pending_flushes);
+            async_run_on_cpu(cpu, tlb_flush_by_mmuidx_async_work,
+                             host_int(pending_flushes));
+        }
+    } else {
+        tlb_flush_by_mmuidx_async_work(cpu, host_ulong(mmu_idx_bitmap));
+    }
 }
 
 static inline void tlb_flush_entry(CPUTLBEntry *tlb_entry, target_ulong addr)
@@ -233,16 +288,50 @@ void tlb_flush_page(CPUState *cpu, target_ulong addr)
     }
 }
 
-void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, ...)
+/* As we are going to hijack the bottom bits of the page address for a
+ * mmuidx bit mask we need to fail to build if we can't do that
+ */
+QEMU_BUILD_BUG_ON(NB_MMU_MODES > TARGET_PAGE_BITS);
+
+static void tlb_flush_page_by_mmuidx_async_work(CPUState *cpu,
+                                                run_on_cpu_data data)
 {
     CPUArchState *env = cpu->env_ptr;
-    int i, k;
-    va_list argp;
-
-    va_start(argp, addr);
+    target_ulong addr_and_mmuidx = (target_ulong) data.target_ptr;
+    target_ulong addr = addr_and_mmuidx & TARGET_PAGE_MASK;
+    unsigned long mmu_idx_bitmap = addr_and_mmuidx & ALL_MMUIDX_BITS;
+    int page = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+    int mmu_idx;
+    int i;
 
     assert_cpu_is_self(cpu);
-    tlb_debug("addr "TARGET_FMT_lx"\n", addr);
+
+    tlb_debug("page:%d addr:"TARGET_FMT_lx" mmu_idx%" PRIxPTR "\n",
+              page, addr, mmu_idx_bitmap);
+
+    for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
+        if (test_bit(mmu_idx, &mmu_idx_bitmap)) {
+            tlb_flush_entry(&env->tlb_table[mmu_idx][page], addr);
+
+            /* check whether there are vltb entries that need to be flushed */
+            for (i = 0; i < CPU_VTLB_SIZE; i++) {
+                tlb_flush_entry(&env->tlb_v_table[mmu_idx][i], addr);
+            }
+        }
+    }
+
+    tb_flush_jmp_cache(cpu, addr);
+}
+
+static void tlb_check_page_and_flush_by_mmuidx_async_work(CPUState *cpu,
+                                                          run_on_cpu_data data)
+{
+    CPUArchState *env = cpu->env_ptr;
+    target_ulong addr_and_mmuidx = (target_ulong) data.target_ptr;
+    target_ulong addr = addr_and_mmuidx & TARGET_PAGE_MASK;
+    unsigned long mmu_idx_bitmap = addr_and_mmuidx & ALL_MMUIDX_BITS;
+
+    tlb_debug("addr:"TARGET_FMT_lx" mmu_idx: %04lx\n", addr, mmu_idx_bitmap);
 
     /* Check if we need to flush due to large pages.  */
     if ((addr & env->tlb_flush_mask) == env->tlb_flush_addr) {
@@ -250,33 +339,35 @@ void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, ...)
                   TARGET_FMT_lx "/" TARGET_FMT_lx ")\n",
                   env->tlb_flush_addr, env->tlb_flush_mask);
-        v_tlb_flush_by_mmuidx(cpu, argp);
-        va_end(argp);
-        return;
+        tlb_flush_by_mmuidx_async_work(cpu, host_ulong(mmu_idx_bitmap));
+    } else {
+        tlb_flush_page_by_mmuidx_async_work(cpu, data);
     }
+}
 
-    addr &= TARGET_PAGE_MASK;
-    i = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
-
-    for (;;) {
-        int mmu_idx = va_arg(argp, int);
+void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, ...)
+{
+    unsigned long mmu_idx_bitmap;
+    target_ulong addr_and_mmu_idx;
+    va_list argp;
 
-        if (mmu_idx < 0) {
-            break;
-        }
+    va_start(argp, addr);
+    mmu_idx_bitmap = make_mmu_index_bitmap(argp);
+    va_end(argp);
 
-        tlb_debug("idx %d\n", mmu_idx);
+    tlb_debug("addr: "TARGET_FMT_lx" mmu_idx:%lx\n", addr, mmu_idx_bitmap);
 
-        tlb_flush_entry(&env->tlb_table[mmu_idx][i], addr);
+    /* This should already be page aligned */
+    addr_and_mmu_idx = addr & TARGET_PAGE_MASK;
+    addr_and_mmu_idx |= mmu_idx_bitmap;
 
-        /* check whether there are vltb entries that need to be flushed */
-        for (k = 0; k < CPU_VTLB_SIZE; k++) {
-            tlb_flush_entry(&env->tlb_v_table[mmu_idx][k], addr);
-        }
+    if (!qemu_cpu_is_self(cpu)) {
+        async_run_on_cpu(cpu, tlb_check_page_and_flush_by_mmuidx_async_work,
+                         target_ptr(addr_and_mmu_idx));
+    } else {
+        tlb_check_page_and_flush_by_mmuidx_async_work(
+            cpu, target_ptr(addr_and_mmu_idx));
     }
-    va_end(argp);
-
-    tb_flush_jmp_cache(cpu, addr);
 }
 
 void tlb_flush_page_all(target_ulong addr)
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 1fe5b99..4faf795 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -238,6 +238,7 @@ struct kvm_run;
 typedef union {
     int host_int;
     unsigned host_unsigned;
+    unsigned long host_unsigned_long;
     uintptr_t host_ptr;
     void *void_ptr; /* for (run_on_cpu_data) NULL casts */
     vaddr target_ptr;
@@ -391,17 +392,17 @@ struct CPUState {
      */
     bool throttle_thread_scheduled;
 
+    /* The pending_tlb_flush flag is set and cleared atomically to
+     * avoid potential races. The aim of the flag is to avoid
+     * unnecessary flushes.
+     */
+    uint16_t pending_tlb_flush;
+
     /* Note that this is accessed at the start of every TB via a negative
        offset from AREG0.  Leave this field at the end so as to make the
        (absolute value) offset as small as possible.  This reduces code
        size, especially for hosts without large memory offsets.  */
     uint32_t tcg_exit_req;
-
-    /* The pending_tlb_flush flag is set and cleared atomically to
-     * avoid potential races. The aim of the flag is to avoid
-     * unnecessary flushes.
-     */
-    bool pending_tlb_flush;
 };
 
 QTAILQ_HEAD(CPUTailQ, CPUState);
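The page-flush path above packs the flush address and the mmuidx bitmap into a single run_on_cpu_data word; this only works because QEMU_BUILD_BUG_ON guarantees NB_MMU_MODES <= TARGET_PAGE_BITS, so the bitmap fits in the bits that page alignment leaves zero. A stand-alone sketch of that packing, with made-up constant values (the real ones come from the target's headers):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative values only; the real ones come from the target headers. */
#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_MASK (~(((uint64_t)1 << TARGET_PAGE_BITS) - 1))
#define NB_MMU_MODES 7
#define ALL_MMUIDX_BITS ((1 << NB_MMU_MODES) - 1)

/* The trick only works while the bitmap fits below the page alignment. */
_Static_assert(NB_MMU_MODES <= TARGET_PAGE_BITS, "bitmap must fit in page bits");

static uint64_t pack_addr_and_mmuidx(uint64_t addr, unsigned long bitmap)
{
    /* addr is page aligned, so its low TARGET_PAGE_BITS bits are free */
    return (addr & TARGET_PAGE_MASK) | (bitmap & ALL_MMUIDX_BITS);
}

static uint64_t unpack_addr(uint64_t packed)
{
    return packed & TARGET_PAGE_MASK;
}

static unsigned long unpack_mmuidx(uint64_t packed)
{
    return packed & ALL_MMUIDX_BITS;
}
```

The async worker then splits the word back apart exactly as tlb_flush_page_by_mmuidx_async_work does with TARGET_PAGE_MASK and ALL_MMUIDX_BITS.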