From patchwork Wed Feb  1 15:05:45 2017
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 93071
From: Alex Bennée
To: mttcg@listserver.greensocs.com, qemu-devel@nongnu.org,
    fred.konrad@greensocs.com, a.rigo@virtualopensystems.com, cota@braap.org,
    bobby.prani@gmail.com, nikunj@linux.vnet.ibm.com
Cc: peter.maydell@linaro.org, Peter Crosthwaite, jan.kiszka@siemens.com,
    mark.burton@greensocs.com, serge.fdrv@gmail.com, pbonzini@redhat.com,
    Alex Bennée, bamvor.zhangjian@linaro.org, rth@twiddle.net
Date: Wed, 1 Feb 2017 15:05:45 +0000
Message-Id: <20170201150553.9381-18-alex.bennee@linaro.org>
In-Reply-To: <20170201150553.9381-1-alex.bennee@linaro.org>
References: <20170201150553.9381-1-alex.bennee@linaro.org>
Subject: [Qemu-devel] [PATCH v9 17/25] cputlb: add tlb_flush_by_mmuidx async routines

This converts the remaining TLB flush routines to use async work when
they detect a cross-vCPU flush. The only minor complication is having
to serialise the var_list of MMU indexes into a form that can be
punted to an asynchronous job. The pending_tlb_flush field on QOM's
CPU structure also becomes a bitfield rather than a boolean.

Signed-off-by: Alex Bennée
Reviewed-by: Richard Henderson

---
v7
  - un-merged from the atomic cputlb patch in the last series
  - fix long line reported by checkpatch
v8
  - re-base merge/fixes
---
 cputlb.c          | 110 +++++++++++++++++++++++++++++++++++++++++++-----------
 include/qom/cpu.h |   2 +-
 2 files changed, 89 insertions(+), 23 deletions(-)

-- 
2.11.0
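For readers following the mechanics: the page-flush path below works
because a page-aligned address has its low TARGET_PAGE_BITS clear, so
a bitmap of up to 16 MMU indexes can ride in those bits and the pair
crosses the async_run_on_cpu boundary as a single run_on_cpu_data
word. Here is a minimal standalone sketch of that pack/unpack step.
It is not part of the patch; PAGE_BITS, NB_MODES and the values are
made-up stand-ins for QEMU's TARGET_PAGE_BITS and NB_MMU_MODES.

/* pack_unpack.c: standalone illustration, not QEMU code.
 * Build with e.g. "cc -std=c11 pack_unpack.c". */
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 12                           /* stand-in for TARGET_PAGE_BITS */
#define PAGE_MASK (~(((uint64_t)1 << PAGE_BITS) - 1))
#define NB_MODES  4                            /* stand-in for NB_MMU_MODES */
#define ALL_BITS  ((1 << NB_MODES) - 1)        /* mirrors ALL_MMUIDX_BITS */

int main(void)
{
    uint64_t addr = 0x7f2000402000;            /* page aligned by contract */
    uint16_t idxmap = (1 << 0) | (1 << 2);     /* flush MMU indexes 0 and 2 */

    /* Pack: the low bits of a page-aligned address are zero, so they
     * can carry the bitmap, as tlb_flush_page_by_mmuidx does. */
    uint64_t packed = (addr & PAGE_MASK) | idxmap;

    /* Unpack on the receiving side, as the *_async_work helpers do. */
    uint64_t out_addr = packed & PAGE_MASK;
    uint16_t out_map  = packed & ALL_BITS;

    assert(out_addr == addr && out_map == idxmap);
    printf("addr=0x%" PRIx64 " idxmap=0x%" PRIx16 "\n", out_addr, out_map);
    return 0;
}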
diff --git a/cputlb.c b/cputlb.c
index 97e5c12de8..c50254be26 100644
--- a/cputlb.c
+++ b/cputlb.c
@@ -68,6 +68,11 @@
  * target_ulong even on 32 bit builds */
 QEMU_BUILD_BUG_ON(sizeof(target_ulong) > sizeof(run_on_cpu_data));
 
+/* We currently can't handle more than 16 bits in the MMUIDX bitmask.
+ */
+QEMU_BUILD_BUG_ON(NB_MMU_MODES > 16);
+#define ALL_MMUIDX_BITS ((1 << NB_MMU_MODES) - 1)
+
 /* statistics */
 int tlb_flush_count;
 
@@ -102,7 +107,7 @@ static void tlb_flush_nocheck(CPUState *cpu)
 
     tb_unlock();
 
-    atomic_mb_set(&cpu->pending_tlb_flush, false);
+    atomic_mb_set(&cpu->pending_tlb_flush, 0);
 }
 
 static void tlb_flush_global_async_work(CPUState *cpu, run_on_cpu_data data)
@@ -113,7 +118,8 @@ static void tlb_flush_global_async_work(CPUState *cpu, run_on_cpu_data data)
 void tlb_flush(CPUState *cpu)
 {
     if (cpu->created && !qemu_cpu_is_self(cpu)) {
-        if (atomic_cmpxchg(&cpu->pending_tlb_flush, false, true) == true) {
+        if (atomic_mb_read(&cpu->pending_tlb_flush) != ALL_MMUIDX_BITS) {
+            atomic_mb_set(&cpu->pending_tlb_flush, ALL_MMUIDX_BITS);
             async_run_on_cpu(cpu, tlb_flush_global_async_work,
                              RUN_ON_CPU_NULL);
         }
@@ -122,17 +128,18 @@ void tlb_flush(CPUState *cpu)
     }
 }
 
-static inline void v_tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap)
+static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data data)
 {
     CPUArchState *env = cpu->env_ptr;
-    unsigned long mmu_idx_bitmask = idxmap;
+    unsigned long mmu_idx_bitmask = data.host_int;
     int mmu_idx;
 
     assert_cpu_is_self(cpu);
-    tlb_debug("start\n");
 
     tb_lock();
 
+    tlb_debug("start: mmu_idx:0x%04lx\n", mmu_idx_bitmask);
+
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
 
         if (test_bit(mmu_idx, &mmu_idx_bitmask)) {
@@ -145,12 +152,30 @@ static inline void v_tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap)
 
     memset(cpu->tb_jmp_cache, 0, sizeof(cpu->tb_jmp_cache));
 
+    tlb_debug("done\n");
+
     tb_unlock();
 }
 
 void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap)
 {
-    v_tlb_flush_by_mmuidx(cpu, idxmap);
+    tlb_debug("mmu_idx: 0x%" PRIx16 "\n", idxmap);
+
+    if (!qemu_cpu_is_self(cpu)) {
+        uint16_t pending_flushes = idxmap;
+        pending_flushes &= ~atomic_mb_read(&cpu->pending_tlb_flush);
+
+        if (pending_flushes) {
+            tlb_debug("reduced mmu_idx: 0x%" PRIx16 "\n", pending_flushes);
+
+            atomic_or(&cpu->pending_tlb_flush, pending_flushes);
+            async_run_on_cpu(cpu, tlb_flush_by_mmuidx_async_work,
+                             RUN_ON_CPU_HOST_INT(pending_flushes));
+        }
+    } else {
+        tlb_flush_by_mmuidx_async_work(cpu,
+                                       RUN_ON_CPU_HOST_INT(idxmap));
+    }
 }
 
 static inline void tlb_flush_entry(CPUTLBEntry *tlb_entry, target_ulong addr)
@@ -215,27 +240,26 @@ void tlb_flush_page(CPUState *cpu, target_ulong addr)
     }
 }
 
-void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, uint16_t idxmap)
+/* As we are going to hijack the bottom bits of the page address for a
+ * mmuidx bit mask we need to fail to build if we can't do that
+ */
+QEMU_BUILD_BUG_ON(NB_MMU_MODES > TARGET_PAGE_BITS_MIN);
+
+static void tlb_flush_page_by_mmuidx_async_work(CPUState *cpu,
+                                                run_on_cpu_data data)
 {
     CPUArchState *env = cpu->env_ptr;
-    unsigned long mmu_idx_bitmap = idxmap;
-    int i, page, mmu_idx;
+    target_ulong addr_and_mmuidx = (target_ulong) data.target_ptr;
+    target_ulong addr = addr_and_mmuidx & TARGET_PAGE_MASK;
+    unsigned long mmu_idx_bitmap = addr_and_mmuidx & ALL_MMUIDX_BITS;
+    int page = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+    int mmu_idx;
+    int i;
 
     assert_cpu_is_self(cpu);
 
-    tlb_debug("addr "TARGET_FMT_lx"\n", addr);
-
-    /* Check if we need to flush due to large pages.  */
-    if ((addr & env->tlb_flush_mask) == env->tlb_flush_addr) {
-        tlb_debug("forced full flush ("
-                  TARGET_FMT_lx "/" TARGET_FMT_lx ")\n",
-                  env->tlb_flush_addr, env->tlb_flush_mask);
-
-        v_tlb_flush_by_mmuidx(cpu, idxmap);
-        return;
-    }
-
-    addr &= TARGET_PAGE_MASK;
-    page = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+    tlb_debug("page:%d addr:"TARGET_FMT_lx" mmu_idx:0x%lx\n",
+              page, addr, mmu_idx_bitmap);
 
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
         if (test_bit(mmu_idx, &mmu_idx_bitmap)) {
@@ -251,6 +275,48 @@ void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, uint16_t idxmap)
     tb_flush_jmp_cache(cpu, addr);
 }
 
+static void tlb_check_page_and_flush_by_mmuidx_async_work(CPUState *cpu,
+                                                          run_on_cpu_data data)
+{
+    CPUArchState *env = cpu->env_ptr;
+    target_ulong addr_and_mmuidx = (target_ulong) data.target_ptr;
+    target_ulong addr = addr_and_mmuidx & TARGET_PAGE_MASK;
+    unsigned long mmu_idx_bitmap = addr_and_mmuidx & ALL_MMUIDX_BITS;
+
+    tlb_debug("addr:"TARGET_FMT_lx" mmu_idx: %04lx\n", addr, mmu_idx_bitmap);
+
+    /* Check if we need to flush due to large pages.  */
+    if ((addr & env->tlb_flush_mask) == env->tlb_flush_addr) {
+        tlb_debug("forced full flush ("
+                  TARGET_FMT_lx "/" TARGET_FMT_lx ")\n",
+                  env->tlb_flush_addr, env->tlb_flush_mask);
+
+        tlb_flush_by_mmuidx_async_work(cpu,
+                                       RUN_ON_CPU_HOST_INT(mmu_idx_bitmap));
+    } else {
+        tlb_flush_page_by_mmuidx_async_work(cpu, data);
+    }
+}
+
+void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, uint16_t idxmap)
+{
+    target_ulong addr_and_mmu_idx;
+
+    tlb_debug("addr: "TARGET_FMT_lx" mmu_idx:%" PRIx16 "\n", addr, idxmap);
+
+    /* This should already be page aligned */
+    addr_and_mmu_idx = addr & TARGET_PAGE_MASK;
+    addr_and_mmu_idx |= idxmap;
+
+    if (!qemu_cpu_is_self(cpu)) {
+        async_run_on_cpu(cpu, tlb_check_page_and_flush_by_mmuidx_async_work,
+                         RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
+    } else {
+        tlb_check_page_and_flush_by_mmuidx_async_work(
+            cpu, RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
+    }
+}
+
 void tlb_flush_page_all(target_ulong addr)
 {
     CPUState *cpu;
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 7f1d6a81a0..d996e5a0f4 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -403,7 +403,7 @@ struct CPUState {
      * avoid potential races. The aim of the flag is to avoid
      * unnecessary flushes.
      */
-    bool pending_tlb_flush;
+    uint16_t pending_tlb_flush;
 };
 
 QTAILQ_HEAD(CPUTailQ, CPUState);
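As a closing note, the queueing rule the patch applies in
tlb_flush_by_mmuidx (only queue work for MMU indexes that do not
already have a flush in flight, and let the target vCPU clear the
field once it services the flush) can be restated outside QEMU with
C11 atomics. This is an illustrative sketch under that reading only;
pending_tlb_flush, queue_flush and service_flush here are local
stand-ins, not QEMU APIs.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

static _Atomic uint16_t pending_tlb_flush;   /* plays cpu->pending_tlb_flush */

/* Cross-CPU path: returns true if an async job must still be queued,
 * having first dropped any MMU indexes with a flush already in flight. */
static bool queue_flush(uint16_t idxmap)
{
    uint16_t pending = idxmap & ~atomic_load(&pending_tlb_flush);

    if (!pending) {
        return false;                        /* every index already queued */
    }
    atomic_fetch_or(&pending_tlb_flush, pending);
    return true;                             /* caller queues the async work */
}

/* Target-CPU path: after the flush completes, clear the field, as
 * tlb_flush_nocheck does with atomic_mb_set(..., 0). */
static void service_flush(void)
{
    /* ... flush the selected TLB entries here ... */
    atomic_store(&pending_tlb_flush, 0);
}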