From patchwork Fri Aug 14 10:41:14 2015
X-Patchwork-Submitter: Peter Maydell
X-Patchwork-Id: 52422
From: Peter Maydell <peter.maydell@linaro.org>
To: qemu-devel@nongnu.org
Cc: patches@linaro.org, "Edgar E. Iglesias", Alex Bennée, Paolo Bonzini
Iglesias" , =?UTF-8?q?Alex=20Benn=C3=A9e?= , Paolo Bonzini Subject: [PATCH v2 1/6] cputlb: Add functions for flushing TLB for a single MMU index Date: Fri, 14 Aug 2015 11:41:14 +0100 Message-Id: <1439548879-1972-2-git-send-email-peter.maydell@linaro.org> X-Mailer: git-send-email 1.7.10.4 In-Reply-To: <1439548879-1972-1-git-send-email-peter.maydell@linaro.org> References: <1439548879-1972-1-git-send-email-peter.maydell@linaro.org> X-Removed-Original-Auth: Dkim didn't pass. X-Original-Sender: peter.maydell@linaro.org X-Original-Authentication-Results: mx.google.com; spf=pass (google.com: domain of patch+caf_=patchwork-forward=linaro.org@linaro.org designates 209.85.215.54 as permitted sender) smtp.mailfrom=patch+caf_=patchwork-forward=linaro.org@linaro.org Precedence: list Mailing-list: list patchwork-forward@linaro.org; contact patchwork-forward+owners@linaro.org List-ID: X-Google-Group-Id: 836684582541 List-Post: , List-Help: , List-Archive: List-Unsubscribe: , Guest CPU TLB maintenance operations may be sufficiently specialized to only need to flush TLB entries corresponding to a particular MMU index. Implement cputlb functions for this, to avoid the inefficiency of flushing TLB entries which we don't need to. Signed-off-by: Peter Maydell --- cputlb.c | 97 +++++++++++++++++++++++++++++++++++++++++++++++++ include/exec/exec-all.h | 47 ++++++++++++++++++++++++ 2 files changed, 144 insertions(+) diff --git a/cputlb.c b/cputlb.c index a506086..4bc6c24 100644 --- a/cputlb.c +++ b/cputlb.c @@ -69,6 +69,47 @@ void tlb_flush(CPUState *cpu, int flush_global) tlb_flush_count++; } +static inline void v_tlb_flush_by_mmuidx(CPUState *cpu, va_list argp) +{ + CPUArchState *env = cpu->env_ptr; + +#if defined(DEBUG_TLB) + printf("tlb_flush_by_mmuidx:"); +#endif + /* must reset current TB so that interrupts cannot modify the + links while we are modifying them */ + cpu->current_tb = NULL; + + for (;;) { + int mmu_idx = va_arg(argp, int); + + if (mmu_idx < 0) { + break; + } + +#if defined(DEBUG_TLB) + printf(" %d", mmu_idx); +#endif + + memset(env->tlb_table[mmu_idx], -1, sizeof(env->tlb_table[0])); + memset(env->tlb_v_table[mmu_idx], -1, sizeof(env->tlb_v_table[0])); + } + +#if defined(DEBUG_TLB) + printf("\n"); +#endif + + memset(cpu->tb_jmp_cache, 0, sizeof(cpu->tb_jmp_cache)); +} + +void tlb_flush_by_mmuidx(CPUState *cpu, ...) +{ + va_list argp; + va_start(argp, cpu); + v_tlb_flush_by_mmuidx(cpu, argp); + va_end(argp); +} + static inline void tlb_flush_entry(CPUTLBEntry *tlb_entry, target_ulong addr) { if (addr == (tlb_entry->addr_read & @@ -121,6 +162,62 @@ void tlb_flush_page(CPUState *cpu, target_ulong addr) tb_flush_jmp_cache(cpu, addr); } +void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, ...) +{ + CPUArchState *env = cpu->env_ptr; + int i, k; + va_list argp; + + va_start(argp, addr); + +#if defined(DEBUG_TLB) + printf("tlb_flush_page_by_mmu_idx: " TARGET_FMT_lx, addr); +#endif + /* Check if we need to flush due to large pages. 
---
 cputlb.c                | 97 +++++++++++++++++++++++++++++++++++++++++++++++++
 include/exec/exec-all.h | 47 ++++++++++++++++++++++++
 2 files changed, 144 insertions(+)

diff --git a/cputlb.c b/cputlb.c
index a506086..4bc6c24 100644
--- a/cputlb.c
+++ b/cputlb.c
@@ -69,6 +69,47 @@ void tlb_flush(CPUState *cpu, int flush_global)
     tlb_flush_count++;
 }
 
+static inline void v_tlb_flush_by_mmuidx(CPUState *cpu, va_list argp)
+{
+    CPUArchState *env = cpu->env_ptr;
+
+#if defined(DEBUG_TLB)
+    printf("tlb_flush_by_mmuidx:");
+#endif
+    /* must reset current TB so that interrupts cannot modify the
+       links while we are modifying them */
+    cpu->current_tb = NULL;
+
+    for (;;) {
+        int mmu_idx = va_arg(argp, int);
+
+        if (mmu_idx < 0) {
+            break;
+        }
+
+#if defined(DEBUG_TLB)
+        printf(" %d", mmu_idx);
+#endif
+
+        memset(env->tlb_table[mmu_idx], -1, sizeof(env->tlb_table[0]));
+        memset(env->tlb_v_table[mmu_idx], -1, sizeof(env->tlb_v_table[0]));
+    }
+
+#if defined(DEBUG_TLB)
+    printf("\n");
+#endif
+
+    memset(cpu->tb_jmp_cache, 0, sizeof(cpu->tb_jmp_cache));
+}
+
+void tlb_flush_by_mmuidx(CPUState *cpu, ...)
+{
+    va_list argp;
+    va_start(argp, cpu);
+    v_tlb_flush_by_mmuidx(cpu, argp);
+    va_end(argp);
+}
+
 static inline void tlb_flush_entry(CPUTLBEntry *tlb_entry, target_ulong addr)
 {
     if (addr == (tlb_entry->addr_read &
@@ -121,6 +162,62 @@ void tlb_flush_page(CPUState *cpu, target_ulong addr)
     tb_flush_jmp_cache(cpu, addr);
 }
 
+void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, ...)
+{
+    CPUArchState *env = cpu->env_ptr;
+    int i, k;
+    va_list argp;
+
+    va_start(argp, addr);
+
+#if defined(DEBUG_TLB)
+    printf("tlb_flush_page_by_mmuidx: " TARGET_FMT_lx, addr);
+#endif
+    /* Check if we need to flush due to large pages.  */
+    if ((addr & env->tlb_flush_mask) == env->tlb_flush_addr) {
+#if defined(DEBUG_TLB)
+        printf(" forced full flush ("
+               TARGET_FMT_lx "/" TARGET_FMT_lx ")\n",
+               env->tlb_flush_addr, env->tlb_flush_mask);
+#endif
+        v_tlb_flush_by_mmuidx(cpu, argp);
+        va_end(argp);
+        return;
+    }
+    /* must reset current TB so that interrupts cannot modify the
+       links while we are modifying them */
+    cpu->current_tb = NULL;
+
+    addr &= TARGET_PAGE_MASK;
+    i = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+
+    for (;;) {
+        int mmu_idx = va_arg(argp, int);
+
+        if (mmu_idx < 0) {
+            break;
+        }
+
+#if defined(DEBUG_TLB)
+        printf(" %d", mmu_idx);
+#endif
+
+        tlb_flush_entry(&env->tlb_table[mmu_idx][i], addr);
+
+        /* check whether there are vtlb entries that need to be flushed */
+        for (k = 0; k < CPU_VTLB_SIZE; k++) {
+            tlb_flush_entry(&env->tlb_v_table[mmu_idx][k], addr);
+        }
+    }
+    va_end(argp);
+
+#if defined(DEBUG_TLB)
+    printf("\n");
+#endif
+
+    tb_flush_jmp_cache(cpu, addr);
+}
+
 /* update the TLBs so that writes to code in the virtual
    page 'addr' can be detected */
 void tlb_protect_code(ram_addr_t ram_addr)
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index a6fce04..4933683 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -96,8 +96,46 @@ bool qemu_in_vcpu_thread(void);
 void cpu_reload_memory_map(CPUState *cpu);
 void tcg_cpu_address_space_init(CPUState *cpu, AddressSpace *as);
 /* cputlb.c */
+/**
+ * tlb_flush_page:
+ * @cpu: CPU whose TLB should be flushed
+ * @addr: virtual address of page to be flushed
+ *
+ * Flush one page from the TLB of the specified CPU, for all
+ * MMU indexes.
+ */
 void tlb_flush_page(CPUState *cpu, target_ulong addr);
+/**
+ * tlb_flush:
+ * @cpu: CPU whose TLB should be flushed
+ * @flush_global: ignored
+ *
+ * Flush the entire TLB for the specified CPU.
+ * The flush_global flag is in theory an indicator of whether the whole
+ * TLB should be flushed, or only those entries not marked global.
+ * In practice QEMU does not implement any global/not global flag for
+ * TLB entries, and the argument is ignored.
+ */
 void tlb_flush(CPUState *cpu, int flush_global);
+/**
+ * tlb_flush_page_by_mmuidx:
+ * @cpu: CPU whose TLB should be flushed
+ * @addr: virtual address of page to be flushed
+ * @...: list of MMU indexes to flush, terminated by a negative value
+ *
+ * Flush one page from the TLB of the specified CPU, for the specified
+ * MMU indexes.
+ */
+void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, ...);
+/**
+ * tlb_flush_by_mmuidx:
+ * @cpu: CPU whose TLB should be flushed
+ * @...: list of MMU indexes to flush, terminated by a negative value
+ *
+ * Flush all entries from the TLB of the specified CPU, for the specified
+ * MMU indexes.
+ */
+void tlb_flush_by_mmuidx(CPUState *cpu, ...);
 void tlb_set_page(CPUState *cpu, target_ulong vaddr,
                   hwaddr paddr, int prot,
                   int mmu_idx, target_ulong size);
@@ -115,6 +153,15 @@ static inline void tlb_flush_page(CPUState *cpu, target_ulong addr)
 static inline void tlb_flush(CPUState *cpu, int flush_global)
 {
 }
+
+static inline void tlb_flush_page_by_mmuidx(CPUState *cpu,
+                                            target_ulong addr, ...)
+{
+}
+
+static inline void tlb_flush_by_mmuidx(CPUState *cpu, ...)
+{
+}
 #endif
 
 #define CODE_GEN_ALIGN 16 /* must be >= of the size of a icache line */
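
A closing note on how tlb_flush_page_by_mmuidx() above locates the one
entry to flush: the QEMU TLB is direct-mapped, so the candidate entry
index is simply a slice of the virtual address. The standalone sketch
below works through the arithmetic; the constant values (4K pages,
256-entry TLB) mirror common QEMU configurations but are assumptions of
this example, not values taken from the patch.

    #include <stdio.h>

    #define TARGET_PAGE_BITS 12   /* assumed: 4K pages */
    #define CPU_TLB_SIZE     256  /* assumed: 256 entries per MMU index */

    int main(void)
    {
        unsigned long addr = 0x12345678;
        /* Same computation as in tlb_flush_page_by_mmuidx(): drop the
         * page-offset bits, then keep the low log2(CPU_TLB_SIZE) bits of
         * the page number.  0x12345678 >> 12 == 0x12345, and
         * 0x12345 & 0xff == 0x45, so this page lives in TLB slot 0x45.
         */
        int i = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
        printf("page 0x%lx -> TLB index %d\n",
               addr & ~((1UL << TARGET_PAGE_BITS) - 1), i);
        return 0;
    }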