From patchwork Fri Aug 7 12:33:25 2015
X-Patchwork-Submitter: Peter Maydell <peter.maydell@linaro.org>
X-Patchwork-Id: 52044
From: Peter Maydell <peter.maydell@linaro.org>
To: qemu-devel@nongnu.org
Cc: patches@linaro.org, "Edgar E. Iglesias", Alex Bennée, Paolo Bonzini
Subject: [PATCH 1/6] cputlb: Add functions for flushing TLB for a single MMU index
Date: Fri, 7 Aug 2015 13:33:25 +0100
Message-Id: <1438950810-28618-2-git-send-email-peter.maydell@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1438950810-28618-1-git-send-email-peter.maydell@linaro.org>
References: <1438950810-28618-1-git-send-email-peter.maydell@linaro.org>

Guest CPU TLB maintenance operations may be sufficiently specialized
to only need to flush TLB entries corresponding to a particular MMU
index. Implement cputlb functions for this, to avoid the inefficiency
of flushing TLB entries which we don't need to.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 cputlb.c                | 81 +++++++++++++++++++++++++++++++++++++++++++++++++
 include/exec/exec-all.h | 47 ++++++++++++++++++++++++++++
 2 files changed, 128 insertions(+)

diff --git a/cputlb.c b/cputlb.c
index a506086..a1996ba 100644
--- a/cputlb.c
+++ b/cputlb.c
@@ -69,6 +69,39 @@ void tlb_flush(CPUState *cpu, int flush_global)
     tlb_flush_count++;
 }
 
+static inline void v_tlb_flush_by_mmuidx(CPUState *cpu, va_list argp)
+{
+    CPUArchState *env = cpu->env_ptr;
+
+#if defined(DEBUG_TLB)
+    printf("tlb_flush_by_mmuidx:\n");
+#endif
+    /* must reset current TB so that interrupts cannot modify the
+       links while we are modifying them */
+    cpu->current_tb = NULL;
+
+    for (;;) {
+        int mmu_idx = va_arg(argp, int);
+
+        if (mmu_idx < 0) {
+            break;
+        }
+
+        memset(env->tlb_table[mmu_idx], -1, sizeof(env->tlb_table[0]));
+        memset(env->tlb_v_table[mmu_idx], -1, sizeof(env->tlb_v_table[0]));
+    }
+
+    memset(cpu->tb_jmp_cache, 0, sizeof(cpu->tb_jmp_cache));
+}
+
+void tlb_flush_by_mmuidx(CPUState *cpu, ...)
+{
+    va_list argp;
+    va_start(argp, cpu);
+    v_tlb_flush_by_mmuidx(cpu, argp);
+    va_end(argp);
+}
+
 static inline void tlb_flush_entry(CPUTLBEntry *tlb_entry, target_ulong addr)
 {
     if (addr == (tlb_entry->addr_read &
@@ -121,6 +154,54 @@ void tlb_flush_page(CPUState *cpu, target_ulong addr)
     tb_flush_jmp_cache(cpu, addr);
 }
 
+void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, ...)
+{
+    CPUArchState *env = cpu->env_ptr;
+    int i, k;
+    va_list argp;
+
+    va_start(argp, addr);
+
+#if defined(DEBUG_TLB)
+    printf("tlb_flush_page_by_mmu_idx: " TARGET_FMT_lx "\n", addr);
+#endif
+    /* Check if we need to flush due to large pages.  */
+    if ((addr & env->tlb_flush_mask) == env->tlb_flush_addr) {
+#if defined(DEBUG_TLB)
+        printf("tlb_flush_page_by_mmu_idx: forced full flush ("
+               TARGET_FMT_lx "/" TARGET_FMT_lx ")\n",
+               env->tlb_flush_addr, env->tlb_flush_mask);
+#endif
+        v_tlb_flush_by_mmuidx(cpu, argp);
+        va_end(argp);
+        return;
+    }
+    /* must reset current TB so that interrupts cannot modify the
+       links while we are modifying them */
+    cpu->current_tb = NULL;
+
+    addr &= TARGET_PAGE_MASK;
+    i = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+
+    for (;;) {
+        int mmu_idx = va_arg(argp, int);
+
+        if (mmu_idx < 0) {
+            break;
+        }
+
+        tlb_flush_entry(&env->tlb_table[mmu_idx][i], addr);
+
+        /* check whether there are vtlb entries that need to be flushed */
+        for (k = 0; k < CPU_VTLB_SIZE; k++) {
+            tlb_flush_entry(&env->tlb_v_table[mmu_idx][k], addr);
+        }
+    }
+    va_end(argp);
+
+    tb_flush_jmp_cache(cpu, addr);
+}
+
 /* update the TLBs so that writes to code in the virtual page
    'addr' can be detected */
 void tlb_protect_code(ram_addr_t ram_addr)
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index a6fce04..4933683 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -96,8 +96,46 @@ bool qemu_in_vcpu_thread(void);
 void cpu_reload_memory_map(CPUState *cpu);
 void tcg_cpu_address_space_init(CPUState *cpu, AddressSpace *as);
 /* cputlb.c */
+/**
+ * tlb_flush_page:
+ * @cpu: CPU whose TLB should be flushed
+ * @addr: virtual address of page to be flushed
+ *
+ * Flush one page from the TLB of the specified CPU, for all
+ * MMU indexes.
+ */
 void tlb_flush_page(CPUState *cpu, target_ulong addr);
+/**
+ * tlb_flush:
+ * @cpu: CPU whose TLB should be flushed
+ * @flush_global: ignored
+ *
+ * Flush the entire TLB for the specified CPU.
+ * The flush_global flag is in theory an indicator of whether the whole
+ * TLB should be flushed, or only those entries not marked global.
+ * In practice QEMU does not implement any global/not global flag for
+ * TLB entries, and the argument is ignored.
+ */
 void tlb_flush(CPUState *cpu, int flush_global);
+/**
+ * tlb_flush_page_by_mmuidx:
+ * @cpu: CPU whose TLB should be flushed
+ * @addr: virtual address of page to be flushed
+ * @...: list of MMU indexes to flush, terminated by a negative value
+ *
+ * Flush one page from the TLB of the specified CPU, for the specified
+ * MMU indexes.
+ */
+void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, ...);
+/**
+ * tlb_flush_by_mmuidx:
+ * @cpu: CPU whose TLB should be flushed
+ * @...: list of MMU indexes to flush, terminated by a negative value
+ *
+ * Flush all entries from the TLB of the specified CPU, for the specified
+ * MMU indexes.
+ */
+void tlb_flush_by_mmuidx(CPUState *cpu, ...);
 void tlb_set_page(CPUState *cpu, target_ulong vaddr,
                   hwaddr paddr, int prot,
                   int mmu_idx, target_ulong size);
@@ -115,6 +153,15 @@ static inline void tlb_flush_page(CPUState *cpu, target_ulong addr)
 static inline void tlb_flush(CPUState *cpu, int flush_global)
 {
 }
+
+static inline void tlb_flush_page_by_mmuidx(CPUState *cpu,
+                                            target_ulong addr, ...)
+{
+}
+
+static inline void tlb_flush_by_mmuidx(CPUState *cpu, ...)
+{
+}
 #endif
 
 #define CODE_GEN_ALIGN 16 /* must be >= of the size of a icache line */
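
As an illustration of the calling convention only: the short standalone
program below is not part of the patch or of QEMU (sketch.c,
toy_flush_by_mmuidx() and cpu_index are made-up names). It exercises the
same negative-terminated va_list pattern that v_tlb_flush_by_mmuidx()
uses, so the loop logic can be tried outside a QEMU build:

    /* sketch.c: demonstrate the negative-terminated MMU index list.
     * toy_flush_by_mmuidx() stands in for the real cputlb code: instead
     * of clearing TLB tables it just reports which indexes it would
     * flush. Build and run with: cc -o sketch sketch.c && ./sketch
     */
    #include <stdarg.h>
    #include <stdio.h>

    static void toy_flush_by_mmuidx(int cpu_index, ...)
    {
        va_list argp;

        va_start(argp, cpu_index);
        for (;;) {
            int mmu_idx = va_arg(argp, int);

            /* A negative value terminates the index list, as documented
             * for tlb_flush_by_mmuidx() in exec-all.h above. */
            if (mmu_idx < 0) {
                break;
            }
            printf("CPU %d: would flush TLB for MMU index %d\n",
                   cpu_index, mmu_idx);
        }
        va_end(argp);
    }

    int main(void)
    {
        /* Ask for MMU indexes 0 and 2 only; -1 ends the list. */
        toy_flush_by_mmuidx(0, 0, 2, -1);
        return 0;
    }

With the real API, a target that wanted to flush only MMU indexes 0 and 2
of CPU 'cs' would likewise call tlb_flush_by_mmuidx(cs, 0, 2, -1).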