From patchwork Thu Aug 11 15:24:23 2016
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 73788
Delivered-To: patch@linaro.org
From: Alex Bennée <alex.bennee@linaro.org>
To: mttcg@listserver.greensocs.com, qemu-devel@nongnu.org, fred.konrad@greensocs.com, a.rigo@virtualopensystems.com, cota@braap.org, bobby.prani@gmail.com, nikunj@linux.vnet.ibm.com
Date: Thu, 11 Aug 2016 16:24:23 +0100
Message-Id: <1470929064-4092-28-git-send-email-alex.bennee@linaro.org>
In-Reply-To: <1470929064-4092-1-git-send-email-alex.bennee@linaro.org>
References: <1470929064-4092-1-git-send-email-alex.bennee@linaro.org>
Subject: [Qemu-devel] [RFC v4 27/28] cputlb: make tlb_reset_dirty safe for MTTCG
Cc: peter.maydell@linaro.org, claudio.fontana@huawei.com, Peter Crosthwaite, jan.kiszka@siemens.com, mark.burton@greensocs.com, serge.fdrv@gmail.com, pbonzini@redhat.com, Alex Bennée, rth@twiddle.net

The main use case for tlb_reset_dirty is to set the TLB_NOTDIRTY flags
in TLB entries to force the slow path on writes. This is used to mark
page ranges containing code which has been translated so it can be
invalidated if written to.

To do this safely we need to ensure the TLB entries in question for
all vCPUs are updated before we attempt to run the code, otherwise a
race could be introduced.

To achieve this we atomically set the flag in tlb_reset_dirty_range
and take similar care when the flag is set as the TLB entry is filled.
The helper function is made static as it isn't used outside of
cputlb.c.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---
 cputlb.c              | 55 +++++++++++++++++++++++++++++++++++----------------
 include/exec/cputlb.h |  2 --
 2 files changed, 38 insertions(+), 19 deletions(-)

-- 
2.7.4

diff --git a/cputlb.c b/cputlb.c
index 945ea02..faeb195 100644
--- a/cputlb.c
+++ b/cputlb.c
@@ -275,32 +275,50 @@ void tlb_unprotect_code(ram_addr_t ram_addr)
     cpu_physical_memory_set_dirty_flag(ram_addr, DIRTY_MEMORY_CODE);
 }
 
-static bool tlb_is_dirty_ram(CPUTLBEntry *tlbe)
-{
-    return (tlbe->addr_write & (TLB_INVALID_MASK|TLB_MMIO|TLB_NOTDIRTY)) == 0;
-}
-
-void tlb_reset_dirty_range(CPUTLBEntry *tlb_entry, uintptr_t start,
+/*
+ * Dirty write flag handling
+ *
+ * When the TCG code writes to a location it looks up the address in
+ * the TLB and uses that data to compute the final address. If any of
+ * the lower bits of the address are set then the slow path is forced.
+ * There are a number of reasons to do this but for normal RAM the
+ * most usual is detecting writes to code regions which may invalidate
+ * generated code.
+ *
+ * Because we want other vCPUs to respond to changes straight away we
+ * update the te->addr_write field atomically. If the TLB entry has
+ * been changed by the vCPU in the mean time we skip the update.
+ */
+
+static void tlb_reset_dirty_range(CPUTLBEntry *tlb_entry, uintptr_t start,
                            uintptr_t length)
 {
-    uintptr_t addr;
+    /* paired with atomic_mb_set in tlb_set_page_with_attrs */
+    uintptr_t orig_addr = atomic_mb_read(&tlb_entry->addr_write);
+    uintptr_t addr = orig_addr;
 
-    if (tlb_is_dirty_ram(tlb_entry)) {
-        addr = (tlb_entry->addr_write & TARGET_PAGE_MASK) + tlb_entry->addend;
+    if ((addr & (TLB_INVALID_MASK | TLB_MMIO | TLB_NOTDIRTY)) == 0) {
+        addr &= TARGET_PAGE_MASK;
+        addr += atomic_read(&tlb_entry->addend);
         if ((addr - start) < length) {
-            tlb_entry->addr_write |= TLB_NOTDIRTY;
+            uintptr_t notdirty_addr = orig_addr | TLB_NOTDIRTY;
+            atomic_cmpxchg(&tlb_entry->addr_write, orig_addr, notdirty_addr);
         }
     }
 }
 
+/* This is a cross vCPU call (i.e. another vCPU resetting the flags of
+ * the target vCPU). As such care needs to be taken that we don't
+ * dangerously race with another vCPU update. The only thing actually
+ * updated is the target TLB entry ->addr_write flags.
+ */
 void tlb_reset_dirty(CPUState *cpu, ram_addr_t start1, ram_addr_t length)
 {
     CPUArchState *env;
 
     int mmu_idx;
 
-    assert_cpu_is_self(cpu);
-
     env = cpu->env_ptr;
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
         unsigned int i;
@@ -386,7 +404,7 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
     MemoryRegionSection *section;
     unsigned int index;
     target_ulong address;
-    target_ulong code_address;
+    target_ulong code_address, write_address;
     uintptr_t addend;
     CPUTLBEntry *te;
     hwaddr iotlb, xlat, sz;
@@ -443,21 +461,24 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
     } else {
         te->addr_code = -1;
     }
+
+    write_address = -1;
     if (prot & PAGE_WRITE) {
         if ((memory_region_is_ram(section->mr) && section->readonly)
             || memory_region_is_romd(section->mr)) {
             /* Write access calls the I/O callback.  */
-            te->addr_write = address | TLB_MMIO;
+            write_address = address | TLB_MMIO;
         } else if (memory_region_is_ram(section->mr)
                    && cpu_physical_memory_is_clean(
                        memory_region_get_ram_addr(section->mr) + xlat)) {
-            te->addr_write = address | TLB_NOTDIRTY;
+            write_address = address | TLB_NOTDIRTY;
         } else {
-            te->addr_write = address;
+            write_address = address;
         }
-    } else {
-        te->addr_write = -1;
     }
+
+    /* Pairs with flag setting in tlb_reset_dirty_range */
+    atomic_mb_set(&te->addr_write, write_address);
 }
 
 /* Add a new TLB entry, but without specifying the memory
diff --git a/include/exec/cputlb.h b/include/exec/cputlb.h
index d454c00..3f94178 100644
--- a/include/exec/cputlb.h
+++ b/include/exec/cputlb.h
@@ -23,8 +23,6 @@
 /* cputlb.c */
 void tlb_protect_code(ram_addr_t ram_addr);
 void tlb_unprotect_code(ram_addr_t ram_addr);
-void tlb_reset_dirty_range(CPUTLBEntry *tlb_entry, uintptr_t start,
-                           uintptr_t length);
 extern int tlb_flush_count;
 
 #endif