From patchwork Mon Sep 23 23:00:03 2019
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 174235
From: Richard Henderson
To: qemu-devel@nongnu.org
Subject: [PATCH v4 15/16] cputlb: Pass retaddr to tb_invalidate_phys_page_fast
Date: Mon, 23 Sep 2019 16:00:03 -0700
Message-Id: <20190923230004.9231-16-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190923230004.9231-1-richard.henderson@linaro.org>
References: <20190923230004.9231-1-richard.henderson@linaro.org>
Cc: pbonzini@redhat.com, alex.bennee@linaro.org, stefanha@redhat.com,
 david@redhat.com

Rather than rely on cpu->mem_io_pc, pass retaddr down directly.

Within tb_invalidate_phys_page_range__locked, the is_cpu_write_access
parameter is non-zero exactly when retaddr would be non-zero, so that
is a simple replacement.

Recognize that current_tb_not_found is true only when mem_io_pc
(and now retaddr) are also non-zero, so remove a redundant test.

Reviewed-by: David Hildenbrand
Signed-off-by: Richard Henderson
---
 accel/tcg/translate-all.h |  3 ++-
 accel/tcg/cputlb.c        |  6 +-----
 accel/tcg/translate-all.c | 39 +++++++++++++++++++--------------------
 3 files changed, 22 insertions(+), 26 deletions(-)

-- 
2.17.1

Reviewed-by: Alex Bennée

diff --git a/accel/tcg/translate-all.h b/accel/tcg/translate-all.h
index 31f2117188..135c1ea96a 100644
--- a/accel/tcg/translate-all.h
+++ b/accel/tcg/translate-all.h
@@ -27,7 +27,8 @@ struct page_collection *page_collection_lock(tb_page_addr_t start,
                                              tb_page_addr_t end);
 void page_collection_unlock(struct page_collection *set);
 void tb_invalidate_phys_page_fast(struct page_collection *pages,
-                                  tb_page_addr_t start, int len);
+                                  tb_page_addr_t start, int len,
+                                  uintptr_t retaddr);
 void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end);
 void tb_check_watchpoint(CPUState *cpu);
 
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 0ca6ee60b3..ea5d12c59d 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1093,11 +1093,7 @@ static void notdirty_write(CPUState *cpu, vaddr mem_vaddr, unsigned size,
     if (!cpu_physical_memory_get_dirty_flag(ram_addr, DIRTY_MEMORY_CODE)) {
         struct page_collection *pages
             = page_collection_lock(ram_addr, ram_addr + size);
-
-        /* We require mem_io_pc in tb_invalidate_phys_page_range. */
-        cpu->mem_io_pc = retaddr;
-
-        tb_invalidate_phys_page_fast(pages, ram_addr, size);
+        tb_invalidate_phys_page_fast(pages, ram_addr, size, retaddr);
         page_collection_unlock(pages);
     }
 
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index de4b697163..db77fb221b 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -1889,7 +1889,7 @@ static void
 tb_invalidate_phys_page_range__locked(struct page_collection *pages,
                                       PageDesc *p, tb_page_addr_t start,
                                       tb_page_addr_t end,
-                                      int is_cpu_write_access)
+                                      uintptr_t retaddr)
 {
     TranslationBlock *tb;
     tb_page_addr_t tb_start, tb_end;
@@ -1897,9 +1897,9 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
 #ifdef TARGET_HAS_PRECISE_SMC
     CPUState *cpu = current_cpu;
     CPUArchState *env = NULL;
-    int current_tb_not_found = is_cpu_write_access;
+    bool current_tb_not_found = retaddr != 0;
+    bool current_tb_modified = false;
     TranslationBlock *current_tb = NULL;
-    int current_tb_modified = 0;
     target_ulong current_pc = 0;
     target_ulong current_cs_base = 0;
     uint32_t current_flags = 0;
@@ -1931,24 +1931,21 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
         if (!(tb_end <= start || tb_start >= end)) {
 #ifdef TARGET_HAS_PRECISE_SMC
             if (current_tb_not_found) {
-                current_tb_not_found = 0;
-                current_tb = NULL;
-                if (cpu->mem_io_pc) {
-                    /* now we have a real cpu fault */
-                    current_tb = tcg_tb_lookup(cpu->mem_io_pc);
-                }
+                current_tb_not_found = false;
+                /* now we have a real cpu fault */
+                current_tb = tcg_tb_lookup(retaddr);
             }
             if (current_tb == tb &&
                 (tb_cflags(current_tb) & CF_COUNT_MASK) != 1) {
-                /* If we are modifying the current TB, we must stop
-                   its execution. We could be more precise by checking
-                   that the modification is after the current PC, but it
-                   would require a specialized function to partially
-                   restore the CPU state */
-
-                current_tb_modified = 1;
-                cpu_restore_state_from_tb(cpu, current_tb,
-                                          cpu->mem_io_pc, true);
+                /*
+                 * If we are modifying the current TB, we must stop
+                 * its execution. We could be more precise by checking
+                 * that the modification is after the current PC, but it
+                 * would require a specialized function to partially
+                 * restore the CPU state.
+                 */
+                current_tb_modified = true;
+                cpu_restore_state_from_tb(cpu, current_tb, retaddr, true);
                 cpu_get_tb_cpu_state(env, &current_pc, &current_cs_base,
                                      &current_flags);
             }
@@ -2042,7 +2039,8 @@ void tb_invalidate_phys_range(target_ulong start, target_ulong end)
  * Call with all @pages in the range [@start, @start + len[ locked.
  */
 void tb_invalidate_phys_page_fast(struct page_collection *pages,
-                                  tb_page_addr_t start, int len)
+                                  tb_page_addr_t start, int len,
+                                  uintptr_t retaddr)
 {
     PageDesc *p;
 
@@ -2069,7 +2067,8 @@ void tb_invalidate_phys_page_fast(struct page_collection *pages,
         }
     } else {
     do_invalidate:
-        tb_invalidate_phys_page_range__locked(pages, p, start, start + len, 1);
+        tb_invalidate_phys_page_range__locked(pages, p, start, start + len,
+                                              retaddr);
     }
 }
 #else
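
For reference, a minimal standalone sketch of the pattern the commit message
describes: instead of stashing the caller's return address in a per-CPU field
(cpu->mem_io_pc) for a callee to read back later, the value is threaded through
the call chain as an explicit parameter.  The names below (fake_cpu,
invalidate_old/invalidate_new, notdirty_write_demo) are illustrative stand-ins,
not the real QEMU functions.

/*
 * Sketch only, not QEMU code: explicit retaddr parameter vs. the old
 * cpu->mem_io_pc side channel.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

struct fake_cpu {
    uintptr_t mem_io_pc;                /* old style: implicit side channel */
};

/* Old style: the callee digs the return address out of CPU state. */
static void invalidate_old(const struct fake_cpu *cpu, uintptr_t start, int len)
{
    printf("old: [0x%" PRIxPTR ", +%d) pc from cpu->mem_io_pc=0x%" PRIxPTR "\n",
           start, len, cpu->mem_io_pc);
}

/*
 * New style: retaddr is an explicit argument.  retaddr != 0 exactly when
 * there is a real guest fault to unwind, so the callee needs no separate
 * "do we have a PC?" test.
 */
static void invalidate_new(uintptr_t start, int len, uintptr_t retaddr)
{
    printf("new: [0x%" PRIxPTR ", +%d) pc from retaddr=0x%" PRIxPTR "\n",
           start, len, retaddr);
}

static void notdirty_write_demo(struct fake_cpu *cpu, uintptr_t addr, int size,
                                uintptr_t retaddr)
{
    cpu->mem_io_pc = retaddr;              /* before: stash, then call */
    invalidate_old(cpu, addr, size);

    invalidate_new(addr, size, retaddr);   /* after: just pass it down */
}

int main(void)
{
    struct fake_cpu cpu = { 0 };

    notdirty_write_demo(&cpu, 0x1000, 4, 0xdeadbeef);
    return 0;
}

Once retaddr is an explicit argument there is no hidden ordering requirement
between caller and callee, which is what lets the patch drop both the
cpu->mem_io_pc store in notdirty_write and the redundant mem_io_pc test in
tb_invalidate_phys_page_range__locked.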