From patchwork Wed Sep 18 05:26:39 2019
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 173942
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Date: Tue, 17 Sep 2019 22:26:39 -0700
Message-Id: <20190918052641.21300-2-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190918052641.21300-1-richard.henderson@linaro.org>
References: <20190918052641.21300-1-richard.henderson@linaro.org>
Subject: [Qemu-devel] [RFC 1/3] exec: Adjust notdirty tracing
Cc: pbonzini@redhat.com, alex.bennee@linaro.org, stefanha@redhat.com

The memory_region_tb_read tracepoint is unreachable, since notdirty is
supposed to apply only to writes.  The memory_region_tb_write tracepoint
is mis-named, because notdirty is not only used for TB invalidation;
it is also used for e.g. VGA RAM updates.

Replace memory_region_tb_write with memory_notdirty_write, and place it
in memory_notdirty_write_prepare where it can catch all of the instances.
Add memory_notdirty_dirty to log when we no longer intercept writes to
a page.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 exec.c       | 3 +++
 memory.c     | 4 ----
 trace-events | 4 ++--
 3 files changed, 5 insertions(+), 6 deletions(-)

-- 
2.17.1

diff --git a/exec.c b/exec.c
index 8b998974f8..9babe57615 100644
--- a/exec.c
+++ b/exec.c
@@ -2755,6 +2755,8 @@ void memory_notdirty_write_prepare(NotDirtyInfo *ndi,
     ndi->size = size;
     ndi->pages = NULL;
 
+    trace_memory_notdirty_write(mem_vaddr, ram_addr, size);
+
     assert(tcg_enabled());
     if (!cpu_physical_memory_get_dirty_flag(ram_addr, DIRTY_MEMORY_CODE)) {
         ndi->pages = page_collection_lock(ram_addr, ram_addr + size);
@@ -2779,6 +2781,7 @@ void memory_notdirty_write_complete(NotDirtyInfo *ndi)
     /* we remove the notdirty callback only if the code has been
        flushed */
     if (!cpu_physical_memory_is_clean(ndi->ram_addr)) {
+        trace_memory_notdirty_dirty(ndi->mem_vaddr);
         tlb_set_dirty(ndi->cpu, ndi->mem_vaddr);
     }
 }
diff --git a/memory.c b/memory.c
index b9dd6b94ca..57c44c97db 100644
--- a/memory.c
+++ b/memory.c
@@ -438,7 +438,6 @@ static MemTxResult memory_region_read_accessor(MemoryRegion *mr,
         /* Accesses to code which has previously been translated into a TB show
          * up in the MMIO path, as accesses to the io_mem_notdirty
          * MemoryRegion. */
-        trace_memory_region_tb_read(get_cpu_index(), addr, tmp, size);
     } else if (TRACE_MEMORY_REGION_OPS_READ_ENABLED) {
         hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr);
         trace_memory_region_ops_read(get_cpu_index(), mr, abs_addr, tmp, size);
@@ -465,7 +464,6 @@ static MemTxResult memory_region_read_with_attrs_accessor(MemoryRegion *mr,
         /* Accesses to code which has previously been translated into a TB show
          * up in the MMIO path, as accesses to the io_mem_notdirty
          * MemoryRegion. */
-        trace_memory_region_tb_read(get_cpu_index(), addr, tmp, size);
     } else if (TRACE_MEMORY_REGION_OPS_READ_ENABLED) {
         hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr);
         trace_memory_region_ops_read(get_cpu_index(), mr, abs_addr, tmp, size);
@@ -490,7 +488,6 @@ static MemTxResult memory_region_write_accessor(MemoryRegion *mr,
         /* Accesses to code which has previously been translated into a TB show
          * up in the MMIO path, as accesses to the io_mem_notdirty
          * MemoryRegion. */
-        trace_memory_region_tb_write(get_cpu_index(), addr, tmp, size);
     } else if (TRACE_MEMORY_REGION_OPS_WRITE_ENABLED) {
         hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr);
         trace_memory_region_ops_write(get_cpu_index(), mr, abs_addr, tmp, size);
@@ -515,7 +512,6 @@ static MemTxResult memory_region_write_with_attrs_accessor(MemoryRegion *mr,
         /* Accesses to code which has previously been translated into a TB show
          * up in the MMIO path, as accesses to the io_mem_notdirty
          * MemoryRegion. */
-        trace_memory_region_tb_write(get_cpu_index(), addr, tmp, size);
     } else if (TRACE_MEMORY_REGION_OPS_WRITE_ENABLED) {
         hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr);
         trace_memory_region_ops_write(get_cpu_index(), mr, abs_addr, tmp, size);
diff --git a/trace-events b/trace-events
index 823a4ae64e..5c9a1631e7 100644
--- a/trace-events
+++ b/trace-events
@@ -52,14 +52,14 @@ dma_map_wait(void *dbs) "dbs=%p"
 find_ram_offset(uint64_t size, uint64_t offset) "size: 0x%" PRIx64 " @ 0x%" PRIx64
 find_ram_offset_loop(uint64_t size, uint64_t candidate, uint64_t offset, uint64_t next, uint64_t mingap) "trying size: 0x%" PRIx64 " @ 0x%" PRIx64 ", offset: 0x%" PRIx64" next: 0x%" PRIx64 " mingap: 0x%" PRIx64
 ram_block_discard_range(const char *rbname, void *hva, size_t length, bool need_madvise, bool need_fallocate, int ret) "%s@%p + 0x%zx: madvise: %d fallocate: %d ret: %d"
+memory_notdirty_write(uint64_t vaddr, uint64_t ram_addr, unsigned size) "0x%" PRIx64 " ram_addr 0x%" PRIx64 " size %u"
+memory_notdirty_dirty(uint64_t vaddr) "0x%" PRIx64
 
 # memory.c
 memory_region_ops_read(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u"
 memory_region_ops_write(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u"
 memory_region_subpage_read(int cpu_index, void *mr, uint64_t offset, uint64_t value, unsigned size) "cpu %d mr %p offset 0x%"PRIx64" value 0x%"PRIx64" size %u"
 memory_region_subpage_write(int cpu_index, void *mr, uint64_t offset, uint64_t value, unsigned size) "cpu %d mr %p offset 0x%"PRIx64" value 0x%"PRIx64" size %u"
-memory_region_tb_read(int cpu_index, uint64_t addr, uint64_t value, unsigned size) "cpu %d addr 0x%"PRIx64" value 0x%"PRIx64" size %u"
-memory_region_tb_write(int cpu_index, uint64_t addr, uint64_t value, unsigned size) "cpu %d addr 0x%"PRIx64" value 0x%"PRIx64" size %u"
 memory_region_ram_device_read(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u"
 memory_region_ram_device_write(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u"
 flatview_new(void *view, void *root) "%p (root %p)"
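For reference, the call pattern that these tracepoints instrument is the
prepare/complete pair documented in include/exec/memory-internal.h.  A
minimal sketch only; the wrapper function and the single-byte store below
are placeholders, not code added by this patch:

    /* Sketch: store one byte into guest RAM that is still marked NOTDIRTY. */
    static void notdirty_store_byte(CPUState *cpu, vaddr mem_vaddr,
                                    ram_addr_t ram_addr, uint8_t *host,
                                    uint8_t val)
    {
        NotDirtyInfo ndi;

        /* Logs the new memory_notdirty_write tracepoint. */
        memory_notdirty_write_prepare(&ndi, cpu, mem_vaddr, ram_addr, 1);

        *host = val;                /* the guest RAM write itself */

        /* Logs memory_notdirty_dirty when the page no longer needs the
           slow-path callback. */
        memory_notdirty_write_complete(&ndi);
    }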
From patchwork Wed Sep 18 05:26:40 2019
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 173940
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Date: Tue, 17 Sep 2019 22:26:40 -0700
Message-Id: <20190918052641.21300-3-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190918052641.21300-1-richard.henderson@linaro.org>
References: <20190918052641.21300-1-richard.henderson@linaro.org>
Subject: [Qemu-devel] [RFC 2/3] cputlb: Move NOTDIRTY handling from I/O path to TLB path
Cc: pbonzini@redhat.com, alex.bennee@linaro.org, stefanha@redhat.com

Pages that we want to track for NOTDIRTY are RAM.  We do not really
need to go through the I/O path to handle them.

Create cpu_notdirty_write() from the corpses of
memory_notdirty_write_prepare and memory_notdirty_write_complete.
Use this new function to implement all of the notdirty handling.

This merge is enabled by a previous patch, 9458a9a1df1a ("memory: fix
race between TCG and accesses"), which forces users of the dirty bitmap
to delay reads until all vcpus have exited any TB.  Thus we no longer
require the actual write to happen between *_prepare and *_complete.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/cpu-common.h      |  1 -
 include/exec/memory-internal.h | 53 +++---------
 accel/tcg/cputlb.c             | 66 +++++++++++++----------
 exec.c                         | 98 ++++++----------------------------
 memory.c                       | 16 ------
 5 files changed, 61 insertions(+), 173 deletions(-)

-- 
2.17.1

diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index f7dbe75fbc..06c60c82be 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -101,7 +101,6 @@ void qemu_flush_coalesced_mmio_buffer(void);
 void cpu_flush_icache_range(hwaddr start, hwaddr len);
 
 extern struct MemoryRegion io_mem_rom;
-extern struct MemoryRegion io_mem_notdirty;
 
 typedef int (RAMBlockIterFunc)(RAMBlock *rb, void *opaque);
diff --git a/include/exec/memory-internal.h b/include/exec/memory-internal.h
index ef4fb92371..55f75e7315 100644
--- a/include/exec/memory-internal.h
+++ b/include/exec/memory-internal.h
@@ -52,67 +52,28 @@ void mtree_print_dispatch(struct AddressSpaceDispatch *d,
 
 struct page_collection;
 
-/* Opaque struct for passing info from memory_notdirty_write_prepare()
- * to memory_notdirty_write_complete(). Callers should treat all fields
- * as private, with the exception of @active.
- *
- * @active is a field which is not touched by either the prepare or
- * complete functions, but which the caller can use if it wishes to
- * track whether it has called prepare for this struct and so needs
- * to later call the complete function.
- */
-typedef struct {
-    CPUState *cpu;
-    struct page_collection *pages;
-    ram_addr_t ram_addr;
-    vaddr mem_vaddr;
-    unsigned size;
-    bool active;
-} NotDirtyInfo;
-
 /**
- * memory_notdirty_write_prepare: call before writing to non-dirty memory
- * @ndi: pointer to opaque NotDirtyInfo struct
+ * cpu_notdirty_write: call before writing to non-dirty memory
  * @cpu: CPU doing the write
  * @mem_vaddr: virtual address of write
  * @ram_addr: the ram address of the write
  * @size: size of write in bytes
  *
- * Any code which writes to the host memory corresponding to
- * guest RAM which has been marked as NOTDIRTY must wrap those
- * writes in calls to memory_notdirty_write_prepare() and
- * memory_notdirty_write_complete():
+ * Any code which writes to the host memory corresponding to guest RAM
+ * which has been marked as NOTDIRTY must call cpu_notdirty_write().
  *
- *  NotDirtyInfo ndi;
- *  memory_notdirty_write_prepare(&ndi, ....);
- *  ... perform write here ...
- *  memory_notdirty_write_complete(&ndi);
- *
- * These calls will ensure that we flush any TCG translated code for
+ * This function ensures that we flush any TCG translated code for
  * the memory being written, update the dirty bits and (if possible)
  * remove the slowpath callback for writing to the memory.
 *
 * This must only be called if we are using TCG; it will assert otherwise.
 *
- * We may take locks in the prepare call, so callers must ensure that
- * they don't exit (via longjump or otherwise) without calling complete.
- *
 * This call must only be made inside an RCU critical section.
 * (Note that while we're executing a TCG TB we're always in an
- * RCU critical section, which is likely to be the case for callers
- * of these functions.)
+ * RCU critical section, which is likely to be the case for any callers.)
 */
-void memory_notdirty_write_prepare(NotDirtyInfo *ndi,
-                                   CPUState *cpu,
-                                   vaddr mem_vaddr,
-                                   ram_addr_t ram_addr,
-                                   unsigned size);
-/**
- * memory_notdirty_write_complete: finish write to non-dirty memory
- * @ndi: pointer to the opaque NotDirtyInfo struct which was initialized
- * by memory_not_dirty_write_prepare().
- */
-void memory_notdirty_write_complete(NotDirtyInfo *ndi);
+void cpu_notdirty_write(CPUState *cpu, vaddr mem_vaddr,
+                        ram_addr_t ram_addr, unsigned size);
 
 #endif
 #endif
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 354a75927a..7c4c763b88 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -904,7 +904,7 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     mr = section->mr;
     mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
     cpu->mem_io_pc = retaddr;
-    if (mr != &io_mem_rom && mr != &io_mem_notdirty && !cpu->can_do_io) {
+    if (mr != &io_mem_rom && !cpu->can_do_io) {
         cpu_io_recompile(cpu, retaddr);
     }
 
@@ -945,7 +945,7 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
     mr = section->mr;
     mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
-    if (mr != &io_mem_rom && mr != &io_mem_notdirty && !cpu->can_do_io) {
+    if (mr != &io_mem_rom && !cpu->can_do_io) {
         cpu_io_recompile(cpu, retaddr);
     }
     cpu->mem_io_vaddr = addr;
@@ -1117,16 +1117,26 @@ void *probe_access(CPUArchState *env, target_ulong addr, int size,
         return NULL;
     }
 
-    /* Handle watchpoints.  */
-    if (tlb_addr & TLB_WATCHPOINT) {
-        cpu_check_watchpoint(env_cpu(env), addr, size,
-                             env_tlb(env)->d[mmu_idx].iotlb[index].attrs,
-                             wp_access, retaddr);
-    }
+    if (unlikely(tlb_addr & (TLB_WATCHPOINT | TLB_NOTDIRTY | TLB_MMIO))) {
+        CPUIOTLBEntry *iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index];
 
-    if (tlb_addr & (TLB_NOTDIRTY | TLB_MMIO)) {
-        /* I/O access */
-        return NULL;
+        /* Reject memory mapped I/O.  */
+        if (tlb_addr & TLB_MMIO) {
+            /* I/O access */
+            return NULL;
+        }
+
+        /* Handle watchpoints.  */
+        if (tlb_addr & TLB_WATCHPOINT) {
+            cpu_check_watchpoint(env_cpu(env), addr, size, iotlbentry->attrs,
+                                 wp_access, retaddr);
+        }
+
+        /* Handle clean pages.  */
+        if (tlb_addr & TLB_NOTDIRTY) {
+            cpu_notdirty_write(env_cpu(env), addr,
+                               addr + iotlbentry->addr, size);
+        }
     }
 
     return (void *)((uintptr_t)addr + entry->addend);
@@ -1185,8 +1195,7 @@ void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
 /* Probe for a read-modify-write atomic operation.  Do not allow unaligned
  * operations, or io operations to proceed.  Return the host address.  */
 static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
-                               TCGMemOpIdx oi, uintptr_t retaddr,
-                               NotDirtyInfo *ndi)
+                               TCGMemOpIdx oi, uintptr_t retaddr)
 {
     size_t mmu_idx = get_mmuidx(oi);
     uintptr_t index = tlb_index(env, mmu_idx, addr);
@@ -1227,7 +1236,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
         tlb_addr = tlb_addr_write(tlbe) & ~TLB_INVALID_MASK;
     }
 
-    /* Notice an IO access or a needs-MMU-lookup access */
+    /* Notice an IO access */
     if (unlikely(tlb_addr & TLB_MMIO)) {
         /* There's really nothing that can be done to support this
            apart from stop-the-world.  */
@@ -1246,12 +1255,10 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
 
     hostaddr = (void *)((uintptr_t)addr + tlbe->addend);
 
-    ndi->active = false;
     if (unlikely(tlb_addr & TLB_NOTDIRTY)) {
-        ndi->active = true;
-        memory_notdirty_write_prepare(ndi, env_cpu(env), addr,
-                                      qemu_ram_addr_from_host_nofail(hostaddr),
-                                      1 << s_bits);
+        CPUIOTLBEntry *iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index];
+        cpu_notdirty_write(env_cpu(env), addr,
+                           addr + iotlbentry->addr, 1 << s_bits);
     }
 
     return hostaddr;
@@ -1603,12 +1610,18 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
     }
 
     /* Handle I/O access.  */
-    if (likely(tlb_addr & (TLB_MMIO | TLB_NOTDIRTY))) {
+    if (likely(tlb_addr & TLB_MMIO)) {
         io_writex(env, iotlbentry, mmu_idx, val, addr, retaddr,
                   op ^ (tlb_addr & TLB_BSWAP ? MO_BSWAP : 0));
         return;
     }
 
+    /* Handle clean pages.  This is always RAM.  */
+    if (tlb_addr & TLB_NOTDIRTY) {
+        cpu_notdirty_write(env_cpu(env), addr,
+                           addr + iotlbentry->addr, size);
+    }
+
     if (unlikely(tlb_addr & TLB_BSWAP)) {
         haddr = (void *)((uintptr_t)addr + entry->addend);
         direct_swap(haddr, val);
@@ -1735,14 +1748,9 @@ void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
 #define EXTRA_ARGS , TCGMemOpIdx oi, uintptr_t retaddr
 #define ATOMIC_NAME(X) \
     HELPER(glue(glue(glue(atomic_ ## X, SUFFIX), END), _mmu))
-#define ATOMIC_MMU_DECLS NotDirtyInfo ndi
-#define ATOMIC_MMU_LOOKUP atomic_mmu_lookup(env, addr, oi, retaddr, &ndi)
-#define ATOMIC_MMU_CLEANUP                              \
-    do {                                                \
-        if (unlikely(ndi.active)) {                     \
-            memory_notdirty_write_complete(&ndi);       \
-        }                                               \
-    } while (0)
+#define ATOMIC_MMU_DECLS
+#define ATOMIC_MMU_LOOKUP atomic_mmu_lookup(env, addr, oi, retaddr)
+#define ATOMIC_MMU_CLEANUP
 
 #define DATA_SIZE 1
 #include "atomic_template.h"
@@ -1770,7 +1778,7 @@ void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
 #undef ATOMIC_MMU_LOOKUP
 #define EXTRA_ARGS , TCGMemOpIdx oi
 #define ATOMIC_NAME(X) HELPER(glue(glue(atomic_ ## X, SUFFIX), END))
-#define ATOMIC_MMU_LOOKUP atomic_mmu_lookup(env, addr, oi, GETPC(), &ndi)
+#define ATOMIC_MMU_LOOKUP atomic_mmu_lookup(env, addr, oi, GETPC())
 
 #define DATA_SIZE 1
 #include "atomic_template.h"
diff --git a/exec.c b/exec.c
index 9babe57615..219198e80e 100644
--- a/exec.c
+++ b/exec.c
@@ -88,7 +88,7 @@ static MemoryRegion *system_io;
 AddressSpace address_space_io;
 AddressSpace address_space_memory;
 
-MemoryRegion io_mem_rom, io_mem_notdirty;
+MemoryRegion io_mem_rom;
 static MemoryRegion io_mem_unassigned;
 
 #endif
@@ -191,8 +191,7 @@ typedef struct subpage_t {
 } subpage_t;
 
 #define PHYS_SECTION_UNASSIGNED 0
-#define PHYS_SECTION_NOTDIRTY 1
-#define PHYS_SECTION_ROM 2
+#define PHYS_SECTION_ROM 1
 
 static void io_mem_init(void);
 static void memory_map_init(void);
@@ -1473,9 +1472,7 @@ hwaddr memory_region_section_get_iotlb(CPUState *cpu,
     if (memory_region_is_ram(section->mr)) {
         /* Normal RAM.  */
         iotlb = memory_region_get_ram_addr(section->mr) + xlat;
-        if (!section->readonly) {
-            iotlb |= PHYS_SECTION_NOTDIRTY;
-        } else {
+        if (section->readonly) {
             iotlb |= PHYS_SECTION_ROM;
         }
     } else {
@@ -2743,85 +2740,33 @@ ram_addr_t qemu_ram_addr_from_host(void *ptr)
 }
 
 /* Called within RCU critical section.  */
-void memory_notdirty_write_prepare(NotDirtyInfo *ndi,
-                                   CPUState *cpu,
-                                   vaddr mem_vaddr,
-                                   ram_addr_t ram_addr,
-                                   unsigned size)
+void cpu_notdirty_write(CPUState *cpu, vaddr mem_vaddr,
+                        ram_addr_t ram_addr, unsigned size)
 {
-    ndi->cpu = cpu;
-    ndi->ram_addr = ram_addr;
-    ndi->mem_vaddr = mem_vaddr;
-    ndi->size = size;
-    ndi->pages = NULL;
-
     trace_memory_notdirty_write(mem_vaddr, ram_addr, size);
 
     assert(tcg_enabled());
     if (!cpu_physical_memory_get_dirty_flag(ram_addr, DIRTY_MEMORY_CODE)) {
-        ndi->pages = page_collection_lock(ram_addr, ram_addr + size);
-        tb_invalidate_phys_page_fast(ndi->pages, ram_addr, size);
-    }
-}
+        struct page_collection *pages;
 
-/* Called within RCU critical section. */
-void memory_notdirty_write_complete(NotDirtyInfo *ndi)
-{
-    if (ndi->pages) {
-        assert(tcg_enabled());
-        page_collection_unlock(ndi->pages);
-        ndi->pages = NULL;
+        pages = page_collection_lock(ram_addr, ram_addr + size);
+        tb_invalidate_phys_page_fast(pages, ram_addr, size);
+        page_collection_unlock(pages);
     }
 
-    /* Set both VGA and migration bits for simplicity and to remove
+    /*
+     * Set both VGA and migration bits for simplicity and to remove
      * the notdirty callback faster.
      */
-    cpu_physical_memory_set_dirty_range(ndi->ram_addr, ndi->size,
-                                        DIRTY_CLIENTS_NOCODE);
-    /* we remove the notdirty callback only if the code has been
-       flushed */
-    if (!cpu_physical_memory_is_clean(ndi->ram_addr)) {
-        trace_memory_notdirty_dirty(ndi->mem_vaddr);
-        tlb_set_dirty(ndi->cpu, ndi->mem_vaddr);
+    cpu_physical_memory_set_dirty_range(ram_addr, size, DIRTY_CLIENTS_NOCODE);
+
+    /* We remove the notdirty callback only if the code has been flushed.  */
+    if (!cpu_physical_memory_is_clean(ram_addr)) {
+        trace_memory_notdirty_dirty(mem_vaddr);
+        tlb_set_dirty(cpu, mem_vaddr);
     }
 }
 
-/* Called within RCU critical section. */
-static void notdirty_mem_write(void *opaque, hwaddr ram_addr,
-                               uint64_t val, unsigned size)
-{
-    NotDirtyInfo ndi;
-
-    memory_notdirty_write_prepare(&ndi, current_cpu, current_cpu->mem_io_vaddr,
-                         ram_addr, size);
-
-    stn_p(qemu_map_ram_ptr(NULL, ram_addr), size, val);
-    memory_notdirty_write_complete(&ndi);
-}
-
-static bool notdirty_mem_accepts(void *opaque, hwaddr addr,
-                                 unsigned size, bool is_write,
-                                 MemTxAttrs attrs)
-{
-    return is_write;
-}
-
-static const MemoryRegionOps notdirty_mem_ops = {
-    .write = notdirty_mem_write,
-    .valid.accepts = notdirty_mem_accepts,
-    .endianness = DEVICE_NATIVE_ENDIAN,
-    .valid = {
-        .min_access_size = 1,
-        .max_access_size = 8,
-        .unaligned = false,
-    },
-    .impl = {
-        .min_access_size = 1,
-        .max_access_size = 8,
-        .unaligned = false,
-    },
-};
-
 /* Generate a debug exception if a watchpoint has been hit.  */
 void cpu_check_watchpoint(CPUState *cpu, vaddr addr, vaddr len,
                           MemTxAttrs attrs, int flags, uintptr_t ra)
@@ -3051,13 +2996,6 @@ static void io_mem_init(void)
                           NULL, NULL, UINT64_MAX);
     memory_region_init_io(&io_mem_unassigned, NULL, &unassigned_mem_ops, NULL,
                           NULL, UINT64_MAX);
-
-    /* io_mem_notdirty calls tb_invalidate_phys_page_fast,
-     * which can be called without the iothread mutex.
-     */
-    memory_region_init_io(&io_mem_notdirty, NULL, &notdirty_mem_ops, NULL,
-                          NULL, UINT64_MAX);
-    memory_region_clear_global_locking(&io_mem_notdirty);
 }
 
 AddressSpaceDispatch *address_space_dispatch_new(FlatView *fv)
@@ -3067,8 +3005,6 @@ AddressSpaceDispatch *address_space_dispatch_new(FlatView *fv)
 
     n = dummy_section(&d->map, fv, &io_mem_unassigned);
     assert(n == PHYS_SECTION_UNASSIGNED);
-    n = dummy_section(&d->map, fv, &io_mem_notdirty);
-    assert(n == PHYS_SECTION_NOTDIRTY);
     n = dummy_section(&d->map, fv, &io_mem_rom);
     assert(n == PHYS_SECTION_ROM);
diff --git a/memory.c b/memory.c
index 57c44c97db..a99b8c0767 100644
--- a/memory.c
+++ b/memory.c
@@ -434,10 +434,6 @@ static MemTxResult memory_region_read_accessor(MemoryRegion *mr,
     tmp = mr->ops->read(mr->opaque, addr, size);
     if (mr->subpage) {
         trace_memory_region_subpage_read(get_cpu_index(), mr, addr, tmp, size);
-    } else if (mr == &io_mem_notdirty) {
-        /* Accesses to code which has previously been translated into a TB show
-         * up in the MMIO path, as accesses to the io_mem_notdirty
-         * MemoryRegion. */
     } else if (TRACE_MEMORY_REGION_OPS_READ_ENABLED) {
         hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr);
         trace_memory_region_ops_read(get_cpu_index(), mr, abs_addr, tmp, size);
@@ -460,10 +456,6 @@ static MemTxResult memory_region_read_with_attrs_accessor(MemoryRegion *mr,
     r = mr->ops->read_with_attrs(mr->opaque, addr, &tmp, size, attrs);
     if (mr->subpage) {
         trace_memory_region_subpage_read(get_cpu_index(), mr, addr, tmp, size);
-    } else if (mr == &io_mem_notdirty) {
-        /* Accesses to code which has previously been translated into a TB show
-         * up in the MMIO path, as accesses to the io_mem_notdirty
-         * MemoryRegion. */
     } else if (TRACE_MEMORY_REGION_OPS_READ_ENABLED) {
         hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr);
         trace_memory_region_ops_read(get_cpu_index(), mr, abs_addr, tmp, size);
@@ -484,10 +476,6 @@ static MemTxResult memory_region_write_accessor(MemoryRegion *mr,
 
     if (mr->subpage) {
         trace_memory_region_subpage_write(get_cpu_index(), mr, addr, tmp, size);
-    } else if (mr == &io_mem_notdirty) {
-        /* Accesses to code which has previously been translated into a TB show
-         * up in the MMIO path, as accesses to the io_mem_notdirty
-         * MemoryRegion. */
     } else if (TRACE_MEMORY_REGION_OPS_WRITE_ENABLED) {
         hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr);
         trace_memory_region_ops_write(get_cpu_index(), mr, abs_addr, tmp, size);
@@ -508,10 +496,6 @@ static MemTxResult memory_region_write_with_attrs_accessor(MemoryRegion *mr,
 
     if (mr->subpage) {
         trace_memory_region_subpage_write(get_cpu_index(), mr, addr, tmp, size);
-    } else if (mr == &io_mem_notdirty) {
-        /* Accesses to code which has previously been translated into a TB show
-         * up in the MMIO path, as accesses to the io_mem_notdirty
-         * MemoryRegion. */
     } else if (TRACE_MEMORY_REGION_OPS_WRITE_ENABLED) {
         hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr);
         trace_memory_region_ops_write(get_cpu_index(), mr, abs_addr, tmp, size);
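With the I/O-path machinery gone, the same operation needs only a single
call before the store, and there is no state to carry across the write.
A sketch using the same placeholder helper as shown for patch 1:

    /* Sketch: the post-patch equivalent of the prepare/complete pair. */
    static void notdirty_store_byte(CPUState *cpu, vaddr mem_vaddr,
                                    ram_addr_t ram_addr, uint8_t *host,
                                    uint8_t val)
    {
        /*
         * Flushes any TBs on the page, sets the VGA/migration dirty bits
         * and, when possible, drops the slow-path callback.  TCG only;
         * must be called inside an RCU critical section.
         */
        cpu_notdirty_write(cpu, mem_vaddr, ram_addr, 1);

        *host = val;                /* no completion call required */
    }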
From patchwork Wed Sep 18 05:26:41 2019
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 173939
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Date: Tue, 17 Sep 2019 22:26:41 -0700
Message-Id: <20190918052641.21300-4-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190918052641.21300-1-richard.henderson@linaro.org>
References: <20190918052641.21300-1-richard.henderson@linaro.org>
Subject: [Qemu-devel] [RFC 3/3] cputlb: Remove ATOMIC_MMU_DECLS
Cc: pbonzini@redhat.com, alex.bennee@linaro.org, stefanha@redhat.com

This macro no longer has a non-empty definition.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/atomic_template.h | 12 ------------
 accel/tcg/cputlb.c          |  1 -
 accel/tcg/user-exec.c       |  1 -
 3 files changed, 14 deletions(-)

-- 
2.17.1

diff --git a/accel/tcg/atomic_template.h b/accel/tcg/atomic_template.h
index 287433d809..107660d5d3 100644
--- a/accel/tcg/atomic_template.h
+++ b/accel/tcg/atomic_template.h
@@ -95,7 +95,6 @@
 ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
                               ABI_TYPE cmpv, ABI_TYPE newv EXTRA_ARGS)
 {
-    ATOMIC_MMU_DECLS;
     DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
     DATA_TYPE ret;
 
@@ -113,7 +112,6 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
 #if HAVE_ATOMIC128
 ABI_TYPE ATOMIC_NAME(ld)(CPUArchState *env, target_ulong addr EXTRA_ARGS)
 {
-    ATOMIC_MMU_DECLS;
     DATA_TYPE val, *haddr = ATOMIC_MMU_LOOKUP;
 
     ATOMIC_TRACE_LD;
@@ -125,7 +123,6 @@ ABI_TYPE ATOMIC_NAME(ld)(CPUArchState *env, target_ulong addr EXTRA_ARGS)
 void ATOMIC_NAME(st)(CPUArchState *env, target_ulong addr,
                      ABI_TYPE val EXTRA_ARGS)
 {
-    ATOMIC_MMU_DECLS;
     DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
 
     ATOMIC_TRACE_ST;
@@ -137,7 +134,6 @@ void ATOMIC_NAME(st)(CPUArchState *env, target_ulong addr,
 ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr,
                            ABI_TYPE val EXTRA_ARGS)
 {
-    ATOMIC_MMU_DECLS;
     DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
     DATA_TYPE ret;
 
@@ -151,7 +147,6 @@ ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr,
 ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr,       \
                         ABI_TYPE val EXTRA_ARGS)                    \
 {                                                                   \
-    ATOMIC_MMU_DECLS;                                               \
     DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;                           \
     DATA_TYPE ret;                                                  \
                                                                     \
@@ -183,7 +178,6 @@ GEN_ATOMIC_HELPER(xor_fetch)
 ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr,       \
                         ABI_TYPE xval EXTRA_ARGS)                   \
 {                                                                   \
-    ATOMIC_MMU_DECLS;                                               \
     XDATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;                          \
     XDATA_TYPE cmp, old, new, val = xval;                           \
                                                                     \
@@ -229,7 +223,6 @@ GEN_ATOMIC_HELPER_FN(umax_fetch, MAX, DATA_TYPE, new)
 ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
                               ABI_TYPE cmpv, ABI_TYPE newv EXTRA_ARGS)
 {
-    ATOMIC_MMU_DECLS;
     DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
     DATA_TYPE ret;
 
@@ -247,7 +240,6 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
 #if HAVE_ATOMIC128
 ABI_TYPE ATOMIC_NAME(ld)(CPUArchState *env, target_ulong addr EXTRA_ARGS)
 {
-    ATOMIC_MMU_DECLS;
     DATA_TYPE val, *haddr = ATOMIC_MMU_LOOKUP;
 
     ATOMIC_TRACE_LD;
@@ -259,7 +251,6 @@ ABI_TYPE ATOMIC_NAME(ld)(CPUArchState *env, target_ulong addr EXTRA_ARGS)
 void ATOMIC_NAME(st)(CPUArchState *env, target_ulong addr,
                      ABI_TYPE val EXTRA_ARGS)
 {
-    ATOMIC_MMU_DECLS;
     DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
 
     ATOMIC_TRACE_ST;
@@ -272,7 +263,6 @@ void ATOMIC_NAME(st)(CPUArchState *env, target_ulong addr,
 ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr,
                            ABI_TYPE val EXTRA_ARGS)
 {
-    ATOMIC_MMU_DECLS;
     DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
     ABI_TYPE ret;
 
@@ -286,7 +276,6 @@ ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr,
 ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr,       \
                         ABI_TYPE val EXTRA_ARGS)                    \
 {                                                                   \
-    ATOMIC_MMU_DECLS;                                               \
     DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;                           \
     DATA_TYPE ret;                                                  \
                                                                     \
@@ -316,7 +305,6 @@ GEN_ATOMIC_HELPER(xor_fetch)
 ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr,       \
                         ABI_TYPE xval EXTRA_ARGS)                   \
 {                                                                   \
-    ATOMIC_MMU_DECLS;                                               \
     XDATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;                          \
     XDATA_TYPE ldo, ldn, old, new, val = xval;                      \
                                                                     \
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 7c4c763b88..d048fc82c9 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1748,7 +1748,6 @@ void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
 #define EXTRA_ARGS , TCGMemOpIdx oi, uintptr_t retaddr
 #define ATOMIC_NAME(X) \
     HELPER(glue(glue(glue(atomic_ ## X, SUFFIX), END), _mmu))
-#define ATOMIC_MMU_DECLS
 #define ATOMIC_MMU_LOOKUP atomic_mmu_lookup(env, addr, oi, retaddr)
 #define ATOMIC_MMU_CLEANUP
 
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 71c4bf6477..c353e452ea 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -748,7 +748,6 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
 }
 
 /* Macro to call the above, with local variables from the use context.  */
-#define ATOMIC_MMU_DECLS do {} while (0)
 #define ATOMIC_MMU_LOOKUP atomic_mmu_lookup(env, addr, DATA_SIZE, GETPC())
 #define ATOMIC_MMU_CLEANUP do { clear_helper_retaddr(); } while (0)
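For context, this is roughly how atomic_template.h consumes the ATOMIC_MMU_*
macros once every definition of ATOMIC_MMU_DECLS is empty.  A reduced sketch,
not the literal template; the atomic operation itself is elided:

    /* Reduced sketch of one generated helper. */
    ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr,
                               ABI_TYPE val EXTRA_ARGS)
    {
        DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;   /* softmmu or user-only lookup */
        DATA_TYPE ret;

        /* ... the atomic exchange on *haddr happens here ... */

        ATOMIC_MMU_CLEANUP;   /* empty for softmmu; clear_helper_retaddr() for user-only */
        return ret;
    }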