From patchwork Mon Aug 22 23:57:50 2022
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 599443
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: pbonzini@redhat.com, eduardo@habkost.net
Subject: [PATCH 01/14] accel/tcg: Rename CPUIOTLBEntry to CPUTLBEntryFull
Date: Mon, 22 Aug 2022 16:57:50 -0700
Message-Id: <20220822235803.1729290-2-richard.henderson@linaro.org>
In-Reply-To: <20220822235803.1729290-1-richard.henderson@linaro.org>
References: <20220822235803.1729290-1-richard.henderson@linaro.org>

This structure will shortly contain more than just data for accessing MMIO.
Rename the 'addr' member to 'xlat_section' to more clearly indicate its
purpose.

Signed-off-by: Richard Henderson
---
 include/exec/cpu-defs.h    |  22 ++++----
 accel/tcg/cputlb.c         | 102 +++++++++++++++++++------------------
 target/arm/mte_helper.c    |  14 ++---
 target/arm/sve_helper.c    |   4 +-
 target/arm/translate-a64.c |   2 +-
 5 files changed, 73 insertions(+), 71 deletions(-)

diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index ba3cd32a1e..f70f54d850 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -108,6 +108,7 @@ typedef uint64_t target_ulong;
 # endif
 # endif
+/* Minimalized TLB entry for use by TCG fast path. */
 typedef struct CPUTLBEntry {
     /* bit TARGET_LONG_BITS to TARGET_PAGE_BITS : virtual address
        bit TARGET_PAGE_BITS-1..4  : Nonzero for accesses that should not
@@ -131,14 +132,14 @@ typedef struct CPUTLBEntry {
 QEMU_BUILD_BUG_ON(sizeof(CPUTLBEntry) != (1 << CPU_TLB_ENTRY_BITS));
-/* The IOTLB is not accessed directly inline by generated TCG code,
- * so the CPUIOTLBEntry layout is not as critical as that of the
- * CPUTLBEntry. (This is also why we don't want to combine the two
- * structs into one.)
+/*
+ * The full TLB entry, which is not accessed by generated TCG code,
+ * so the layout is not as critical as that of CPUTLBEntry.  This is
+ * also why we don't want to combine the two structs.
*/ -typedef struct CPUIOTLBEntry { +typedef struct CPUTLBEntryFull { /* - * @addr contains: + * @xlat_section contains: * - in the lower TARGET_PAGE_BITS, a physical section number * - with the lower TARGET_PAGE_BITS masked off, an offset which * must be added to the virtual address to obtain: @@ -146,9 +147,9 @@ typedef struct CPUIOTLBEntry { * number is PHYS_SECTION_NOTDIRTY or PHYS_SECTION_ROM) * + the offset within the target MemoryRegion (otherwise) */ - hwaddr addr; + hwaddr xlat_section; MemTxAttrs attrs; -} CPUIOTLBEntry; +} CPUTLBEntryFull; /* * Data elements that are per MMU mode, minus the bits accessed by @@ -172,9 +173,8 @@ typedef struct CPUTLBDesc { size_t vindex; /* The tlb victim table, in two parts. */ CPUTLBEntry vtable[CPU_VTLB_SIZE]; - CPUIOTLBEntry viotlb[CPU_VTLB_SIZE]; - /* The iotlb. */ - CPUIOTLBEntry *iotlb; + CPUTLBEntryFull vfulltlb[CPU_VTLB_SIZE]; + CPUTLBEntryFull *fulltlb; } CPUTLBDesc; /* diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index a46f3a654d..a37275bf8e 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -200,13 +200,13 @@ static void tlb_mmu_resize_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast, } g_free(fast->table); - g_free(desc->iotlb); + g_free(desc->fulltlb); tlb_window_reset(desc, now, 0); /* desc->n_used_entries is cleared by the caller */ fast->mask = (new_size - 1) << CPU_TLB_ENTRY_BITS; fast->table = g_try_new(CPUTLBEntry, new_size); - desc->iotlb = g_try_new(CPUIOTLBEntry, new_size); + desc->fulltlb = g_try_new(CPUTLBEntryFull, new_size); /* * If the allocations fail, try smaller sizes. We just freed some @@ -215,7 +215,7 @@ static void tlb_mmu_resize_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast, * allocations to fail though, so we progressively reduce the allocation * size, aborting if we cannot even allocate the smallest TLB we support. */ - while (fast->table == NULL || desc->iotlb == NULL) { + while (fast->table == NULL || desc->fulltlb == NULL) { if (new_size == (1 << CPU_TLB_DYN_MIN_BITS)) { error_report("%s: %s", __func__, strerror(errno)); abort(); @@ -224,9 +224,9 @@ static void tlb_mmu_resize_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast, fast->mask = (new_size - 1) << CPU_TLB_ENTRY_BITS; g_free(fast->table); - g_free(desc->iotlb); + g_free(desc->fulltlb); fast->table = g_try_new(CPUTLBEntry, new_size); - desc->iotlb = g_try_new(CPUIOTLBEntry, new_size); + desc->fulltlb = g_try_new(CPUTLBEntryFull, new_size); } } @@ -258,7 +258,7 @@ static void tlb_mmu_init(CPUTLBDesc *desc, CPUTLBDescFast *fast, int64_t now) desc->n_used_entries = 0; fast->mask = (n_entries - 1) << CPU_TLB_ENTRY_BITS; fast->table = g_new(CPUTLBEntry, n_entries); - desc->iotlb = g_new(CPUIOTLBEntry, n_entries); + desc->fulltlb = g_new(CPUTLBEntryFull, n_entries); tlb_mmu_flush_locked(desc, fast); } @@ -299,7 +299,7 @@ void tlb_destroy(CPUState *cpu) CPUTLBDescFast *fast = &env_tlb(env)->f[i]; g_free(fast->table); - g_free(desc->iotlb); + g_free(desc->fulltlb); } } @@ -1219,7 +1219,7 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr, /* Evict the old entry into the victim tlb. */ copy_tlb_helper_locked(tv, te); - desc->viotlb[vidx] = desc->iotlb[index]; + desc->vfulltlb[vidx] = desc->fulltlb[index]; tlb_n_used_entries_dec(env, mmu_idx); } @@ -1236,8 +1236,8 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr, * subtract here is that of the page base, and not the same as the * vaddr we add back in io_readx()/io_writex()/get_page_addr_code(). 
*/ - desc->iotlb[index].addr = iotlb - vaddr_page; - desc->iotlb[index].attrs = attrs; + desc->fulltlb[index].xlat_section = iotlb - vaddr_page; + desc->fulltlb[index].attrs = attrs; /* Now calculate the new entry */ tn.addend = addend - vaddr_page; @@ -1341,7 +1341,7 @@ static inline void cpu_transaction_failed(CPUState *cpu, hwaddr physaddr, } } -static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry, +static uint64_t io_readx(CPUArchState *env, CPUTLBEntryFull *full, int mmu_idx, target_ulong addr, uintptr_t retaddr, MMUAccessType access_type, MemOp op) { @@ -1353,9 +1353,9 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry, bool locked = false; MemTxResult r; - section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs); + section = iotlb_to_section(cpu, full->xlat_section, full->attrs); mr = section->mr; - mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr; + mr_offset = (full->xlat_section & TARGET_PAGE_MASK) + addr; cpu->mem_io_pc = retaddr; if (!cpu->can_do_io) { cpu_io_recompile(cpu, retaddr); @@ -1365,14 +1365,14 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry, qemu_mutex_lock_iothread(); locked = true; } - r = memory_region_dispatch_read(mr, mr_offset, &val, op, iotlbentry->attrs); + r = memory_region_dispatch_read(mr, mr_offset, &val, op, full->attrs); if (r != MEMTX_OK) { hwaddr physaddr = mr_offset + section->offset_within_address_space - section->offset_within_region; cpu_transaction_failed(cpu, physaddr, addr, memop_size(op), access_type, - mmu_idx, iotlbentry->attrs, r, retaddr); + mmu_idx, full->attrs, r, retaddr); } if (locked) { qemu_mutex_unlock_iothread(); @@ -1382,8 +1382,8 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry, } /* - * Save a potentially trashed IOTLB entry for later lookup by plugin. - * This is read by tlb_plugin_lookup if the iotlb entry doesn't match + * Save a potentially trashed CPUTLBEntryFull for later lookup by plugin. + * This is read by tlb_plugin_lookup if the fulltlb entry doesn't match * because of the side effect of io_writex changing memory layout. */ static void save_iotlb_data(CPUState *cs, hwaddr addr, @@ -1397,7 +1397,7 @@ static void save_iotlb_data(CPUState *cs, hwaddr addr, #endif } -static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry, +static void io_writex(CPUArchState *env, CPUTLBEntryFull *full, int mmu_idx, uint64_t val, target_ulong addr, uintptr_t retaddr, MemOp op) { @@ -1408,9 +1408,9 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry, bool locked = false; MemTxResult r; - section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs); + section = iotlb_to_section(cpu, full->xlat_section, full->attrs); mr = section->mr; - mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr; + mr_offset = (full->xlat_section & TARGET_PAGE_MASK) + addr; if (!cpu->can_do_io) { cpu_io_recompile(cpu, retaddr); } @@ -1420,20 +1420,20 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry, * The memory_region_dispatch may trigger a flush/resize * so for plugins we save the iotlb_data just in case. 
*/ - save_iotlb_data(cpu, iotlbentry->addr, section, mr_offset); + save_iotlb_data(cpu, full->xlat_section, section, mr_offset); if (!qemu_mutex_iothread_locked()) { qemu_mutex_lock_iothread(); locked = true; } - r = memory_region_dispatch_write(mr, mr_offset, val, op, iotlbentry->attrs); + r = memory_region_dispatch_write(mr, mr_offset, val, op, full->attrs); if (r != MEMTX_OK) { hwaddr physaddr = mr_offset + section->offset_within_address_space - section->offset_within_region; cpu_transaction_failed(cpu, physaddr, addr, memop_size(op), - MMU_DATA_STORE, mmu_idx, iotlbentry->attrs, r, + MMU_DATA_STORE, mmu_idx, full->attrs, r, retaddr); } if (locked) { @@ -1480,9 +1480,10 @@ static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index, copy_tlb_helper_locked(vtlb, &tmptlb); qemu_spin_unlock(&env_tlb(env)->c.lock); - CPUIOTLBEntry tmpio, *io = &env_tlb(env)->d[mmu_idx].iotlb[index]; - CPUIOTLBEntry *vio = &env_tlb(env)->d[mmu_idx].viotlb[vidx]; - tmpio = *io; *io = *vio; *vio = tmpio; + CPUTLBEntryFull *f1 = &env_tlb(env)->d[mmu_idx].fulltlb[index]; + CPUTLBEntryFull *f2 = &env_tlb(env)->d[mmu_idx].vfulltlb[vidx]; + CPUTLBEntryFull tmpf; + tmpf = *f1; *f1 = *f2; *f2 = tmpf; return true; } } @@ -1550,9 +1551,9 @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr) } static void notdirty_write(CPUState *cpu, vaddr mem_vaddr, unsigned size, - CPUIOTLBEntry *iotlbentry, uintptr_t retaddr) + CPUTLBEntryFull *full, uintptr_t retaddr) { - ram_addr_t ram_addr = mem_vaddr + iotlbentry->addr; + ram_addr_t ram_addr = mem_vaddr + full->xlat_section; trace_memory_notdirty_write_access(mem_vaddr, ram_addr, size); @@ -1645,9 +1646,9 @@ int probe_access_flags(CPUArchState *env, target_ulong addr, /* Handle clean RAM pages. */ if (unlikely(flags & TLB_NOTDIRTY)) { uintptr_t index = tlb_index(env, mmu_idx, addr); - CPUIOTLBEntry *iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index]; + CPUTLBEntryFull *full = &env_tlb(env)->d[mmu_idx].fulltlb[index]; - notdirty_write(env_cpu(env), addr, 1, iotlbentry, retaddr); + notdirty_write(env_cpu(env), addr, 1, full, retaddr); flags &= ~TLB_NOTDIRTY; } @@ -1672,19 +1673,19 @@ void *probe_access(CPUArchState *env, target_ulong addr, int size, if (unlikely(flags & (TLB_NOTDIRTY | TLB_WATCHPOINT))) { uintptr_t index = tlb_index(env, mmu_idx, addr); - CPUIOTLBEntry *iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index]; + CPUTLBEntryFull *full = &env_tlb(env)->d[mmu_idx].fulltlb[index]; /* Handle watchpoints. */ if (flags & TLB_WATCHPOINT) { int wp_access = (access_type == MMU_DATA_STORE ? BP_MEM_WRITE : BP_MEM_READ); cpu_check_watchpoint(env_cpu(env), addr, size, - iotlbentry->attrs, wp_access, retaddr); + full->attrs, wp_access, retaddr); } /* Handle clean RAM pages. */ if (flags & TLB_NOTDIRTY) { - notdirty_write(env_cpu(env), addr, 1, iotlbentry, retaddr); + notdirty_write(env_cpu(env), addr, 1, full, retaddr); } } @@ -1715,7 +1716,7 @@ void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr, * should have just filled the TLB. The one corner case is io_writex * which can cause TLB flushes and potential resizing of the TLBs * losing the information we need. In those cases we need to recover - * data from a copy of the iotlbentry. As long as this always occurs + * data from a copy of the CPUTLBEntryFull. As long as this always occurs * from the same thread (which a mem callback will be) this is safe. 
*/ @@ -1730,11 +1731,12 @@ bool tlb_plugin_lookup(CPUState *cpu, target_ulong addr, int mmu_idx, if (likely(tlb_hit(tlb_addr, addr))) { /* We must have an iotlb entry for MMIO */ if (tlb_addr & TLB_MMIO) { - CPUIOTLBEntry *iotlbentry; - iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index]; + CPUTLBEntryFull *full; + full = &env_tlb(env)->d[mmu_idx].fulltlb[index]; data->is_io = true; - data->v.io.section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs); - data->v.io.offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr; + data->v.io.section = + iotlb_to_section(cpu, full->xlat_section, full->attrs); + data->v.io.offset = (full->xlat_section & TARGET_PAGE_MASK) + addr; } else { data->is_io = false; data->v.ram.hostaddr = (void *)((uintptr_t)addr + tlbe->addend); @@ -1842,7 +1844,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr, if (unlikely(tlb_addr & TLB_NOTDIRTY)) { notdirty_write(env_cpu(env), addr, size, - &env_tlb(env)->d[mmu_idx].iotlb[index], retaddr); + &env_tlb(env)->d[mmu_idx].fulltlb[index], retaddr); } return hostaddr; @@ -1950,7 +1952,7 @@ load_helper(CPUArchState *env, target_ulong addr, MemOpIdx oi, /* Handle anything that isn't just a straight memory access. */ if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) { - CPUIOTLBEntry *iotlbentry; + CPUTLBEntryFull *full; bool need_swap; /* For anything that is unaligned, recurse through full_load. */ @@ -1958,20 +1960,20 @@ load_helper(CPUArchState *env, target_ulong addr, MemOpIdx oi, goto do_unaligned_access; } - iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index]; + full = &env_tlb(env)->d[mmu_idx].fulltlb[index]; /* Handle watchpoints. */ if (unlikely(tlb_addr & TLB_WATCHPOINT)) { /* On watchpoint hit, this will longjmp out. */ cpu_check_watchpoint(env_cpu(env), addr, size, - iotlbentry->attrs, BP_MEM_READ, retaddr); + full->attrs, BP_MEM_READ, retaddr); } need_swap = size > 1 && (tlb_addr & TLB_BSWAP); /* Handle I/O access. */ if (likely(tlb_addr & TLB_MMIO)) { - return io_readx(env, iotlbentry, mmu_idx, addr, retaddr, + return io_readx(env, full, mmu_idx, addr, retaddr, access_type, op ^ (need_swap * MO_BSWAP)); } @@ -2286,12 +2288,12 @@ store_helper_unaligned(CPUArchState *env, target_ulong addr, uint64_t val, */ if (unlikely(tlb_addr & TLB_WATCHPOINT)) { cpu_check_watchpoint(env_cpu(env), addr, size - size2, - env_tlb(env)->d[mmu_idx].iotlb[index].attrs, + env_tlb(env)->d[mmu_idx].fulltlb[index].attrs, BP_MEM_WRITE, retaddr); } if (unlikely(tlb_addr2 & TLB_WATCHPOINT)) { cpu_check_watchpoint(env_cpu(env), page2, size2, - env_tlb(env)->d[mmu_idx].iotlb[index2].attrs, + env_tlb(env)->d[mmu_idx].fulltlb[index2].attrs, BP_MEM_WRITE, retaddr); } @@ -2355,7 +2357,7 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val, /* Handle anything that isn't just a straight memory access. */ if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) { - CPUIOTLBEntry *iotlbentry; + CPUTLBEntryFull *full; bool need_swap; /* For anything that is unaligned, recurse through byte stores. */ @@ -2363,20 +2365,20 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val, goto do_unaligned_access; } - iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index]; + full = &env_tlb(env)->d[mmu_idx].fulltlb[index]; /* Handle watchpoints. */ if (unlikely(tlb_addr & TLB_WATCHPOINT)) { /* On watchpoint hit, this will longjmp out. 
*/ cpu_check_watchpoint(env_cpu(env), addr, size, - iotlbentry->attrs, BP_MEM_WRITE, retaddr); + full->attrs, BP_MEM_WRITE, retaddr); } need_swap = size > 1 && (tlb_addr & TLB_BSWAP); /* Handle I/O access. */ if (tlb_addr & TLB_MMIO) { - io_writex(env, iotlbentry, mmu_idx, val, addr, retaddr, + io_writex(env, full, mmu_idx, val, addr, retaddr, op ^ (need_swap * MO_BSWAP)); return; } @@ -2388,7 +2390,7 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val, /* Handle clean RAM pages. */ if (tlb_addr & TLB_NOTDIRTY) { - notdirty_write(env_cpu(env), addr, size, iotlbentry, retaddr); + notdirty_write(env_cpu(env), addr, size, full, retaddr); } haddr = (void *)((uintptr_t)addr + entry->addend); diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c index d11a8c70d0..fdd23ab3f8 100644 --- a/target/arm/mte_helper.c +++ b/target/arm/mte_helper.c @@ -106,7 +106,7 @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx, return tags + index; #else uintptr_t index; - CPUIOTLBEntry *iotlbentry; + CPUTLBEntryFull *full; int in_page, flags; ram_addr_t ptr_ra; hwaddr ptr_paddr, tag_paddr, xlat; @@ -129,7 +129,7 @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx, assert(!(flags & TLB_INVALID_MASK)); /* - * Find the iotlbentry for ptr. This *must* be present in the TLB + * Find the CPUTLBEntryFull for ptr. This *must* be present in the TLB * because we just found the mapping. * TODO: Perhaps there should be a cputlb helper that returns a * matching tlb entry + iotlb entry. @@ -144,10 +144,10 @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx, g_assert(tlb_hit(comparator, ptr)); } # endif - iotlbentry = &env_tlb(env)->d[ptr_mmu_idx].iotlb[index]; + full = &env_tlb(env)->d[ptr_mmu_idx].fulltlb[index]; /* If the virtual page MemAttr != Tagged, access unchecked. */ - if (!arm_tlb_mte_tagged(&iotlbentry->attrs)) { + if (!arm_tlb_mte_tagged(&full->attrs)) { return NULL; } @@ -181,7 +181,7 @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx, int wp = ptr_access == MMU_DATA_LOAD ? BP_MEM_READ : BP_MEM_WRITE; assert(ra != 0); cpu_check_watchpoint(env_cpu(env), ptr, ptr_size, - iotlbentry->attrs, wp, ra); + full->attrs, wp, ra); } /* @@ -202,11 +202,11 @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx, tag_paddr = ptr_paddr >> (LOG2_TAG_GRANULE + 1); /* Look up the address in tag space. */ - tag_asi = iotlbentry->attrs.secure ? ARMASIdx_TagS : ARMASIdx_TagNS; + tag_asi = full->attrs.secure ? ARMASIdx_TagS : ARMASIdx_TagNS; tag_as = cpu_get_address_space(env_cpu(env), tag_asi); mr = address_space_translate(tag_as, tag_paddr, &xlat, NULL, tag_access == MMU_DATA_STORE, - iotlbentry->attrs); + full->attrs); /* * Note that @mr will never be NULL. 
If there is nothing in the address

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index d6f7ef94fe..9cae8fd352 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -5384,8 +5384,8 @@ bool sve_probe_page(SVEHostPage *info, bool nofault, CPUARMState *env,
     g_assert(tlb_hit(comparator, addr));
 # endif
-    CPUIOTLBEntry *iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index];
-    info->attrs = iotlbentry->attrs;
+    CPUTLBEntryFull *full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
+    info->attrs = full->attrs;
 }
 #endif

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 163df8c615..b7787e7786 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -14634,7 +14634,7 @@ static bool is_guarded_page(CPUARMState *env, DisasContext *s)
      * table entry even for that case.
      */
     return (tlb_hit(entry->addr_code, addr) &&
-            arm_tlb_bti_gp(&env_tlb(env)->d[mmu_idx].iotlb[index].attrs));
+            arm_tlb_bti_gp(&env_tlb(env)->d[mmu_idx].fulltlb[index].attrs));
 #endif
 }
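The @xlat_section encoding described in the cpu-defs.h hunk above packs two
values into a single hwaddr: a physical section number in the low
TARGET_PAGE_BITS, and an offset in the remaining bits that is added back to
the virtual address.  As a rough standalone sketch of that split (not QEMU
code; the 12-bit page size and the sample values are invented for
illustration):

#include <inttypes.h>
#include <stdio.h>

#define TARGET_PAGE_BITS 12                                   /* assumed for the example */
#define TARGET_PAGE_MASK (~((uint64_t)(1 << TARGET_PAGE_BITS) - 1))

int main(void)
{
    /* Invented sample: offset 0xabcd5000 combined with section number 0x1f. */
    uint64_t xlat_section = 0xabcd5000 | 0x1f;
    uint64_t vaddr = 0x7f0000003000;

    /* Low TARGET_PAGE_BITS: index of the physical MemoryRegionSection. */
    unsigned section_idx = (unsigned)(xlat_section & ~TARGET_PAGE_MASK);

    /* Remaining bits: offset added to the vaddr, as io_readx/io_writex do. */
    uint64_t mr_offset = (xlat_section & TARGET_PAGE_MASK) + vaddr;

    printf("section=%#x mr_offset=%#" PRIx64 "\n", section_idx, mr_offset);
    return 0;
}

The io_readx()/io_writex() hunks above recover the two halves exactly this
way: iotlb_to_section() consumes the low bits, and the TARGET_PAGE_MASK
arithmetic produces the MemoryRegion offset.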
From patchwork Mon Aug 22 23:57:51 2022
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 599250
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: pbonzini@redhat.com, eduardo@habkost.net
Subject: [PATCH 02/14] accel/tcg: Drop addr member from SavedIOTLB
Date: Mon, 22 Aug 2022 16:57:51 -0700
Message-Id: <20220822235803.1729290-3-richard.henderson@linaro.org>
In-Reply-To: <20220822235803.1729290-1-richard.henderson@linaro.org>
References: <20220822235803.1729290-1-richard.henderson@linaro.org>

This field is only written, not read; remove it.

Signed-off-by: Richard Henderson
---
 include/hw/core/cpu.h | 1 -
 accel/tcg/cputlb.c    | 7 +++----
 2 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 500503da13..9e47184513 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -218,7 +218,6 @@ struct CPUWatchpoint {
  * the memory regions get moved around by io_writex.
  */
 typedef struct SavedIOTLB {
-    hwaddr addr;
     MemoryRegionSection *section;
     hwaddr mr_offset;
 } SavedIOTLB;

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index a37275bf8e..1509df96b4 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1386,12 +1386,11 @@ static uint64_t io_readx(CPUArchState *env, CPUTLBEntryFull *full,
  * This is read by tlb_plugin_lookup if the fulltlb entry doesn't match
  * because of the side effect of io_writex changing memory layout.
  */
-static void save_iotlb_data(CPUState *cs, hwaddr addr,
-                            MemoryRegionSection *section, hwaddr mr_offset)
+static void save_iotlb_data(CPUState *cs, MemoryRegionSection *section,
+                            hwaddr mr_offset)
 {
 #ifdef CONFIG_PLUGIN
     SavedIOTLB *saved = &cs->saved_iotlb;
-    saved->addr = addr;
     saved->section = section;
     saved->mr_offset = mr_offset;
 #endif
@@ -1420,7 +1419,7 @@ static void io_writex(CPUArchState *env, CPUTLBEntryFull *full,
      * The memory_region_dispatch may trigger a flush/resize
      * so for plugins we save the iotlb_data just in case.
      */
-    save_iotlb_data(cpu, full->xlat_section, section, mr_offset);
+    save_iotlb_data(cpu, section, mr_offset);

     if (!qemu_mutex_iothread_locked()) {
         qemu_mutex_lock_iothread();
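To see why the member could go: after this change the producer stores exactly
what the only consumer, tlb_plugin_lookup(), reads back.  A reduced sketch of
that pairing follows; the type is the post-patch SavedIOTLB, but the function
names and the free-standing signatures are stand-ins for the QEMU ones (the
real save_iotlb_data() takes a CPUState and writes cs->saved_iotlb).

#include <stdint.h>

typedef uint64_t hwaddr;
typedef struct MemoryRegionSection MemoryRegionSection;

/* After this patch: only the two members anyone ever reads. */
typedef struct SavedIOTLB {
    MemoryRegionSection *section;
    hwaddr mr_offset;
} SavedIOTLB;

/* Producer side, called from io_writex() before dispatch may resize the TLB. */
void save_iotlb(SavedIOTLB *saved, MemoryRegionSection *section, hwaddr mr_offset)
{
    saved->section = section;
    saved->mr_offset = mr_offset;
}

/* Consumer side: the plugin lookup needs only the section and the offset,
 * so a stored guest address would never be read. */
void plugin_io_info(const SavedIOTLB *saved,
                    MemoryRegionSection **section, hwaddr *mr_offset)
{
    *section = saved->section;
    *mr_offset = saved->mr_offset;
}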
From patchwork Mon Aug 22 23:57:52 2022
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 599447
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: pbonzini@redhat.com, eduardo@habkost.net, David Hildenbrand
Subject: [PATCH 03/14] accel/tcg: Suppress auto-invalidate in probe_access_internal
Date: Mon, 22 Aug 2022 16:57:52 -0700
Message-Id: <20220822235803.1729290-4-richard.henderson@linaro.org>
In-Reply-To: <20220822235803.1729290-1-richard.henderson@linaro.org>
References: <20220822235803.1729290-1-richard.henderson@linaro.org>

When PAGE_WRITE_INV is set in the call to tlb_set_page, we immediately
set TLB_INVALID_MASK in order to force tlb_fill to be called on the
next lookup.  Here in probe_access_internal, we have just called
tlb_fill and eliminated true misses, thus the lookup must be valid.

This allows us to remove a warning comment from s390x.  There doesn't
seem to be a reason to change the code though.

Cc: David Hildenbrand
Signed-off-by: Richard Henderson
---
 accel/tcg/cputlb.c            | 10 +++++++++-
 target/s390x/tcg/mem_helper.c |  4 ----
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 1509df96b4..5359113e8d 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1602,6 +1602,7 @@ static int probe_access_internal(CPUArchState *env, target_ulong addr,
     }
     tlb_addr = tlb_read_ofs(entry, elt_ofs);
+    flags = TLB_FLAGS_MASK;
     page_addr = addr & TARGET_PAGE_MASK;
     if (!tlb_hit_page(tlb_addr, page_addr)) {
         if (!victim_tlb_hit(env, mmu_idx, index, elt_ofs, page_addr)) {
@@ -1617,10 +1618,17 @@ static int probe_access_internal(CPUArchState *env, target_ulong addr,
             /* TLB resize via tlb_fill may have moved the entry.  */
             entry = tlb_entry(env, mmu_idx, addr);
+
+            /*
+             * With PAGE_WRITE_INV, we set TLB_INVALID_MASK immediately,
+             * to force the next access through tlb_fill.  We've just
+             * called tlb_fill, so we know that this entry *is* valid.
+             */
+            flags &= ~TLB_INVALID_MASK;
         }
         tlb_addr = tlb_read_ofs(entry, elt_ofs);
     }
-    flags = tlb_addr & TLB_FLAGS_MASK;
+    flags &= tlb_addr;

     /* Fold all "mmio-like" bits into TLB_MMIO.  This is not RAM. */
     if (unlikely(flags & ~(TLB_WATCHPOINT | TLB_NOTDIRTY))) {

diff --git a/target/s390x/tcg/mem_helper.c b/target/s390x/tcg/mem_helper.c
index fc52aa128b..3758b9e688 100644
--- a/target/s390x/tcg/mem_helper.c
+++ b/target/s390x/tcg/mem_helper.c
@@ -148,10 +148,6 @@ static int s390_probe_access(CPUArchState *env, target_ulong addr, int size,
 #else
     int flags;
-    /*
-     * For !CONFIG_USER_ONLY, we cannot rely on TLB_INVALID_MASK or haddr==NULL
-     * to detect if there was an exception during tlb_fill().
-     */
     env->tlb_fill_exc = 0;
     flags = probe_access_flags(env, addr, access_type, mmu_idx, nonfault, phost, ra);
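The effect of the new flag handling in probe_access_internal can be seen in
isolation: flags starts as the full mask, TLB_INVALID_MASK is dropped once
tlb_fill has just succeeded, and the final AND with the comparator keeps only
the bits the entry really carries.  A compilable sketch with made-up bit
positions (QEMU's real TLB_* values live in the high bits of the entry and
are target-dependent):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Made-up bit positions, for illustration only. */
#define TLB_INVALID_MASK  (1u << 0)
#define TLB_NOTDIRTY      (1u << 1)
#define TLB_MMIO          (1u << 2)
#define TLB_WATCHPOINT    (1u << 3)
#define TLB_FLAGS_MASK    (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO | TLB_WATCHPOINT)

int main(void)
{
    /* Entry installed with PAGE_WRITE_INV: the forced-invalid bit is set. */
    uint32_t tlb_addr = TLB_INVALID_MASK | TLB_NOTDIRTY;
    uint32_t flags = TLB_FLAGS_MASK;
    bool just_filled = true;        /* probe_access_internal called tlb_fill */

    if (just_filled) {
        /* The lookup is known valid, so don't report the forced-invalid bit. */
        flags &= ~TLB_INVALID_MASK;
    }
    flags &= tlb_addr;              /* keep only the bits present in the entry */

    printf("reported flags = %#x\n", flags);   /* prints TLB_NOTDIRTY only */
    return 0;
}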
From patchwork Mon Aug 22 23:57:53 2022
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 599450
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: pbonzini@redhat.com, eduardo@habkost.net
Subject: [PATCH 04/14] accel/tcg: Introduce probe_access_full
Date: Mon, 22 Aug 2022 16:57:53 -0700
Message-Id: <20220822235803.1729290-5-richard.henderson@linaro.org>
In-Reply-To: <20220822235803.1729290-1-richard.henderson@linaro.org>
References: <20220822235803.1729290-1-richard.henderson@linaro.org>

Add an interface to return the CPUTLBEntryFull struct that goes with
the lookup.  The result is not intended to be valid across multiple
lookups, so the user must use the results immediately.

Signed-off-by: Richard Henderson
---
 include/exec/exec-all.h | 11 +++++++++++
 accel/tcg/cputlb.c      | 44 +++++++++++++++++++++++++----------------
 2 files changed, 38 insertions(+), 17 deletions(-)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 311e5fb422..e366b5c1ba 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -435,6 +435,17 @@ int probe_access_flags(CPUArchState *env, target_ulong addr,
                        MMUAccessType access_type, int mmu_idx,
                        bool nonfault, void **phost, uintptr_t retaddr);
+#ifndef CONFIG_USER_ONLY
+/**
+ * probe_access_full:
+ * Like probe_access_flags, except also return into @pfull.
+ */
+int probe_access_full(CPUArchState *env, target_ulong addr,
+                      MMUAccessType access_type, int mmu_idx,
+                      bool nonfault, void **phost,
+                      CPUTLBEntryFull **pfull, uintptr_t retaddr);
+#endif
+
 #define CODE_GEN_ALIGN 16 /* must be >= of the size of a icache line */
 /* Estimated block size for TB allocation.  */

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 5359113e8d..1c59e701e6 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1579,7 +1579,8 @@ static void notdirty_write(CPUState *cpu, vaddr mem_vaddr, unsigned size,
 static int probe_access_internal(CPUArchState *env, target_ulong addr,
                                  int fault_size, MMUAccessType access_type,
                                  int mmu_idx, bool nonfault,
-                                 void **phost, uintptr_t retaddr)
+                                 void **phost, CPUTLBEntryFull **pfull,
+                                 uintptr_t retaddr)
 {
     uintptr_t index = tlb_index(env, mmu_idx, addr);
     CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
@@ -1613,10 +1614,12 @@ static int probe_access_internal(CPUArchState *env, target_ulong addr,
                             mmu_idx, nonfault, retaddr)) {
             /* Non-faulting page table read failed.  */
             *phost = NULL;
+            *pfull = NULL;
             return TLB_INVALID_MASK;
         }

         /* TLB resize via tlb_fill may have moved the entry.
*/ + index = tlb_index(env, mmu_idx, addr); entry = tlb_entry(env, mmu_idx, addr); /* @@ -1630,6 +1633,8 @@ static int probe_access_internal(CPUArchState *env, target_ulong addr, } flags &= tlb_addr; + *pfull = &env_tlb(env)->d[mmu_idx].fulltlb[index]; + /* Fold all "mmio-like" bits into TLB_MMIO. This is not RAM. */ if (unlikely(flags & ~(TLB_WATCHPOINT | TLB_NOTDIRTY))) { *phost = NULL; @@ -1641,37 +1646,44 @@ static int probe_access_internal(CPUArchState *env, target_ulong addr, return flags; } -int probe_access_flags(CPUArchState *env, target_ulong addr, - MMUAccessType access_type, int mmu_idx, - bool nonfault, void **phost, uintptr_t retaddr) +int probe_access_full(CPUArchState *env, target_ulong addr, + MMUAccessType access_type, int mmu_idx, + bool nonfault, void **phost, CPUTLBEntryFull **pfull, + uintptr_t retaddr) { - int flags; - - flags = probe_access_internal(env, addr, 0, access_type, mmu_idx, - nonfault, phost, retaddr); + int flags = probe_access_internal(env, addr, 0, access_type, mmu_idx, + nonfault, phost, pfull, retaddr); /* Handle clean RAM pages. */ if (unlikely(flags & TLB_NOTDIRTY)) { - uintptr_t index = tlb_index(env, mmu_idx, addr); - CPUTLBEntryFull *full = &env_tlb(env)->d[mmu_idx].fulltlb[index]; - - notdirty_write(env_cpu(env), addr, 1, full, retaddr); + notdirty_write(env_cpu(env), addr, 1, *pfull, retaddr); flags &= ~TLB_NOTDIRTY; } return flags; } +int probe_access_flags(CPUArchState *env, target_ulong addr, + MMUAccessType access_type, int mmu_idx, + bool nonfault, void **phost, uintptr_t retaddr) +{ + CPUTLBEntryFull *full; + + return probe_access_full(env, addr, access_type, mmu_idx, + nonfault, phost, &full, retaddr); +} + void *probe_access(CPUArchState *env, target_ulong addr, int size, MMUAccessType access_type, int mmu_idx, uintptr_t retaddr) { + CPUTLBEntryFull *full; void *host; int flags; g_assert(-(addr | TARGET_PAGE_MASK) >= size); flags = probe_access_internal(env, addr, size, access_type, mmu_idx, - false, &host, retaddr); + false, &host, &full, retaddr); /* Per the interface, size == 0 merely faults the access. */ if (size == 0) { @@ -1679,9 +1691,6 @@ void *probe_access(CPUArchState *env, target_ulong addr, int size, } if (unlikely(flags & (TLB_NOTDIRTY | TLB_WATCHPOINT))) { - uintptr_t index = tlb_index(env, mmu_idx, addr); - CPUTLBEntryFull *full = &env_tlb(env)->d[mmu_idx].fulltlb[index]; - /* Handle watchpoints. */ if (flags & TLB_WATCHPOINT) { int wp_access = (access_type == MMU_DATA_STORE @@ -1702,11 +1711,12 @@ void *probe_access(CPUArchState *env, target_ulong addr, int size, void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr, MMUAccessType access_type, int mmu_idx) { + CPUTLBEntryFull *full; void *host; int flags; flags = probe_access_internal(env, addr, 0, access_type, - mmu_idx, true, &host, 0); + mmu_idx, true, &host, &full, 0); /* No combination of flags are expected by the caller. */ return flags ? 
NULL : host;
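For a sense of how a target might consume the interface added in this patch,
here is a hypothetical caller modeled on the MTE helper touched earlier in
the series.  It is not part of the patch and is only a fragment (it assumes
QEMU's cputlb and exec-all headers): probe_access_full() behaves like
probe_access_flags() but also hands back the CPUTLBEntryFull, which must be
consumed before any later lookup can move it.

/* Hypothetical helper, not in the patch: check whether a page is MTE-tagged
 * using the CPUTLBEntryFull returned by probe_access_full(). */
static bool page_is_mte_tagged(CPUARMState *env, target_ulong addr,
                               int mmu_idx, uintptr_t ra)
{
    CPUTLBEntryFull *full;
    void *host;
    int flags;

    flags = probe_access_full(env, addr, MMU_DATA_LOAD, mmu_idx,
                              true /* nonfault */, &host, &full, ra);
    if (flags & TLB_INVALID_MASK) {
        return false;                 /* nonfault probe found no mapping */
    }
    /* Use @full immediately; a later tlb_fill may reuse the slot. */
    return arm_tlb_mte_tagged(&full->attrs);
}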
From patchwork Mon Aug 22 23:57:54 2022
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 599452
([2602:47:d49d:ec01:c3f1:b74f:5edd:63af]) by smtp.gmail.com with ESMTPSA id w190-20020a6230c7000000b0052d52de6726sm9173159pfw.124.2022.08.22.16.58.08 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 22 Aug 2022 16:58:08 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: pbonzini@redhat.com, eduardo@habkost.net Subject: [PATCH 05/14] accel/tcg: Introduce tlb_set_page_full Date: Mon, 22 Aug 2022 16:57:54 -0700 Message-Id: <20220822235803.1729290-6-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220822235803.1729290-1-richard.henderson@linaro.org> References: <20220822235803.1729290-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2607:f8b0:4864:20::42b; envelope-from=richard.henderson@linaro.org; helo=mail-pf1-x42b.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" Now that we have collected all of the page data into CPUTLBEntryFull, provide an interface to record that all in one go, instead of using 4 arguments. This interface allows CPUTLBEntryFull to be extended without having to change the number of arguments. Signed-off-by: Richard Henderson --- include/exec/cpu-defs.h | 14 ++++++++++ include/exec/exec-all.h | 22 +++++++++++++++ accel/tcg/cputlb.c | 62 ++++++++++++++++++++++++++++------------- 3 files changed, 78 insertions(+), 20 deletions(-) diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h index f70f54d850..5e12cc1854 100644 --- a/include/exec/cpu-defs.h +++ b/include/exec/cpu-defs.h @@ -148,7 +148,21 @@ typedef struct CPUTLBEntryFull { * + the offset within the target MemoryRegion (otherwise) */ hwaddr xlat_section; + + /* + * @phys_addr contains the physical address in the address space + * given by cpu_asidx_from_attrs(cpu, @attrs). + */ + hwaddr phys_addr; + + /* @attrs contains the memory transaction attributes for the page. */ MemTxAttrs attrs; + + /* @prot contains the complete protections for the page. */ + uint8_t prot; + + /* @lg_page_size contains the log2 of the page size. */ + uint8_t lg_page_size; } CPUTLBEntryFull; /* diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h index e366b5c1ba..e7b54e8e5c 100644 --- a/include/exec/exec-all.h +++ b/include/exec/exec-all.h @@ -258,6 +258,28 @@ void tlb_flush_range_by_mmuidx_all_cpus_synced(CPUState *cpu, uint16_t idxmap, unsigned bits); +/** + * tlb_set_page_full: + * @cpu: CPU context + * @mmu_idx: mmu index of the tlb to modify + * @vaddr: virtual address of the entry to add + * @full: the details of the tlb entry + * + * Add an entry to @cpu tlb index @mmu_idx. All of the fields of + * @full must be filled, except for xlat_section, and constitute + * the complete description of the translated page. + * + * This is generally called by the target tlb_fill function after + * having performed a successful page table walk to find the physical + * address and attributes for the translation. + * + * At most one entry for a given virtual address is permitted. 
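To illustrate the new interface (a sketch only, not code from this patch: example_tlb_fill and its arguments are invented, while tlb_set_page_full, CPUTLBEntryFull and the TARGET_PAGE_* macros are the QEMU names used above), a target's tlb_fill hook can now collect everything into one structure and record it with a single call:

    static void example_tlb_fill(CPUState *cs, target_ulong vaddr, hwaddr paddr,
                                 int prot, MemTxAttrs attrs, int mmu_idx)
    {
        CPUTLBEntryFull full = {
            .phys_addr    = paddr & TARGET_PAGE_MASK,
            .attrs        = attrs,
            .prot         = prot,
            .lg_page_size = TARGET_PAGE_BITS,   /* an ordinary small page */
        };

        /* One structure instead of four arguments; new fields can be added
         * to CPUTLBEntryFull without touching this call site. */
        tlb_set_page_full(cs, mmu_idx, vaddr, &full);
    }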
Only a + * single TARGET_PAGE_SIZE region is mapped; @full->ld_page_size is only + * used by tlb_flush_page. + */ +void tlb_set_page_full(CPUState *cpu, int mmu_idx, target_ulong vaddr, + CPUTLBEntryFull *full); + /** * tlb_set_page_with_attrs: * @cpu: CPU to add this TLB entry for diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index 1c59e701e6..a93d715e42 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -1095,16 +1095,16 @@ static void tlb_add_large_page(CPUArchState *env, int mmu_idx, env_tlb(env)->d[mmu_idx].large_page_mask = lp_mask; } -/* Add a new TLB entry. At most one entry for a given virtual address +/* + * Add a new TLB entry. At most one entry for a given virtual address * is permitted. Only a single TARGET_PAGE_SIZE region is mapped, the * supplied size is only used by tlb_flush_page. * * Called from TCG-generated code, which is under an RCU read-side * critical section. */ -void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr, - hwaddr paddr, MemTxAttrs attrs, int prot, - int mmu_idx, target_ulong size) +void tlb_set_page_full(CPUState *cpu, int mmu_idx, + target_ulong vaddr, CPUTLBEntryFull *full) { CPUArchState *env = cpu->env_ptr; CPUTLB *tlb = env_tlb(env); @@ -1117,35 +1117,36 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr, CPUTLBEntry *te, tn; hwaddr iotlb, xlat, sz, paddr_page; target_ulong vaddr_page; - int asidx = cpu_asidx_from_attrs(cpu, attrs); - int wp_flags; + int asidx, wp_flags, prot; bool is_ram, is_romd; assert_cpu_is_self(cpu); - if (size <= TARGET_PAGE_SIZE) { + if (full->lg_page_size <= TARGET_PAGE_BITS) { sz = TARGET_PAGE_SIZE; } else { - tlb_add_large_page(env, mmu_idx, vaddr, size); - sz = size; + sz = (hwaddr)1 << full->lg_page_size; + tlb_add_large_page(env, mmu_idx, vaddr, sz); } vaddr_page = vaddr & TARGET_PAGE_MASK; - paddr_page = paddr & TARGET_PAGE_MASK; + paddr_page = full->phys_addr & TARGET_PAGE_MASK; + prot = full->prot; + asidx = cpu_asidx_from_attrs(cpu, full->attrs); section = address_space_translate_for_iotlb(cpu, asidx, paddr_page, - &xlat, &sz, attrs, &prot); + &xlat, &sz, full->attrs, &prot); assert(sz >= TARGET_PAGE_SIZE); tlb_debug("vaddr=" TARGET_FMT_lx " paddr=0x" TARGET_FMT_plx " prot=%x idx=%d\n", - vaddr, paddr, prot, mmu_idx); + vaddr, full->phys_addr, prot, mmu_idx); address = vaddr_page; - if (size < TARGET_PAGE_SIZE) { + if (full->lg_page_size < TARGET_PAGE_BITS) { /* Repeat the MMU check and TLB fill on every access. */ address |= TLB_INVALID_MASK; } - if (attrs.byte_swap) { + if (full->attrs.byte_swap) { address |= TLB_BSWAP; } @@ -1236,8 +1237,10 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr, * subtract here is that of the page base, and not the same as the * vaddr we add back in io_readx()/io_writex()/get_page_addr_code(). */ + desc->fulltlb[index] = *full; desc->fulltlb[index].xlat_section = iotlb - vaddr_page; - desc->fulltlb[index].attrs = attrs; + desc->fulltlb[index].phys_addr = paddr_page; + desc->fulltlb[index].prot = prot; /* Now calculate the new entry */ tn.addend = addend - vaddr_page; @@ -1272,15 +1275,34 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr, qemu_spin_unlock(&tlb->c.lock); } -/* Add a new TLB entry, but without specifying the memory - * transaction attributes to be used. 
- */ +void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr, + hwaddr paddr, MemTxAttrs attrs, int prot, + int mmu_idx, target_ulong size) +{ + CPUTLBEntryFull full = { + .phys_addr = paddr, + .attrs = attrs, + .prot = prot, + .lg_page_size = ctz64(size) + }; + + assert(is_power_of_2(size)); + tlb_set_page_full(cpu, mmu_idx, vaddr, &full); +} + void tlb_set_page(CPUState *cpu, target_ulong vaddr, hwaddr paddr, int prot, int mmu_idx, target_ulong size) { - tlb_set_page_with_attrs(cpu, vaddr, paddr, MEMTXATTRS_UNSPECIFIED, - prot, mmu_idx, size); + CPUTLBEntryFull full = { + .phys_addr = paddr, + .attrs = MEMTXATTRS_UNSPECIFIED, + .prot = prot, + .lg_page_size = ctz64(size) + }; + + assert(is_power_of_2(size)); + tlb_set_page_full(cpu, mmu_idx, vaddr, &full); } static inline ram_addr_t qemu_ram_addr_from_host_nofail(void *ptr) From patchwork Mon Aug 22 23:57:55 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 599454 Delivered-To: patch@linaro.org Received: by 2002:a05:7000:4388:0:0:0:0 with SMTP id w8csp2057952mae; Mon, 22 Aug 2022 17:17:54 -0700 (PDT) X-Google-Smtp-Source: AA6agR5e9YENYf6hM5jkLRmLSwQ1F4Bvlu3E8dxKjSHv+zyZHBm2HM+jV+U7mZQWaRT6iu8yJmJr X-Received: by 2002:a05:6214:4119:b0:474:877b:8bac with SMTP id kc25-20020a056214411900b00474877b8bacmr17642243qvb.1.1661213874113; Mon, 22 Aug 2022 17:17:54 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1661213874; cv=none; d=google.com; s=arc-20160816; b=gZBlC06fS1cHW2qoEq70x8yp3RutrXEWKpF6Fb9TmqrdnC/FOtDrrioaY9DvvihAIX 31Uy8p+MKmKHzPN2YMmNOJNh7gdzfH4k90/CHDut0Cf7oMn+w8OPxQJ9mjs9szW67S5M Uf/EuNv5EM26shEhe0psV9pv4uPs9CKd0lWxAMF2A0Y2wQiRICy+q1bME2LufTIe1m6e +ODhe24z9zGldM9zdhOELDfsUMpXsyucseOJgvh7yn8PQ7cz/eWmfzo39fZ6+bySX9wj JXSVag8c49gQd+HBH1NZfAO81x/MqqcI/BXLGRalH33TfO0MF9FV0eCuwvh5LylYk07F sxvw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=L7q2sisR9jXPO2g1tObtbxm8Q6qrnHqWQnHO8ESVRzQ=; b=TKYkZTImcrt/coULUKZf+tEpiWieDhsmsBMotJMspo7S/oET/orVJBCkSRT/EgC5vf +8KPx2eOaEDYUsqVJ1ZvY9mj8EIX95q3sNR+scOb1tKuksrhqQq4ziyrgRgjaY0YQBhu kBLmdvdpXv+GGz4xCuXyE0NQe1Mglh6dW8MK+lnEzWXvR2SFMpoA1Do0XwB/enkQY1Nr bKCLaLQi2kYs4hSw9vLVaLKiL2uerPHHnUbOXT3IXVG/f9MR5AICiDtTFoLmOB/vUigr Eiza72kj+xegrUTZAVeNimCmvhaYu73zt0RhMPIPG3Xz9bxnhthfJT+hMcei1dFXD1mo KL3A== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=cdyb46pt; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
([2602:47:d49d:ec01:c3f1:b74f:5edd:63af]) by smtp.gmail.com with ESMTPSA id w190-20020a6230c7000000b0052d52de6726sm9173159pfw.124.2022.08.22.16.58.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 22 Aug 2022 16:58:09 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: pbonzini@redhat.com, eduardo@habkost.net Subject: [PATCH 06/14] include/exec: Introduce TARGET_PAGE_ENTRY_EXTRA Date: Mon, 22 Aug 2022 16:57:55 -0700 Message-Id: <20220822235803.1729290-7-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220822235803.1729290-1-richard.henderson@linaro.org> References: <20220822235803.1729290-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2607:f8b0:4864:20::531; envelope-from=richard.henderson@linaro.org; helo=mail-pg1-x531.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" Allow the target to cache items from the guest page tables. Signed-off-by: Richard Henderson --- include/exec/cpu-defs.h | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h index 5e12cc1854..67239b4e5e 100644 --- a/include/exec/cpu-defs.h +++ b/include/exec/cpu-defs.h @@ -163,6 +163,15 @@ typedef struct CPUTLBEntryFull { /* @lg_page_size contains the log2 of the page size. */ uint8_t lg_page_size; + + /* + * Allow target-specific additions to this structure. + * This may be used to cache items from the guest cpu + * page tables for later use by the implementation. 
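As a hypothetical example of the mechanism added here (the member names below are invented, not taken from any target), a target would define the macro, e.g. in its cpu-param.h, so that the extra members are pasted into CPUTLBEntryFull:

    /* Hypothetical target cpu-param.h fragment, for illustration only.
     * The macro body must be a list of struct member declarations, since
     * it is expanded verbatim inside CPUTLBEntryFull. */
    #define TARGET_PAGE_ENTRY_EXTRA \
        uint64_t pte;               \
        bool guarded;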
+ */ +#ifdef TARGET_PAGE_ENTRY_EXTRA + TARGET_PAGE_ENTRY_EXTRA +#endif } CPUTLBEntryFull; /* From patchwork Mon Aug 22 23:57:56 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 599456 Delivered-To: patch@linaro.org Received: by 2002:a05:7000:4388:0:0:0:0 with SMTP id w8csp2059750mae; Mon, 22 Aug 2022 17:21:26 -0700 (PDT) X-Google-Smtp-Source: AA6agR4FcSWX0RGpP6GDYKWQij3cexv08pAhlakdqemSA9fonz3VeVOnlgVWX1QY353AprecfaoY X-Received: by 2002:a05:6214:5093:b0:496:e4c4:669e with SMTP id kk19-20020a056214509300b00496e4c4669emr6052727qvb.21.1661214086305; Mon, 22 Aug 2022 17:21:26 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1661214086; cv=none; d=google.com; s=arc-20160816; b=cAsC1op/fk0KI/bYOFFOarhbcY/XfZkYZiJmcNHUjMaV9adt3WAkaq+EqCEdF30wJx tKFq9xO+49apWhtDRRtXtKC8aDBq6qf6EBGcE5l3v10MB7MVvIH1g6W9c0r7+GTDj0Xz 3bj0u9KocNFpr21lZUd1N5PooGhujaS9LVKPwvVTKKYdSkpUz0pk8ybYkkYC3rPVt9Cz DJom8011G//joOBbn80S+3xakA041+mNB0vgHs1MYb4pI+9fU5cURmAe3ugWbGYepRz4 gI7x6KWwxe4OFYarkgU4nAsNmesahQDVZevBOxaFvI1Fh3ibcZvgoJ5MpF8gKq58u3P/ 77ag== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=2h0TkUJc4yOQJXhyjibyZnOW4r9x66hUQmDkisOxO7c=; b=gdacpU9LIK3ikEZv1oXP/wJW7soSQSbCQR8bxVdGtSR71bqqRxkj9qWCUgY3my1dYG uCkkz237luDq0uH8JJ2fGFF6AiI+EM4NeWrn/NwBuUT7K3Rh6aZCat/Wb84ghnZlH5QG Ym/oAmBk6hM/uBmHR+TNDQ1oo4Nze35B++mRupTw8uSz7ldxN7KUdDKoaNyOkQzUa2cg feZVzHOEDWphO4Hku4XF+F1753o5ne9Mj+AyCHtXPfCGBRVe0xQ+EMy/OwOkLHxhTfWy KracWGPkyfof3KlUpD+V3mvphR0Di9AIZ2Bl6urWIdbfwlTpyzL6fHvpr811ZOgIVqdk T5/A== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=jQaWIit9; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
([2602:47:d49d:ec01:c3f1:b74f:5edd:63af]) by smtp.gmail.com with ESMTPSA id w190-20020a6230c7000000b0052d52de6726sm9173159pfw.124.2022.08.22.16.58.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 22 Aug 2022 16:58:10 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: pbonzini@redhat.com, eduardo@habkost.net Subject: [PATCH 07/14] target/i386: Use MMUAccessType across excp_helper.c Date: Mon, 22 Aug 2022 16:57:56 -0700 Message-Id: <20220822235803.1729290-8-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220822235803.1729290-1-richard.henderson@linaro.org> References: <20220822235803.1729290-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2607:f8b0:4864:20::632; envelope-from=richard.henderson@linaro.org; helo=mail-pl1-x632.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" Replace int is_write1 and magic numbers with the proper MMUAccessType access_type and enumerators. Signed-off-by: Richard Henderson --- target/i386/tcg/sysemu/excp_helper.c | 28 +++++++++++++++------------- 1 file changed, 15 insertions(+), 13 deletions(-) diff --git a/target/i386/tcg/sysemu/excp_helper.c b/target/i386/tcg/sysemu/excp_helper.c index 48feba7e75..414d8032de 100644 --- a/target/i386/tcg/sysemu/excp_helper.c +++ b/target/i386/tcg/sysemu/excp_helper.c @@ -30,8 +30,10 @@ typedef hwaddr (*MMUTranslateFunc)(CPUState *cs, hwaddr gphys, MMUAccessType acc #define GET_HPHYS(cs, gpa, access_type, prot) \ (get_hphys_func ? 
get_hphys_func(cs, gpa, access_type, prot) : gpa) -static int mmu_translate(CPUState *cs, hwaddr addr, MMUTranslateFunc get_hphys_func, - uint64_t cr3, int is_write1, int mmu_idx, int pg_mode, +static int mmu_translate(CPUState *cs, hwaddr addr, + MMUTranslateFunc get_hphys_func, + uint64_t cr3, MMUAccessType access_type, + int mmu_idx, int pg_mode, hwaddr *xlat, int *page_size, int *prot) { X86CPU *cpu = X86_CPU(cs); @@ -40,13 +42,13 @@ static int mmu_translate(CPUState *cs, hwaddr addr, MMUTranslateFunc get_hphys_f int32_t a20_mask; target_ulong pde_addr, pte_addr; int error_code = 0; - int is_dirty, is_write, is_user; + bool is_dirty, is_write, is_user; uint64_t rsvd_mask = PG_ADDRESS_MASK & ~MAKE_64BIT_MASK(0, cpu->phys_bits); uint32_t page_offset; uint32_t pkr; is_user = (mmu_idx == MMU_USER_IDX); - is_write = is_write1 & 1; + is_write = (access_type == MMU_DATA_STORE); a20_mask = x86_get_a20_mask(env); if (!(pg_mode & PG_MODE_NXE)) { @@ -264,14 +266,14 @@ do_check_protect_pse36: } *prot &= pkr_prot; - if ((pkr_prot & (1 << is_write1)) == 0) { - assert(is_write1 != 2); + if ((pkr_prot & (1 << access_type)) == 0) { + assert(access_type != MMU_INST_FETCH); error_code |= PG_ERROR_PK_MASK; goto do_fault_protect; } } - if ((*prot & (1 << is_write1)) == 0) { + if ((*prot & (1 << access_type)) == 0) { goto do_fault_protect; } @@ -297,7 +299,7 @@ do_check_protect_pse36: /* align to page_size */ pte &= PG_ADDRESS_MASK & ~(*page_size - 1); page_offset = addr & (*page_size - 1); - *xlat = GET_HPHYS(cs, pte + page_offset, is_write1, prot); + *xlat = GET_HPHYS(cs, pte + page_offset, access_type, prot); return PG_ERROR_OK; do_fault_rsvd: @@ -308,7 +310,7 @@ do_check_protect_pse36: error_code |= (is_write << PG_ERROR_W_BIT); if (is_user) error_code |= PG_ERROR_U_MASK; - if (is_write1 == 2 && + if (access_type == MMU_INST_FETCH && ((pg_mode & PG_MODE_NXE) || (pg_mode & PG_MODE_SMEP))) error_code |= PG_ERROR_I_D_MASK; return error_code; @@ -353,7 +355,7 @@ hwaddr get_hphys(CPUState *cs, hwaddr gphys, MMUAccessType access_type, * 1 = generate PF fault */ static int handle_mmu_fault(CPUState *cs, vaddr addr, int size, - int is_write1, int mmu_idx) + MMUAccessType access_type, int mmu_idx) { X86CPU *cpu = X86_CPU(cs); CPUX86State *env = &cpu->env; @@ -365,7 +367,7 @@ static int handle_mmu_fault(CPUState *cs, vaddr addr, int size, #if defined(DEBUG_MMU) printf("MMU fault: addr=%" VADDR_PRIx " w=%d mmu=%d eip=" TARGET_FMT_lx "\n", - addr, is_write1, mmu_idx, env->eip); + addr, access_type, mmu_idx, env->eip); #endif if (!(env->cr[0] & CR0_PG_MASK)) { @@ -393,7 +395,7 @@ static int handle_mmu_fault(CPUState *cs, vaddr addr, int size, } } - error_code = mmu_translate(cs, addr, get_hphys, env->cr[3], is_write1, + error_code = mmu_translate(cs, addr, get_hphys, env->cr[3], access_type, mmu_idx, pg_mode, &paddr, &page_size, &prot); } @@ -404,7 +406,7 @@ static int handle_mmu_fault(CPUState *cs, vaddr addr, int size, vaddr = addr & TARGET_PAGE_MASK; paddr &= TARGET_PAGE_MASK; - assert(prot & (1 << is_write1)); + assert(prot & (1 << access_type)); tlb_set_page_with_attrs(cs, vaddr, paddr, cpu_get_mem_attrs(env), prot, mmu_idx, page_size); return 0; From patchwork Mon Aug 22 23:57:57 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 599455 Delivered-To: patch@linaro.org Received: by 2002:a05:7000:4388:0:0:0:0 with SMTP id w8csp2058859mae; Mon, 22 Aug 2022 17:19:33 -0700 (PDT) X-Google-Smtp-Source: 
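One detail worth making explicit about this conversion: the checks that test prot and pkr_prot against (1 << access_type) work because QEMU's existing MMUAccessType values line up with the PAGE_* protection bits. A compile-time sketch of that invariant (illustrative only; QEMU_BUILD_BUG_ON is QEMU's existing static-assert macro):

    /* MMU_DATA_LOAD == 0, MMU_DATA_STORE == 1, MMU_INST_FETCH == 2, so
     * shifting 1 by the access type selects the matching protection bit. */
    QEMU_BUILD_BUG_ON(PAGE_READ  != (1 << MMU_DATA_LOAD));
    QEMU_BUILD_BUG_ON(PAGE_WRITE != (1 << MMU_DATA_STORE));
    QEMU_BUILD_BUG_ON(PAGE_EXEC  != (1 << MMU_INST_FETCH));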
([2602:47:d49d:ec01:c3f1:b74f:5edd:63af]) by smtp.gmail.com with ESMTPSA id w190-20020a6230c7000000b0052d52de6726sm9173159pfw.124.2022.08.22.16.58.10 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 22 Aug 2022 16:58:10 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: pbonzini@redhat.com, eduardo@habkost.net Subject: [PATCH 08/14] target/i386: Direct call get_hphys from mmu_translate Date: Mon, 22 Aug 2022 16:57:57 -0700 Message-Id: <20220822235803.1729290-9-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220822235803.1729290-1-richard.henderson@linaro.org> References: <20220822235803.1729290-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2607:f8b0:4864:20::42a; envelope-from=richard.henderson@linaro.org; helo=mail-pf1-x42a.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" Use a boolean to control the call to get_hphys instead of passing a null function pointer. Signed-off-by: Richard Henderson --- target/i386/tcg/sysemu/excp_helper.c | 12 ++++-------- 1 file changed, 4 insertions(+), 8 deletions(-) diff --git a/target/i386/tcg/sysemu/excp_helper.c b/target/i386/tcg/sysemu/excp_helper.c index 414d8032de..ea195f7a28 100644 --- a/target/i386/tcg/sysemu/excp_helper.c +++ b/target/i386/tcg/sysemu/excp_helper.c @@ -24,14 +24,10 @@ #define PG_ERROR_OK (-1) -typedef hwaddr (*MMUTranslateFunc)(CPUState *cs, hwaddr gphys, MMUAccessType access_type, - int *prot); - #define GET_HPHYS(cs, gpa, access_type, prot) \ - (get_hphys_func ? get_hphys_func(cs, gpa, access_type, prot) : gpa) + (use_stage2 ? 
get_hphys(cs, gpa, access_type, prot) : gpa) -static int mmu_translate(CPUState *cs, hwaddr addr, - MMUTranslateFunc get_hphys_func, +static int mmu_translate(CPUState *cs, hwaddr addr, bool use_stage2, uint64_t cr3, MMUAccessType access_type, int mmu_idx, int pg_mode, hwaddr *xlat, int *page_size, int *prot) @@ -329,7 +325,7 @@ hwaddr get_hphys(CPUState *cs, hwaddr gphys, MMUAccessType access_type, return gphys; } - exit_info_1 = mmu_translate(cs, gphys, NULL, env->nested_cr3, + exit_info_1 = mmu_translate(cs, gphys, false, env->nested_cr3, access_type, MMU_USER_IDX, env->nested_pg_mode, &hphys, &page_size, &next_prot); if (exit_info_1 == PG_ERROR_OK) { @@ -395,7 +391,7 @@ static int handle_mmu_fault(CPUState *cs, vaddr addr, int size, } } - error_code = mmu_translate(cs, addr, get_hphys, env->cr[3], access_type, + error_code = mmu_translate(cs, addr, true, env->cr[3], access_type, mmu_idx, pg_mode, &paddr, &page_size, &prot); } From patchwork Mon Aug 22 23:57:58 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 599445 Delivered-To: patch@linaro.org Received: by 2002:a05:7000:4388:0:0:0:0 with SMTP id w8csp2050251mae; Mon, 22 Aug 2022 17:08:17 -0700 (PDT) X-Google-Smtp-Source: AA6agR6KdkTsiM9flgj/WMsNwowAUky0O01jAHz+g1jf5ccqM0O5juHFXQjgyiuBZzTbPg93FNhZ X-Received: by 2002:a05:620a:12ad:b0:6bb:e6e8:9a96 with SMTP id x13-20020a05620a12ad00b006bbe6e89a96mr9240014qki.209.1661213297626; Mon, 22 Aug 2022 17:08:17 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1661213297; cv=none; d=google.com; s=arc-20160816; b=xo1n8dzZDIrd41GG2DL/O10iVmNn2WgoXjSm4B1GvZ456JmnRittl3XQPC4UbsuaZ9 3yQjfViJzvYYZDFyLo25BZ9ad9hby2XaXQyCanpJv2Pg4F9z/nLf9ikdI9UxyuOJBER1 gvyq51t1rHsb/y1evOZbrwNI4eYXICs9qiwQd9adVhHDX0tVMo9xrJvMaX5Sm87wYfjS K9dRf99a46bQ3/PjsGjDuhn0ln4PO9GM3cWaUcl6gvJDegJJfC/uIM++e3ZSbD5oi7eB rOpcd52UUCxYzMq35rMnHhndj3o4nPhXpHrRht53a+tV0oabnr4JsSD/VmyWOwNjr+L8 J7lg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=ZWLjCyIIjAMwdCO5beBli2I/TkBBBRxqIUhK/buRwu8=; b=vaCEVpQhL7FG4up0en6ifVv43VbrPv+lGdijnpssoir1HMO0z15ubvJP2B5aGndyLi 0py9Bl+9B4I1BANMi+ukHQ41k7UUMo/v9i5KarYIx62AKsAdex76BTQVXUnkPXxSYMgY 4gCeDD//BMO00CPmeFbZtQC+LnLZ34oSm8Nyj5I5wCym5alKCFHAqk2VK5izexxrtCEb ra5lu7PHwimwIrE8ypdrH8NTyBcq4S6jRY4Yze42dNF6cGDXefAr/0fLxwlTlZhV7ABQ DSRNv92Pp4nnL3hBz84YvhgWuI2N0qYro+i//ij46TSGi6TP2zSBJcynujcfpVRk6kmr vQYg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=gRE6an1P; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
([2602:47:d49d:ec01:c3f1:b74f:5edd:63af]) by smtp.gmail.com with ESMTPSA id w190-20020a6230c7000000b0052d52de6726sm9173159pfw.124.2022.08.22.16.58.11 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 22 Aug 2022 16:58:12 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: pbonzini@redhat.com, eduardo@habkost.net Subject: [PATCH 09/14] target/i386: Introduce structures for mmu_translate Date: Mon, 22 Aug 2022 16:57:58 -0700 Message-Id: <20220822235803.1729290-10-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220822235803.1729290-1-richard.henderson@linaro.org> References: <20220822235803.1729290-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2607:f8b0:4864:20::42d; envelope-from=richard.henderson@linaro.org; helo=mail-pf1-x42d.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" Create TranslateParams for inputs, TranslateResults for successful outputs, and TranslateFault for error outputs; return true on success. Move stage1 error paths from handle_mmu_fault to x86_cpu_tlb_fill; reorg the rest of handle_mmu_fault into get_physical_address. Signed-off-by: Richard Henderson --- target/i386/tcg/sysemu/excp_helper.c | 322 ++++++++++++++------------- 1 file changed, 171 insertions(+), 151 deletions(-) diff --git a/target/i386/tcg/sysemu/excp_helper.c b/target/i386/tcg/sysemu/excp_helper.c index ea195f7a28..a6b7562bf3 100644 --- a/target/i386/tcg/sysemu/excp_helper.c +++ b/target/i386/tcg/sysemu/excp_helper.c @@ -22,30 +22,45 @@ #include "exec/exec-all.h" #include "tcg/helper-tcg.h" -#define PG_ERROR_OK (-1) +typedef struct TranslateParams { + target_ulong addr; + target_ulong cr3; + int pg_mode; + int mmu_idx; + MMUAccessType access_type; + bool use_stage2; +} TranslateParams; + +typedef struct TranslateResult { + hwaddr paddr; + int prot; + int page_size; +} TranslateResult; + +typedef struct TranslateFault { + int exception_index; + int error_code; + target_ulong cr2; +} TranslateFault; #define GET_HPHYS(cs, gpa, access_type, prot) \ - (use_stage2 ? get_hphys(cs, gpa, access_type, prot) : gpa) + (in->use_stage2 ? 
get_hphys(cs, gpa, access_type, prot) : gpa) -static int mmu_translate(CPUState *cs, hwaddr addr, bool use_stage2, - uint64_t cr3, MMUAccessType access_type, - int mmu_idx, int pg_mode, - hwaddr *xlat, int *page_size, int *prot) +static bool mmu_translate(CPUX86State *env, const TranslateParams *in, + TranslateResult *out, TranslateFault *err) { - X86CPU *cpu = X86_CPU(cs); - CPUX86State *env = &cpu->env; + CPUState *cs = env_cpu(env); + X86CPU *cpu = env_archcpu(env); + const int32_t a20_mask = x86_get_a20_mask(env); + const target_ulong addr = in->addr; + const int pg_mode = in->pg_mode; + const bool is_user = (in->mmu_idx == MMU_USER_IDX); + const MMUAccessType access_type = in->access_type; uint64_t ptep, pte; - int32_t a20_mask; - target_ulong pde_addr, pte_addr; - int error_code = 0; - bool is_dirty, is_write, is_user; + hwaddr pde_addr, pte_addr; uint64_t rsvd_mask = PG_ADDRESS_MASK & ~MAKE_64BIT_MASK(0, cpu->phys_bits); - uint32_t page_offset; uint32_t pkr; - - is_user = (mmu_idx == MMU_USER_IDX); - is_write = (access_type == MMU_DATA_STORE); - a20_mask = x86_get_a20_mask(env); + int page_size; if (!(pg_mode & PG_MODE_NXE)) { rsvd_mask |= PG_NX_MASK; @@ -62,7 +77,7 @@ static int mmu_translate(CPUState *cs, hwaddr addr, bool use_stage2, uint64_t pml4e_addr, pml4e; if (la57) { - pml5e_addr = ((cr3 & ~0xfff) + + pml5e_addr = ((in->cr3 & ~0xfff) + (((addr >> 48) & 0x1ff) << 3)) & a20_mask; pml5e_addr = GET_HPHYS(cs, pml5e_addr, MMU_DATA_STORE, NULL); pml5e = x86_ldq_phys(cs, pml5e_addr); @@ -78,7 +93,7 @@ static int mmu_translate(CPUState *cs, hwaddr addr, bool use_stage2, } ptep = pml5e ^ PG_NX_MASK; } else { - pml5e = cr3; + pml5e = in->cr3; ptep = PG_NX_MASK | PG_USER_MASK | PG_RW_MASK; } @@ -114,7 +129,7 @@ static int mmu_translate(CPUState *cs, hwaddr addr, bool use_stage2, } if (pdpe & PG_PSE_MASK) { /* 1 GB page */ - *page_size = 1024 * 1024 * 1024; + page_size = 1024 * 1024 * 1024; pte_addr = pdpe_addr; pte = pdpe; goto do_check_protect; @@ -123,7 +138,7 @@ static int mmu_translate(CPUState *cs, hwaddr addr, bool use_stage2, #endif { /* XXX: load them when cr3 is loaded ? */ - pdpe_addr = ((cr3 & ~0x1f) + ((addr >> 27) & 0x18)) & + pdpe_addr = ((in->cr3 & ~0x1f) + ((addr >> 27) & 0x18)) & a20_mask; pdpe_addr = GET_HPHYS(cs, pdpe_addr, MMU_DATA_STORE, NULL); pdpe = x86_ldq_phys(cs, pdpe_addr); @@ -150,7 +165,7 @@ static int mmu_translate(CPUState *cs, hwaddr addr, bool use_stage2, ptep &= pde ^ PG_NX_MASK; if (pde & PG_PSE_MASK) { /* 2 MB page */ - *page_size = 2048 * 1024; + page_size = 2048 * 1024; pte_addr = pde_addr; pte = pde; goto do_check_protect; @@ -172,12 +187,12 @@ static int mmu_translate(CPUState *cs, hwaddr addr, bool use_stage2, } /* combine pde and pte nx, user and rw protections */ ptep &= pte ^ PG_NX_MASK; - *page_size = 4096; + page_size = 4096; } else { uint32_t pde; /* page directory entry */ - pde_addr = ((cr3 & ~0xfff) + ((addr >> 20) & 0xffc)) & + pde_addr = ((in->cr3 & ~0xfff) + ((addr >> 20) & 0xffc)) & a20_mask; pde_addr = GET_HPHYS(cs, pde_addr, MMU_DATA_STORE, NULL); pde = x86_ldl_phys(cs, pde_addr); @@ -188,7 +203,7 @@ static int mmu_translate(CPUState *cs, hwaddr addr, bool use_stage2, /* if PSE bit is set, then we use a 4MB page */ if ((pde & PG_PSE_MASK) && (pg_mode & PG_MODE_PSE)) { - *page_size = 4096 * 1024; + page_size = 4096 * 1024; pte_addr = pde_addr; /* Bits 20-13 provide bits 39-32 of the address, bit 21 is reserved. 
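For readers unfamiliar with PSE-36, the comment above can be illustrated with a small hypothetical helper (not part of the patch) showing how such a 4 MiB page directory entry yields a 40-bit physical base:

    /* Illustration only: PDE bits 31-22 give the 4 MiB frame, and PDE bits
     * 20-13 supply physical address bits 39-32. */
    static uint64_t pse36_page_base(uint32_t pde)
    {
        return (pde & 0xffc00000u)
             | ((uint64_t)(pde & 0x001fe000u) << (32 - 13));
    }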
@@ -214,12 +229,12 @@ static int mmu_translate(CPUState *cs, hwaddr addr, bool use_stage2, } /* combine pde and pte user and rw protections */ ptep &= pte | PG_NX_MASK; - *page_size = 4096; + page_size = 4096; rsvd_mask = 0; } do_check_protect: - rsvd_mask |= (*page_size - 1) & PG_ADDRESS_MASK & ~PG_PSE_PAT_MASK; + rsvd_mask |= (page_size - 1) & PG_ADDRESS_MASK & ~PG_PSE_PAT_MASK; do_check_protect_pse36: if (pte & rsvd_mask) { goto do_fault_rsvd; @@ -231,17 +246,17 @@ do_check_protect_pse36: goto do_fault_protect; } - *prot = 0; - if (mmu_idx != MMU_KSMAP_IDX || !(ptep & PG_USER_MASK)) { - *prot |= PAGE_READ; + int prot = 0; + if (in->mmu_idx != MMU_KSMAP_IDX || !(ptep & PG_USER_MASK)) { + prot |= PAGE_READ; if ((ptep & PG_RW_MASK) || !(is_user || (pg_mode & PG_MODE_WP))) { - *prot |= PAGE_WRITE; + prot |= PAGE_WRITE; } } if (!(ptep & PG_NX_MASK) && - (mmu_idx == MMU_USER_IDX || + (is_user || !((pg_mode & PG_MODE_SMEP) && (ptep & PG_USER_MASK)))) { - *prot |= PAGE_EXEC; + prot |= PAGE_EXEC; } if (ptep & PG_USER_MASK) { @@ -260,164 +275,151 @@ do_check_protect_pse36: } else if (pkr_wd && (is_user || (pg_mode & PG_MODE_WP))) { pkr_prot &= ~PAGE_WRITE; } - - *prot &= pkr_prot; if ((pkr_prot & (1 << access_type)) == 0) { - assert(access_type != MMU_INST_FETCH); - error_code |= PG_ERROR_PK_MASK; - goto do_fault_protect; + goto do_fault_pk_protect; } + prot &= pkr_prot; } - if ((*prot & (1 << access_type)) == 0) { + if ((prot & (1 << access_type)) == 0) { goto do_fault_protect; } /* yes, it can! */ - is_dirty = is_write && !(pte & PG_DIRTY_MASK); - if (!(pte & PG_ACCESSED_MASK) || is_dirty) { - pte |= PG_ACCESSED_MASK; - if (is_dirty) { - pte |= PG_DIRTY_MASK; + { + uint32_t set = PG_ACCESSED_MASK; + if (access_type == MMU_DATA_STORE) { + set |= PG_DIRTY_MASK; + } + if (set & ~pte) { + pte |= set; + x86_stl_phys_notdirty(cs, pte_addr, pte); } - x86_stl_phys_notdirty(cs, pte_addr, pte); } if (!(pte & PG_DIRTY_MASK)) { /* only set write access if already dirty... 
otherwise wait for dirty access */ - assert(!is_write); - *prot &= ~PAGE_WRITE; + assert(access_type != MMU_DATA_STORE); + prot &= ~PAGE_WRITE; } - - pte = pte & a20_mask; + out->prot = prot; + out->page_size = page_size; /* align to page_size */ - pte &= PG_ADDRESS_MASK & ~(*page_size - 1); - page_offset = addr & (*page_size - 1); - *xlat = GET_HPHYS(cs, pte + page_offset, access_type, prot); - return PG_ERROR_OK; + out->paddr = (pte & a20_mask & PG_ADDRESS_MASK & ~(page_size - 1)) + | (addr & (page_size - 1)); + out->paddr = GET_HPHYS(cs, out->paddr, access_type, &out->prot); + return true; + int error_code; do_fault_rsvd: - error_code |= PG_ERROR_RSVD_MASK; + error_code = PG_ERROR_RSVD_MASK; + goto do_fault_cont; do_fault_protect: - error_code |= PG_ERROR_P_MASK; + error_code = PG_ERROR_P_MASK; + goto do_fault_cont; + do_fault_pk_protect: + assert(access_type != MMU_INST_FETCH); + error_code = PG_ERROR_PK_MASK | PG_ERROR_P_MASK; + goto do_fault_cont; do_fault: - error_code |= (is_write << PG_ERROR_W_BIT); - if (is_user) + error_code = 0; + do_fault_cont: + if (is_user) { error_code |= PG_ERROR_U_MASK; - if (access_type == MMU_INST_FETCH && - ((pg_mode & PG_MODE_NXE) || (pg_mode & PG_MODE_SMEP))) - error_code |= PG_ERROR_I_D_MASK; - return error_code; + } + switch (access_type) { + case MMU_DATA_LOAD: + break; + case MMU_DATA_STORE: + error_code |= PG_ERROR_W_MASK; + break; + case MMU_INST_FETCH: + if (pg_mode & (PG_MODE_NXE | PG_MODE_SMEP)) { + error_code |= PG_ERROR_I_D_MASK; + } + break; + } + err->exception_index = EXCP0E_PAGE; + err->error_code = error_code; + err->cr2 = addr; + return false; } hwaddr get_hphys(CPUState *cs, hwaddr gphys, MMUAccessType access_type, - int *prot) + int *prot) { CPUX86State *env = &X86_CPU(cs)->env; - uint64_t exit_info_1; - int page_size; - int next_prot; - hwaddr hphys; if (likely(!(env->hflags2 & HF2_NPT_MASK))) { return gphys; - } + } else { + TranslateParams in = { + .addr = gphys, + .cr3 = env->nested_cr3, + .pg_mode = env->nested_pg_mode, + .mmu_idx = MMU_USER_IDX, + .access_type = access_type, + .use_stage2 = false, + }; + TranslateResult out; + TranslateFault err; + uint64_t exit_info_1; - exit_info_1 = mmu_translate(cs, gphys, false, env->nested_cr3, - access_type, MMU_USER_IDX, env->nested_pg_mode, - &hphys, &page_size, &next_prot); - if (exit_info_1 == PG_ERROR_OK) { - if (prot) { - *prot &= next_prot; + if (mmu_translate(env, &in, &out, &err)) { + if (prot) { + *prot &= out.prot; + } + return out.paddr; } - return hphys; - } - x86_stq_phys(cs, env->vm_vmcb + offsetof(struct vmcb, control.exit_info_2), - gphys); - if (prot) { - exit_info_1 |= SVM_NPTEXIT_GPA; - } else { /* page table access */ - exit_info_1 |= SVM_NPTEXIT_GPT; + x86_stq_phys(cs, env->vm_vmcb + + offsetof(struct vmcb, control.exit_info_2), gphys); + exit_info_1 = err.error_code + | (prot ? 
SVM_NPTEXIT_GPA : SVM_NPTEXIT_GPT); + cpu_vmexit(env, SVM_EXIT_NPF, exit_info_1, env->retaddr); } - cpu_vmexit(env, SVM_EXIT_NPF, exit_info_1, env->retaddr); } -/* return value: - * -1 = cannot handle fault - * 0 = nothing more to do - * 1 = generate PF fault - */ -static int handle_mmu_fault(CPUState *cs, vaddr addr, int size, - MMUAccessType access_type, int mmu_idx) +static bool get_physical_address(CPUX86State *env, vaddr addr, + MMUAccessType access_type, int mmu_idx, + TranslateResult *out, TranslateFault *err) { - X86CPU *cpu = X86_CPU(cs); - CPUX86State *env = &cpu->env; - int error_code = PG_ERROR_OK; - int pg_mode, prot, page_size; - int32_t a20_mask; - hwaddr paddr; - hwaddr vaddr; - -#if defined(DEBUG_MMU) - printf("MMU fault: addr=%" VADDR_PRIx " w=%d mmu=%d eip=" TARGET_FMT_lx "\n", - addr, access_type, mmu_idx, env->eip); -#endif - if (!(env->cr[0] & CR0_PG_MASK)) { - a20_mask = x86_get_a20_mask(env); - paddr = addr & a20_mask; + out->paddr = addr & x86_get_a20_mask(env); + #ifdef TARGET_X86_64 if (!(env->hflags & HF_LMA_MASK)) { /* Without long mode we can only address 32bits in real mode */ - paddr = (uint32_t)paddr; + out->paddr = (uint32_t)out->paddr; } #endif - prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; - page_size = 4096; + out->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; + out->page_size = TARGET_PAGE_SIZE; + return true; } else { - pg_mode = get_pg_mode(env); - if (pg_mode & PG_MODE_LMA) { - int32_t sext; + TranslateParams in = { + .addr = addr, + .cr3 = env->cr[3], + .pg_mode = get_pg_mode(env), + .mmu_idx = mmu_idx, + .access_type = access_type, + .use_stage2 = true + }; + if (in.pg_mode & PG_MODE_LMA) { /* test virtual address sign extension */ - sext = (int64_t)addr >> (pg_mode & PG_MODE_LA57 ? 56 : 47); + int shift = in.pg_mode & PG_MODE_LA57 ? 56 : 47; + int64_t sext = (int64_t)addr >> shift; if (sext != 0 && sext != -1) { - env->error_code = 0; - cs->exception_index = EXCP0D_GPF; - return 1; + err->exception_index = EXCP0D_GPF; + err->error_code = 0; + err->cr2 = addr; + return false; } } - - error_code = mmu_translate(cs, addr, true, env->cr[3], access_type, - mmu_idx, pg_mode, - &paddr, &page_size, &prot); - } - - if (error_code == PG_ERROR_OK) { - /* Even if 4MB pages, we map only one 4KB page in the cache to - avoid filling it too fast */ - vaddr = addr & TARGET_PAGE_MASK; - paddr &= TARGET_PAGE_MASK; - - assert(prot & (1 << access_type)); - tlb_set_page_with_attrs(cs, vaddr, paddr, cpu_get_mem_attrs(env), - prot, mmu_idx, page_size); - return 0; - } else { - if (env->intercept_exceptions & (1 << EXCP0E_PAGE)) { - /* cr2 is not modified in case of exceptions */ - x86_stq_phys(cs, - env->vm_vmcb + offsetof(struct vmcb, control.exit_info_2), - addr); - } else { - env->cr[2] = addr; - } - env->error_code = error_code; - cs->exception_index = EXCP0E_PAGE; - return 1; + return mmu_translate(env, &in, out, err); } } @@ -425,15 +427,33 @@ bool x86_cpu_tlb_fill(CPUState *cs, vaddr addr, int size, MMUAccessType access_type, int mmu_idx, bool probe, uintptr_t retaddr) { - X86CPU *cpu = X86_CPU(cs); - CPUX86State *env = &cpu->env; + CPUX86State *env = cs->env_ptr; + TranslateResult out; + TranslateFault err; - env->retaddr = retaddr; - if (handle_mmu_fault(cs, addr, size, access_type, mmu_idx)) { - /* FIXME: On error in get_hphys we have already jumped out. 
*/ - g_assert(!probe); - raise_exception_err_ra(env, cs->exception_index, - env->error_code, retaddr); + if (get_physical_address(env, addr, access_type, mmu_idx, &out, &err)) { + /* + * Even if 4MB pages, we map only one 4KB page in the cache to + * avoid filling it too fast. + */ + assert(out.prot & (1 << access_type)); + tlb_set_page_with_attrs(cs, addr & TARGET_PAGE_MASK, + out.paddr & TARGET_PAGE_MASK, + cpu_get_mem_attrs(env), + out.prot, mmu_idx, out.page_size); + return true; } - return true; + + /* FIXME: On error in get_hphys we have already jumped out. */ + g_assert(!probe); + + if (env->intercept_exceptions & (1 << err.exception_index)) { + /* cr2 is not modified in case of exceptions */ + x86_stq_phys(cs, env->vm_vmcb + + offsetof(struct vmcb, control.exit_info_2), + err.cr2); + } else { + env->cr[2] = err.cr2; + } + raise_exception_err_ra(env, err.exception_index, err.error_code, retaddr); } From patchwork Mon Aug 22 23:57:59 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 599442 Delivered-To: patch@linaro.org Received: by 2002:a05:7000:4388:0:0:0:0 with SMTP id w8csp2047624mae; Mon, 22 Aug 2022 17:03:31 -0700 (PDT) X-Google-Smtp-Source: AA6agR5lLoQcV1KU/cYMbo7w+YAXhLMUylwHbdnEm8t1x91/Chuypj+RNr3fRBi+CMZVnU32A3j5 X-Received: by 2002:a05:622a:514:b0:343:555a:d611 with SMTP id l20-20020a05622a051400b00343555ad611mr17433309qtx.486.1661213010972; Mon, 22 Aug 2022 17:03:30 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1661213010; cv=none; d=google.com; s=arc-20160816; b=Ox9VHXGMlDK++KPs407+T+T+yYTtOz0H70T+lWOoeb41Kz+Rx96PPmCrEGyddytPzK CuRGEBV1oY/AJo67IVFKGqscZIrza/xENDx6jfRjEJ45By8OsP8fE42+E9jlO1gCA9jT 1IC2yCMFwYaJSoBGlo0sU3jtKaaKGFk7yHU9VuTcupb7RFQjCOdxTkGC4ujQ266pGSX6 8yX3MP1DWW9gFmT2iX4eW6jcsdna78jsLTfXzrblQFPrk+frXVy0TXW80SaXIpCPIvKz KT1tVRWyypmumGZg6HDXl2slHauuxIVRWDxN9hJGBTMXYYZwLSCFFv0bqrKxQUtL7lVP efkA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=wBA5wtnterigVJfmKiwsv2bUvfjFDn8Z+n998PE2YkE=; b=En8WGVeS+wexGzqDS/wscErDJjU9yuCe8SO25uj4Vkp4s9RERquPClCe+G5v7DbMxX v2pY5RoIRxiINiFijwwGRbhP/9bpRO1MmccTSWk4EotCtE5YX6smEw7YqTbbRH3S3A5R vn6UWcAMP4Mi8Oh5PR55uGS0joEl/stZp+52Dxtc5WoI0z57dFyUOJRxkMZbkZ0sFTo+ mhXaD7Fll3rxtGl6P9u59cmat5YMwsmhmbHSTSMPr5kpgL+hYHOFxgZO9tGcQVeiKBTx enqlmqfk6tNkxIw6IOpw7r4VeF+HuDF+BG50AFjTKaSL441VpnWbxnkgcvmqJKpLhPx+ nKLA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=jBDKmtXz; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
([2602:47:d49d:ec01:c3f1:b74f:5edd:63af]) by smtp.gmail.com with ESMTPSA id w190-20020a6230c7000000b0052d52de6726sm9173159pfw.124.2022.08.22.16.58.12 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 22 Aug 2022 16:58:13 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: pbonzini@redhat.com, eduardo@habkost.net Subject: [PATCH 10/14] target/i386: Reorg GET_HPHYS Date: Mon, 22 Aug 2022 16:57:59 -0700 Message-Id: <20220822235803.1729290-11-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220822235803.1729290-1-richard.henderson@linaro.org> References: <20220822235803.1729290-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2607:f8b0:4864:20::531; envelope-from=richard.henderson@linaro.org; helo=mail-pg1-x531.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" Replace with PTE_HPHYS for the page table walk, and a direct call to mmu_translate for the final stage2 translation. Hoist the check for HF2_NPT_MASK out to get_physical_address, which avoids the recursive call when stage2 is disabled. We can now return all the way out to x86_cpu_tlb_fill before raising an exception, which means probe works. Signed-off-by: Richard Henderson --- target/i386/tcg/sysemu/excp_helper.c | 123 +++++++++++++++++++++------ 1 file changed, 95 insertions(+), 28 deletions(-) diff --git a/target/i386/tcg/sysemu/excp_helper.c b/target/i386/tcg/sysemu/excp_helper.c index a6b7562bf3..e9adaa3dad 100644 --- a/target/i386/tcg/sysemu/excp_helper.c +++ b/target/i386/tcg/sysemu/excp_helper.c @@ -37,18 +37,43 @@ typedef struct TranslateResult { int page_size; } TranslateResult; +typedef enum TranslateFaultStage2 { + S2_NONE, + S2_GPA, + S2_GPT, +} TranslateFaultStage2; + typedef struct TranslateFault { int exception_index; int error_code; target_ulong cr2; + TranslateFaultStage2 stage2; } TranslateFault; -#define GET_HPHYS(cs, gpa, access_type, prot) \ - (in->use_stage2 ? get_hphys(cs, gpa, access_type, prot) : gpa) +#define PTE_HPHYS(ADDR) \ + do { \ + if (in->use_stage2) { \ + nested_in.addr = (ADDR); \ + if (!mmu_translate(env, &nested_in, out, err)) { \ + err->stage2 = S2_GPT; \ + return false; \ + } \ + (ADDR) = out->paddr; \ + } \ + } while (0) static bool mmu_translate(CPUX86State *env, const TranslateParams *in, TranslateResult *out, TranslateFault *err) { + TranslateParams nested_in = { + /* Use store for page table entries, to allow A/D flag updates. 
*/ + .access_type = MMU_DATA_STORE, + .cr3 = env->nested_cr3, + .pg_mode = env->nested_pg_mode, + .mmu_idx = MMU_USER_IDX, + .use_stage2 = false, + }; + CPUState *cs = env_cpu(env); X86CPU *cpu = env_archcpu(env); const int32_t a20_mask = x86_get_a20_mask(env); @@ -79,7 +104,7 @@ static bool mmu_translate(CPUX86State *env, const TranslateParams *in, if (la57) { pml5e_addr = ((in->cr3 & ~0xfff) + (((addr >> 48) & 0x1ff) << 3)) & a20_mask; - pml5e_addr = GET_HPHYS(cs, pml5e_addr, MMU_DATA_STORE, NULL); + PTE_HPHYS(pml5e_addr); pml5e = x86_ldq_phys(cs, pml5e_addr); if (!(pml5e & PG_PRESENT_MASK)) { goto do_fault; @@ -99,7 +124,7 @@ static bool mmu_translate(CPUX86State *env, const TranslateParams *in, pml4e_addr = ((pml5e & PG_ADDRESS_MASK) + (((addr >> 39) & 0x1ff) << 3)) & a20_mask; - pml4e_addr = GET_HPHYS(cs, pml4e_addr, MMU_DATA_STORE, NULL); + PTE_HPHYS(pml4e_addr); pml4e = x86_ldq_phys(cs, pml4e_addr); if (!(pml4e & PG_PRESENT_MASK)) { goto do_fault; @@ -114,7 +139,7 @@ static bool mmu_translate(CPUX86State *env, const TranslateParams *in, ptep &= pml4e ^ PG_NX_MASK; pdpe_addr = ((pml4e & PG_ADDRESS_MASK) + (((addr >> 30) & 0x1ff) << 3)) & a20_mask; - pdpe_addr = GET_HPHYS(cs, pdpe_addr, MMU_DATA_STORE, NULL); + PTE_HPHYS(pdpe_addr); pdpe = x86_ldq_phys(cs, pdpe_addr); if (!(pdpe & PG_PRESENT_MASK)) { goto do_fault; @@ -140,7 +165,7 @@ static bool mmu_translate(CPUX86State *env, const TranslateParams *in, /* XXX: load them when cr3 is loaded ? */ pdpe_addr = ((in->cr3 & ~0x1f) + ((addr >> 27) & 0x18)) & a20_mask; - pdpe_addr = GET_HPHYS(cs, pdpe_addr, MMU_DATA_STORE, NULL); + PTE_HPHYS(pdpe_addr); pdpe = x86_ldq_phys(cs, pdpe_addr); if (!(pdpe & PG_PRESENT_MASK)) { goto do_fault; @@ -154,7 +179,7 @@ static bool mmu_translate(CPUX86State *env, const TranslateParams *in, pde_addr = ((pdpe & PG_ADDRESS_MASK) + (((addr >> 21) & 0x1ff) << 3)) & a20_mask; - pde_addr = GET_HPHYS(cs, pde_addr, MMU_DATA_STORE, NULL); + PTE_HPHYS(pde_addr); pde = x86_ldq_phys(cs, pde_addr); if (!(pde & PG_PRESENT_MASK)) { goto do_fault; @@ -177,7 +202,7 @@ static bool mmu_translate(CPUX86State *env, const TranslateParams *in, } pte_addr = ((pde & PG_ADDRESS_MASK) + (((addr >> 12) & 0x1ff) << 3)) & a20_mask; - pte_addr = GET_HPHYS(cs, pte_addr, MMU_DATA_STORE, NULL); + PTE_HPHYS(pte_addr); pte = x86_ldq_phys(cs, pte_addr); if (!(pte & PG_PRESENT_MASK)) { goto do_fault; @@ -194,7 +219,7 @@ static bool mmu_translate(CPUX86State *env, const TranslateParams *in, /* page directory entry */ pde_addr = ((in->cr3 & ~0xfff) + ((addr >> 20) & 0xffc)) & a20_mask; - pde_addr = GET_HPHYS(cs, pde_addr, MMU_DATA_STORE, NULL); + PTE_HPHYS(pde_addr); pde = x86_ldl_phys(cs, pde_addr); if (!(pde & PG_PRESENT_MASK)) { goto do_fault; @@ -222,7 +247,7 @@ static bool mmu_translate(CPUX86State *env, const TranslateParams *in, /* page directory entry */ pte_addr = ((pde & ~0xfff) + ((addr >> 10) & 0xffc)) & a20_mask; - pte_addr = GET_HPHYS(cs, pte_addr, MMU_DATA_STORE, NULL); + PTE_HPHYS(pte_addr); pte = x86_ldl_phys(cs, pte_addr); if (!(pte & PG_PRESENT_MASK)) { goto do_fault; @@ -303,13 +328,31 @@ do_check_protect_pse36: assert(access_type != MMU_DATA_STORE); prot &= ~PAGE_WRITE; } - out->prot = prot; - out->page_size = page_size; /* align to page_size */ out->paddr = (pte & a20_mask & PG_ADDRESS_MASK & ~(page_size - 1)) | (addr & (page_size - 1)); - out->paddr = GET_HPHYS(cs, out->paddr, access_type, &out->prot); + + if (in->use_stage2) { + nested_in.addr = out->paddr; + nested_in.access_type = access_type; + + if (!mmu_translate(env, 
&nested_in, out, err)) { + err->stage2 = S2_GPA; + return false; + } + + /* Merge stage1 & stage2 protection bits. */ + prot &= out->prot; + + /* Re-verify resulting protection. */ + if ((prot & (1 << access_type)) == 0) { + goto do_fault_protect; + } + } + + out->prot = prot; + out->page_size = page_size; return true; int error_code; @@ -344,13 +387,36 @@ do_check_protect_pse36: err->exception_index = EXCP0E_PAGE; err->error_code = error_code; err->cr2 = addr; + err->stage2 = S2_NONE; return false; } +static G_NORETURN void raise_stage2(CPUX86State *env, TranslateFault *err, + uintptr_t retaddr) +{ + uint64_t exit_info_1 = err->error_code; + + switch (err->stage2) { + case S2_GPT: + exit_info_1 |= SVM_NPTEXIT_GPT; + break; + case S2_GPA: + exit_info_1 |= SVM_NPTEXIT_GPA; + break; + default: + g_assert_not_reached(); + } + + x86_stq_phys(env_cpu(env), + env->vm_vmcb + offsetof(struct vmcb, control.exit_info_2), + err->cr2); + cpu_vmexit(env, SVM_EXIT_NPF, exit_info_1, retaddr); +} + hwaddr get_hphys(CPUState *cs, hwaddr gphys, MMUAccessType access_type, int *prot) { - CPUX86State *env = &X86_CPU(cs)->env; + CPUX86State *env = cs->env_ptr; if (likely(!(env->hflags2 & HF2_NPT_MASK))) { return gphys; @@ -365,20 +431,16 @@ hwaddr get_hphys(CPUState *cs, hwaddr gphys, MMUAccessType access_type, }; TranslateResult out; TranslateFault err; - uint64_t exit_info_1; - if (mmu_translate(env, &in, &out, &err)) { - if (prot) { - *prot &= out.prot; - } - return out.paddr; + if (!mmu_translate(env, &in, &out, &err)) { + err.stage2 = prot ? SVM_NPTEXIT_GPA : SVM_NPTEXIT_GPT; + raise_stage2(env, &err, env->retaddr); } - x86_stq_phys(cs, env->vm_vmcb + - offsetof(struct vmcb, control.exit_info_2), gphys); - exit_info_1 = err.error_code - | (prot ? SVM_NPTEXIT_GPA : SVM_NPTEXIT_GPT); - cpu_vmexit(env, SVM_EXIT_NPF, exit_info_1, env->retaddr); + if (prot) { + *prot &= out.prot; + } + return out.paddr; } } @@ -405,7 +467,7 @@ static bool get_physical_address(CPUX86State *env, vaddr addr, .pg_mode = get_pg_mode(env), .mmu_idx = mmu_idx, .access_type = access_type, - .use_stage2 = true + .use_stage2 = env->hflags2 & HF2_NPT_MASK, }; if (in.pg_mode & PG_MODE_LMA) { @@ -444,8 +506,13 @@ bool x86_cpu_tlb_fill(CPUState *cs, vaddr addr, int size, return true; } - /* FIXME: On error in get_hphys we have already jumped out. 
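As an illustrative sketch only (not part of the patch): because mmu_translate() and get_physical_address() now report failures through TranslateFault instead of raising the exception themselves, a hypothetical caller can probe a translation without unwinding the stack. The helper name probe_translation below is invented; the types and the get_physical_address() signature are the ones introduced above.

    /* Hypothetical helper built on the interfaces introduced above. */
    static bool probe_translation(CPUX86State *env, vaddr addr,
                                  MMUAccessType access_type, int mmu_idx,
                                  hwaddr *phys_out)
    {
        TranslateResult out;
        TranslateFault err;

        if (get_physical_address(env, addr, access_type, mmu_idx, &out, &err)) {
            *phys_out = out.paddr;
            return true;
        }
        /* No exception has been raised; the fault details are left in 'err'. */
        return false;
    }
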
*/ - g_assert(!probe); + if (probe) { + return false; + } + + if (err.stage2 != S2_NONE) { + raise_stage2(env, &err, retaddr); + } if (env->intercept_exceptions & (1 << err.exception_index)) { /* cr2 is not modified in case of exceptions */ From patchwork Mon Aug 22 23:58:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 599449 Delivered-To: patch@linaro.org Received: by 2002:a05:7000:4388:0:0:0:0 with SMTP id w8csp2052931mae; Mon, 22 Aug 2022 17:10:40 -0700 (PDT) X-Google-Smtp-Source: AA6agR4UcSVIkmPvHZYvz4HkvtzcvmtwfYXbUcotrRqige8lHGM+w+k1ScUUybAfBqiJYyq2eL4x X-Received: by 2002:a05:6214:b6e:b0:496:c4de:c014 with SMTP id ey14-20020a0562140b6e00b00496c4dec014mr14158328qvb.23.1661213440488; Mon, 22 Aug 2022 17:10:40 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1661213440; cv=none; d=google.com; s=arc-20160816; b=mRbmtqpv3EkX0uQH9OiVP4QuHzp5SITkUVaTBDHRIzWndZM1aSi7I5sOxHorrdV7Me jtkqbugfhZ8wDtOI8+ZWsHyASciscGZ+ZuRojDYhBfXX7hHQ/UVGrbPpIXT6iiK+Fdiq 1yx9d/M/jj2tIzHzUtOxLKEVwcQAgorlr8lCLY2NqlpIC9U2mOX/Z5BGVM//HyPA54PK o46cc+gl2RlPt26l1fmhGAY8nw215iWafVKeWNE0gykcGq9lNZbH5LlycXFciI/p/Oqk gDe3fbLzhNQudA7TzJ7X/uDZnJyF+dfcB6pj5F84LZzHxRa5OD4XX/pgsADeIrePb3DL Y4bw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=cjbTu/Sdr1139wmrpq5UwqU/XMgo6vOar1YPdcE0zRo=; b=jPqvoHnQtu6GobpKwv56AzzjFENFmRFuwgPo6csQEmbh7NhvQyxjuVekndpwCxXrCw xShhHRO5cT7oRl3Snnpla02t56rldEu149z7GfFmtcu83qG1TCjtMc2deOiLgIPXhHNw V5QYdbSV4EhtvJE1UGvGv4u3r6cD7Z4zmDgUA3uj7yHQ98HTh3c+wm8aCffwo9F0JmzS hX5C31gvjbD/aPTuHZbewM/34U+venNLyboZQy7GsU4KRIuvl2GL8LK0mX011Npglt1F cBS4XL75ZUunNpWyq64d8J5nlhbik+Hib0jOJNQuy8o3AypcILu86qdP9U+Wcf7kY87A bnkA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=Gph8ef4B; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
([2602:47:d49d:ec01:c3f1:b74f:5edd:63af]) by smtp.gmail.com with ESMTPSA id w190-20020a6230c7000000b0052d52de6726sm9173159pfw.124.2022.08.22.16.58.13 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 22 Aug 2022 16:58:13 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: pbonzini@redhat.com, eduardo@habkost.net Subject: [PATCH 11/14] target/i386: Add MMU_PHYS_IDX and MMU_NESTED_IDX Date: Mon, 22 Aug 2022 16:58:00 -0700 Message-Id: <20220822235803.1729290-12-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220822235803.1729290-1-richard.henderson@linaro.org> References: <20220822235803.1729290-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2607:f8b0:4864:20::62e; envelope-from=richard.henderson@linaro.org; helo=mail-pl1-x62e.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" These new mmu indexes will be helpful for improving paging and code throughout the target. Signed-off-by: Richard Henderson --- target/i386/cpu-param.h | 2 +- target/i386/cpu.h | 3 + target/i386/tcg/sysemu/excp_helper.c | 82 ++++++++++++++++++---------- target/i386/tcg/sysemu/svm_helper.c | 3 + 4 files changed, 60 insertions(+), 30 deletions(-) diff --git a/target/i386/cpu-param.h b/target/i386/cpu-param.h index 9740bd7abd..abad52af20 100644 --- a/target/i386/cpu-param.h +++ b/target/i386/cpu-param.h @@ -23,6 +23,6 @@ # define TARGET_VIRT_ADDR_SPACE_BITS 32 #endif #define TARGET_PAGE_BITS 12 -#define NB_MMU_MODES 3 +#define NB_MMU_MODES 5 #endif diff --git a/target/i386/cpu.h b/target/i386/cpu.h index 82004b65b9..9a40b54ae5 100644 --- a/target/i386/cpu.h +++ b/target/i386/cpu.h @@ -2144,6 +2144,9 @@ uint64_t cpu_get_tsc(CPUX86State *env); #define MMU_KSMAP_IDX 0 #define MMU_USER_IDX 1 #define MMU_KNOSMAP_IDX 2 +#define MMU_NESTED_IDX 3 +#define MMU_PHYS_IDX 4 + static inline int cpu_mmu_index(CPUX86State *env, bool ifetch) { return (env->hflags & HF_CPL_MASK) == 3 ? 
MMU_USER_IDX : diff --git a/target/i386/tcg/sysemu/excp_helper.c b/target/i386/tcg/sysemu/excp_helper.c index e9adaa3dad..f771d25f32 100644 --- a/target/i386/tcg/sysemu/excp_helper.c +++ b/target/i386/tcg/sysemu/excp_helper.c @@ -448,41 +448,65 @@ static bool get_physical_address(CPUX86State *env, vaddr addr, MMUAccessType access_type, int mmu_idx, TranslateResult *out, TranslateFault *err) { - if (!(env->cr[0] & CR0_PG_MASK)) { - out->paddr = addr & x86_get_a20_mask(env); + TranslateParams in; + bool use_stage2 = env->hflags2 & HF2_NPT_MASK; -#ifdef TARGET_X86_64 - if (!(env->hflags & HF_LMA_MASK)) { - /* Without long mode we can only address 32bits in real mode */ - out->paddr = (uint32_t)out->paddr; - } -#endif - out->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; - out->page_size = TARGET_PAGE_SIZE; - return true; - } else { - TranslateParams in = { - .addr = addr, - .cr3 = env->cr[3], - .pg_mode = get_pg_mode(env), - .mmu_idx = mmu_idx, - .access_type = access_type, - .use_stage2 = env->hflags2 & HF2_NPT_MASK, - }; + in.addr = addr; + in.access_type = access_type; - if (in.pg_mode & PG_MODE_LMA) { - /* test virtual address sign extension */ - int shift = in.pg_mode & PG_MODE_LA57 ? 56 : 47; - int64_t sext = (int64_t)addr >> shift; - if (sext != 0 && sext != -1) { - err->exception_index = EXCP0D_GPF; - err->error_code = 0; - err->cr2 = addr; + switch (mmu_idx) { + case MMU_PHYS_IDX: + break; + + case MMU_NESTED_IDX: + if (likely(use_stage2)) { + in.cr3 = env->nested_cr3; + in.pg_mode = env->nested_pg_mode; + in.mmu_idx = MMU_USER_IDX; + in.use_stage2 = false; + + if (!mmu_translate(env, &in, out, err)) { + err->stage2 = S2_GPA; return false; } + return true; } - return mmu_translate(env, &in, out, err); + break; + + default: + in.cr3 = env->cr[3]; + in.mmu_idx = mmu_idx; + in.use_stage2 = use_stage2; + in.pg_mode = get_pg_mode(env); + + if (likely(in.pg_mode)) { + if (in.pg_mode & PG_MODE_LMA) { + /* test virtual address sign extension */ + int shift = in.pg_mode & PG_MODE_LA57 ? 56 : 47; + int64_t sext = (int64_t)addr >> shift; + if (sext != 0 && sext != -1) { + err->exception_index = EXCP0D_GPF; + err->error_code = 0; + err->cr2 = addr; + return false; + } + } + return mmu_translate(env, &in, out, err); + } + break; } + + /* Translation disabled. 
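To make the intended use of the two new indexes concrete, a small sketch (the helper name read_vmcb_u64 is invented, and keying on HF2_NPT_MASK here is a simplification for illustration; cpu_ldq_mmuidx_ra with these arguments is what the later patches in this series use): a VMCB field can be read either as raw guest-physical memory or through the nested page table simply by picking the index.

    /* Invented helper: read a 64-bit VMCB field through the chosen mmu index. */
    static uint64_t read_vmcb_u64(CPUX86State *env, target_ulong vmcb, size_t ofs)
    {
        int mmu_idx = (env->hflags2 & HF2_NPT_MASK) ? MMU_NESTED_IDX
                                                    : MMU_PHYS_IDX;
        /* retaddr 0: this sketch is not called from translated code. */
        return cpu_ldq_mmuidx_ra(env, vmcb + ofs, mmu_idx, 0);
    }
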
*/ + out->paddr = addr & x86_get_a20_mask(env); +#ifdef TARGET_X86_64 + if (!(env->hflags & HF_LMA_MASK)) { + /* Without long mode we can only address 32bits in real mode */ + out->paddr = (uint32_t)out->paddr; + } +#endif + out->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; + out->page_size = TARGET_PAGE_SIZE; + return true; } bool x86_cpu_tlb_fill(CPUState *cs, vaddr addr, int size, diff --git a/target/i386/tcg/sysemu/svm_helper.c b/target/i386/tcg/sysemu/svm_helper.c index 2b6f450af9..85b7741d94 100644 --- a/target/i386/tcg/sysemu/svm_helper.c +++ b/target/i386/tcg/sysemu/svm_helper.c @@ -271,6 +271,8 @@ void helper_vmrun(CPUX86State *env, int aflag, int next_eip_addend) env->hflags2 |= HF2_NPT_MASK; env->nested_pg_mode = get_pg_mode(env) & PG_MODE_SVM_MASK; + + tlb_flush_by_mmuidx(cs, 1 << MMU_NESTED_IDX); } /* enable intercepts */ @@ -720,6 +722,7 @@ void do_vmexit(CPUX86State *env) env->vm_vmcb + offsetof(struct vmcb, control.int_state), 0); } env->hflags2 &= ~HF2_NPT_MASK; + tlb_flush_by_mmuidx(cs, 1 << MMU_NESTED_IDX); /* Save the VM state in the vmcb */ svm_save_seg(env, env->vm_vmcb + offsetof(struct vmcb, save.es), From patchwork Mon Aug 22 23:58:01 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 599457 Delivered-To: patch@linaro.org Received: by 2002:a05:7000:4388:0:0:0:0 with SMTP id w8csp2060810mae; Mon, 22 Aug 2022 17:23:46 -0700 (PDT) X-Google-Smtp-Source: AA6agR5mIO78/YuG6USd6qu8XBbHztkc5I9+Ht/clwdz7s3rc+gcVwghQX9pzniQvCUpfoEDHaOB X-Received: by 2002:a37:2790:0:b0:6bb:cf35:8544 with SMTP id n138-20020a372790000000b006bbcf358544mr12462408qkn.581.1661214226109; Mon, 22 Aug 2022 17:23:46 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1661214226; cv=none; d=google.com; s=arc-20160816; b=m67TcFSlqo4PmrgZStxyan/dc/3R3oueyfqX9JxJgMdoIZHjZD8Q2AdfXgR/V5xVrK CCLZV99hzMH8XIZo78DomEK6o+zbwSMaoa0GVfvwghiTr+V5kvDEl+9t+xo+NOf140Rx tMVFD6PJGhS/5jpY8CBxFYvwo6Bi4hxVTTgFXyjsBw375xf8JlxU1Z+IVwC6RokLl31N ZT1D1S9tDTaFL1o7LtVF6OIsNip162ORoVcnHJCb+QP76W0bMtZtgVbOxXoXUpFpkWiM hgn8DgadpKn/2ZgGM+cUTvykSbFFohM7tytGFB7ErKVgrEX++BiTpNtN/hX9kJZFrX3J lsgw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=ZuWAoydCsbhGhXC27sk5LZjx46/nNcBzVoMelGwV460=; b=bCPkBF2xFTGV9c+cjGOEbZeSW/skAIiUmfF5n0PrDyuJrVBQ3oPNAGKAqKbDjti2vT SFWk71FTSHHf8s7axntwQ9PIF5H9DFjj3gQxt8JLO6m3kIAooy7ZqXMIu543WqwOR+NB 1y/zKDhZZ5b+Wvf4ah7U9D80H0yKjbx9U+d+PZVw+vlKXwoixoR+atPR08ul4WclqyRP 2IP/n7KDQSiI/QjZV9RpLLJfgUiD6xJ1ClIGbVdgEHkJZ5RsZS+BY48iZ1UEeJzbaXHB upbE0bvCi1WTSpXoTTV2CQhGv3E8A/f76o5mygcjB+qEgnRXAzKuQf7e2K2ZKAEH9byR UJcA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=osWygQU8; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
([2602:47:d49d:ec01:c3f1:b74f:5edd:63af]) by smtp.gmail.com with ESMTPSA id w190-20020a6230c7000000b0052d52de6726sm9173159pfw.124.2022.08.22.16.58.14 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 22 Aug 2022 16:58:14 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: pbonzini@redhat.com, eduardo@habkost.net Subject: [PATCH 12/14] target/i386: Use MMU_NESTED_IDX for vmload/vmsave Date: Mon, 22 Aug 2022 16:58:01 -0700 Message-Id: <20220822235803.1729290-13-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220822235803.1729290-1-richard.henderson@linaro.org> References: <20220822235803.1729290-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2607:f8b0:4864:20::430; envelope-from=richard.henderson@linaro.org; helo=mail-pf1-x430.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" Use MMU_NESTED_IDX for each memory access, rather than just a single translation to physical. Adjust svm_save_seg and svm_load_seg to pass in mmu_idx. This removes the last use of get_hphys so remove it. Signed-off-by: Richard Henderson --- target/i386/cpu.h | 2 - target/i386/tcg/sysemu/excp_helper.c | 31 ---- target/i386/tcg/sysemu/svm_helper.c | 231 +++++++++++++++------------ 3 files changed, 126 insertions(+), 138 deletions(-) diff --git a/target/i386/cpu.h b/target/i386/cpu.h index 9a40b54ae5..10a5e79774 100644 --- a/target/i386/cpu.h +++ b/target/i386/cpu.h @@ -2381,8 +2381,6 @@ static inline bool ctl_has_irq(CPUX86State *env) return (env->int_ctl & V_IRQ_MASK) && (int_prio >= tpr); } -hwaddr get_hphys(CPUState *cs, hwaddr gphys, MMUAccessType access_type, - int *prot); #if defined(TARGET_X86_64) && \ defined(CONFIG_USER_ONLY) && \ defined(CONFIG_LINUX) diff --git a/target/i386/tcg/sysemu/excp_helper.c b/target/i386/tcg/sysemu/excp_helper.c index f771d25f32..11f64e5965 100644 --- a/target/i386/tcg/sysemu/excp_helper.c +++ b/target/i386/tcg/sysemu/excp_helper.c @@ -413,37 +413,6 @@ static G_NORETURN void raise_stage2(CPUX86State *env, TranslateFault *err, cpu_vmexit(env, SVM_EXIT_NPF, exit_info_1, retaddr); } -hwaddr get_hphys(CPUState *cs, hwaddr gphys, MMUAccessType access_type, - int *prot) -{ - CPUX86State *env = cs->env_ptr; - - if (likely(!(env->hflags2 & HF2_NPT_MASK))) { - return gphys; - } else { - TranslateParams in = { - .addr = gphys, - .cr3 = env->nested_cr3, - .pg_mode = env->nested_pg_mode, - .mmu_idx = MMU_USER_IDX, - .access_type = access_type, - .use_stage2 = false, - }; - TranslateResult out; - TranslateFault err; - - if (!mmu_translate(env, &in, &out, &err)) { - err.stage2 = prot ? 
SVM_NPTEXIT_GPA : SVM_NPTEXIT_GPT; - raise_stage2(env, &err, env->retaddr); - } - - if (prot) { - *prot &= out.prot; - } - return out.paddr; - } -} - static bool get_physical_address(CPUX86State *env, vaddr addr, MMUAccessType access_type, int mmu_idx, TranslateResult *out, TranslateFault *err) diff --git a/target/i386/tcg/sysemu/svm_helper.c b/target/i386/tcg/sysemu/svm_helper.c index 85b7741d94..8e88567399 100644 --- a/target/i386/tcg/sysemu/svm_helper.c +++ b/target/i386/tcg/sysemu/svm_helper.c @@ -27,19 +27,19 @@ /* Secure Virtual Machine helpers */ -static inline void svm_save_seg(CPUX86State *env, hwaddr addr, - const SegmentCache *sc) +static void svm_save_seg(CPUX86State *env, int mmu_idx, hwaddr addr, + const SegmentCache *sc) { - CPUState *cs = env_cpu(env); - - x86_stw_phys(cs, addr + offsetof(struct vmcb_seg, selector), - sc->selector); - x86_stq_phys(cs, addr + offsetof(struct vmcb_seg, base), - sc->base); - x86_stl_phys(cs, addr + offsetof(struct vmcb_seg, limit), - sc->limit); - x86_stw_phys(cs, addr + offsetof(struct vmcb_seg, attrib), - ((sc->flags >> 8) & 0xff) | ((sc->flags >> 12) & 0x0f00)); + cpu_stw_mmuidx_ra(env, addr + offsetof(struct vmcb_seg, selector), + sc->selector, mmu_idx, 0); + cpu_stq_mmuidx_ra(env, addr + offsetof(struct vmcb_seg, base), + sc->base, mmu_idx, 0); + cpu_stl_mmuidx_ra(env, addr + offsetof(struct vmcb_seg, limit), + sc->limit, mmu_idx, 0); + cpu_stw_mmuidx_ra(env, addr + offsetof(struct vmcb_seg, attrib), + ((sc->flags >> 8) & 0xff) + | ((sc->flags >> 12) & 0x0f00), + mmu_idx, 0); } /* @@ -52,29 +52,36 @@ static inline void svm_canonicalization(CPUX86State *env, target_ulong *seg_base *seg_base = ((((long) *seg_base) << shift_amt) >> shift_amt); } -static inline void svm_load_seg(CPUX86State *env, hwaddr addr, - SegmentCache *sc) +static void svm_load_seg(CPUX86State *env, int mmu_idx, hwaddr addr, + SegmentCache *sc) { - CPUState *cs = env_cpu(env); unsigned int flags; - sc->selector = x86_lduw_phys(cs, - addr + offsetof(struct vmcb_seg, selector)); - sc->base = x86_ldq_phys(cs, addr + offsetof(struct vmcb_seg, base)); - sc->limit = x86_ldl_phys(cs, addr + offsetof(struct vmcb_seg, limit)); - flags = x86_lduw_phys(cs, addr + offsetof(struct vmcb_seg, attrib)); + sc->selector = + cpu_lduw_mmuidx_ra(env, addr + offsetof(struct vmcb_seg, selector), + mmu_idx, 0); + sc->base = + cpu_ldq_mmuidx_ra(env, addr + offsetof(struct vmcb_seg, base), + mmu_idx, 0); + sc->limit = + cpu_ldl_mmuidx_ra(env, addr + offsetof(struct vmcb_seg, limit), + mmu_idx, 0); + flags = + cpu_lduw_mmuidx_ra(env, addr + offsetof(struct vmcb_seg, attrib), + mmu_idx, 0); sc->flags = ((flags & 0xff) << 8) | ((flags & 0x0f00) << 12); + svm_canonicalization(env, &sc->base); } -static inline void svm_load_seg_cache(CPUX86State *env, hwaddr addr, - int seg_reg) +static void svm_load_seg_cache(CPUX86State *env, int mmu_idx, + hwaddr addr, int seg_reg) { - SegmentCache sc1, *sc = &sc1; + SegmentCache sc; - svm_load_seg(env, addr, sc); - cpu_x86_load_seg_cache(env, seg_reg, sc->selector, - sc->base, sc->limit, sc->flags); + svm_load_seg(env, mmu_idx, addr, &sc); + cpu_x86_load_seg_cache(env, seg_reg, sc.selector, + sc.base, sc.limit, sc.flags); } static inline bool is_efer_invalid_state (CPUX86State *env) @@ -199,13 +206,17 @@ void helper_vmrun(CPUX86State *env, int aflag, int next_eip_addend) env->vm_hsave + offsetof(struct vmcb, save.rflags), cpu_compute_eflags(env)); - svm_save_seg(env, env->vm_hsave + offsetof(struct vmcb, save.es), + svm_save_seg(env, MMU_PHYS_IDX, + env->vm_hsave 
+ offsetof(struct vmcb, save.es), &env->segs[R_ES]); - svm_save_seg(env, env->vm_hsave + offsetof(struct vmcb, save.cs), + svm_save_seg(env, MMU_PHYS_IDX, + env->vm_hsave + offsetof(struct vmcb, save.cs), &env->segs[R_CS]); - svm_save_seg(env, env->vm_hsave + offsetof(struct vmcb, save.ss), + svm_save_seg(env, MMU_PHYS_IDX, + env->vm_hsave + offsetof(struct vmcb, save.ss), &env->segs[R_SS]); - svm_save_seg(env, env->vm_hsave + offsetof(struct vmcb, save.ds), + svm_save_seg(env, MMU_PHYS_IDX, + env->vm_hsave + offsetof(struct vmcb, save.ds), &env->segs[R_DS]); x86_stq_phys(cs, env->vm_hsave + offsetof(struct vmcb, save.rip), @@ -325,18 +336,18 @@ void helper_vmrun(CPUX86State *env, int aflag, int next_eip_addend) save.rflags)), ~(CC_O | CC_S | CC_Z | CC_A | CC_P | CC_C | DF_MASK)); - svm_load_seg_cache(env, env->vm_vmcb + offsetof(struct vmcb, save.es), - R_ES); - svm_load_seg_cache(env, env->vm_vmcb + offsetof(struct vmcb, save.cs), - R_CS); - svm_load_seg_cache(env, env->vm_vmcb + offsetof(struct vmcb, save.ss), - R_SS); - svm_load_seg_cache(env, env->vm_vmcb + offsetof(struct vmcb, save.ds), - R_DS); - svm_load_seg(env, env->vm_vmcb + offsetof(struct vmcb, save.idtr), - &env->idt); - svm_load_seg(env, env->vm_vmcb + offsetof(struct vmcb, save.gdtr), - &env->gdt); + svm_load_seg_cache(env, MMU_PHYS_IDX, + env->vm_vmcb + offsetof(struct vmcb, save.es), R_ES); + svm_load_seg_cache(env, MMU_PHYS_IDX, + env->vm_vmcb + offsetof(struct vmcb, save.cs), R_CS); + svm_load_seg_cache(env, MMU_PHYS_IDX, + env->vm_vmcb + offsetof(struct vmcb, save.ss), R_SS); + svm_load_seg_cache(env, MMU_PHYS_IDX, + env->vm_vmcb + offsetof(struct vmcb, save.ds), R_DS); + svm_load_seg(env, MMU_PHYS_IDX, + env->vm_vmcb + offsetof(struct vmcb, save.idtr), &env->idt); + svm_load_seg(env, MMU_PHYS_IDX, + env->vm_vmcb + offsetof(struct vmcb, save.gdtr), &env->gdt); env->eip = x86_ldq_phys(cs, env->vm_vmcb + offsetof(struct vmcb, save.rip)); @@ -451,9 +462,8 @@ void helper_vmmcall(CPUX86State *env) void helper_vmload(CPUX86State *env, int aflag) { - CPUState *cs = env_cpu(env); + int mmu_idx = MMU_PHYS_IDX; target_ulong addr; - int prot; cpu_svm_check_intercept_param(env, SVM_EXIT_VMLOAD, 0, GETPC()); @@ -464,43 +474,52 @@ void helper_vmload(CPUX86State *env, int aflag) } if (virtual_vm_load_save_enabled(env, SVM_EXIT_VMLOAD, GETPC())) { - addr = get_hphys(cs, addr, MMU_DATA_LOAD, &prot); + mmu_idx = MMU_NESTED_IDX; } - qemu_log_mask(CPU_LOG_TB_IN_ASM, "vmload! 
" TARGET_FMT_lx - "\nFS: %016" PRIx64 " | " TARGET_FMT_lx "\n", - addr, x86_ldq_phys(cs, addr + offsetof(struct vmcb, - save.fs.base)), - env->segs[R_FS].base); - - svm_load_seg_cache(env, addr + offsetof(struct vmcb, save.fs), R_FS); - svm_load_seg_cache(env, addr + offsetof(struct vmcb, save.gs), R_GS); - svm_load_seg(env, addr + offsetof(struct vmcb, save.tr), &env->tr); - svm_load_seg(env, addr + offsetof(struct vmcb, save.ldtr), &env->ldt); + svm_load_seg_cache(env, mmu_idx, + addr + offsetof(struct vmcb, save.fs), R_FS); + svm_load_seg_cache(env, mmu_idx, + addr + offsetof(struct vmcb, save.gs), R_GS); + svm_load_seg(env, mmu_idx, + addr + offsetof(struct vmcb, save.tr), &env->tr); + svm_load_seg(env, mmu_idx, + addr + offsetof(struct vmcb, save.ldtr), &env->ldt); #ifdef TARGET_X86_64 - env->kernelgsbase = x86_ldq_phys(cs, addr + offsetof(struct vmcb, - save.kernel_gs_base)); - env->lstar = x86_ldq_phys(cs, addr + offsetof(struct vmcb, save.lstar)); - env->cstar = x86_ldq_phys(cs, addr + offsetof(struct vmcb, save.cstar)); - env->fmask = x86_ldq_phys(cs, addr + offsetof(struct vmcb, save.sfmask)); + env->kernelgsbase = + cpu_ldq_mmuidx_ra(env, + addr + offsetof(struct vmcb, save.kernel_gs_base), + mmu_idx, 0); + env->lstar = + cpu_ldq_mmuidx_ra(env, addr + offsetof(struct vmcb, save.lstar), + mmu_idx, 0); + env->cstar = + cpu_ldq_mmuidx_ra(env, addr + offsetof(struct vmcb, save.cstar), + mmu_idx, 0); + env->fmask = + cpu_ldq_mmuidx_ra(env, addr + offsetof(struct vmcb, save.sfmask), + mmu_idx, 0); svm_canonicalization(env, &env->kernelgsbase); #endif - env->star = x86_ldq_phys(cs, addr + offsetof(struct vmcb, save.star)); - env->sysenter_cs = x86_ldq_phys(cs, - addr + offsetof(struct vmcb, save.sysenter_cs)); - env->sysenter_esp = x86_ldq_phys(cs, addr + offsetof(struct vmcb, - save.sysenter_esp)); - env->sysenter_eip = x86_ldq_phys(cs, addr + offsetof(struct vmcb, - save.sysenter_eip)); - + env->star = + cpu_ldq_mmuidx_ra(env, addr + offsetof(struct vmcb, save.star), + mmu_idx, 0); + env->sysenter_cs = + cpu_ldq_mmuidx_ra(env, addr + offsetof(struct vmcb, save.sysenter_cs), + mmu_idx, 0); + env->sysenter_esp = + cpu_ldq_mmuidx_ra(env, addr + offsetof(struct vmcb, save.sysenter_esp), + mmu_idx, 0); + env->sysenter_eip = + cpu_ldq_mmuidx_ra(env, addr + offsetof(struct vmcb, save.sysenter_eip), + mmu_idx, 0); } void helper_vmsave(CPUX86State *env, int aflag) { - CPUState *cs = env_cpu(env); + int mmu_idx = MMU_PHYS_IDX; target_ulong addr; - int prot; cpu_svm_check_intercept_param(env, SVM_EXIT_VMSAVE, 0, GETPC()); @@ -511,38 +530,36 @@ void helper_vmsave(CPUX86State *env, int aflag) } if (virtual_vm_load_save_enabled(env, SVM_EXIT_VMSAVE, GETPC())) { - addr = get_hphys(cs, addr, MMU_DATA_STORE, &prot); + mmu_idx = MMU_NESTED_IDX; } - qemu_log_mask(CPU_LOG_TB_IN_ASM, "vmsave! 
" TARGET_FMT_lx - "\nFS: %016" PRIx64 " | " TARGET_FMT_lx "\n", - addr, x86_ldq_phys(cs, - addr + offsetof(struct vmcb, save.fs.base)), - env->segs[R_FS].base); - - svm_save_seg(env, addr + offsetof(struct vmcb, save.fs), + svm_save_seg(env, mmu_idx, addr + offsetof(struct vmcb, save.fs), &env->segs[R_FS]); - svm_save_seg(env, addr + offsetof(struct vmcb, save.gs), + svm_save_seg(env, mmu_idx, addr + offsetof(struct vmcb, save.gs), &env->segs[R_GS]); - svm_save_seg(env, addr + offsetof(struct vmcb, save.tr), + svm_save_seg(env, mmu_idx, addr + offsetof(struct vmcb, save.tr), &env->tr); - svm_save_seg(env, addr + offsetof(struct vmcb, save.ldtr), + svm_save_seg(env, mmu_idx, addr + offsetof(struct vmcb, save.ldtr), &env->ldt); #ifdef TARGET_X86_64 - x86_stq_phys(cs, addr + offsetof(struct vmcb, save.kernel_gs_base), - env->kernelgsbase); - x86_stq_phys(cs, addr + offsetof(struct vmcb, save.lstar), env->lstar); - x86_stq_phys(cs, addr + offsetof(struct vmcb, save.cstar), env->cstar); - x86_stq_phys(cs, addr + offsetof(struct vmcb, save.sfmask), env->fmask); + cpu_stq_mmuidx_ra(env, addr + offsetof(struct vmcb, save.kernel_gs_base), + env->kernelgsbase, mmu_idx, 0); + cpu_stq_mmuidx_ra(env, addr + offsetof(struct vmcb, save.lstar), + env->lstar, mmu_idx, 0); + cpu_stq_mmuidx_ra(env, addr + offsetof(struct vmcb, save.cstar), + env->cstar, mmu_idx, 0); + cpu_stq_mmuidx_ra(env, addr + offsetof(struct vmcb, save.sfmask), + env->fmask, mmu_idx, 0); #endif - x86_stq_phys(cs, addr + offsetof(struct vmcb, save.star), env->star); - x86_stq_phys(cs, - addr + offsetof(struct vmcb, save.sysenter_cs), env->sysenter_cs); - x86_stq_phys(cs, addr + offsetof(struct vmcb, save.sysenter_esp), - env->sysenter_esp); - x86_stq_phys(cs, addr + offsetof(struct vmcb, save.sysenter_eip), - env->sysenter_eip); + cpu_stq_mmuidx_ra(env, addr + offsetof(struct vmcb, save.star), + env->star, mmu_idx, 0); + cpu_stq_mmuidx_ra(env, addr + offsetof(struct vmcb, save.sysenter_cs), + env->sysenter_cs, mmu_idx, 0); + cpu_stq_mmuidx_ra(env, addr + offsetof(struct vmcb, save.sysenter_esp), + env->sysenter_esp, mmu_idx, 0); + cpu_stq_mmuidx_ra(env, addr + offsetof(struct vmcb, save.sysenter_eip), + env->sysenter_eip, mmu_idx, 0); } void helper_stgi(CPUX86State *env) @@ -725,13 +742,17 @@ void do_vmexit(CPUX86State *env) tlb_flush_by_mmuidx(cs, 1 << MMU_NESTED_IDX); /* Save the VM state in the vmcb */ - svm_save_seg(env, env->vm_vmcb + offsetof(struct vmcb, save.es), + svm_save_seg(env, MMU_PHYS_IDX, + env->vm_vmcb + offsetof(struct vmcb, save.es), &env->segs[R_ES]); - svm_save_seg(env, env->vm_vmcb + offsetof(struct vmcb, save.cs), + svm_save_seg(env, MMU_PHYS_IDX, + env->vm_vmcb + offsetof(struct vmcb, save.cs), &env->segs[R_CS]); - svm_save_seg(env, env->vm_vmcb + offsetof(struct vmcb, save.ss), + svm_save_seg(env, MMU_PHYS_IDX, + env->vm_vmcb + offsetof(struct vmcb, save.ss), &env->segs[R_SS]); - svm_save_seg(env, env->vm_vmcb + offsetof(struct vmcb, save.ds), + svm_save_seg(env, MMU_PHYS_IDX, + env->vm_vmcb + offsetof(struct vmcb, save.ds), &env->segs[R_DS]); x86_stq_phys(cs, env->vm_vmcb + offsetof(struct vmcb, save.gdtr.base), @@ -812,14 +833,14 @@ void do_vmexit(CPUX86State *env) ~(CC_O | CC_S | CC_Z | CC_A | CC_P | CC_C | DF_MASK | VM_MASK)); - svm_load_seg_cache(env, env->vm_hsave + offsetof(struct vmcb, save.es), - R_ES); - svm_load_seg_cache(env, env->vm_hsave + offsetof(struct vmcb, save.cs), - R_CS); - svm_load_seg_cache(env, env->vm_hsave + offsetof(struct vmcb, save.ss), - R_SS); - svm_load_seg_cache(env, 
env->vm_hsave + offsetof(struct vmcb, save.ds), - R_DS); + svm_load_seg_cache(env, MMU_PHYS_IDX, + env->vm_hsave + offsetof(struct vmcb, save.es), R_ES); + svm_load_seg_cache(env, MMU_PHYS_IDX, + env->vm_hsave + offsetof(struct vmcb, save.cs), R_CS); + svm_load_seg_cache(env, MMU_PHYS_IDX, + env->vm_hsave + offsetof(struct vmcb, save.ss), R_SS); + svm_load_seg_cache(env, MMU_PHYS_IDX, + env->vm_hsave + offsetof(struct vmcb, save.ds), R_DS); env->eip = x86_ldq_phys(cs, env->vm_hsave + offsetof(struct vmcb, save.rip)); From patchwork Mon Aug 22 23:58:02 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 599453 Delivered-To: patch@linaro.org Received: by 2002:a05:7000:4388:0:0:0:0 with SMTP id w8csp2057926mae; Mon, 22 Aug 2022 17:17:52 -0700 (PDT) X-Google-Smtp-Source: AA6agR6NfcCXE6KDS2Y0D4dyPuwIvNeYiXwBpEilo+ZWPG+r+7FvdcUBq5xzY+7Zfq9mY4XzpOq8 X-Received: by 2002:ae9:e903:0:b0:6ba:e5aa:d59e with SMTP id x3-20020ae9e903000000b006bae5aad59emr13783352qkf.214.1661213872311; Mon, 22 Aug 2022 17:17:52 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1661213872; cv=none; d=google.com; s=arc-20160816; b=XNVcZtgnI800knQ77nagcyn+pbW71KdrdGtIv7QFjO3yTxPr2mbLx4C1b0z62t+WmW t1hG0wfkfpblxPmuL/EcrucHzxOe+RluXE6UaGPimvA637yewWK3VJ0UaQ3XMoNNLGcE lglzna63ePmL040S71XzHgymRZBigy7zH1PXxIeLPKEcOSkmj5GYSsEXdEP/SXjyNxxQ CUsWMEY++cBTOiax5mff3zmAha9+Awa7aC/DqbaexQJDeLwj659npcDgu2Kwnnd7YPNJ 9MBCsbm3DN3qmGuDRIXpWsWfx/jilfP8lF9d9jbQosvIpZFLVxfMPyvtViUtQGV/kvLm Wozg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=1WOj2RIpy/NMuzgGtfm/77hXAf37fWZV2SfkPXv9pXE=; b=ff+bLDTU+I9E++NGmb582JSXJPm/DWlEVvM1XXOLnuB/3gLdjkmAJtUvHVf5z4RdAI CSK6FRMeJEW0hqFTBapuyuajwO+Me56QpymiUgb0gj8DenSDi+7UaKrLfZObH3LdjBq2 9Wb34JfDWIQ6ix/EsWpCNJ7dNsaGlsJaY+wjR4ocU/F1Vwl1Z++cjrDvIs7lhAqbu5K1 BvdA0tLy8uCRdZ/VVQSXtVoheQ+fS9uN0XCbF9J++tje0giUfQwdUso3e0QgIFNHoKYK RYRUUwQ0M/qlO8FibtQktS3qRKxYEWLIA8lThAh0lS7lGycl8i2FFA8KG0Dy4O7bOrIK NdbA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=Cvp07sYS; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
([2602:47:d49d:ec01:c3f1:b74f:5edd:63af]) by smtp.gmail.com with ESMTPSA id w190-20020a6230c7000000b0052d52de6726sm9173159pfw.124.2022.08.22.16.58.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 22 Aug 2022 16:58:15 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: pbonzini@redhat.com, eduardo@habkost.net Subject: [PATCH 13/14] target/i386: Combine 5 sets of variables in mmu_translate Date: Mon, 22 Aug 2022 16:58:02 -0700 Message-Id: <20220822235803.1729290-14-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220822235803.1729290-1-richard.henderson@linaro.org> References: <20220822235803.1729290-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2607:f8b0:4864:20::1030; envelope-from=richard.henderson@linaro.org; helo=mail-pj1-x1030.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" We don't need one variable set per translation level, which requires copying into pte/pte_addr for huge pages. Standardize on pte/pte_addr for all levels. Signed-off-by: Richard Henderson --- target/i386/tcg/sysemu/excp_helper.c | 178 ++++++++++++++------------- 1 file changed, 91 insertions(+), 87 deletions(-) diff --git a/target/i386/tcg/sysemu/excp_helper.c b/target/i386/tcg/sysemu/excp_helper.c index 11f64e5965..e5d9ff138e 100644 --- a/target/i386/tcg/sysemu/excp_helper.c +++ b/target/i386/tcg/sysemu/excp_helper.c @@ -82,7 +82,7 @@ static bool mmu_translate(CPUX86State *env, const TranslateParams *in, const bool is_user = (in->mmu_idx == MMU_USER_IDX); const MMUAccessType access_type = in->access_type; uint64_t ptep, pte; - hwaddr pde_addr, pte_addr; + hwaddr pte_addr; uint64_t rsvd_mask = PG_ADDRESS_MASK & ~MAKE_64BIT_MASK(0, cpu->phys_bits); uint32_t pkr; int page_size; @@ -92,116 +92,122 @@ static bool mmu_translate(CPUX86State *env, const TranslateParams *in, } if (pg_mode & PG_MODE_PAE) { - uint64_t pde, pdpe; - target_ulong pdpe_addr; - #ifdef TARGET_X86_64 if (pg_mode & PG_MODE_LMA) { - bool la57 = pg_mode & PG_MODE_LA57; - uint64_t pml5e_addr, pml5e; - uint64_t pml4e_addr, pml4e; - - if (la57) { - pml5e_addr = ((in->cr3 & ~0xfff) + - (((addr >> 48) & 0x1ff) << 3)) & a20_mask; - PTE_HPHYS(pml5e_addr); - pml5e = x86_ldq_phys(cs, pml5e_addr); - if (!(pml5e & PG_PRESENT_MASK)) { + if (pg_mode & PG_MODE_LA57) { + /* + * Page table level 5 + */ + pte_addr = ((in->cr3 & ~0xfff) + + (((addr >> 48) & 0x1ff) << 3)) & a20_mask; + PTE_HPHYS(pte_addr); + pte = x86_ldq_phys(cs, pte_addr); + if (!(pte & PG_PRESENT_MASK)) { goto do_fault; } - if (pml5e & (rsvd_mask | PG_PSE_MASK)) { + if (pte & (rsvd_mask | PG_PSE_MASK)) { goto do_fault_rsvd; } - if (!(pml5e & PG_ACCESSED_MASK)) { - pml5e |= PG_ACCESSED_MASK; - x86_stl_phys_notdirty(cs, pml5e_addr, pml5e); + if (!(pte & PG_ACCESSED_MASK)) { + pte |= PG_ACCESSED_MASK; + x86_stl_phys_notdirty(cs, pte_addr, pte); } - ptep = pml5e ^ PG_NX_MASK; + ptep = pte ^ PG_NX_MASK; } else { - pml5e = in->cr3; + pte = in->cr3; ptep = PG_NX_MASK | PG_USER_MASK | 
PG_RW_MASK; } - pml4e_addr = ((pml5e & PG_ADDRESS_MASK) + - (((addr >> 39) & 0x1ff) << 3)) & a20_mask; - PTE_HPHYS(pml4e_addr); - pml4e = x86_ldq_phys(cs, pml4e_addr); - if (!(pml4e & PG_PRESENT_MASK)) { + /* + * Page table level 4 + */ + pte_addr = ((pte & PG_ADDRESS_MASK) + + (((addr >> 39) & 0x1ff) << 3)) & a20_mask; + PTE_HPHYS(pte_addr); + pte = x86_ldq_phys(cs, pte_addr); + if (!(pte & PG_PRESENT_MASK)) { goto do_fault; } - if (pml4e & (rsvd_mask | PG_PSE_MASK)) { + if (pte & (rsvd_mask | PG_PSE_MASK)) { goto do_fault_rsvd; } - if (!(pml4e & PG_ACCESSED_MASK)) { - pml4e |= PG_ACCESSED_MASK; - x86_stl_phys_notdirty(cs, pml4e_addr, pml4e); + if (!(pte & PG_ACCESSED_MASK)) { + pte |= PG_ACCESSED_MASK; + x86_stl_phys_notdirty(cs, pte_addr, pte); } - ptep &= pml4e ^ PG_NX_MASK; - pdpe_addr = ((pml4e & PG_ADDRESS_MASK) + (((addr >> 30) & 0x1ff) << 3)) & - a20_mask; - PTE_HPHYS(pdpe_addr); - pdpe = x86_ldq_phys(cs, pdpe_addr); - if (!(pdpe & PG_PRESENT_MASK)) { + ptep &= pte ^ PG_NX_MASK; + + /* + * Page table level 3 + */ + pte_addr = ((pte & PG_ADDRESS_MASK) + + (((addr >> 30) & 0x1ff) << 3)) & a20_mask; + PTE_HPHYS(pte_addr); + pte = x86_ldq_phys(cs, pte_addr); + if (!(pte & PG_PRESENT_MASK)) { goto do_fault; } - if (pdpe & rsvd_mask) { + if (pte & rsvd_mask) { goto do_fault_rsvd; } - ptep &= pdpe ^ PG_NX_MASK; - if (!(pdpe & PG_ACCESSED_MASK)) { - pdpe |= PG_ACCESSED_MASK; - x86_stl_phys_notdirty(cs, pdpe_addr, pdpe); + ptep &= pte ^ PG_NX_MASK; + if (!(pte & PG_ACCESSED_MASK)) { + pte |= PG_ACCESSED_MASK; + x86_stl_phys_notdirty(cs, pte_addr, pte); } - if (pdpe & PG_PSE_MASK) { + if (pte & PG_PSE_MASK) { /* 1 GB page */ page_size = 1024 * 1024 * 1024; - pte_addr = pdpe_addr; - pte = pdpe; goto do_check_protect; } } else #endif { - /* XXX: load them when cr3 is loaded ? 
*/ - pdpe_addr = ((in->cr3 & ~0x1f) + ((addr >> 27) & 0x18)) & - a20_mask; - PTE_HPHYS(pdpe_addr); - pdpe = x86_ldq_phys(cs, pdpe_addr); - if (!(pdpe & PG_PRESENT_MASK)) { + /* + * Page table level 3 + */ + pte_addr = ((in->cr3 & ~0x1f) + ((addr >> 27) & 0x18)) & a20_mask; + PTE_HPHYS(pte_addr); + pte = x86_ldq_phys(cs, pte_addr); + if (!(pte & PG_PRESENT_MASK)) { goto do_fault; } rsvd_mask |= PG_HI_USER_MASK; - if (pdpe & (rsvd_mask | PG_NX_MASK)) { + if (pte & (rsvd_mask | PG_NX_MASK)) { goto do_fault_rsvd; } ptep = PG_NX_MASK | PG_USER_MASK | PG_RW_MASK; } - pde_addr = ((pdpe & PG_ADDRESS_MASK) + (((addr >> 21) & 0x1ff) << 3)) & - a20_mask; - PTE_HPHYS(pde_addr); - pde = x86_ldq_phys(cs, pde_addr); - if (!(pde & PG_PRESENT_MASK)) { + /* + * Page table level 2 + */ + pte_addr = ((pte & PG_ADDRESS_MASK) + + (((addr >> 21) & 0x1ff) << 3)) & a20_mask; + PTE_HPHYS(pte_addr); + pte = x86_ldq_phys(cs, pte_addr); + if (!(pte & PG_PRESENT_MASK)) { goto do_fault; } - if (pde & rsvd_mask) { + if (pte & rsvd_mask) { goto do_fault_rsvd; } - ptep &= pde ^ PG_NX_MASK; - if (pde & PG_PSE_MASK) { + ptep &= pte ^ PG_NX_MASK; + if (pte & PG_PSE_MASK) { /* 2 MB page */ page_size = 2048 * 1024; - pte_addr = pde_addr; - pte = pde; goto do_check_protect; } - /* 4 KB page */ - if (!(pde & PG_ACCESSED_MASK)) { - pde |= PG_ACCESSED_MASK; - x86_stl_phys_notdirty(cs, pde_addr, pde); + if (!(pte & PG_ACCESSED_MASK)) { + pte |= PG_ACCESSED_MASK; + x86_stl_phys_notdirty(cs, pte_addr, pte); } - pte_addr = ((pde & PG_ADDRESS_MASK) + (((addr >> 12) & 0x1ff) << 3)) & - a20_mask; + + /* + * Page table level 1 + */ + pte_addr = ((pte & PG_ADDRESS_MASK) + + (((addr >> 12) & 0x1ff) << 3)) & a20_mask; PTE_HPHYS(pte_addr); pte = x86_ldq_phys(cs, pte_addr); if (!(pte & PG_PRESENT_MASK)) { @@ -214,39 +220,37 @@ static bool mmu_translate(CPUX86State *env, const TranslateParams *in, ptep &= pte ^ PG_NX_MASK; page_size = 4096; } else { - uint32_t pde; - - /* page directory entry */ - pde_addr = ((in->cr3 & ~0xfff) + ((addr >> 20) & 0xffc)) & - a20_mask; - PTE_HPHYS(pde_addr); - pde = x86_ldl_phys(cs, pde_addr); - if (!(pde & PG_PRESENT_MASK)) { + /* + * Page table level 2 + */ + pte_addr = ((in->cr3 & ~0xfff) + ((addr >> 20) & 0xffc)) & a20_mask; + PTE_HPHYS(pte_addr); + pte = x86_ldl_phys(cs, pte_addr); + if (!(pte & PG_PRESENT_MASK)) { goto do_fault; } - ptep = pde | PG_NX_MASK; + ptep = pte | PG_NX_MASK; /* if PSE bit is set, then we use a 4MB page */ - if ((pde & PG_PSE_MASK) && (pg_mode & PG_MODE_PSE)) { + if ((pte & PG_PSE_MASK) && (pg_mode & PG_MODE_PSE)) { page_size = 4096 * 1024; - pte_addr = pde_addr; - - /* Bits 20-13 provide bits 39-32 of the address, bit 21 is reserved. + /* + * Bits 20-13 provide bits 39-32 of the address, bit 21 is reserved. * Leave bits 20-13 in place for setting accessed/dirty bits below. 
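A quick worked example of the PSE-36 recombination performed just below (the PDE value 0xc010a0e3 is invented for illustration):

    uint64_t pde = 0xc010a0e3;  /* 4 MB page: bits 31:22 = 0x300, bits 20:13 = 0x85 */
    uint64_t pte = (uint32_t)pde | ((pde & 0x1fe000LL) << (32 - 13));
    /* pte == 0x85c010a0e3: bits 20:13 now also appear as physical bits 39:32,
     * while they stay in place in the low word for the A/D updates below. */
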
*/ - pte = pde | ((pde & 0x1fe000LL) << (32 - 13)); + pte = (uint32_t)pte | ((pte & 0x1fe000LL) << (32 - 13)); rsvd_mask = 0x200000; goto do_check_protect_pse36; } - - if (!(pde & PG_ACCESSED_MASK)) { - pde |= PG_ACCESSED_MASK; - x86_stl_phys_notdirty(cs, pde_addr, pde); + if (!(pte & PG_ACCESSED_MASK)) { + pte |= PG_ACCESSED_MASK; + x86_stl_phys_notdirty(cs, pte_addr, pte); } - /* page directory entry */ - pte_addr = ((pde & ~0xfff) + ((addr >> 10) & 0xffc)) & - a20_mask; + /* + * Page table level 1 + */ + pte_addr = ((pte & ~0xfffu) + ((addr >> 10) & 0xffc)) & a20_mask; PTE_HPHYS(pte_addr); pte = x86_ldl_phys(cs, pte_addr); if (!(pte & PG_PRESENT_MASK)) { From patchwork Mon Aug 22 23:58:03 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 599446 Delivered-To: patch@linaro.org Received: by 2002:a05:7000:4388:0:0:0:0 with SMTP id w8csp2050655mae; Mon, 22 Aug 2022 17:08:57 -0700 (PDT) X-Google-Smtp-Source: AA6agR6WISN10w+BmKfBqcEi+aaEYk+h+1JqWh4AGb+HRxga3ZU0fx98bk6/zuS8UKCkcwZ5gkIj X-Received: by 2002:ac8:5f92:0:b0:344:9d67:ff70 with SMTP id j18-20020ac85f92000000b003449d67ff70mr14045235qta.96.1661213337012; Mon, 22 Aug 2022 17:08:57 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1661213337; cv=none; d=google.com; s=arc-20160816; b=MBZuZBHhUDgpU8YgnYlgSlCxpCZnnl+oyfv8+j6ZgfcAUGtJhE9LQW9aMAt1Lty0J2 jX7onNIkI8Qh9Xb9HP3bRcZPQltZe1awW5mZ6f6xlZJZGqpiYmmyVFV0ZAILXZVvwy1h E04+xJCWSvcBlSh5FWSM+CIK7JvFwdeVUiLSrexDYVLYVc0iW2yz1mF8CWqmg0atH9VB vKjJocU9Dhe3jN2+55Uc9P9nqU/ql79pSNyUhSCvr28UbDid2wJF+RmvpJUFdgaTIdqp vjYiQ91AZ2/3TBUbse8KDeH4VD9pw+lqWLNQDYtOrRFUvEqF8Yn/VexCX6Pf9561vvsb fbXg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=aw89WC7RVHXSLWd5cxN77p9SR70YM5CjHyw47X2hKRo=; b=xvenFfy9GqKrZfBeT213hmHuS3Am9zB0YRjVaoszzJZutzhv4AabVm1pbqtMTjzaDp hr2OzBuFDrtNE2jCo/ZXRCJxfeKZunnQCDdnz03oh0b5kVHIULbHgjVwSvShh8JKIumY OVoup2lMSNMcuXTLgq4Sy+JAEFWk+78ygNpJ5KOITNFPRFRzkFu00ysGhU5alyTKs1e0 KqVmZveDi46guxkP99FSpyisFAbUEdldENyp5aWNeMoW/FyO65HINIKQryyDkkis/6cD mecjdnKY5gM/8zv/MyjnKOqd9YIM3DlO3V+jtjbjqCl0FjESQz3wMtF0bdq6DidTSf+p tn7A== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=NOrVboCN; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
From patchwork Mon Aug 22 23:58:03 2022
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 599446
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: pbonzini@redhat.com, eduardo@habkost.net
Subject: [PATCH 14/14] target/i386: Use atomic operations for pte updates
Date: Mon, 22 Aug 2022 16:58:03 -0700
Message-Id: <20220822235803.1729290-15-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220822235803.1729290-1-richard.henderson@linaro.org>
References: <20220822235803.1729290-1-richard.henderson@linaro.org>

Use probe_access_full in order to resolve to a host address, which
then lets us use a host cmpxchg to update the pte.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/279
Signed-off-by: Richard Henderson
---
 target/i386/tcg/sysemu/excp_helper.c | 242 +++++++++++++++++++--------
 1 file changed, 168 insertions(+), 74 deletions(-)

diff --git a/target/i386/tcg/sysemu/excp_helper.c b/target/i386/tcg/sysemu/excp_helper.c
index e5d9ff138e..74a76cf883 100644
--- a/target/i386/tcg/sysemu/excp_helper.c
+++ b/target/i386/tcg/sysemu/excp_helper.c
@@ -27,8 +27,8 @@ typedef struct TranslateParams {
     target_ulong cr3;
     int pg_mode;
     int mmu_idx;
+    int ptw_idx;
     MMUAccessType access_type;
-    bool use_stage2;
 } TranslateParams;
 
 typedef struct TranslateResult {
@@ -50,43 +50,106 @@ typedef struct TranslateFault {
     TranslateFaultStage2 stage2;
 } TranslateFault;
 
-#define PTE_HPHYS(ADDR)                                         \
-    do {                                                        \
-        if (in->use_stage2) {                                   \
-            nested_in.addr = (ADDR);                            \
-            if (!mmu_translate(env, &nested_in, out, err)) {    \
-                err->stage2 = S2_GPT;                           \
-                return false;                                   \
-            }                                                   \
-            (ADDR) = out->paddr;                                \
-        }                                                       \
-    } while (0)
+typedef struct PTETranslate {
+    CPUX86State *env;
+    TranslateFault *err;
+    int ptw_idx;
+    void *haddr;
+    hwaddr gaddr;
+} PTETranslate;
+
+static bool ptw_translate(PTETranslate *inout, hwaddr addr)
+{
+    CPUTLBEntryFull *full;
+    int flags;
+
+    inout->gaddr = addr;
+    flags = probe_access_full(inout->env, addr, MMU_DATA_STORE,
+                              inout->ptw_idx, true, &inout->haddr, &full, 0);
+
+    if (unlikely(flags & TLB_INVALID_MASK)) {
+        TranslateFault *err = inout->err;
+
+        assert(inout->ptw_idx == MMU_NESTED_IDX);
+        err->exception_index = 0; /* unused */
+        err->error_code = inout->env->error_code;
+        err->cr2 = addr;
+        err->stage2 = S2_GPT;
+        return false;
+    }
+    return true;
+}
+
+static inline uint32_t ptw_ldl(const PTETranslate *in)
+{
+    if (likely(in->haddr)) {
+        return ldl_p(in->haddr);
+    }
+    return cpu_ldl_mmuidx_ra(in->env, in->gaddr, in->ptw_idx, 0);
+}
+
+static inline uint64_t ptw_ldq(const PTETranslate *in)
+{
+    if (likely(in->haddr)) {
+        return ldq_p(in->haddr);
+    }
+    return cpu_ldq_mmuidx_ra(in->env, in->gaddr, in->ptw_idx, 0);
+}
+
+/*
+ * Note that we can use a 32-bit cmpxchg for all page table entries,
+ * even 64-bit ones, because PG_PRESENT_MASK, PG_ACCESSED_MASK and
+ * PG_DIRTY_MASK are all in the low 32 bits.
+ */
+static bool ptw_setl_slow(const PTETranslate *in, uint32_t old, uint32_t new)
+{
+    uint32_t cmp;
+
+    /* Does x86 really perform a rmw cycle on mmio for ptw? */
+    start_exclusive();
+    cmp = cpu_ldl_mmuidx_ra(in->env, in->gaddr, in->ptw_idx, 0);
+    if (cmp == old) {
+        cpu_stl_mmuidx_ra(in->env, in->gaddr, new, in->ptw_idx, 0);
+    }
+    end_exclusive();
+    return cmp == old;
+}
+
+static inline bool ptw_setl(const PTETranslate *in, uint32_t old, uint32_t set)
+{
+    if (set & ~old) {
+        uint32_t new = old | set;
+        if (likely(in->haddr)) {
+            old = cpu_to_le32(old);
+            new = cpu_to_le32(new);
+            return qatomic_cmpxchg((uint32_t *)in->haddr, old, new) == old;
+        }
+        return ptw_setl_slow(in, old, new);
+    }
+    return true;
+}
 
 static bool mmu_translate(CPUX86State *env, const TranslateParams *in,
                           TranslateResult *out, TranslateFault *err)
 {
-    TranslateParams nested_in = {
-        /* Use store for page table entries, to allow A/D flag updates. */
-        .access_type = MMU_DATA_STORE,
-        .cr3 = env->nested_cr3,
-        .pg_mode = env->nested_pg_mode,
-        .mmu_idx = MMU_USER_IDX,
-        .use_stage2 = false,
-    };
-
-    CPUState *cs = env_cpu(env);
-    X86CPU *cpu = env_archcpu(env);
     const int32_t a20_mask = x86_get_a20_mask(env);
     const target_ulong addr = in->addr;
     const int pg_mode = in->pg_mode;
     const bool is_user = (in->mmu_idx == MMU_USER_IDX);
     const MMUAccessType access_type = in->access_type;
-    uint64_t ptep, pte;
+    uint64_t ptep, pte, rsvd_mask;
+    PTETranslate pte_trans = {
+        .env = env,
+        .err = err,
+        .ptw_idx = in->ptw_idx,
+    };
     hwaddr pte_addr;
-    uint64_t rsvd_mask = PG_ADDRESS_MASK & ~MAKE_64BIT_MASK(0, cpu->phys_bits);
     uint32_t pkr;
     int page_size;
 
+ restart_all:
+    rsvd_mask = ~MAKE_64BIT_MASK(0, env_archcpu(env)->phys_bits);
+    rsvd_mask &= PG_ADDRESS_MASK;
     if (!(pg_mode & PG_MODE_NXE)) {
         rsvd_mask |= PG_NX_MASK;
     }
@@ -100,17 +163,19 @@ static bool mmu_translate(CPUX86State *env, const TranslateParams *in,
                  */
                pte_addr = ((in->cr3 & ~0xfff) +
                            (((addr >> 48) & 0x1ff) << 3)) & a20_mask;
-                PTE_HPHYS(pte_addr);
-                pte = x86_ldq_phys(cs, pte_addr);
+                if (!ptw_translate(&pte_trans, pte_addr)) {
+                    return false;
+                }
+            restart_5:
+                pte = ptw_ldq(&pte_trans);
                 if (!(pte & PG_PRESENT_MASK)) {
                     goto do_fault;
                 }
                 if (pte & (rsvd_mask | PG_PSE_MASK)) {
                     goto do_fault_rsvd;
                 }
-                if (!(pte & PG_ACCESSED_MASK)) {
-                    pte |= PG_ACCESSED_MASK;
-                    x86_stl_phys_notdirty(cs, pte_addr, pte);
+                if (!ptw_setl(&pte_trans, pte, PG_ACCESSED_MASK)) {
+                    goto restart_5;
                 }
                 ptep = pte ^ PG_NX_MASK;
             } else {
@@ -123,17 +188,19 @@
              */
            pte_addr = ((pte & PG_ADDRESS_MASK) +
                        (((addr >> 39) & 0x1ff) << 3)) & a20_mask;
-            PTE_HPHYS(pte_addr);
-            pte = x86_ldq_phys(cs, pte_addr);
+            if (!ptw_translate(&pte_trans, pte_addr)) {
+                return false;
+            }
+        restart_4:
+            pte = ptw_ldq(&pte_trans);
             if (!(pte & PG_PRESENT_MASK)) {
                 goto do_fault;
             }
             if (pte & (rsvd_mask | PG_PSE_MASK)) {
                 goto do_fault_rsvd;
             }
-            if (!(pte & PG_ACCESSED_MASK)) {
-                pte |= PG_ACCESSED_MASK;
-                x86_stl_phys_notdirty(cs, pte_addr, pte);
+            if (!ptw_setl(&pte_trans, pte, PG_ACCESSED_MASK)) {
+                goto restart_4;
             }
             ptep &= pte ^ PG_NX_MASK;
@@ -142,19 +209,21 @@
              */
            pte_addr = ((pte & PG_ADDRESS_MASK) +
                        (((addr >> 30) & 0x1ff) << 3)) & a20_mask;
-            PTE_HPHYS(pte_addr);
-            pte = x86_ldq_phys(cs, pte_addr);
+            if (!ptw_translate(&pte_trans, pte_addr)) {
+                return false;
+            }
+        restart_3_lma:
+            pte = ptw_ldq(&pte_trans);
             if (!(pte & PG_PRESENT_MASK)) {
                 goto do_fault;
             }
             if (pte & rsvd_mask) {
                 goto do_fault_rsvd;
             }
-            ptep &= pte ^ PG_NX_MASK;
-            if (!(pte & PG_ACCESSED_MASK)) {
-                pte |= PG_ACCESSED_MASK;
-                x86_stl_phys_notdirty(cs, pte_addr, pte);
+            if (!ptw_setl(&pte_trans, pte, PG_ACCESSED_MASK)) {
+                goto restart_3_lma;
             }
+            ptep &= pte ^ PG_NX_MASK;
             if (pte & PG_PSE_MASK) {
                 /* 1 GB page */
                 page_size = 1024 * 1024 * 1024;
@@ -167,15 +236,21 @@
              * Page table level 3
              */
             pte_addr = ((in->cr3 & ~0x1f) + ((addr >> 27) & 0x18)) & a20_mask;
-            PTE_HPHYS(pte_addr);
-            pte = x86_ldq_phys(cs, pte_addr);
+            if (!ptw_translate(&pte_trans, pte_addr)) {
+                return false;
+            }
+            rsvd_mask |= PG_HI_USER_MASK;
+        restart_3_nolma:
+            pte = ptw_ldq(&pte_trans);
             if (!(pte & PG_PRESENT_MASK)) {
                 goto do_fault;
             }
-            rsvd_mask |= PG_HI_USER_MASK;
             if (pte & (rsvd_mask | PG_NX_MASK)) {
                 goto do_fault_rsvd;
             }
+            if (!ptw_setl(&pte_trans, pte, PG_ACCESSED_MASK)) {
+                goto restart_3_nolma;
+            }
             ptep = PG_NX_MASK | PG_USER_MASK | PG_RW_MASK;
         }
 
@@ -184,32 +259,37 @@
          */
        pte_addr = ((pte & PG_ADDRESS_MASK) +
                    (((addr >> 21) & 0x1ff) << 3)) & a20_mask;
-        PTE_HPHYS(pte_addr);
-        pte = x86_ldq_phys(cs, pte_addr);
+        if (!ptw_translate(&pte_trans, pte_addr)) {
+            return false;
+        }
+    restart_2_pae:
+        pte = ptw_ldq(&pte_trans);
         if (!(pte & PG_PRESENT_MASK)) {
             goto do_fault;
         }
         if (pte & rsvd_mask) {
             goto do_fault_rsvd;
         }
-        ptep &= pte ^ PG_NX_MASK;
         if (pte & PG_PSE_MASK) {
             /* 2 MB page */
             page_size = 2048 * 1024;
+            ptep &= pte ^ PG_NX_MASK;
             goto do_check_protect;
         }
-        if (!(pte & PG_ACCESSED_MASK)) {
-            pte |= PG_ACCESSED_MASK;
-            x86_stl_phys_notdirty(cs, pte_addr, pte);
+        if (!ptw_setl(&pte_trans, pte, PG_ACCESSED_MASK)) {
+            goto restart_2_pae;
         }
+        ptep &= pte ^ PG_NX_MASK;
 
         /*
          * Page table level 1
          */
        pte_addr = ((pte & PG_ADDRESS_MASK) +
                    (((addr >> 12) & 0x1ff) << 3)) & a20_mask;
-        PTE_HPHYS(pte_addr);
-        pte = x86_ldq_phys(cs, pte_addr);
+        if (!ptw_translate(&pte_trans, pte_addr)) {
+            return false;
+        }
+        pte = ptw_ldq(&pte_trans);
         if (!(pte & PG_PRESENT_MASK)) {
             goto do_fault;
         }
@@ -224,8 +304,11 @@
          * Page table level 2
          */
         pte_addr = ((in->cr3 & ~0xfff) + ((addr >> 20) & 0xffc)) & a20_mask;
-        PTE_HPHYS(pte_addr);
-        pte = x86_ldl_phys(cs, pte_addr);
+        if (!ptw_translate(&pte_trans, pte_addr)) {
+            return false;
+        }
+    restart_2_nopae:
+        pte = ptw_ldl(&pte_trans);
         if (!(pte & PG_PRESENT_MASK)) {
             goto do_fault;
         }
@@ -242,17 +325,18 @@
             rsvd_mask = 0x200000;
             goto do_check_protect_pse36;
         }
-        if (!(pte & PG_ACCESSED_MASK)) {
-            pte |= PG_ACCESSED_MASK;
-            x86_stl_phys_notdirty(cs, pte_addr, pte);
+        if (!ptw_setl(&pte_trans, pte, PG_ACCESSED_MASK)) {
+            goto restart_2_nopae;
         }
 
         /*
          * Page table level 1
         */
        pte_addr = ((pte & ~0xfffu) + ((addr >> 10) & 0xffc)) & a20_mask;
-        PTE_HPHYS(pte_addr);
-        pte = x86_ldl_phys(cs, pte_addr);
+        if (!ptw_translate(&pte_trans, pte_addr)) {
+            return false;
+        }
+        pte = ptw_ldl(&pte_trans);
         if (!(pte & PG_PRESENT_MASK)) {
             goto do_fault;
         }
@@ -319,27 +403,35 @@ do_check_protect_pse36:
         uint32_t set = PG_ACCESSED_MASK;
         if (access_type == MMU_DATA_STORE) {
             set |= PG_DIRTY_MASK;
+        } else if (!(pte & PG_DIRTY_MASK)) {
+            /*
+             * Only set write access if already dirty...
+             * otherwise wait for dirty access.
+             */
+            prot &= ~PAGE_WRITE;
         }
-        if (set & ~pte) {
-            pte |= set;
-            x86_stl_phys_notdirty(cs, pte_addr, pte);
+        if (!ptw_setl(&pte_trans, pte, set)) {
+            /*
+             * We can arrive here from any of 3 levels and 2 formats.
+             * The only safe thing is to restart the entire lookup.
+             */
+            goto restart_all;
         }
     }
 
-    if (!(pte & PG_DIRTY_MASK)) {
-        /* only set write access if already dirty... otherwise wait
-           for dirty access */
-        assert(access_type != MMU_DATA_STORE);
-        prot &= ~PAGE_WRITE;
-    }
-
     /* align to page_size */
     out->paddr = (pte & a20_mask & PG_ADDRESS_MASK & ~(page_size - 1))
                | (addr & (page_size - 1));
 
-    if (in->use_stage2) {
-        nested_in.addr = out->paddr;
-        nested_in.access_type = access_type;
+    if (in->ptw_idx == MMU_NESTED_IDX) {
+        TranslateParams nested_in = {
+            .addr = out->paddr,
+            .access_type = access_type,
+            .cr3 = env->nested_cr3,
+            .pg_mode = env->nested_pg_mode,
+            .mmu_idx = MMU_USER_IDX,
+            .ptw_idx = MMU_PHYS_IDX,
+        };
 
         if (!mmu_translate(env, &nested_in, out, err)) {
             err->stage2 = S2_GPA;
@@ -436,7 +528,7 @@ static bool get_physical_address(CPUX86State *env, vaddr addr,
             in.cr3 = env->nested_cr3;
             in.pg_mode = env->nested_pg_mode;
             in.mmu_idx = MMU_USER_IDX;
-            in.use_stage2 = false;
+            in.ptw_idx = MMU_PHYS_IDX;
 
             if (!mmu_translate(env, &in, out, err)) {
                 err->stage2 = S2_GPA;
@@ -449,7 +541,7 @@ static bool get_physical_address(CPUX86State *env, vaddr addr,
     default:
         in.cr3 = env->cr[3];
         in.mmu_idx = mmu_idx;
-        in.use_stage2 = use_stage2;
+        in.ptw_idx = use_stage2 ? MMU_NESTED_IDX : MMU_PHYS_IDX;
         in.pg_mode = get_pg_mode(env);
 
         if (likely(in.pg_mode)) {
@@ -504,6 +596,8 @@ bool x86_cpu_tlb_fill(CPUState *cs, vaddr addr, int size,
     }
 
     if (probe) {
+        /* This will be used if recursing for stage2 translation. */
+        env->error_code = err.error_code;
         return false;
    }
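Taken together, ptw_setl() and the restart_* labels implement a read, compare-and-swap, retry loop on the low 32 bits of each entry, so accessed/dirty updates are never lost to a concurrent writer. A minimal standalone sketch of the same pattern, assuming C11 atomics in place of QEMU's qatomic_cmpxchg (pte_set_bits, mark_accessed and the X86_PTE_* constants are illustrative names, not QEMU identifiers):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define X86_PTE_ACCESSED  (1u << 5)   /* A bit, in the low 32 bits of the PTE */
#define X86_PTE_DIRTY     (1u << 6)   /* D bit, likewise */

/* OR 'set' into the entry only if it still equals 'old'; report success. */
static bool pte_set_bits(_Atomic uint32_t *pte_lo, uint32_t old, uint32_t set)
{
    if ((set & ~old) == 0) {
        return true;                  /* nothing new to set */
    }
    uint32_t new = old | set;
    return atomic_compare_exchange_strong(pte_lo, &old, new);
}

/* The walker reloads and retries when it loses the race, as restart_* does. */
static void mark_accessed(_Atomic uint32_t *pte_lo)
{
    uint32_t pte;
    do {
        pte = atomic_load(pte_lo);    /* reload; a real walk re-validates here */
    } while (!pte_set_bits(pte_lo, pte, X86_PTE_ACCESSED));
}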