From patchwork Fri Apr 20 15:50:43 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 133904
From: Alex Bennée
To: qemu-devel@nongnu.org
Date: Fri, 20 Apr 2018 16:50:43 +0100
Message-Id: <20180420155045.18862-5-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.17.0
In-Reply-To: <20180420155045.18862-1-alex.bennee@linaro.org>
References: <20180420155045.18862-1-alex.bennee@linaro.org>
Subject: [Qemu-devel] [RFC PATCH 4/6] accel/tcg: create load_helper
Cc: peter.maydell@linaro.org, Peter Crosthwaite, richard.henderson@linaro.org, pbonzini@redhat.com, Alex Bennée, Richard Henderson

Collapse all the helpers so far into a common load_helper which can be
used for mmu/cmmu and little/big endian functions.

Signed-off-by: Alex Bennée
---
 accel/tcg/softmmu.c | 578 ++++++++++++++------------------------------
 1 file changed, 183 insertions(+), 395 deletions(-)

-- 
2.17.0

diff --git a/accel/tcg/softmmu.c b/accel/tcg/softmmu.c
index fcad3d360f..e6f93250f9 100644
--- a/accel/tcg/softmmu.c
+++ b/accel/tcg/softmmu.c
@@ -15,88 +15,236 @@
     victim_tlb_hit(env, mmu_idx, index, offsetof(CPUTLBEntry, TY), \
                    (ADDR) & TARGET_PAGE_MASK)
 
-/* For the benefit of TCG generated code, we want to avoid the complication
-   of ABI-specific return type promotion and always return a value extended
-   to the register size of the host. This is tcg_target_long, except in the
-   case of a 32-bit host and 64-bit data, and for that we always have
-   uint64_t. Don't bother with this widened value for SOFTMMU_CODE_ACCESS. */
-static inline uint8_t io_readb(CPUArchState *env,
-                               size_t mmu_idx, size_t index,
-                               target_ulong addr,
-                               uintptr_t retaddr)
+
+/*
+ * Load Helpers
+ *
+ * We support two different access types. SOFTMMU_CODE_ACCESS is
+ * specifically for reading instructions from system memory. It is
+ * called by the translation loop and in some helpers where the code
+ * is disassembled. It shouldn't be called directly by guest code.
+ */
+
+static inline uint8_t io_readb(CPUArchState *env, size_t mmu_idx, size_t index,
+                               target_ulong addr, uintptr_t retaddr)
 {
     CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
     return io_readx(env, iotlbentry, mmu_idx, addr, retaddr, 1);
 }
+static inline uint16_t io_readw(CPUArchState *env, size_t mmu_idx, size_t index,
+                                target_ulong addr, uintptr_t retaddr)
+{
+    CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
+    return io_readx(env, iotlbentry, mmu_idx, addr, retaddr, 2);
+}
 
-tcg_target_ulong helper_ret_ldub_mmu(CPUArchState *env, target_ulong addr,
-                                     TCGMemOpIdx oi, uintptr_t retaddr)
+static tcg_target_ulong load_helper(CPUArchState *env, target_ulong addr,
+                                    size_t size, bool big_endian,
+                                    bool code_read, TCGMemOpIdx oi,
+                                    uintptr_t retaddr)
 {
     unsigned mmu_idx = get_mmuidx(oi);
     int index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
-    target_ulong tlb_addr = env->tlb_table[mmu_idx][index].addr_read;
+    target_ulong tlb_addr;
     unsigned a_bits = get_alignment_bits(get_memop(oi));
     uintptr_t haddr;
-    uint8_t res;
+    tcg_target_ulong res;
+
+    if (code_read) {
+        tlb_addr = env->tlb_table[mmu_idx][index].addr_code;
+    } else {
+        tlb_addr = env->tlb_table[mmu_idx][index].addr_read;
+    }
 
+    /* Handle unaligned */
     if (addr & ((1 << a_bits) - 1)) {
-        cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_LOAD,
-                             mmu_idx, retaddr);
+        if (code_read) {
+            cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_INST_FETCH,
+                                 mmu_idx, retaddr);
+        } else {
+            cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_LOAD,
+                                 mmu_idx, retaddr);
+        }
     }
 
     /* If the TLB entry is for a different page, reload and try again. */
     if ((addr & TARGET_PAGE_MASK)
         != (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
-        if (!VICTIM_TLB_HIT(addr_read, addr)) {
-            tlb_fill(ENV_GET_CPU(env), addr, 1, MMU_DATA_LOAD,
-                     mmu_idx, retaddr);
+        if (code_read) {
+            if (!VICTIM_TLB_HIT(addr_code, addr)) {
+                tlb_fill(ENV_GET_CPU(env), addr, size, MMU_INST_FETCH,
+                         mmu_idx, retaddr);
+            }
+            tlb_addr = env->tlb_table[mmu_idx][index].addr_code;
+        } else {
+            if (!VICTIM_TLB_HIT(addr_read, addr)) {
+                tlb_fill(ENV_GET_CPU(env), addr, size, MMU_DATA_LOAD,
+                         mmu_idx, retaddr);
+            }
+            tlb_addr = env->tlb_table[mmu_idx][index].addr_read;
         }
-        tlb_addr = env->tlb_table[mmu_idx][index].addr_read;
     }
 
     /* Handle an IO access. */
     if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
-        if ((addr & (1 - 1)) != 0) {
+        if ((addr & (size - 1)) != 0) {
             goto do_unaligned_access;
         }
 
         /* ??? Note that the io helpers always read data in the target
           byte ordering. We should push the LE/BE request down into io. */
-        res = io_readb(env, mmu_idx, index, addr, retaddr);
-        res = (res);
+        switch (size) {
+        case 1:
+        {
+            uint8_t rv = io_readb(env, mmu_idx, index, addr, retaddr);
+            res = rv;
+            break;
+        }
+        case 2:
+        {
+            uint16_t rv = io_readw(env, mmu_idx, index, addr, retaddr);
+            if (big_endian) {
+                res = bswap16(rv);
+            } else {
+                res = rv;
+            }
+            break;
+        }
+        default:
+            g_assert_not_reached();
+            break;
+        }
+
         return res;
     }
 
     /* Handle slow unaligned access (it spans two pages or IO). */
-    if (1 > 1
-        && unlikely((addr & ~TARGET_PAGE_MASK) + 1 - 1
+    if (size > 1
+        && unlikely((addr & ~TARGET_PAGE_MASK) + size - 1
                     >= TARGET_PAGE_SIZE)) {
         target_ulong addr1, addr2;
-        uint8_t res1, res2;
+        tcg_target_ulong res1, res2;
         unsigned shift;
     do_unaligned_access:
-        addr1 = addr & ~(1 - 1);
-        addr2 = addr1 + 1;
-        res1 = helper_ret_ldub_mmu(env, addr1, oi, retaddr);
-        res2 = helper_ret_ldub_mmu(env, addr2, oi, retaddr);
-        shift = (addr & (1 - 1)) * 8;
-
-        /* Little-endian combine. */
-        res = (res1 >> shift) | (res2 << ((1 * 8) - shift));
+        addr1 = addr & ~(size - 1);
+        addr2 = addr1 + size;
+        res1 = load_helper(env, addr1, size, big_endian, code_read, oi, retaddr);
+        res2 = load_helper(env, addr2, size, big_endian, code_read, oi, retaddr);
+        shift = (addr & (size - 1)) * 8;
+
+        if (big_endian) {
+            /* Big-endian combine. */
+            res = (res1 << shift) | (res2 >> ((size * 8) - shift));
+        } else {
+            /* Little-endian combine. */
+            res = (res1 >> shift) | (res2 << ((size * 8) - shift));
+        }
         return res;
     }
 
     haddr = addr + env->tlb_table[mmu_idx][index].addend;
-    res = ldub_p((uint8_t *)haddr);
+    switch (size) {
+    case 1:
+        res = ldub_p((uint8_t *)haddr);
+        break;
+    case 2:
+        if (big_endian) {
+            res = lduw_be_p((uint8_t *)haddr);
+        } else {
+            res = lduw_le_p((uint8_t *)haddr);
+        }
+        break;
+    default:
+        g_assert_not_reached();
+        break;
+    }
+
+    return res;
+}
+/*
+ * For the benefit of TCG generated code, we want to avoid the
+ * complication of ABI-specific return type promotion and always
+ * return a value extended to the register size of the host. This is
+ * tcg_target_long, except in the case of a 32-bit host and 64-bit
+ * data, and for that we always have uint64_t.
+ *
+ * We don't bother with this widened value for SOFTMMU_CODE_ACCESS.
+ */
 
-    return res;
+tcg_target_ulong __attribute__((flatten)) helper_ret_ldub_mmu(CPUArchState *env,
+                                                              target_ulong addr,
+                                                              TCGMemOpIdx oi,
+                                                              uintptr_t retaddr)
+{
+    return load_helper(env, addr, 1, false, false, oi, retaddr);
+}
+
+
+
+tcg_target_ulong __attribute__((flatten)) helper_le_lduw_mmu(CPUArchState *env,
+                                                             target_ulong addr,
+                                                             TCGMemOpIdx oi,
+                                                             uintptr_t retaddr)
+{
+    return load_helper(env, addr, 2, false, false, oi, retaddr);
 }
+
+
+tcg_target_ulong __attribute__((flatten)) helper_be_lduw_mmu(CPUArchState *env,
+                                                             target_ulong addr,
+                                                             TCGMemOpIdx oi,
+                                                             uintptr_t retaddr)
+{
+    return load_helper(env, addr, 2, true, false, oi, retaddr);
+}
+
+uint8_t __attribute__((flatten)) helper_ret_ldb_cmmu (CPUArchState *env,
+                                                      target_ulong addr,
+                                                      TCGMemOpIdx oi,
+                                                      uintptr_t retaddr)
+{
+    return load_helper(env, addr, 1, false, true, oi, retaddr);
+}
+
+uint16_t __attribute__((flatten)) helper_le_ldw_cmmu(CPUArchState *env,
+                                                     target_ulong addr,
+                                                     TCGMemOpIdx oi,
+                                                     uintptr_t retaddr)
+{
+    return load_helper(env, addr, 2, false, true, oi, retaddr);
+}
+
+uint16_t __attribute__((flatten)) helper_be_ldw_cmmu(CPUArchState *env,
+                                                     target_ulong addr,
+                                                     TCGMemOpIdx oi,
+                                                     uintptr_t retaddr)
+{
+    return load_helper(env, addr, 2, true, true, oi, retaddr);
+}
+
 /* Provide signed versions of the load routines as well. We can of course
    avoid this for 64-bit data, or for 32-bit data on 32-bit host. */
+
+tcg_target_ulong __attribute__((flatten)) helper_le_ldsw_mmu(CPUArchState *env,
+                                                             target_ulong addr,
+                                                             TCGMemOpIdx oi,
+                                                             uintptr_t retaddr)
+{
+    return (int16_t)helper_le_lduw_mmu(env, addr, oi, retaddr);
+}
+
+
+tcg_target_ulong __attribute__((flatten)) helper_be_ldsw_mmu(CPUArchState *env,
+                                                             target_ulong addr,
+                                                             TCGMemOpIdx oi,
+                                                             uintptr_t retaddr)
+{
+    return (int16_t)helper_be_lduw_mmu(env, addr, oi, retaddr);
+}
+
 static inline void io_writeb(CPUArchState *env,
                              size_t mmu_idx, size_t index,
                              uint8_t val,
@@ -183,232 +331,6 @@ void helper_ret_stb_mmu(CPUArchState *env, target_ulong addr, uint8_t val,
 }
 
-/* For the benefit of TCG generated code, we want to avoid the complication
-   of ABI-specific return type promotion and always return a value extended
-   to the register size of the host. This is tcg_target_long, except in the
-   case of a 32-bit host and 64-bit data, and for that we always have
-   uint64_t. Don't bother with this widened value for SOFTMMU_CODE_ACCESS. */
-uint8_t helper_ret_ldb_cmmu(CPUArchState *env, target_ulong addr,
-                            TCGMemOpIdx oi, uintptr_t retaddr)
-{
-    unsigned mmu_idx = get_mmuidx(oi);
-    int index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
-    target_ulong tlb_addr = env->tlb_table[mmu_idx][index].addr_code;
-    unsigned a_bits = get_alignment_bits(get_memop(oi));
-    uintptr_t haddr;
-    uint8_t res;
-
-    if (addr & ((1 << a_bits) - 1)) {
-        cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_INST_FETCH,
-                             mmu_idx, retaddr);
-    }
-
-    /* If the TLB entry is for a different page, reload and try again. */
-    if ((addr & TARGET_PAGE_MASK)
-        != (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
-        if (!VICTIM_TLB_HIT(addr_code, addr)) {
-            tlb_fill(ENV_GET_CPU(env), addr, 1, MMU_INST_FETCH,
-                     mmu_idx, retaddr);
-        }
-        tlb_addr = env->tlb_table[mmu_idx][index].addr_code;
-    }
-
-    /* Handle an IO access. */
-    if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
-        if ((addr & (1 - 1)) != 0) {
-            goto do_unaligned_access;
-        }
-
-        /* ??? Note that the io helpers always read data in the target
-           byte ordering. We should push the LE/BE request down into io. */
-        res = io_readb(env, mmu_idx, index, addr, retaddr);
-        res = (res);
-        return res;
-    }
-
-    /* Handle slow unaligned access (it spans two pages or IO). */
-    if (1 > 1
-        && unlikely((addr & ~TARGET_PAGE_MASK) + 1 - 1
-                    >= TARGET_PAGE_SIZE)) {
-        target_ulong addr1, addr2;
-        uint8_t res1, res2;
-        unsigned shift;
-    do_unaligned_access:
-        addr1 = addr & ~(1 - 1);
-        addr2 = addr1 + 1;
-        res1 = helper_ret_ldb_cmmu(env, addr1, oi, retaddr);
-        res2 = helper_ret_ldb_cmmu(env, addr2, oi, retaddr);
-        shift = (addr & (1 - 1)) * 8;
-
-        /* Little-endian combine. */
-        res = (res1 >> shift) | (res2 << ((1 * 8) - shift));
-        return res;
-    }
-
-    haddr = addr + env->tlb_table[mmu_idx][index].addend;
-
-    res = ldub_p((uint8_t *)haddr);
-
-
-
-    return res;
-}
-
-static inline uint16_t io_readw(CPUArchState *env,
-                                size_t mmu_idx, size_t index,
-                                target_ulong addr,
-                                uintptr_t retaddr)
-{
-    CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
-    return io_readx(env, iotlbentry, mmu_idx, addr, retaddr, 2);
-}
-
-
-tcg_target_ulong helper_le_lduw_mmu(CPUArchState *env, target_ulong addr,
-                                    TCGMemOpIdx oi, uintptr_t retaddr)
-{
-    unsigned mmu_idx = get_mmuidx(oi);
-    int index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
-    target_ulong tlb_addr = env->tlb_table[mmu_idx][index].addr_read;
-    unsigned a_bits = get_alignment_bits(get_memop(oi));
-    uintptr_t haddr;
-    uint16_t res;
-
-    if (addr & ((1 << a_bits) - 1)) {
-        cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_LOAD,
-                             mmu_idx, retaddr);
-    }
-
-    /* If the TLB entry is for a different page, reload and try again. */
-    if ((addr & TARGET_PAGE_MASK)
-        != (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
-        if (!VICTIM_TLB_HIT(addr_read, addr)) {
-            tlb_fill(ENV_GET_CPU(env), addr, 2, MMU_DATA_LOAD,
-                     mmu_idx, retaddr);
-        }
-        tlb_addr = env->tlb_table[mmu_idx][index].addr_read;
-    }
-
-    /* Handle an IO access. */
-    if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
-        if ((addr & (2 - 1)) != 0) {
-            goto do_unaligned_access;
-        }
-
-        /* ??? Note that the io helpers always read data in the target
-           byte ordering. We should push the LE/BE request down into io. */
-        res = io_readw(env, mmu_idx, index, addr, retaddr);
-        res = (res);
-        return res;
-    }
-
-    /* Handle slow unaligned access (it spans two pages or IO). */
-    if (2 > 1
-        && unlikely((addr & ~TARGET_PAGE_MASK) + 2 - 1
-                    >= TARGET_PAGE_SIZE)) {
-        target_ulong addr1, addr2;
-        uint16_t res1, res2;
-        unsigned shift;
-    do_unaligned_access:
-        addr1 = addr & ~(2 - 1);
-        addr2 = addr1 + 2;
-        res1 = helper_le_lduw_mmu(env, addr1, oi, retaddr);
-        res2 = helper_le_lduw_mmu(env, addr2, oi, retaddr);
-        shift = (addr & (2 - 1)) * 8;
-
-        /* Little-endian combine. */
-        res = (res1 >> shift) | (res2 << ((2 * 8) - shift));
-        return res;
-    }
-
-    haddr = addr + env->tlb_table[mmu_idx][index].addend;
-
-
-
-    res = lduw_le_p((uint8_t *)haddr);
-
-    return res;
-}
-
-
-tcg_target_ulong helper_be_lduw_mmu(CPUArchState *env, target_ulong addr,
-                                    TCGMemOpIdx oi, uintptr_t retaddr)
-{
-    unsigned mmu_idx = get_mmuidx(oi);
-    int index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
-    target_ulong tlb_addr = env->tlb_table[mmu_idx][index].addr_read;
-    unsigned a_bits = get_alignment_bits(get_memop(oi));
-    uintptr_t haddr;
-    uint16_t res;
-
-    if (addr & ((1 << a_bits) - 1)) {
-        cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_LOAD,
-                             mmu_idx, retaddr);
-    }
-
-    /* If the TLB entry is for a different page, reload and try again. */
-    if ((addr & TARGET_PAGE_MASK)
-        != (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
-        if (!VICTIM_TLB_HIT(addr_read, addr)) {
-            tlb_fill(ENV_GET_CPU(env), addr, 2, MMU_DATA_LOAD,
-                     mmu_idx, retaddr);
-        }
-        tlb_addr = env->tlb_table[mmu_idx][index].addr_read;
-    }
-
-    /* Handle an IO access. */
-    if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
-        if ((addr & (2 - 1)) != 0) {
-            goto do_unaligned_access;
-        }
-
-        /* ??? Note that the io helpers always read data in the target
-           byte ordering. We should push the LE/BE request down into io. */
-        res = io_readw(env, mmu_idx, index, addr, retaddr);
-        res = bswap16(res);
-        return res;
-    }
-
-    /* Handle slow unaligned access (it spans two pages or IO). */
-    if (2 > 1
-        && unlikely((addr & ~TARGET_PAGE_MASK) + 2 - 1
-                    >= TARGET_PAGE_SIZE)) {
-        target_ulong addr1, addr2;
-        uint16_t res1, res2;
-        unsigned shift;
-    do_unaligned_access:
-        addr1 = addr & ~(2 - 1);
-        addr2 = addr1 + 2;
-        res1 = helper_be_lduw_mmu(env, addr1, oi, retaddr);
-        res2 = helper_be_lduw_mmu(env, addr2, oi, retaddr);
-        shift = (addr & (2 - 1)) * 8;
-
-        /* Big-endian combine. */
-        res = (res1 << shift) | (res2 >> ((2 * 8) - shift));
-        return res;
-    }
-
-    haddr = addr + env->tlb_table[mmu_idx][index].addend;
-    res = lduw_be_p((uint8_t *)haddr);
-    return res;
-}
-
-
-/* Provide signed versions of the load routines as well. We can of course
-   avoid this for 64-bit data, or for 32-bit data on 32-bit host. */
-
-tcg_target_ulong helper_le_ldsw_mmu(CPUArchState *env, target_ulong addr,
-                                    TCGMemOpIdx oi, uintptr_t retaddr)
-{
-    return (int16_t)helper_le_lduw_mmu(env, addr, oi, retaddr);
-}
-
-
-tcg_target_ulong helper_be_ldsw_mmu(CPUArchState *env, target_ulong addr,
-                                    TCGMemOpIdx oi, uintptr_t retaddr)
-{
-    return (int16_t)helper_be_lduw_mmu(env, addr, oi, retaddr);
-}
 
 /* Provide signed versions of the load routines as well. We can of course
@@ -571,137 +493,3 @@ void helper_be_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val,
     haddr = addr + env->tlb_table[mmu_idx][index].addend;
     stw_be_p((uint8_t *)haddr, val);
 }
-
-/* For the benefit of TCG generated code, we want to avoid the complication
-   of ABI-specific return type promotion and always return a value extended
-   to the register size of the host. This is tcg_target_long, except in the
-   case of a 32-bit host and 64-bit data, and for that we always have
-   uint64_t. Don't bother with this widened value for SOFTMMU_CODE_ACCESS. */
-uint16_t helper_le_ldw_cmmu(CPUArchState *env, target_ulong addr,
-                            TCGMemOpIdx oi, uintptr_t retaddr)
-{
-    unsigned mmu_idx = get_mmuidx(oi);
-    int index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
-    target_ulong tlb_addr = env->tlb_table[mmu_idx][index].addr_code;
-    unsigned a_bits = get_alignment_bits(get_memop(oi));
-    uintptr_t haddr;
-    uint16_t res;
-
-    if (addr & ((1 << a_bits) - 1)) {
-        cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_INST_FETCH,
-                             mmu_idx, retaddr);
-    }
-
-    /* If the TLB entry is for a different page, reload and try again. */
-    if ((addr & TARGET_PAGE_MASK)
-        != (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
-        if (!VICTIM_TLB_HIT(addr_code, addr)) {
-            tlb_fill(ENV_GET_CPU(env), addr, 2, MMU_INST_FETCH,
-                     mmu_idx, retaddr);
-        }
-        tlb_addr = env->tlb_table[mmu_idx][index].addr_code;
-    }
-
-    /* Handle an IO access. */
-    if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
-        if ((addr & (2 - 1)) != 0) {
-            goto do_unaligned_access;
-        }
-
-        /* ??? Note that the io helpers always read data in the target
-           byte ordering. We should push the LE/BE request down into io. */
-        res = io_readw(env, mmu_idx, index, addr, retaddr);
-        res = (res);
-        return res;
-    }
-
-    /* Handle slow unaligned access (it spans two pages or IO). */
-    if (2 > 1
-        && unlikely((addr & ~TARGET_PAGE_MASK) + 2 - 1
-                    >= TARGET_PAGE_SIZE)) {
-        target_ulong addr1, addr2;
-        uint16_t res1, res2;
-        unsigned shift;
-    do_unaligned_access:
-        addr1 = addr & ~(2 - 1);
-        addr2 = addr1 + 2;
-        res1 = helper_le_ldw_cmmu(env, addr1, oi, retaddr);
-        res2 = helper_le_ldw_cmmu(env, addr2, oi, retaddr);
-        shift = (addr & (2 - 1)) * 8;
-
-        /* Little-endian combine. */
-        res = (res1 >> shift) | (res2 << ((2 * 8) - shift));
-        return res;
-    }
-
-    haddr = addr + env->tlb_table[mmu_idx][index].addend;
-
-
-
-    res = lduw_le_p((uint8_t *)haddr);
-
-    return res;
-}
-
-
-uint16_t helper_be_ldw_cmmu(CPUArchState *env, target_ulong addr,
-                            TCGMemOpIdx oi, uintptr_t retaddr)
-{
-    unsigned mmu_idx = get_mmuidx(oi);
-    int index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
-    target_ulong tlb_addr = env->tlb_table[mmu_idx][index].addr_code;
-    unsigned a_bits = get_alignment_bits(get_memop(oi));
-    uintptr_t haddr;
-    uint16_t res;
-
-    if (addr & ((1 << a_bits) - 1)) {
-        cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_INST_FETCH,
-                             mmu_idx, retaddr);
-    }
-
-    /* If the TLB entry is for a different page, reload and try again. */
-    if ((addr & TARGET_PAGE_MASK)
-        != (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
-        if (!VICTIM_TLB_HIT(addr_code, addr)) {
-            tlb_fill(ENV_GET_CPU(env), addr, 2, MMU_INST_FETCH,
-                     mmu_idx, retaddr);
-        }
-        tlb_addr = env->tlb_table[mmu_idx][index].addr_code;
-    }
-
-    /* Handle an IO access. */
-    if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
-        if ((addr & (2 - 1)) != 0) {
-            goto do_unaligned_access;
-        }
-
-        /* ??? Note that the io helpers always read data in the target
-           byte ordering. We should push the LE/BE request down into io. */
-        res = io_readw(env, mmu_idx, index, addr, retaddr);
-        res = bswap16(res);
-        return res;
-    }
-
-    /* Handle slow unaligned access (it spans two pages or IO). */
-    if (2 > 1
-        && unlikely((addr & ~TARGET_PAGE_MASK) + 2 - 1
-                    >= TARGET_PAGE_SIZE)) {
-        target_ulong addr1, addr2;
-        uint16_t res1, res2;
-        unsigned shift;
-    do_unaligned_access:
-        addr1 = addr & ~(2 - 1);
-        addr2 = addr1 + 2;
-        res1 = helper_be_ldw_cmmu(env, addr1, oi, retaddr);
-        res2 = helper_be_ldw_cmmu(env, addr2, oi, retaddr);
-        shift = (addr & (2 - 1)) * 8;
-
-        /* Big-endian combine. */
-        res = (res1 << shift) | (res2 >> ((2 * 8) - shift));
-        return res;
-    }
-
-    haddr = addr + env->tlb_table[mmu_idx][index].addend;
-    res = lduw_be_p((uint8_t *)haddr);
-    return res;
-}
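
Illustration 1 (not part of the patch): the consolidation above is easier to see
outside the TLB machinery. One parameterized loader takes size / big_endian /
code_read arguments and the former per-variant entry points become one-line
wrappers, which is the shape load_helper() and the __attribute__((flatten))
helpers take in the diff. Everything below is a standalone sketch under invented
names (load_common, guest_ram, ldub_mmu, ...) with no TLB, IO or fault handling,
so it only models the dispatch, not QEMU's API:

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static const uint8_t guest_ram[8] = { 0xde, 0xad, 0xbe, 0xef, 0x01, 0x02, 0x03, 0x04 };

/* One parameterized loader instead of one function per size/endian/access. */
static uint32_t load_common(size_t addr, size_t size, bool big_endian,
                            bool code_read)
{
    uint32_t res = 0;

    for (size_t i = 0; i < size; i++) {
        if (big_endian) {
            res = (res << 8) | guest_ram[addr + i];
        } else {
            res |= (uint32_t)guest_ram[addr + i] << (i * 8);
        }
    }
    if (code_read) {
        /* stand-in for the MMU_INST_FETCH / addr_code path */
        printf("fetch @%zu -> 0x%" PRIx32 "\n", addr, res);
    }
    return res;
}

/* The old per-variant entry points collapse into thin wrappers. */
static uint32_t ldub_mmu(size_t addr)    { return load_common(addr, 1, false, false); }
static uint32_t lduw_le_mmu(size_t addr) { return load_common(addr, 2, false, false); }
static uint32_t lduw_be_mmu(size_t addr) { return load_common(addr, 2, true, false); }
static uint32_t ldw_be_cmmu(size_t addr) { return load_common(addr, 2, true, true); }

int main(void)
{
    printf("ldub    0x%02" PRIx32 "\n", ldub_mmu(0));    /* 0xde */
    printf("lduw_le 0x%04" PRIx32 "\n", lduw_le_mmu(0)); /* 0xadde */
    printf("lduw_be 0x%04" PRIx32 "\n", lduw_be_mmu(0)); /* 0xdead */
    return ldw_be_cmmu(2) == 0xbeef ? 0 : 1;             /* 0xbeef */
}

Because the wrappers pass compile-time constants for size/big_endian/code_read,
inlining them into the common core lets the compiler fold the branches away,
which is presumably why the real helpers in the diff are marked
__attribute__((flatten)).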
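Illustration 2 (not part of the patch): the do_unaligned_access path in
load_helper() loads the two aligned halves recursively and recombines them with
shifts, picking the big- or little-endian formula. A minimal standalone check of
that combine, using an invented 4-byte buffer and 2-byte accesses only:

#include <stdint.h>
#include <stdio.h>

/* Load 2 bytes at p in the requested byte order. */
static uint32_t load2(const uint8_t *p, int big_endian)
{
    return big_endian ? (uint32_t)((p[0] << 8) | p[1])
                      : (uint32_t)((p[1] << 8) | p[0]);
}

int main(void)
{
    const uint8_t buf[4] = { 0x11, 0x22, 0x33, 0x44 };
    const size_t size = 2;
    const size_t addr = 1;                    /* straddles the two halves */
    size_t addr1 = addr & ~(size - 1);        /* 0 */
    size_t addr2 = addr1 + size;              /* 2 */
    unsigned shift = (addr & (size - 1)) * 8; /* 8 */

    for (int be = 0; be <= 1; be++) {
        uint32_t res1 = load2(buf + addr1, be);
        uint32_t res2 = load2(buf + addr2, be);
        uint32_t res;

        if (be) {
            /* Big-endian combine. */
            res = (res1 << shift) | (res2 >> (size * 8 - shift));
        } else {
            /* Little-endian combine. */
            res = (res1 >> shift) | (res2 << (size * 8 - shift));
        }
        res &= 0xffff; /* truncate to the 16-bit access size */

        printf("%s: combined 0x%04x, direct 0x%04x\n",
               be ? "BE" : "LE", (unsigned)res,
               (unsigned)load2(buf + addr, be));
    }
    return 0;
}

Both lines print matching values for the combined and the direct read (0x3322
for LE, 0x2233 for BE), mirroring the /* Little-endian combine */ and
/* Big-endian combine */ expressions in load_helper above.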