From patchwork Mon Dec 17 15:01:14 2018
X-Patchwork-Submitter: Alex Bennée <alex.bennee@linaro.org>
X-Patchwork-Id: 154004
From: Alex Bennée <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Date: Mon, 17 Dec 2018 15:01:14 +0000
Message-Id: <20181217150116.10446-3-alex.bennee@linaro.org>
In-Reply-To: <20181217150116.10446-1-alex.bennee@linaro.org>
References: <20181217150116.10446-1-alex.bennee@linaro.org>
Subject: [Qemu-devel] [PATCH v1 2/4] accel/tcg: introduce softmmu.c
Cc: Paolo Bonzini, Richard Henderson, cota@braap.org,
    Alex Bennée, Peter Crosthwaite

Instead of expanding a series of macros to generate the load/store
helpers, we move the common logic into shared functions and rely on
the compiler to eliminate the dead code for each variant.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---
 accel/tcg/softmmu.c | 452 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 452 insertions(+)
 create mode 100644 accel/tcg/softmmu.c

-- 
2.17.1

diff --git a/accel/tcg/softmmu.c b/accel/tcg/softmmu.c
new file mode 100644
index 0000000000..e08730736f
--- /dev/null
+++ b/accel/tcg/softmmu.c
@@ -0,0 +1,452 @@
+/*
+ * Software MMU support
+ */
+
+#include "qemu/osdep.h"
+#include "cpu.h"
+#include "cputlb.h"
+#include "exec/exec-all.h"
+#include "exec/cpu_ldst.h"
+#include "tcg/tcg.h"
+
+#ifdef TARGET_WORDS_BIGENDIAN
+#define NEED_BE_BSWAP 0
+#define NEED_LE_BSWAP 1
+#else
+#define NEED_BE_BSWAP 1
+#define NEED_LE_BSWAP 0
+#endif
+
+/*
+ * Byte Swap Helper
+ *
+ * This should all compile away as dead code depending on the build
+ * host and access type.
+ */
+
+static inline uint64_t handle_bswap(uint64_t val, int size, bool big_endian)
+{
+    if ((big_endian && NEED_BE_BSWAP) || (!big_endian && NEED_LE_BSWAP)) {
+        switch (size) {
+        case 1: return val;
+        case 2: return bswap16(val);
+        case 4: return bswap32(val);
+        case 8: return bswap64(val);
+        default:
+            g_assert_not_reached();
+        }
+    } else {
+        return val;
+    }
+}
+
+/* Macro to call the above, with local variables from the use context. */
+#define VICTIM_TLB_HIT(TY, ADDR) \
+    victim_tlb_hit(env, mmu_idx, index, offsetof(CPUTLBEntry, TY), \
+                   (ADDR) & TARGET_PAGE_MASK)
+
+/*
+ * Load Helpers
+ *
+ * We support two different access types. SOFTMMU_CODE_ACCESS is
+ * specifically for reading instructions from system memory. It is
+ * called by the translation loop and in some helpers where the code
+ * is disassembled. It shouldn't be called directly by guest code.
+ */
+
+static tcg_target_ulong load_helper(CPUArchState *env, target_ulong addr,
+                                    TCGMemOpIdx oi, uintptr_t retaddr,
+                                    size_t size, bool big_endian,
+                                    bool code_read)
+{
+    uintptr_t mmu_idx = get_mmuidx(oi);
+    uintptr_t index = tlb_index(env, mmu_idx, addr);
+    CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
+    target_ulong tlb_addr = code_read ? entry->addr_code : entry->addr_read;
+    unsigned a_bits = get_alignment_bits(get_memop(oi));
+    uintptr_t haddr;
+    tcg_target_ulong res;
+
+    /* Handle unaligned accesses. */
+    if (addr & ((1 << a_bits) - 1)) {
+        cpu_unaligned_access(ENV_GET_CPU(env), addr,
+                             code_read ? MMU_INST_FETCH : MMU_DATA_LOAD,
+                             mmu_idx, retaddr);
+    }
+
+    /* If the TLB entry is for a different page, reload and try again. */
+    if (!tlb_hit(tlb_addr, addr)) {
+        if (!(code_read ? VICTIM_TLB_HIT(addr_code, addr)
+                        : VICTIM_TLB_HIT(addr_read, addr))) {
+            tlb_fill(ENV_GET_CPU(env), addr, size,
+                     code_read ? MMU_INST_FETCH : MMU_DATA_LOAD,
+                     mmu_idx, retaddr);
+        }
+        tlb_addr = code_read ? entry->addr_code : entry->addr_read;
+    }
+
+    /* Handle an IO access. */
+    if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
+        CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
+        uint64_t tmp;
+
+        if ((addr & (size - 1)) != 0) {
+            goto do_unaligned_access;
+        }
+
+        tmp = io_readx(env, iotlbentry, mmu_idx, addr, retaddr,
+                       tlb_addr & TLB_RECHECK,
+                       code_read ? MMU_INST_FETCH : MMU_DATA_LOAD, size);
+        return handle_bswap(tmp, size, big_endian);
+    }
+
+    /* Handle slow unaligned access (it spans two pages or IO). */
+    if (size > 1
+        && unlikely((addr & ~TARGET_PAGE_MASK) + size - 1
+                    >= TARGET_PAGE_SIZE)) {
+        target_ulong addr1, addr2;
+        tcg_target_ulong r1, r2;
+        unsigned shift;
+    do_unaligned_access:
+        addr1 = addr & ~(size - 1);
+        addr2 = addr1 + size;
+        r1 = load_helper(env, addr1, oi, retaddr, size, big_endian, code_read);
+        r2 = load_helper(env, addr2, oi, retaddr, size, big_endian, code_read);
+        shift = (addr & (size - 1)) * 8;
+
+        if (big_endian) {
+            /* Big-endian combine. */
+            res = (r1 << shift) | (r2 >> ((size * 8) - shift));
+        } else {
+            /* Little-endian combine. */
+            res = (r1 >> shift) | (r2 << ((size * 8) - shift));
+        }
+        return res;
+    }
+
+    haddr = addr + entry->addend;
+
+    switch (size) {
+    case 1:
+        res = ldub_p((uint8_t *)haddr);
+        break;
+    case 2:
+        if (big_endian) {
+            res = lduw_be_p((uint8_t *)haddr);
+        } else {
+            res = lduw_le_p((uint8_t *)haddr);
+        }
+        break;
+    case 4:
+        if (big_endian) {
+            res = ldl_be_p((uint8_t *)haddr);
+        } else {
+            res = ldl_le_p((uint8_t *)haddr);
+        }
+        break;
+    case 8:
+        if (big_endian) {
+            res = ldq_be_p((uint8_t *)haddr);
+        } else {
+            res = ldq_le_p((uint8_t *)haddr);
+        }
+        break;
+    default:
+        g_assert_not_reached();
+        break;
+    }
+
+    return res;
+}
+
+/*
+ * For the benefit of TCG generated code, we want to avoid the
+ * complication of ABI-specific return type promotion and always
+ * return a value extended to the register size of the host. This is
+ * tcg_target_long, except in the case of a 32-bit host and 64-bit
+ * data, and for that we always have uint64_t.
+ *
+ * We don't bother with this widened value for SOFTMMU_CODE_ACCESS.
+ */
+
+tcg_target_ulong __attribute__((flatten))
+helper_ret_ldub_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                    uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 1, false, false);
+}
+
+tcg_target_ulong __attribute__((flatten))
+helper_le_lduw_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                   uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 2, false, false);
+}
+
+tcg_target_ulong __attribute__((flatten))
+helper_be_lduw_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                   uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 2, true, false);
+}
+
+tcg_target_ulong __attribute__((flatten))
+helper_le_ldul_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                   uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 4, false, false);
+}
+
+tcg_target_ulong __attribute__((flatten))
+helper_be_ldul_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                   uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 4, true, false);
+}
+
+tcg_target_ulong __attribute__((flatten))
+helper_le_ldq_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                  uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 8, false, false);
+}
+
+tcg_target_ulong __attribute__((flatten))
+helper_be_ldq_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                  uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 8, true, false);
+}
+
+/*
+ * Code Access
+ */
+
+uint8_t __attribute__((flatten))
+helper_ret_ldb_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                    uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 1, false, true);
+}
+
+uint16_t __attribute__((flatten))
+helper_le_ldw_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                   uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 2, false, true);
+}
+
+uint16_t __attribute__((flatten))
+helper_be_ldw_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                   uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 2, true, true);
+}
+
+uint32_t __attribute__((flatten))
+helper_le_ldl_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                   uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 4, false, true);
+}
+
+uint32_t __attribute__((flatten))
+helper_be_ldl_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                   uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 4, true, true);
+}
+
+uint64_t __attribute__((flatten))
+helper_le_ldq_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                   uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 8, false, true);
+}
+
+uint64_t __attribute__((flatten))
+helper_be_ldq_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                   uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 8, true, true);
+}
+
+/* Provide signed versions of the load routines as well. We can of course
+   avoid this for 64-bit data, or for 32-bit data on a 32-bit host. */
+
+tcg_target_ulong __attribute__((flatten))
+helper_le_ldsw_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                   uintptr_t retaddr)
+{
+    return (int16_t)helper_le_lduw_mmu(env, addr, oi, retaddr);
+}
+
+tcg_target_ulong __attribute__((flatten))
+helper_be_ldsw_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                   uintptr_t retaddr)
+{
+    return (int16_t)helper_be_lduw_mmu(env, addr, oi, retaddr);
+}
+
+/*
+ * Store Helpers
+ */
+
+static void store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
+                         TCGMemOpIdx oi, uintptr_t retaddr, size_t size,
+                         bool big_endian)
+{
+    uintptr_t mmu_idx = get_mmuidx(oi);
+    uintptr_t index = tlb_index(env, mmu_idx, addr);
+    CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
+    target_ulong tlb_addr = tlb_addr_write(entry);
+    unsigned a_bits = get_alignment_bits(get_memop(oi));
+    uintptr_t haddr;
+
+    /* Handle unaligned accesses. */
+    if (addr & ((1 << a_bits) - 1)) {
+        cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
+                             mmu_idx, retaddr);
+    }
+
+    /* If the TLB entry is for a different page, reload and try again. */
+    if (!tlb_hit(tlb_addr, addr)) {
+        if (!VICTIM_TLB_HIT(addr_write, addr)) {
+            tlb_fill(ENV_GET_CPU(env), addr, size, MMU_DATA_STORE,
+                     mmu_idx, retaddr);
+        }
+        tlb_addr = tlb_addr_write(entry) & ~TLB_INVALID_MASK;
+    }
+
+    /* Handle an IO access. */
+    if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
+        CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
+
+        if ((addr & (size - 1)) != 0) {
+            goto do_unaligned_access;
+        }
+
+        io_writex(env, iotlbentry, mmu_idx,
+                  handle_bswap(val, size, big_endian),
+                  addr, retaddr, tlb_addr & TLB_RECHECK, size);
+        return;
+    }
+
+    /* Handle slow unaligned access (it spans two pages or IO). */
+    if (size > 1
+        && unlikely((addr & ~TARGET_PAGE_MASK) + size - 1
+                    >= TARGET_PAGE_SIZE)) {
+        int i;
+        uintptr_t index2;
+        CPUTLBEntry *entry2;
+        target_ulong page2, tlb_addr2;
+    do_unaligned_access:
+        /* Ensure the second page is in the TLB. Note that the first page
+           is already guaranteed to be filled, and that the second page
+           cannot evict the first. */
+        page2 = (addr + size) & TARGET_PAGE_MASK;
+        index2 = tlb_index(env, mmu_idx, page2);
+        entry2 = tlb_entry(env, mmu_idx, page2);
+        tlb_addr2 = tlb_addr_write(entry2);
+        if (!tlb_hit_page(tlb_addr2, page2)
+            && !victim_tlb_hit(env, mmu_idx, index2,
+                               offsetof(CPUTLBEntry, addr_write),
+                               page2 & TARGET_PAGE_MASK)) {
+            tlb_fill(ENV_GET_CPU(env), page2, size, MMU_DATA_STORE,
+                     mmu_idx, retaddr);
+        }
+
+        /* XXX: not efficient, but simple. */
+        /* This loop must go in the forward direction to avoid issues
+           with self-modifying code in Windows 64-bit. */
+        for (i = 0; i < size; ++i) {
+            uint8_t val8;
+            if (big_endian) {
+                /* Big-endian extract. */
+                val8 = val >> (((size - 1) * 8) - (i * 8));
+            } else {
+                /* Little-endian extract. */
+                val8 = val >> (i * 8);
+            }
+            store_helper(env, addr + i, val8, oi, retaddr, 1, big_endian);
+        }
+        return;
+    }
+
+    haddr = addr + entry->addend;
+
+    switch (size) {
+    case 1:
+        stb_p((uint8_t *)haddr, val);
+        break;
+    case 2:
+        if (big_endian) {
+            stw_be_p((uint8_t *)haddr, val);
+        } else {
+            stw_le_p((uint8_t *)haddr, val);
+        }
+        break;
+    case 4:
+        if (big_endian) {
+            stl_be_p((uint8_t *)haddr, val);
+        } else {
+            stl_le_p((uint8_t *)haddr, val);
+        }
+        break;
+    case 8:
+        if (big_endian) {
+            stq_be_p((uint8_t *)haddr, val);
+        } else {
+            stq_le_p((uint8_t *)haddr, val);
+        }
+        break;
+    default:
+        g_assert_not_reached();
+        break;
+    }
+}
+
+void __attribute__((flatten))
+helper_ret_stb_mmu(CPUArchState *env, target_ulong addr, uint8_t val,
+                   TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    store_helper(env, addr, val, oi, retaddr, 1, false);
+}
+
+void __attribute__((flatten))
+helper_le_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val,
+                  TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    store_helper(env, addr, val, oi, retaddr, 2, false);
+}
+
+void __attribute__((flatten))
+helper_be_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val,
+                  TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    store_helper(env, addr, val, oi, retaddr, 2, true);
+}
+
+void __attribute__((flatten))
+helper_le_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
+                  TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    store_helper(env, addr, val, oi, retaddr, 4, false);
+}
+
+void __attribute__((flatten))
+helper_be_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
+                  TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    store_helper(env, addr, val, oi, retaddr, 4, true);
+}
+
+void __attribute__((flatten))
+helper_le_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
+                  TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    store_helper(env, addr, val, oi, retaddr, 8, false);
+}
+
+void __attribute__((flatten))
+helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
+                  TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    store_helper(env, addr, val, oi, retaddr, 8, true);
+}
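
For readers following the dead-code-elimination argument in the commit
message, here is a minimal stand-alone sketch of the same trick. It is
illustrative only and not part of the patch: generic_load and le_lduw
are hypothetical stand-ins for load_helper and helper_le_lduw_mmu
above.

/*
 * Illustrative sketch only -- not part of the patch. One generic
 * implementation, parameterised by size and endianness in the same
 * way as load_helper.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

static inline uint64_t generic_load(const uint8_t *p, size_t size,
                                    bool big_endian)
{
    uint64_t val = 0;
    size_t i;

    for (i = 0; i < size; i++) {
        if (big_endian) {
            /* Accumulate most-significant byte first. */
            val = (val << 8) | p[i];
        } else {
            /* Place byte i at bit offset 8 * i. */
            val |= (uint64_t)p[i] << (i * 8);
        }
    }
    return val;
}

/*
 * With flatten the call is inlined, and because size == 2 and
 * big_endian == false are compile-time constants, the big-endian
 * branch and the other sizes are eliminated as dead code -- the same
 * property the helper_*_mmu wrappers above rely on.
 */
uint16_t __attribute__((flatten))
le_lduw(const uint8_t *p)
{
    return (uint16_t)generic_load(p, 2, false);
}

With a typical -O2 build the generated code for le_lduw should reduce
to a couple of byte loads (or a single 16-bit load), comparable to what
the per-variant macro expansion used to produce.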