From patchwork Tue Sep 3 19:12:02 2013
From: Peter Maydell <peter.maydell@linaro.org>
To: qemu-devel@nongnu.org
Cc: patches@linaro.org, Andreas Schwab, Alexander Graf, "Mian M. Hamayun",
    kvmarm@lists.cs.columbia.edu, Andreas Färber
Hamayun" , kvmarm@lists.cs.columbia.edu, =?UTF-8?q?Andreas=20F=C3=A4rber?= Subject: [PATCH v6 02/24] target-arm: Abstract out load/store from a vaddr in AArch32 Date: Tue, 3 Sep 2013 20:12:02 +0100 Message-Id: <1378235544-22290-3-git-send-email-peter.maydell@linaro.org> X-Mailer: git-send-email 1.7.2.5 In-Reply-To: <1378235544-22290-1-git-send-email-peter.maydell@linaro.org> References: <1378235544-22290-1-git-send-email-peter.maydell@linaro.org> Sender: Peter Maydell X-Removed-Original-Auth: Dkim didn't pass. X-Original-Sender: peter.maydell@linaro.org X-Original-Authentication-Results: mx.google.com; spf=neutral (google.com: 209.85.128.176 is neither permitted nor denied by best guess record for domain of patch+caf_=patchwork-forward=linaro.org@linaro.org) smtp.mail=patch+caf_=patchwork-forward=linaro.org@linaro.org Precedence: list Mailing-list: list patchwork-forward@linaro.org; contact patchwork-forward+owners@linaro.org List-ID: X-Google-Group-Id: 836684582541 List-Post: , List-Help: , List-Archive: List-Unsubscribe: , AArch32 code (ie traditional 32 bit world) expects to be able to pass a vaddr in a TCGv_i32. However when QEMU is compiled with TARGET_LONG_BITS=32 the TCG load/store functions take a TCGv_i64. Abstract out load/store with a 32 bit vaddr so we have a place to put the zero extension of the vaddr and the extension/truncation of the data value. Apart from the function definitions most of this patch is a simple s/tcg_gen_qemu_/gen_aa32_/. Signed-off-by: Peter Maydell --- target-arm/translate.c | 334 ++++++++++++++++++++++++++++++------------------ 1 file changed, 210 insertions(+), 124 deletions(-) diff --git a/target-arm/translate.c b/target-arm/translate.c index 9160ced..8e58eb1 100644 --- a/target-arm/translate.c +++ b/target-arm/translate.c @@ -842,6 +842,90 @@ static inline void store_reg_from_load(CPUARMState *env, DisasContext *s, } } +/* Abstractions of "generate code to do a guest load/store for + * AArch32", where a vaddr is always 32 bits (and is zero + * extended if we're a 64 bit core) and data is also + * 32 bits unless specifically doing a 64 bit access. + * These functions work like tcg_gen_qemu_{ld,st}* except + * that their arguments are TCGv_i32 rather than TCGv. 
 target-arm/translate.c | 334 ++++++++++++++++++++++++++++++------------------
 1 file changed, 210 insertions(+), 124 deletions(-)

diff --git a/target-arm/translate.c b/target-arm/translate.c
index 9160ced..8e58eb1 100644
--- a/target-arm/translate.c
+++ b/target-arm/translate.c
@@ -842,6 +842,90 @@ static inline void store_reg_from_load(CPUARMState *env, DisasContext *s,
     }
 }
 
+/* Abstractions of "generate code to do a guest load/store for
+ * AArch32", where a vaddr is always 32 bits (and is zero
+ * extended if we're a 64 bit core) and data is also
+ * 32 bits unless specifically doing a 64 bit access.
+ * These functions work like tcg_gen_qemu_{ld,st}* except
+ * that their arguments are TCGv_i32 rather than TCGv.
+ */
+#if TARGET_LONG_BITS == 32
+
+#define DO_GEN_LD(OP)                                                     \
+static inline void gen_aa32_##OP(TCGv_i32 val, TCGv_i32 addr, int index) \
+{                                                                         \
+    tcg_gen_qemu_##OP(val, addr, index);                                  \
+}
+
+#define DO_GEN_ST(OP)                                                     \
+static inline void gen_aa32_##OP(TCGv_i32 val, TCGv_i32 addr, int index) \
+{                                                                         \
+    tcg_gen_qemu_##OP(val, addr, index);                                  \
+}
+
+static inline void gen_aa32_ld64(TCGv_i64 val, TCGv_i32 addr, int index)
+{
+    tcg_gen_qemu_ld64(val, addr, index);
+}
+
+static inline void gen_aa32_st64(TCGv_i64 val, TCGv_i32 addr, int index)
+{
+    tcg_gen_qemu_st64(val, addr, index);
+}
+
+#else
+
+#define DO_GEN_LD(OP)                                                     \
+static inline void gen_aa32_##OP(TCGv_i32 val, TCGv_i32 addr, int index) \
+{                                                                         \
+    TCGv addr64 = tcg_temp_new();                                         \
+    TCGv val64 = tcg_temp_new();                                          \
+    tcg_gen_extu_i32_i64(addr64, addr);                                   \
+    tcg_gen_qemu_##OP(val64, addr64, index);                              \
+    tcg_temp_free(addr64);                                                \
+    tcg_gen_trunc_i64_i32(val, val64);                                    \
+    tcg_temp_free(val64);                                                 \
+}
+
+#define DO_GEN_ST(OP)                                                     \
+static inline void gen_aa32_##OP(TCGv_i32 val, TCGv_i32 addr, int index) \
+{                                                                         \
+    TCGv addr64 = tcg_temp_new();                                         \
+    TCGv val64 = tcg_temp_new();                                          \
+    tcg_gen_extu_i32_i64(addr64, addr);                                   \
+    tcg_gen_extu_i32_i64(val64, val);                                     \
+    tcg_gen_qemu_##OP(val64, addr64, index);                              \
+    tcg_temp_free(addr64);                                                \
+    tcg_temp_free(val64);                                                 \
+}
+
+static inline void gen_aa32_ld64(TCGv_i64 val, TCGv_i32 addr, int index)
+{
+    TCGv addr64 = tcg_temp_new();
+    tcg_gen_extu_i32_i64(addr64, addr);
+    tcg_gen_qemu_ld64(val, addr64, index);
+    tcg_temp_free(addr64);
+}
+
+static inline void gen_aa32_st64(TCGv_i64 val, TCGv_i32 addr, int index)
+{
+    TCGv addr64 = tcg_temp_new();
+    tcg_gen_extu_i32_i64(addr64, addr);
+    tcg_gen_qemu_st64(val, addr64, index);
+    tcg_temp_free(addr64);
+}
+
+#endif
+
+DO_GEN_LD(ld8s)
+DO_GEN_LD(ld8u)
+DO_GEN_LD(ld16s)
+DO_GEN_LD(ld16u)
+DO_GEN_LD(ld32u)
+DO_GEN_ST(st8)
+DO_GEN_ST(st16)
+DO_GEN_ST(st32)
+
 static inline void gen_set_pc_im(uint32_t val)
 {
     tcg_gen_movi_i32(cpu_R[15], val);
@@ -1071,18 +1155,20 @@ VFP_GEN_FIX(ulto)
 
 static inline void gen_vfp_ld(DisasContext *s, int dp, TCGv_i32 addr)
 {
-    if (dp)
-        tcg_gen_qemu_ld64(cpu_F0d, addr, IS_USER(s));
-    else
-        tcg_gen_qemu_ld32u(cpu_F0s, addr, IS_USER(s));
+    if (dp) {
+        gen_aa32_ld64(cpu_F0d, addr, IS_USER(s));
+    } else {
+        gen_aa32_ld32u(cpu_F0s, addr, IS_USER(s));
+    }
 }
 
 static inline void gen_vfp_st(DisasContext *s, int dp, TCGv_i32 addr)
 {
-    if (dp)
-        tcg_gen_qemu_st64(cpu_F0d, addr, IS_USER(s));
-    else
-        tcg_gen_qemu_st32(cpu_F0s, addr, IS_USER(s));
+    if (dp) {
+        gen_aa32_st64(cpu_F0d, addr, IS_USER(s));
+    } else {
+        gen_aa32_st32(cpu_F0s, addr, IS_USER(s));
+    }
 }
 
 static inline long
@@ -1419,24 +1505,24 @@ static int disas_iwmmxt_insn(CPUARMState *env, DisasContext *s, uint32_t insn)
         if (insn & ARM_CP_RW_BIT) {
             if ((insn >> 28) == 0xf) {                  /* WLDRW wCx */
                 tmp = tcg_temp_new_i32();
-                tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s));
+                gen_aa32_ld32u(tmp, addr, IS_USER(s));
                 iwmmxt_store_creg(wrd, tmp);
             } else {
                 i = 1;
                 if (insn & (1 << 8)) {
                     if (insn & (1 << 22)) {             /* WLDRD */
-                        tcg_gen_qemu_ld64(cpu_M0, addr, IS_USER(s));
+                        gen_aa32_ld64(cpu_M0, addr, IS_USER(s));
                         i = 0;
                     } else {                            /* WLDRW wRd */
                         tmp = tcg_temp_new_i32();
-                        tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s));
+                        gen_aa32_ld32u(tmp, addr, IS_USER(s));
                     }
                 } else {
                     tmp = tcg_temp_new_i32();
                     if (insn & (1 << 22)) {             /* WLDRH */
-                        tcg_gen_qemu_ld16u(tmp, addr, IS_USER(s));
+                        gen_aa32_ld16u(tmp, addr, IS_USER(s));
                     } else {                            /* WLDRB */
-                        tcg_gen_qemu_ld8u(tmp, addr, IS_USER(s));
+                        gen_aa32_ld8u(tmp, addr, IS_USER(s));
                     }
                 }
                 if (i) {
@@ -1448,24
+1534,24 @@ static int disas_iwmmxt_insn(CPUARMState *env, DisasContext *s, uint32_t insn) } else { if ((insn >> 28) == 0xf) { /* WSTRW wCx */ tmp = iwmmxt_load_creg(wrd); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); } else { gen_op_iwmmxt_movq_M0_wRn(wrd); tmp = tcg_temp_new_i32(); if (insn & (1 << 8)) { if (insn & (1 << 22)) { /* WSTRD */ - tcg_gen_qemu_st64(cpu_M0, addr, IS_USER(s)); + gen_aa32_st64(cpu_M0, addr, IS_USER(s)); } else { /* WSTRW wRd */ tcg_gen_trunc_i64_i32(tmp, cpu_M0); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); } } else { if (insn & (1 << 22)) { /* WSTRH */ tcg_gen_trunc_i64_i32(tmp, cpu_M0); - tcg_gen_qemu_st16(tmp, addr, IS_USER(s)); + gen_aa32_st16(tmp, addr, IS_USER(s)); } else { /* WSTRB */ tcg_gen_trunc_i64_i32(tmp, cpu_M0); - tcg_gen_qemu_st8(tmp, addr, IS_USER(s)); + gen_aa32_st8(tmp, addr, IS_USER(s)); } } } @@ -2530,15 +2616,15 @@ static TCGv_i32 gen_load_and_replicate(DisasContext *s, TCGv_i32 addr, int size) TCGv_i32 tmp = tcg_temp_new_i32(); switch (size) { case 0: - tcg_gen_qemu_ld8u(tmp, addr, IS_USER(s)); + gen_aa32_ld8u(tmp, addr, IS_USER(s)); gen_neon_dup_u8(tmp, 0); break; case 1: - tcg_gen_qemu_ld16u(tmp, addr, IS_USER(s)); + gen_aa32_ld16u(tmp, addr, IS_USER(s)); gen_neon_dup_low16(tmp); break; case 2: - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); break; default: /* Avoid compiler warnings. */ abort(); @@ -3816,11 +3902,11 @@ static int disas_neon_ls_insn(CPUARMState * env, DisasContext *s, uint32_t insn) if (size == 3) { tmp64 = tcg_temp_new_i64(); if (load) { - tcg_gen_qemu_ld64(tmp64, addr, IS_USER(s)); + gen_aa32_ld64(tmp64, addr, IS_USER(s)); neon_store_reg64(tmp64, rd); } else { neon_load_reg64(tmp64, rd); - tcg_gen_qemu_st64(tmp64, addr, IS_USER(s)); + gen_aa32_st64(tmp64, addr, IS_USER(s)); } tcg_temp_free_i64(tmp64); tcg_gen_addi_i32(addr, addr, stride); @@ -3829,21 +3915,21 @@ static int disas_neon_ls_insn(CPUARMState * env, DisasContext *s, uint32_t insn) if (size == 2) { if (load) { tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); neon_store_reg(rd, pass, tmp); } else { tmp = neon_load_reg(rd, pass); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); } tcg_gen_addi_i32(addr, addr, stride); } else if (size == 1) { if (load) { tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld16u(tmp, addr, IS_USER(s)); + gen_aa32_ld16u(tmp, addr, IS_USER(s)); tcg_gen_addi_i32(addr, addr, stride); tmp2 = tcg_temp_new_i32(); - tcg_gen_qemu_ld16u(tmp2, addr, IS_USER(s)); + gen_aa32_ld16u(tmp2, addr, IS_USER(s)); tcg_gen_addi_i32(addr, addr, stride); tcg_gen_shli_i32(tmp2, tmp2, 16); tcg_gen_or_i32(tmp, tmp, tmp2); @@ -3853,10 +3939,10 @@ static int disas_neon_ls_insn(CPUARMState * env, DisasContext *s, uint32_t insn) tmp = neon_load_reg(rd, pass); tmp2 = tcg_temp_new_i32(); tcg_gen_shri_i32(tmp2, tmp, 16); - tcg_gen_qemu_st16(tmp, addr, IS_USER(s)); + gen_aa32_st16(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); tcg_gen_addi_i32(addr, addr, stride); - tcg_gen_qemu_st16(tmp2, addr, IS_USER(s)); + gen_aa32_st16(tmp2, addr, IS_USER(s)); tcg_temp_free_i32(tmp2); tcg_gen_addi_i32(addr, addr, stride); } @@ -3865,7 +3951,7 @@ static int disas_neon_ls_insn(CPUARMState * env, DisasContext *s, uint32_t insn) TCGV_UNUSED_I32(tmp2); for (n = 0; n < 4; n++) { tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld8u(tmp, addr, IS_USER(s)); + 
gen_aa32_ld8u(tmp, addr, IS_USER(s)); tcg_gen_addi_i32(addr, addr, stride); if (n == 0) { tmp2 = tmp; @@ -3885,7 +3971,7 @@ static int disas_neon_ls_insn(CPUARMState * env, DisasContext *s, uint32_t insn) } else { tcg_gen_shri_i32(tmp, tmp2, n * 8); } - tcg_gen_qemu_st8(tmp, addr, IS_USER(s)); + gen_aa32_st8(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); tcg_gen_addi_i32(addr, addr, stride); } @@ -4009,13 +4095,13 @@ static int disas_neon_ls_insn(CPUARMState * env, DisasContext *s, uint32_t insn) tmp = tcg_temp_new_i32(); switch (size) { case 0: - tcg_gen_qemu_ld8u(tmp, addr, IS_USER(s)); + gen_aa32_ld8u(tmp, addr, IS_USER(s)); break; case 1: - tcg_gen_qemu_ld16u(tmp, addr, IS_USER(s)); + gen_aa32_ld16u(tmp, addr, IS_USER(s)); break; case 2: - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); break; default: /* Avoid compiler warnings. */ abort(); @@ -4033,13 +4119,13 @@ static int disas_neon_ls_insn(CPUARMState * env, DisasContext *s, uint32_t insn) tcg_gen_shri_i32(tmp, tmp, shift); switch (size) { case 0: - tcg_gen_qemu_st8(tmp, addr, IS_USER(s)); + gen_aa32_st8(tmp, addr, IS_USER(s)); break; case 1: - tcg_gen_qemu_st16(tmp, addr, IS_USER(s)); + gen_aa32_st16(tmp, addr, IS_USER(s)); break; case 2: - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); break; } tcg_temp_free_i32(tmp); @@ -6463,14 +6549,14 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2, switch (size) { case 0: - tcg_gen_qemu_ld8u(tmp, addr, IS_USER(s)); + gen_aa32_ld8u(tmp, addr, IS_USER(s)); break; case 1: - tcg_gen_qemu_ld16u(tmp, addr, IS_USER(s)); + gen_aa32_ld16u(tmp, addr, IS_USER(s)); break; case 2: case 3: - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); break; default: abort(); @@ -6481,7 +6567,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2, TCGv_i32 tmp2 = tcg_temp_new_i32(); tcg_gen_addi_i32(tmp2, addr, 4); tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, tmp2, IS_USER(s)); + gen_aa32_ld32u(tmp, tmp2, IS_USER(s)); tcg_temp_free_i32(tmp2); tcg_gen_mov_i32(cpu_exclusive_high, tmp); store_reg(s, rt2, tmp); @@ -6523,14 +6609,14 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2, tmp = tcg_temp_new_i32(); switch (size) { case 0: - tcg_gen_qemu_ld8u(tmp, addr, IS_USER(s)); + gen_aa32_ld8u(tmp, addr, IS_USER(s)); break; case 1: - tcg_gen_qemu_ld16u(tmp, addr, IS_USER(s)); + gen_aa32_ld16u(tmp, addr, IS_USER(s)); break; case 2: case 3: - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); break; default: abort(); @@ -6541,7 +6627,7 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2, TCGv_i32 tmp2 = tcg_temp_new_i32(); tcg_gen_addi_i32(tmp2, addr, 4); tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, tmp2, IS_USER(s)); + gen_aa32_ld32u(tmp, tmp2, IS_USER(s)); tcg_temp_free_i32(tmp2); tcg_gen_brcond_i32(TCG_COND_NE, tmp, cpu_exclusive_high, fail_label); tcg_temp_free_i32(tmp); @@ -6549,14 +6635,14 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2, tmp = load_reg(s, rt); switch (size) { case 0: - tcg_gen_qemu_st8(tmp, addr, IS_USER(s)); + gen_aa32_st8(tmp, addr, IS_USER(s)); break; case 1: - tcg_gen_qemu_st16(tmp, addr, IS_USER(s)); + gen_aa32_st16(tmp, addr, IS_USER(s)); break; case 2: case 3: - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); break; default: abort(); @@ -6565,7 +6651,7 @@ static void gen_store_exclusive(DisasContext *s, 
int rd, int rt, int rt2, if (size == 3) { tcg_gen_addi_i32(addr, addr, 4); tmp = load_reg(s, rt2); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); } tcg_gen_movi_i32(cpu_R[rd], 0); @@ -6612,11 +6698,11 @@ static void gen_srs(DisasContext *s, } tcg_gen_addi_i32(addr, addr, offset); tmp = load_reg(s, 14); - tcg_gen_qemu_st32(tmp, addr, 0); + gen_aa32_st32(tmp, addr, 0); tcg_temp_free_i32(tmp); tmp = load_cpu_field(spsr); tcg_gen_addi_i32(addr, addr, 4); - tcg_gen_qemu_st32(tmp, addr, 0); + gen_aa32_st32(tmp, addr, 0); tcg_temp_free_i32(tmp); if (writeback) { switch (amode) { @@ -6761,10 +6847,10 @@ static void disas_arm_insn(CPUARMState * env, DisasContext *s) tcg_gen_addi_i32(addr, addr, offset); /* Load PC into tmp and CPSR into tmp2. */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, 0); + gen_aa32_ld32u(tmp, addr, 0); tcg_gen_addi_i32(addr, addr, 4); tmp2 = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp2, addr, 0); + gen_aa32_ld32u(tmp2, addr, 0); if (insn & (1 << 21)) { /* Base writeback. */ switch (i) { @@ -7320,13 +7406,13 @@ static void disas_arm_insn(CPUARMState * env, DisasContext *s) tmp = tcg_temp_new_i32(); switch (op1) { case 0: /* lda */ - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); break; case 2: /* ldab */ - tcg_gen_qemu_ld8u(tmp, addr, IS_USER(s)); + gen_aa32_ld8u(tmp, addr, IS_USER(s)); break; case 3: /* ldah */ - tcg_gen_qemu_ld16u(tmp, addr, IS_USER(s)); + gen_aa32_ld16u(tmp, addr, IS_USER(s)); break; default: abort(); @@ -7337,13 +7423,13 @@ static void disas_arm_insn(CPUARMState * env, DisasContext *s) tmp = load_reg(s, rm); switch (op1) { case 0: /* stl */ - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); break; case 2: /* stlb */ - tcg_gen_qemu_st8(tmp, addr, IS_USER(s)); + gen_aa32_st8(tmp, addr, IS_USER(s)); break; case 3: /* stlh */ - tcg_gen_qemu_st16(tmp, addr, IS_USER(s)); + gen_aa32_st16(tmp, addr, IS_USER(s)); break; default: abort(); @@ -7398,11 +7484,11 @@ static void disas_arm_insn(CPUARMState * env, DisasContext *s) tmp = load_reg(s, rm); tmp2 = tcg_temp_new_i32(); if (insn & (1 << 22)) { - tcg_gen_qemu_ld8u(tmp2, addr, IS_USER(s)); - tcg_gen_qemu_st8(tmp, addr, IS_USER(s)); + gen_aa32_ld8u(tmp2, addr, IS_USER(s)); + gen_aa32_st8(tmp, addr, IS_USER(s)); } else { - tcg_gen_qemu_ld32u(tmp2, addr, IS_USER(s)); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp2, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); } tcg_temp_free_i32(tmp); tcg_temp_free_i32(addr); @@ -7424,14 +7510,14 @@ static void disas_arm_insn(CPUARMState * env, DisasContext *s) tmp = tcg_temp_new_i32(); switch(sh) { case 1: - tcg_gen_qemu_ld16u(tmp, addr, IS_USER(s)); + gen_aa32_ld16u(tmp, addr, IS_USER(s)); break; case 2: - tcg_gen_qemu_ld8s(tmp, addr, IS_USER(s)); + gen_aa32_ld8s(tmp, addr, IS_USER(s)); break; default: case 3: - tcg_gen_qemu_ld16s(tmp, addr, IS_USER(s)); + gen_aa32_ld16s(tmp, addr, IS_USER(s)); break; } load = 1; @@ -7441,21 +7527,21 @@ static void disas_arm_insn(CPUARMState * env, DisasContext *s) if (sh & 1) { /* store */ tmp = load_reg(s, rd); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); tcg_gen_addi_i32(addr, addr, 4); tmp = load_reg(s, rd + 1); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); load = 0; } else { /* load */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, 
IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); store_reg(s, rd, tmp); tcg_gen_addi_i32(addr, addr, 4); tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); rd++; load = 1; } @@ -7463,7 +7549,7 @@ static void disas_arm_insn(CPUARMState * env, DisasContext *s) } else { /* store */ tmp = load_reg(s, rd); - tcg_gen_qemu_st16(tmp, addr, IS_USER(s)); + gen_aa32_st16(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); load = 0; } @@ -7796,17 +7882,17 @@ static void disas_arm_insn(CPUARMState * env, DisasContext *s) /* load */ tmp = tcg_temp_new_i32(); if (insn & (1 << 22)) { - tcg_gen_qemu_ld8u(tmp, tmp2, i); + gen_aa32_ld8u(tmp, tmp2, i); } else { - tcg_gen_qemu_ld32u(tmp, tmp2, i); + gen_aa32_ld32u(tmp, tmp2, i); } } else { /* store */ tmp = load_reg(s, rd); if (insn & (1 << 22)) { - tcg_gen_qemu_st8(tmp, tmp2, i); + gen_aa32_st8(tmp, tmp2, i); } else { - tcg_gen_qemu_st32(tmp, tmp2, i); + gen_aa32_st32(tmp, tmp2, i); } tcg_temp_free_i32(tmp); } @@ -7873,7 +7959,7 @@ static void disas_arm_insn(CPUARMState * env, DisasContext *s) if (insn & (1 << 20)) { /* load */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); if (user) { tmp2 = tcg_const_i32(i); gen_helper_set_user_reg(cpu_env, tmp2, tmp); @@ -7900,7 +7986,7 @@ static void disas_arm_insn(CPUARMState * env, DisasContext *s) } else { tmp = load_reg(s, i); } - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); } j++; @@ -8159,20 +8245,20 @@ static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw if (insn & (1 << 20)) { /* ldrd */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); store_reg(s, rs, tmp); tcg_gen_addi_i32(addr, addr, 4); tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); store_reg(s, rd, tmp); } else { /* strd */ tmp = load_reg(s, rs); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); tcg_gen_addi_i32(addr, addr, 4); tmp = load_reg(s, rd); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); } if (insn & (1 << 21)) { @@ -8210,11 +8296,11 @@ static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw tcg_gen_add_i32(addr, addr, tmp); tcg_temp_free_i32(tmp); tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld16u(tmp, addr, IS_USER(s)); + gen_aa32_ld16u(tmp, addr, IS_USER(s)); } else { /* tbb */ tcg_temp_free_i32(tmp); tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld8u(tmp, addr, IS_USER(s)); + gen_aa32_ld8u(tmp, addr, IS_USER(s)); } tcg_temp_free_i32(addr); tcg_gen_shli_i32(tmp, tmp, 1); @@ -8251,13 +8337,13 @@ static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw tmp = tcg_temp_new_i32(); switch (op) { case 0: /* ldab */ - tcg_gen_qemu_ld8u(tmp, addr, IS_USER(s)); + gen_aa32_ld8u(tmp, addr, IS_USER(s)); break; case 1: /* ldah */ - tcg_gen_qemu_ld16u(tmp, addr, IS_USER(s)); + gen_aa32_ld16u(tmp, addr, IS_USER(s)); break; case 2: /* lda */ - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); break; default: abort(); @@ -8267,13 +8353,13 @@ static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw tmp = load_reg(s, rs); switch (op) { case 0: /* stlb */ - tcg_gen_qemu_st8(tmp, addr, IS_USER(s)); + gen_aa32_st8(tmp, 
addr, IS_USER(s)); break; case 1: /* stlh */ - tcg_gen_qemu_st16(tmp, addr, IS_USER(s)); + gen_aa32_st16(tmp, addr, IS_USER(s)); break; case 2: /* stl */ - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); break; default: abort(); @@ -8301,10 +8387,10 @@ static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw tcg_gen_addi_i32(addr, addr, -8); /* Load PC into tmp and CPSR into tmp2. */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, 0); + gen_aa32_ld32u(tmp, addr, 0); tcg_gen_addi_i32(addr, addr, 4); tmp2 = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp2, addr, 0); + gen_aa32_ld32u(tmp2, addr, 0); if (insn & (1 << 21)) { /* Base writeback. */ if (insn & (1 << 24)) { @@ -8343,7 +8429,7 @@ static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw if (insn & (1 << 20)) { /* Load. */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); if (i == 15) { gen_bx(s, tmp); } else if (i == rn) { @@ -8355,7 +8441,7 @@ static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw } else { /* Store. */ tmp = load_reg(s, i); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); } tcg_gen_addi_i32(addr, addr, 4); @@ -9131,19 +9217,19 @@ static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw tmp = tcg_temp_new_i32(); switch (op) { case 0: - tcg_gen_qemu_ld8u(tmp, addr, user); + gen_aa32_ld8u(tmp, addr, user); break; case 4: - tcg_gen_qemu_ld8s(tmp, addr, user); + gen_aa32_ld8s(tmp, addr, user); break; case 1: - tcg_gen_qemu_ld16u(tmp, addr, user); + gen_aa32_ld16u(tmp, addr, user); break; case 5: - tcg_gen_qemu_ld16s(tmp, addr, user); + gen_aa32_ld16s(tmp, addr, user); break; case 2: - tcg_gen_qemu_ld32u(tmp, addr, user); + gen_aa32_ld32u(tmp, addr, user); break; default: tcg_temp_free_i32(tmp); @@ -9160,13 +9246,13 @@ static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw tmp = load_reg(s, rs); switch (op) { case 0: - tcg_gen_qemu_st8(tmp, addr, user); + gen_aa32_st8(tmp, addr, user); break; case 1: - tcg_gen_qemu_st16(tmp, addr, user); + gen_aa32_st16(tmp, addr, user); break; case 2: - tcg_gen_qemu_st32(tmp, addr, user); + gen_aa32_st32(tmp, addr, user); break; default: tcg_temp_free_i32(tmp); @@ -9303,7 +9389,7 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s) addr = tcg_temp_new_i32(); tcg_gen_movi_i32(addr, val); tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); tcg_temp_free_i32(addr); store_reg(s, rd, tmp); break; @@ -9506,28 +9592,28 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s) switch (op) { case 0: /* str */ - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); break; case 1: /* strh */ - tcg_gen_qemu_st16(tmp, addr, IS_USER(s)); + gen_aa32_st16(tmp, addr, IS_USER(s)); break; case 2: /* strb */ - tcg_gen_qemu_st8(tmp, addr, IS_USER(s)); + gen_aa32_st8(tmp, addr, IS_USER(s)); break; case 3: /* ldrsb */ - tcg_gen_qemu_ld8s(tmp, addr, IS_USER(s)); + gen_aa32_ld8s(tmp, addr, IS_USER(s)); break; case 4: /* ldr */ - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); break; case 5: /* ldrh */ - tcg_gen_qemu_ld16u(tmp, addr, IS_USER(s)); + gen_aa32_ld16u(tmp, addr, IS_USER(s)); break; case 6: /* ldrb */ - tcg_gen_qemu_ld8u(tmp, addr, IS_USER(s)); + gen_aa32_ld8u(tmp, addr, 
IS_USER(s)); break; case 7: /* ldrsh */ - tcg_gen_qemu_ld16s(tmp, addr, IS_USER(s)); + gen_aa32_ld16s(tmp, addr, IS_USER(s)); break; } if (op >= 3) { /* load */ @@ -9549,12 +9635,12 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s) if (insn & (1 << 11)) { /* load */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); store_reg(s, rd, tmp); } else { /* store */ tmp = load_reg(s, rd); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); } tcg_temp_free_i32(addr); @@ -9571,12 +9657,12 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s) if (insn & (1 << 11)) { /* load */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld8u(tmp, addr, IS_USER(s)); + gen_aa32_ld8u(tmp, addr, IS_USER(s)); store_reg(s, rd, tmp); } else { /* store */ tmp = load_reg(s, rd); - tcg_gen_qemu_st8(tmp, addr, IS_USER(s)); + gen_aa32_st8(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); } tcg_temp_free_i32(addr); @@ -9593,12 +9679,12 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s) if (insn & (1 << 11)) { /* load */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld16u(tmp, addr, IS_USER(s)); + gen_aa32_ld16u(tmp, addr, IS_USER(s)); store_reg(s, rd, tmp); } else { /* store */ tmp = load_reg(s, rd); - tcg_gen_qemu_st16(tmp, addr, IS_USER(s)); + gen_aa32_st16(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); } tcg_temp_free_i32(addr); @@ -9614,12 +9700,12 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s) if (insn & (1 << 11)) { /* load */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); store_reg(s, rd, tmp); } else { /* store */ tmp = load_reg(s, rd); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); } tcg_temp_free_i32(addr); @@ -9687,12 +9773,12 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s) if (insn & (1 << 11)) { /* pop */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); store_reg(s, i, tmp); } else { /* push */ tmp = load_reg(s, i); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); } /* advance to the next address. */ @@ -9704,13 +9790,13 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s) if (insn & (1 << 11)) { /* pop pc */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); /* don't set the pc until the rest of the instruction has completed */ } else { /* push lr */ tmp = load_reg(s, 14); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); } tcg_gen_addi_i32(addr, addr, 4); @@ -9835,7 +9921,7 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s) if (insn & (1 << 11)) { /* load */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); if (i == rn) { loaded_var = tmp; } else { @@ -9844,7 +9930,7 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s) } else { /* store */ tmp = load_reg(s, i); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); } /* advance to the next address */