From patchwork Thu May 23 12:00:04 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Maydell
X-Patchwork-Id: 17120
From: Peter Maydell <peter.maydell@linaro.org>
To: qemu-devel@nongnu.org
Cc: patches@linaro.org, Richard Henderson, John Rigby, Andreas Färber,
 Blue Swirl, Aurelien Jarno
Subject: [PATCH 10/10] target-arm: Abstract out load/store from a vaddr in AArch32
Date: Thu, 23 May 2013 13:00:04 +0100
Message-Id: <1369310404-5285-11-git-send-email-peter.maydell@linaro.org>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1369310404-5285-1-git-send-email-peter.maydell@linaro.org>
References: <1369310404-5285-1-git-send-email-peter.maydell@linaro.org>

AArch32 code (ie traditional 32 bit world) expects to be able to pass
a vaddr in a TCGv_i32. However when QEMU is compiled with
TARGET_LONG_BITS=64 the TCG load/store functions take a TCGv_i64.
Abstract out load/store with a 32 bit vaddr so we have a place to put
the zero extension of the vaddr and the extension/truncation of the
data value.

Apart from the function definitions most of this patch is a simple
s/tcg_gen_qemu_/gen_aa32_/.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson
---
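Note for reviewers (illustrative only, not part of the diff): on a
TARGET_LONG_BITS == 64 build, DO_GEN_LD(ld32u) from the hunk below
expands to roughly this, which is where the vaddr zero extension and
the data-value truncation end up:

    static inline void gen_aa32_ld32u(TCGv_i32 val, TCGv_i32 addr, int index)
    {
        TCGv addr64 = tcg_temp_new();              /* TCGv is 64 bits on this build */
        TCGv val64 = tcg_temp_new();
        tcg_gen_extu_i32_i64(addr64, addr);        /* zero extend the 32 bit vaddr */
        tcg_gen_qemu_ld32u(val64, addr64, index);  /* underlying op takes TCGv (i64) */
        tcg_temp_free(addr64);
        tcg_gen_trunc_i64_i32(val, val64);         /* data value back down to 32 bits */
        tcg_temp_free(val64);
    }

On a TARGET_LONG_BITS == 32 build the same name is a direct call to
tcg_gen_qemu_ld32u(), so call sites only see the mechanical
s/tcg_gen_qemu_/gen_aa32_/ rename.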
 target-arm/translate.c | 309 ++++++++++++++++++++++++++++++------------------
 1 file changed, 197 insertions(+), 112 deletions(-)

diff --git a/target-arm/translate.c b/target-arm/translate.c
index 71135bd..426e22a 100644
--- a/target-arm/translate.c
+++ b/target-arm/translate.c
@@ -841,6 +841,89 @@ static inline void store_reg_from_load(CPUARMState *env, DisasContext *s,
     }
 }
 
+/* Abstractions of "generate code to do a guest load/store for
+ * AArch32", where a vaddr is always 32 bits (and is zero
+ * extended if we're a 64 bit core) and data is also
+ * 32 bits unless specifically doing a 64 bit access.
+ * These functions work like tcg_gen_qemu_{ld,st}* except
+ * that their arguments are TCGv_i32 rather than TCGv.
+ */
+#if TARGET_LONG_BITS == 32
+
+#define DO_GEN_LD(OP)                                                    \
+static inline void gen_aa32_##OP(TCGv_i32 val, TCGv_i32 addr, int index) \
+{                                                                        \
+    tcg_gen_qemu_##OP(val, addr, index);                                 \
+}
+
+#define DO_GEN_ST(OP)                                                    \
+static inline void gen_aa32_##OP(TCGv_i32 val, TCGv_i32 addr, int index) \
+{                                                                        \
+    tcg_gen_qemu_##OP(val, addr, index);                                 \
+}
+
+static inline void gen_aa32_ld64(TCGv_i64 val, TCGv_i32 addr, int index)
+{
+    tcg_gen_qemu_ld64(val, addr, index);
+}
+
+static inline void gen_aa32_st64(TCGv_i64 val, TCGv_i32 addr, int index)
+{
+    tcg_gen_qemu_st64(val, addr, index);
+}
+
+#else
+
+#define DO_GEN_LD(OP)                                                    \
+static inline void gen_aa32_##OP(TCGv_i32 val, TCGv_i32 addr, int index) \
+{                                                                        \
+    TCGv addr64 = tcg_temp_new();                                        \
+    TCGv val64 = tcg_temp_new();                                         \
+    tcg_gen_extu_i32_i64(addr64, addr);                                  \
+    tcg_gen_qemu_##OP(val64, addr64, index);                             \
+    tcg_temp_free(addr64);                                               \
+    tcg_gen_trunc_i64_i32(val, val64);                                   \
+    tcg_temp_free(val64);                                                \
+}
+
+#define DO_GEN_ST(OP)                                                    \
+static inline void gen_aa32_##OP(TCGv_i32 val, TCGv_i32 addr, int index) \
+{                                                                        \
+    TCGv addr64 = tcg_temp_new();                                        \
+    TCGv val64 = tcg_temp_new();                                         \
+    tcg_gen_extu_i32_i64(addr64, addr);                                  \
+    tcg_gen_extu_i32_i64(val64, val);                                    \
+    tcg_gen_qemu_##OP(val64, addr64, index);                             \
+    tcg_temp_free(addr64);                                               \
+    tcg_temp_free(val64);                                                \
+}
+
+static inline void gen_aa32_ld64(TCGv_i64 val, TCGv_i32 addr, int index)
+{
+    TCGv addr64 = tcg_temp_new();
+    tcg_gen_extu_i32_i64(addr64, addr);
+    tcg_gen_qemu_ld64(val, addr64, index);
+    tcg_temp_free(addr64);
+}
+
+static inline void gen_aa32_st64(TCGv_i64 val, TCGv_i32 addr, int index)
+{
+    TCGv addr64 = tcg_temp_new();
+    tcg_gen_extu_i32_i64(addr64, addr);
+    tcg_gen_qemu_st64(val, addr64, index);
+    tcg_temp_free(addr64);
+}
+
+#endif
+
+DO_GEN_LD(ld8s)
+DO_GEN_LD(ld8u)
+DO_GEN_LD(ld16s)
+DO_GEN_LD(ld16u)
+DO_GEN_LD(ld32u)
+DO_GEN_ST(st8)
+DO_GEN_ST(st16)
+DO_GEN_ST(st32)
+
 static inline void gen_set_pc_im(uint32_t val)
 {
     tcg_gen_movi_i32(cpu_R[15], val);
@@ -1070,18 +1153,20 @@ VFP_GEN_FIX(ulto)
 
 static inline void gen_vfp_ld(DisasContext *s, int dp, TCGv_i32 addr)
 {
-    if (dp)
-        tcg_gen_qemu_ld64(cpu_F0d, addr, IS_USER(s));
-    else
-        tcg_gen_qemu_ld32u(cpu_F0s, addr, IS_USER(s));
+    if (dp) {
+        gen_aa32_ld64(cpu_F0d, addr, IS_USER(s));
+    } else {
+        gen_aa32_ld32u(cpu_F0s, addr, IS_USER(s));
+    }
 }
 
 static inline void gen_vfp_st(DisasContext *s, int dp, TCGv_i32 addr)
 {
-    if (dp)
-        tcg_gen_qemu_st64(cpu_F0d, addr, IS_USER(s));
-    else
-        tcg_gen_qemu_st32(cpu_F0s, addr, IS_USER(s));
+    if (dp) {
+        gen_aa32_st64(cpu_F0d, addr, IS_USER(s));
+    } else {
+        gen_aa32_st32(cpu_F0s, addr, IS_USER(s));
+    }
 }
 
 static inline long
@@ -1418,24 +1503,24 @@ static int disas_iwmmxt_insn(CPUARMState *env, DisasContext *s, uint32_t insn)
     if (insn & ARM_CP_RW_BIT) {
         if ((insn >> 28) == 0xf) {                      /* WLDRW wCx */
             tmp = tcg_temp_new_i32();
-            tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s));
+            gen_aa32_ld32u(tmp, addr, IS_USER(s));
             iwmmxt_store_creg(wrd, tmp);
         } else {
             i = 1;
             if (insn & (1 << 8)) {
                 if (insn & (1 << 22)) {                 /* WLDRD */
-                    tcg_gen_qemu_ld64(cpu_M0, addr, IS_USER(s));
+                    gen_aa32_ld64(cpu_M0, addr, IS_USER(s));
                     i = 0;
                 } else {                                /* WLDRW wRd */
                     tmp = tcg_temp_new_i32();
-                    tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s));
+                    gen_aa32_ld32u(tmp, addr, IS_USER(s));
                 }
             } else {
                 tmp = tcg_temp_new_i32();
                 if (insn & (1 << 22)) {                 /* WLDRH */
-                    tcg_gen_qemu_ld16u(tmp, addr, IS_USER(s));
+                    gen_aa32_ld16u(tmp, addr, IS_USER(s));
                 } else {                                /* WLDRB */
-                    tcg_gen_qemu_ld8u(tmp, addr, IS_USER(s));
+                    gen_aa32_ld8u(tmp, addr, IS_USER(s));
                 }
             }
             if (i) {
@@ -1447,24 +1532,24 @@ static int
disas_iwmmxt_insn(CPUARMState *env, DisasContext *s, uint32_t insn) } else { if ((insn >> 28) == 0xf) { /* WSTRW wCx */ tmp = iwmmxt_load_creg(wrd); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); } else { gen_op_iwmmxt_movq_M0_wRn(wrd); tmp = tcg_temp_new_i32(); if (insn & (1 << 8)) { if (insn & (1 << 22)) { /* WSTRD */ - tcg_gen_qemu_st64(cpu_M0, addr, IS_USER(s)); + gen_aa32_st64(cpu_M0, addr, IS_USER(s)); } else { /* WSTRW wRd */ tcg_gen_trunc_i64_i32(tmp, cpu_M0); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); } } else { if (insn & (1 << 22)) { /* WSTRH */ tcg_gen_trunc_i64_i32(tmp, cpu_M0); - tcg_gen_qemu_st16(tmp, addr, IS_USER(s)); + gen_aa32_st16(tmp, addr, IS_USER(s)); } else { /* WSTRB */ tcg_gen_trunc_i64_i32(tmp, cpu_M0); - tcg_gen_qemu_st8(tmp, addr, IS_USER(s)); + gen_aa32_st8(tmp, addr, IS_USER(s)); } } } @@ -2529,15 +2614,15 @@ static TCGv_i32 gen_load_and_replicate(DisasContext *s, TCGv_i32 addr, int size) TCGv_i32 tmp = tcg_temp_new_i32(); switch (size) { case 0: - tcg_gen_qemu_ld8u(tmp, addr, IS_USER(s)); + gen_aa32_ld8u(tmp, addr, IS_USER(s)); gen_neon_dup_u8(tmp, 0); break; case 1: - tcg_gen_qemu_ld16u(tmp, addr, IS_USER(s)); + gen_aa32_ld16u(tmp, addr, IS_USER(s)); gen_neon_dup_low16(tmp); break; case 2: - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); break; default: /* Avoid compiler warnings. */ abort(); @@ -3814,11 +3899,11 @@ static int disas_neon_ls_insn(CPUARMState * env, DisasContext *s, uint32_t insn) if (size == 3) { tmp64 = tcg_temp_new_i64(); if (load) { - tcg_gen_qemu_ld64(tmp64, addr, IS_USER(s)); + gen_aa32_ld64(tmp64, addr, IS_USER(s)); neon_store_reg64(tmp64, rd); } else { neon_load_reg64(tmp64, rd); - tcg_gen_qemu_st64(tmp64, addr, IS_USER(s)); + gen_aa32_st64(tmp64, addr, IS_USER(s)); } tcg_temp_free_i64(tmp64); tcg_gen_addi_i32(addr, addr, stride); @@ -3827,21 +3912,21 @@ static int disas_neon_ls_insn(CPUARMState * env, DisasContext *s, uint32_t insn) if (size == 2) { if (load) { tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); neon_store_reg(rd, pass, tmp); } else { tmp = neon_load_reg(rd, pass); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); } tcg_gen_addi_i32(addr, addr, stride); } else if (size == 1) { if (load) { tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld16u(tmp, addr, IS_USER(s)); + gen_aa32_ld16u(tmp, addr, IS_USER(s)); tcg_gen_addi_i32(addr, addr, stride); tmp2 = tcg_temp_new_i32(); - tcg_gen_qemu_ld16u(tmp2, addr, IS_USER(s)); + gen_aa32_ld16u(tmp2, addr, IS_USER(s)); tcg_gen_addi_i32(addr, addr, stride); tcg_gen_shli_i32(tmp2, tmp2, 16); tcg_gen_or_i32(tmp, tmp, tmp2); @@ -3851,10 +3936,10 @@ static int disas_neon_ls_insn(CPUARMState * env, DisasContext *s, uint32_t insn) tmp = neon_load_reg(rd, pass); tmp2 = tcg_temp_new_i32(); tcg_gen_shri_i32(tmp2, tmp, 16); - tcg_gen_qemu_st16(tmp, addr, IS_USER(s)); + gen_aa32_st16(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); tcg_gen_addi_i32(addr, addr, stride); - tcg_gen_qemu_st16(tmp2, addr, IS_USER(s)); + gen_aa32_st16(tmp2, addr, IS_USER(s)); tcg_temp_free_i32(tmp2); tcg_gen_addi_i32(addr, addr, stride); } @@ -3863,7 +3948,7 @@ static int disas_neon_ls_insn(CPUARMState * env, DisasContext *s, uint32_t insn) TCGV_UNUSED_I32(tmp2); for (n = 0; n < 4; n++) { tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld8u(tmp, addr, IS_USER(s)); + gen_aa32_ld8u(tmp, addr, 
IS_USER(s)); tcg_gen_addi_i32(addr, addr, stride); if (n == 0) { tmp2 = tmp; @@ -3883,7 +3968,7 @@ static int disas_neon_ls_insn(CPUARMState * env, DisasContext *s, uint32_t insn) } else { tcg_gen_shri_i32(tmp, tmp2, n * 8); } - tcg_gen_qemu_st8(tmp, addr, IS_USER(s)); + gen_aa32_st8(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); tcg_gen_addi_i32(addr, addr, stride); } @@ -4007,13 +4092,13 @@ static int disas_neon_ls_insn(CPUARMState * env, DisasContext *s, uint32_t insn) tmp = tcg_temp_new_i32(); switch (size) { case 0: - tcg_gen_qemu_ld8u(tmp, addr, IS_USER(s)); + gen_aa32_ld8u(tmp, addr, IS_USER(s)); break; case 1: - tcg_gen_qemu_ld16u(tmp, addr, IS_USER(s)); + gen_aa32_ld16u(tmp, addr, IS_USER(s)); break; case 2: - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); break; default: /* Avoid compiler warnings. */ abort(); @@ -4031,13 +4116,13 @@ static int disas_neon_ls_insn(CPUARMState * env, DisasContext *s, uint32_t insn) tcg_gen_shri_i32(tmp, tmp, shift); switch (size) { case 0: - tcg_gen_qemu_st8(tmp, addr, IS_USER(s)); + gen_aa32_st8(tmp, addr, IS_USER(s)); break; case 1: - tcg_gen_qemu_st16(tmp, addr, IS_USER(s)); + gen_aa32_st16(tmp, addr, IS_USER(s)); break; case 2: - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); break; } tcg_temp_free_i32(tmp); @@ -6451,14 +6536,14 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2, switch (size) { case 0: - tcg_gen_qemu_ld8u(tmp, addr, IS_USER(s)); + gen_aa32_ld8u(tmp, addr, IS_USER(s)); break; case 1: - tcg_gen_qemu_ld16u(tmp, addr, IS_USER(s)); + gen_aa32_ld16u(tmp, addr, IS_USER(s)); break; case 2: case 3: - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); break; default: abort(); @@ -6469,7 +6554,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2, TCGv_i32 tmp2 = tcg_temp_new_i32(); tcg_gen_addi_i32(tmp2, addr, 4); tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, tmp2, IS_USER(s)); + gen_aa32_ld32u(tmp, tmp2, IS_USER(s)); tcg_temp_free_i32(tmp2); tcg_gen_mov_i32(cpu_exclusive_high, tmp); store_reg(s, rt2, tmp); @@ -6511,14 +6596,14 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2, tmp = tcg_temp_new_i32(); switch (size) { case 0: - tcg_gen_qemu_ld8u(tmp, addr, IS_USER(s)); + gen_aa32_ld8u(tmp, addr, IS_USER(s)); break; case 1: - tcg_gen_qemu_ld16u(tmp, addr, IS_USER(s)); + gen_aa32_ld16u(tmp, addr, IS_USER(s)); break; case 2: case 3: - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); break; default: abort(); @@ -6529,7 +6614,7 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2, TCGv_i32 tmp2 = tcg_temp_new_i32(); tcg_gen_addi_i32(tmp2, addr, 4); tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, tmp2, IS_USER(s)); + gen_aa32_ld32u(tmp, tmp2, IS_USER(s)); tcg_temp_free_i32(tmp2); tcg_gen_brcond_i32(TCG_COND_NE, tmp, cpu_exclusive_high, fail_label); tcg_temp_free_i32(tmp); @@ -6537,14 +6622,14 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2, tmp = load_reg(s, rt); switch (size) { case 0: - tcg_gen_qemu_st8(tmp, addr, IS_USER(s)); + gen_aa32_st8(tmp, addr, IS_USER(s)); break; case 1: - tcg_gen_qemu_st16(tmp, addr, IS_USER(s)); + gen_aa32_st16(tmp, addr, IS_USER(s)); break; case 2: case 3: - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); break; default: abort(); @@ -6553,7 +6638,7 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2, 
if (size == 3) { tcg_gen_addi_i32(addr, addr, 4); tmp = load_reg(s, rt2); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); } tcg_gen_movi_i32(cpu_R[rd], 0); @@ -6600,11 +6685,11 @@ static void gen_srs(DisasContext *s, } tcg_gen_addi_i32(addr, addr, offset); tmp = load_reg(s, 14); - tcg_gen_qemu_st32(tmp, addr, 0); + gen_aa32_st32(tmp, addr, 0); tcg_temp_free_i32(tmp); tmp = load_cpu_field(spsr); tcg_gen_addi_i32(addr, addr, 4); - tcg_gen_qemu_st32(tmp, addr, 0); + gen_aa32_st32(tmp, addr, 0); tcg_temp_free_i32(tmp); if (writeback) { switch (amode) { @@ -6749,10 +6834,10 @@ static void disas_arm_insn(CPUARMState * env, DisasContext *s) tcg_gen_addi_i32(addr, addr, offset); /* Load PC into tmp and CPSR into tmp2. */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, 0); + gen_aa32_ld32u(tmp, addr, 0); tcg_gen_addi_i32(addr, addr, 4); tmp2 = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, 0); + gen_aa32_ld32u(tmp, addr, 0); if (insn & (1 << 21)) { /* Base writeback. */ switch (i) { @@ -7328,11 +7413,11 @@ static void disas_arm_insn(CPUARMState * env, DisasContext *s) tmp = load_reg(s, rm); tmp2 = tcg_temp_new_i32(); if (insn & (1 << 22)) { - tcg_gen_qemu_ld8u(tmp2, addr, IS_USER(s)); - tcg_gen_qemu_st8(tmp, addr, IS_USER(s)); + gen_aa32_ld8u(tmp2, addr, IS_USER(s)); + gen_aa32_st8(tmp, addr, IS_USER(s)); } else { - tcg_gen_qemu_ld32u(tmp2, addr, IS_USER(s)); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp2, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); } tcg_temp_free_i32(tmp); tcg_temp_free_i32(addr); @@ -7354,14 +7439,14 @@ static void disas_arm_insn(CPUARMState * env, DisasContext *s) tmp = tcg_temp_new_i32(); switch(sh) { case 1: - tcg_gen_qemu_ld16u(tmp, addr, IS_USER(s)); + gen_aa32_ld16u(tmp, addr, IS_USER(s)); break; case 2: - tcg_gen_qemu_ld8s(tmp, addr, IS_USER(s)); + gen_aa32_ld8s(tmp, addr, IS_USER(s)); break; default: case 3: - tcg_gen_qemu_ld16s(tmp, addr, IS_USER(s)); + gen_aa32_ld16s(tmp, addr, IS_USER(s)); break; } load = 1; @@ -7371,21 +7456,21 @@ static void disas_arm_insn(CPUARMState * env, DisasContext *s) if (sh & 1) { /* store */ tmp = load_reg(s, rd); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); tcg_gen_addi_i32(addr, addr, 4); tmp = load_reg(s, rd + 1); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); load = 0; } else { /* load */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); store_reg(s, rd, tmp); tcg_gen_addi_i32(addr, addr, 4); tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); rd++; load = 1; } @@ -7393,7 +7478,7 @@ static void disas_arm_insn(CPUARMState * env, DisasContext *s) } else { /* store */ tmp = load_reg(s, rd); - tcg_gen_qemu_st16(tmp, addr, IS_USER(s)); + gen_aa32_st16(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); load = 0; } @@ -7726,17 +7811,17 @@ static void disas_arm_insn(CPUARMState * env, DisasContext *s) /* load */ tmp = tcg_temp_new_i32(); if (insn & (1 << 22)) { - tcg_gen_qemu_ld8u(tmp, tmp2, i); + gen_aa32_ld8u(tmp, tmp2, i); } else { - tcg_gen_qemu_ld32u(tmp, tmp2, i); + gen_aa32_ld32u(tmp, tmp2, i); } } else { /* store */ tmp = load_reg(s, rd); if (insn & (1 << 22)) { - tcg_gen_qemu_st8(tmp, tmp2, i); + gen_aa32_st8(tmp, tmp2, i); } else { - tcg_gen_qemu_st32(tmp, tmp2, i); + gen_aa32_st32(tmp, 
tmp2, i); } tcg_temp_free_i32(tmp); } @@ -7803,7 +7888,7 @@ static void disas_arm_insn(CPUARMState * env, DisasContext *s) if (insn & (1 << 20)) { /* load */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); if (user) { tmp2 = tcg_const_i32(i); gen_helper_set_user_reg(cpu_env, tmp2, tmp); @@ -7830,7 +7915,7 @@ static void disas_arm_insn(CPUARMState * env, DisasContext *s) } else { tmp = load_reg(s, i); } - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); } j++; @@ -8089,20 +8174,20 @@ static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw if (insn & (1 << 20)) { /* ldrd */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); store_reg(s, rs, tmp); tcg_gen_addi_i32(addr, addr, 4); tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); store_reg(s, rd, tmp); } else { /* strd */ tmp = load_reg(s, rs); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); tcg_gen_addi_i32(addr, addr, 4); tmp = load_reg(s, rd); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); } if (insn & (1 << 21)) { @@ -8140,11 +8225,11 @@ static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw tcg_gen_add_i32(addr, addr, tmp); tcg_temp_free_i32(tmp); tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld16u(tmp, addr, IS_USER(s)); + gen_aa32_ld16u(tmp, addr, IS_USER(s)); } else { /* tbb */ tcg_temp_free_i32(tmp); tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld8u(tmp, addr, IS_USER(s)); + gen_aa32_ld8u(tmp, addr, IS_USER(s)); } tcg_temp_free_i32(addr); tcg_gen_shli_i32(tmp, tmp, 1); @@ -8180,10 +8265,10 @@ static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw tcg_gen_addi_i32(addr, addr, -8); /* Load PC into tmp and CPSR into tmp2. */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, 0); + gen_aa32_ld32u(tmp, addr, 0); tcg_gen_addi_i32(addr, addr, 4); tmp2 = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp2, addr, 0); + gen_aa32_ld32u(tmp2, addr, 0); if (insn & (1 << 21)) { /* Base writeback. */ if (insn & (1 << 24)) { @@ -8222,7 +8307,7 @@ static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw if (insn & (1 << 20)) { /* Load. */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); if (i == 15) { gen_bx(s, tmp); } else if (i == rn) { @@ -8234,7 +8319,7 @@ static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw } else { /* Store. 
*/ tmp = load_reg(s, i); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); } tcg_gen_addi_i32(addr, addr, 4); @@ -9010,19 +9095,19 @@ static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw tmp = tcg_temp_new_i32(); switch (op) { case 0: - tcg_gen_qemu_ld8u(tmp, addr, user); + gen_aa32_ld8u(tmp, addr, user); break; case 4: - tcg_gen_qemu_ld8s(tmp, addr, user); + gen_aa32_ld8s(tmp, addr, user); break; case 1: - tcg_gen_qemu_ld16u(tmp, addr, user); + gen_aa32_ld16u(tmp, addr, user); break; case 5: - tcg_gen_qemu_ld16s(tmp, addr, user); + gen_aa32_ld16s(tmp, addr, user); break; case 2: - tcg_gen_qemu_ld32u(tmp, addr, user); + gen_aa32_ld32u(tmp, addr, user); break; default: tcg_temp_free_i32(tmp); @@ -9039,13 +9124,13 @@ static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw tmp = load_reg(s, rs); switch (op) { case 0: - tcg_gen_qemu_st8(tmp, addr, user); + gen_aa32_st8(tmp, addr, user); break; case 1: - tcg_gen_qemu_st16(tmp, addr, user); + gen_aa32_st16(tmp, addr, user); break; case 2: - tcg_gen_qemu_st32(tmp, addr, user); + gen_aa32_st32(tmp, addr, user); break; default: tcg_temp_free_i32(tmp); @@ -9182,7 +9267,7 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s) addr = tcg_temp_new_i32(); tcg_gen_movi_i32(addr, val); tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); tcg_temp_free_i32(addr); store_reg(s, rd, tmp); break; @@ -9385,28 +9470,28 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s) switch (op) { case 0: /* str */ - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); break; case 1: /* strh */ - tcg_gen_qemu_st16(tmp, addr, IS_USER(s)); + gen_aa32_st16(tmp, addr, IS_USER(s)); break; case 2: /* strb */ - tcg_gen_qemu_st8(tmp, addr, IS_USER(s)); + gen_aa32_st8(tmp, addr, IS_USER(s)); break; case 3: /* ldrsb */ - tcg_gen_qemu_ld8s(tmp, addr, IS_USER(s)); + gen_aa32_ld8s(tmp, addr, IS_USER(s)); break; case 4: /* ldr */ - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); break; case 5: /* ldrh */ - tcg_gen_qemu_ld16u(tmp, addr, IS_USER(s)); + gen_aa32_ld16u(tmp, addr, IS_USER(s)); break; case 6: /* ldrb */ - tcg_gen_qemu_ld8u(tmp, addr, IS_USER(s)); + gen_aa32_ld8u(tmp, addr, IS_USER(s)); break; case 7: /* ldrsh */ - tcg_gen_qemu_ld16s(tmp, addr, IS_USER(s)); + gen_aa32_ld16s(tmp, addr, IS_USER(s)); break; } if (op >= 3) { /* load */ @@ -9428,12 +9513,12 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s) if (insn & (1 << 11)) { /* load */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); store_reg(s, rd, tmp); } else { /* store */ tmp = load_reg(s, rd); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); } tcg_temp_free_i32(addr); @@ -9450,12 +9535,12 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s) if (insn & (1 << 11)) { /* load */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld8u(tmp, addr, IS_USER(s)); + gen_aa32_ld8u(tmp, addr, IS_USER(s)); store_reg(s, rd, tmp); } else { /* store */ tmp = load_reg(s, rd); - tcg_gen_qemu_st8(tmp, addr, IS_USER(s)); + gen_aa32_st8(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); } tcg_temp_free_i32(addr); @@ -9472,12 +9557,12 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s) if (insn & (1 << 11)) { /* load */ tmp = 
tcg_temp_new_i32(); - tcg_gen_qemu_ld16u(tmp, addr, IS_USER(s)); + gen_aa32_ld16u(tmp, addr, IS_USER(s)); store_reg(s, rd, tmp); } else { /* store */ tmp = load_reg(s, rd); - tcg_gen_qemu_st16(tmp, addr, IS_USER(s)); + gen_aa32_st16(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); } tcg_temp_free_i32(addr); @@ -9493,12 +9578,12 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s) if (insn & (1 << 11)) { /* load */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); store_reg(s, rd, tmp); } else { /* store */ tmp = load_reg(s, rd); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); } tcg_temp_free_i32(addr); @@ -9566,12 +9651,12 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s) if (insn & (1 << 11)) { /* pop */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); store_reg(s, i, tmp); } else { /* push */ tmp = load_reg(s, i); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); } /* advance to the next address. */ @@ -9583,13 +9668,13 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s) if (insn & (1 << 11)) { /* pop pc */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); /* don't set the pc until the rest of the instruction has completed */ } else { /* push lr */ tmp = load_reg(s, 14); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); } tcg_gen_addi_i32(addr, addr, 4); @@ -9714,7 +9799,7 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s) if (insn & (1 << 11)) { /* load */ tmp = tcg_temp_new_i32(); - tcg_gen_qemu_ld32u(tmp, addr, IS_USER(s)); + gen_aa32_ld32u(tmp, addr, IS_USER(s)); if (i == rn) { loaded_var = tmp; } else { @@ -9723,7 +9808,7 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s) } else { /* store */ tmp = load_reg(s, i); - tcg_gen_qemu_st32(tmp, addr, IS_USER(s)); + gen_aa32_st32(tmp, addr, IS_USER(s)); tcg_temp_free_i32(tmp); } /* advance to the next address */