From patchwork Fri Mar 28 16:09:55 2014
X-Patchwork-Submitter: Peter Maydell
X-Patchwork-Id: 27355
From: Peter Maydell
To: qemu-devel@nongnu.org
Cc: patches@linaro.org, Alexander Graf, Michael Matz, Dirk Mueller,
    Laurent Desnogues, kvmarm@lists.cs.columbia.edu, Richard Henderson,
    Alex Bennée, Christoffer Dall, Will Newton, Peter Crosthwaite
Subject: [PATCH v5 08/37] target-arm: A64: Add assertion that FP access was checked
Date: Fri, 28 Mar 2014 16:09:55 +0000
Message-Id: <1396023024-2262-9-git-send-email-peter.maydell@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1396023024-2262-1-git-send-email-peter.maydell@linaro.org>
References:
<1396023024-2262-1-git-send-email-peter.maydell@linaro.org>

Because unallocated encodings generate different exception syndrome
information from traps due to FP being disabled, we can't do a single
"is fp access disabled" check at a high level in the decode tree.
To help in catching bugs where the access check was forgotten in some
code path, we set this flag when the access check is done, and assert
that it is set at the point where we actually touch the FP regs.

This requires us to pass the DisasContext to the vec_reg_offset and
fp_reg_offset functions.

Signed-off-by: Peter Maydell
Reviewed-by: Peter Crosthwaite
---
 target-arm/translate-a64.c | 74 +++++++++++++++++++++++++++++++---------------
 target-arm/translate.h     |  8 +++++
 2 files changed, 58 insertions(+), 24 deletions(-)

diff --git a/target-arm/translate-a64.c b/target-arm/translate-a64.c
index 2f67af3..b7cf907 100644
--- a/target-arm/translate-a64.c
+++ b/target-arm/translate-a64.c
@@ -353,11 +353,29 @@ static TCGv_i64 read_cpu_reg_sp(DisasContext *s, int reg, int sf)
     return v;
 }
 
+/* We should have at some point before trying to access an FP register
+ * done the necessary access check, so assert that (a) we did the check
+ * and (b) we didn't then just plough ahead anyway if it failed.
+ * Print the instruction pattern in the abort message so we can figure
+ * out what we need to fix if a user encounters this problem in the wild.
+ */
+static inline void assert_fp_access_checked(DisasContext *s)
+{
+#ifdef CONFIG_DEBUG_TCG
+    if (unlikely(!s->fp_access_checked || !s->cpacr_fpen)) {
+        fprintf(stderr, "target-arm: FP access check missing for "
+                "instruction 0x%08x\n", s->insn);
+        abort();
+    }
+#endif
+}
+
 /* Return the offset into CPUARMState of an element of specified
  * size, 'element' places in from the least significant end of
  * the FP/vector register Qn.
  */
-static inline int vec_reg_offset(int regno, int element, TCGMemOp size)
+static inline int vec_reg_offset(DisasContext *s, int regno,
+                                 int element, TCGMemOp size)
 {
     int offs = offsetof(CPUARMState, vfp.regs[regno * 2]);
 #ifdef HOST_WORDS_BIGENDIAN
@@ -372,6 +390,7 @@ static inline int vec_reg_offset(int regno, int element, TCGMemOp size)
 #else
     offs += element * (1 << size);
 #endif
+    assert_fp_access_checked(s);
     return offs;
 }
 
@@ -380,18 +399,20 @@ static inline int vec_reg_offset(int regno, int element, TCGMemOp size)
  * Dn, Sn, Hn or Bn).
  * (Note that this is not the same mapping as for A32; see cpu.h)
  */
-static inline int fp_reg_offset(int regno, TCGMemOp size)
+static inline int fp_reg_offset(DisasContext *s, int regno, TCGMemOp size)
 {
     int offs = offsetof(CPUARMState, vfp.regs[regno * 2]);
 #ifdef HOST_WORDS_BIGENDIAN
     offs += (8 - (1 << size));
 #endif
+    assert_fp_access_checked(s);
     return offs;
 }
 
 /* Offset of the high half of the 128 bit vector Qn */
-static inline int fp_reg_hi_offset(int regno)
+static inline int fp_reg_hi_offset(DisasContext *s, int regno)
 {
+    assert_fp_access_checked(s);
     return offsetof(CPUARMState, vfp.regs[regno * 2 + 1]);
 }
 
@@ -405,7 +426,7 @@ static TCGv_i64 read_fp_dreg(DisasContext *s, int reg)
 {
     TCGv_i64 v = tcg_temp_new_i64();
 
-    tcg_gen_ld_i64(v, cpu_env, fp_reg_offset(reg, MO_64));
+    tcg_gen_ld_i64(v, cpu_env, fp_reg_offset(s, reg, MO_64));
     return v;
 }
 
@@ -413,7 +434,7 @@ static TCGv_i32 read_fp_sreg(DisasContext *s, int reg)
 {
     TCGv_i32 v = tcg_temp_new_i32();
 
-    tcg_gen_ld_i32(v, cpu_env, fp_reg_offset(reg, MO_32));
+    tcg_gen_ld_i32(v, cpu_env, fp_reg_offset(s, reg, MO_32));
     return v;
 }
 
@@ -421,8 +442,8 @@ static void write_fp_dreg(DisasContext *s, int reg, TCGv_i64 v)
 {
     TCGv_i64 tcg_zero = tcg_const_i64(0);
 
-    tcg_gen_st_i64(v, cpu_env, fp_reg_offset(reg, MO_64));
-    tcg_gen_st_i64(tcg_zero, cpu_env, fp_reg_hi_offset(reg));
+    tcg_gen_st_i64(v, cpu_env, fp_reg_offset(s, reg, MO_64));
+    tcg_gen_st_i64(tcg_zero, cpu_env, fp_reg_hi_offset(s, reg));
     tcg_temp_free_i64(tcg_zero);
 }
 
@@ -693,14 +714,14 @@ static void do_fp_st(DisasContext *s, int srcidx, TCGv_i64 tcg_addr, int size)
 {
     /* This writes the bottom N bits of a 128 bit wide vector to memory */
     TCGv_i64 tmp = tcg_temp_new_i64();
-    tcg_gen_ld_i64(tmp, cpu_env, fp_reg_offset(srcidx, MO_64));
+    tcg_gen_ld_i64(tmp, cpu_env, fp_reg_offset(s, srcidx, MO_64));
     if (size < 4) {
         tcg_gen_qemu_st_i64(tmp, tcg_addr, get_mem_index(s), MO_TE + size);
     } else {
         TCGv_i64 tcg_hiaddr = tcg_temp_new_i64();
         tcg_gen_qemu_st_i64(tmp, tcg_addr, get_mem_index(s), MO_TEQ);
         tcg_gen_qemu_st64(tmp, tcg_addr, get_mem_index(s));
-        tcg_gen_ld_i64(tmp, cpu_env, fp_reg_hi_offset(srcidx));
+        tcg_gen_ld_i64(tmp, cpu_env, fp_reg_hi_offset(s, srcidx));
         tcg_gen_addi_i64(tcg_hiaddr, tcg_addr, 8);
         tcg_gen_qemu_st_i64(tmp, tcg_hiaddr, get_mem_index(s), MO_TEQ);
         tcg_temp_free_i64(tcg_hiaddr);
@@ -733,8 +754,8 @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)
         tcg_temp_free_i64(tcg_hiaddr);
     }
 
-    tcg_gen_st_i64(tmplo, cpu_env, fp_reg_offset(destidx, MO_64));
-    tcg_gen_st_i64(tmphi, cpu_env, fp_reg_hi_offset(destidx));
+    tcg_gen_st_i64(tmplo, cpu_env, fp_reg_offset(s, destidx, MO_64));
+    tcg_gen_st_i64(tmphi, cpu_env, fp_reg_hi_offset(s, destidx));
 
     tcg_temp_free_i64(tmplo);
     tcg_temp_free_i64(tmphi);
@@ -756,7 +777,7 @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)
 static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
                              int element, TCGMemOp memop)
 {
-    int vect_off = vec_reg_offset(srcidx, element, memop & MO_SIZE);
+    int vect_off = vec_reg_offset(s, srcidx, element, memop & MO_SIZE);
     switch (memop) {
     case MO_8:
         tcg_gen_ld8u_i64(tcg_dest, cpu_env, vect_off);
@@ -788,7 +809,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
 static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,
                                  int element, TCGMemOp memop)
 {
-    int vect_off = vec_reg_offset(srcidx, element, memop & MO_SIZE);
+    int vect_off = vec_reg_offset(s, srcidx, element, memop & MO_SIZE);
     switch (memop) {
     case MO_8:
         tcg_gen_ld8u_i32(tcg_dest, cpu_env, vect_off);
@@ -815,7 +836,7 @@ static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,
 static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
                               int element, TCGMemOp memop)
 {
-    int vect_off = vec_reg_offset(destidx, element, memop & MO_SIZE);
+    int vect_off = vec_reg_offset(s, destidx, element, memop & MO_SIZE);
     switch (memop) {
     case MO_8:
         tcg_gen_st8_i64(tcg_src, cpu_env, vect_off);
@@ -837,7 +858,7 @@ static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
 static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
                                   int destidx, int element, TCGMemOp memop)
 {
-    int vect_off = vec_reg_offset(destidx, element, memop & MO_SIZE);
+    int vect_off = vec_reg_offset(s, destidx, element, memop & MO_SIZE);
     switch (memop) {
     case MO_8:
         tcg_gen_st8_i32(tcg_src, cpu_env, vect_off);
@@ -899,6 +920,9 @@ static void do_vec_ld(DisasContext *s, int destidx, int element,
  */
 static inline bool fp_access_check(DisasContext *s)
 {
+    assert(!s->fp_access_checked);
+    s->fp_access_checked = true;
+
     if (s->cpacr_fpen) {
         return true;
     }
@@ -4748,9 +4772,9 @@ static void handle_fmov(DisasContext *s, int rd, int rn, int type, bool itof)
             /* 32 bit */
             TCGv_i64 tmp = tcg_temp_new_i64();
             tcg_gen_ext32u_i64(tmp, tcg_rn);
-            tcg_gen_st_i64(tmp, cpu_env, fp_reg_offset(rd, MO_64));
+            tcg_gen_st_i64(tmp, cpu_env, fp_reg_offset(s, rd, MO_64));
             tcg_gen_movi_i64(tmp, 0);
-            tcg_gen_st_i64(tmp, cpu_env, fp_reg_hi_offset(rd));
+            tcg_gen_st_i64(tmp, cpu_env, fp_reg_hi_offset(s, rd));
             tcg_temp_free_i64(tmp);
             break;
         }
@@ -4758,14 +4782,14 @@ static void handle_fmov(DisasContext *s, int rd, int rn, int type, bool itof)
         {
             /* 64 bit */
             TCGv_i64 tmp = tcg_const_i64(0);
-            tcg_gen_st_i64(tcg_rn, cpu_env, fp_reg_offset(rd, MO_64));
-            tcg_gen_st_i64(tmp, cpu_env, fp_reg_hi_offset(rd));
+            tcg_gen_st_i64(tcg_rn, cpu_env, fp_reg_offset(s, rd, MO_64));
+            tcg_gen_st_i64(tmp, cpu_env, fp_reg_hi_offset(s, rd));
             tcg_temp_free_i64(tmp);
             break;
         }
         case 2:
             /* 64 bit to top half. */
-            tcg_gen_st_i64(tcg_rn, cpu_env, fp_reg_hi_offset(rd));
+            tcg_gen_st_i64(tcg_rn, cpu_env, fp_reg_hi_offset(s, rd));
             break;
         }
     } else {
@@ -4774,15 +4798,15 @@ static void handle_fmov(DisasContext *s, int rd, int rn, int type, bool itof)
         switch (type) {
         case 0:
             /* 32 bit */
-            tcg_gen_ld32u_i64(tcg_rd, cpu_env, fp_reg_offset(rn, MO_32));
+            tcg_gen_ld32u_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_32));
             break;
         case 1:
             /* 64 bit */
-            tcg_gen_ld_i64(tcg_rd, cpu_env, fp_reg_offset(rn, MO_64));
+            tcg_gen_ld_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_64));
             break;
         case 2:
             /* 64 bits from top half */
-            tcg_gen_ld_i64(tcg_rd, cpu_env, fp_reg_hi_offset(rn));
+            tcg_gen_ld_i64(tcg_rd, cpu_env, fp_reg_hi_offset(s, rn));
             break;
         }
     }
@@ -5727,7 +5751,7 @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
         tcg_rd = new_tmp_a64(s);
 
         for (i = 0; i < 2; i++) {
-            int foffs = i ? fp_reg_hi_offset(rd) : fp_reg_offset(rd, MO_64);
+            int foffs = i ? fp_reg_hi_offset(s, rd) : fp_reg_offset(s, rd, MO_64);
 
             if (i == 1 && !is_q) {
                 /* non-quad ops clear high half of vector */
@@ -10557,6 +10581,8 @@ static void disas_a64_insn(CPUARMState *env, DisasContext *s)
     s->insn = insn;
     s->pc += 4;
 
+    s->fp_access_checked = false;
+
     switch (extract32(insn, 25, 4)) {
     case 0x0: case 0x1: case 0x2: case 0x3: /* UNALLOCATED */
         unallocated_encoding(s);
diff --git a/target-arm/translate.h b/target-arm/translate.h
index 4536f82..3f7d5ca 100644
--- a/target-arm/translate.h
+++ b/target-arm/translate.h
@@ -32,6 +32,14 @@ typedef struct DisasContext {
     int current_pl;
     GHashTable *cp_regs;
     uint64_t features; /* CPU features bits */
+    /* Because unallocated encodings generate different exception syndrome
+     * information from traps due to FP being disabled, we can't do a single
+     * "is fp access disabled" check at a high level in the decode tree.
+     * To help in catching bugs where the access check was forgotten in some
+     * code path, we set this flag when the access check is done, and assert
+     * that it is set at the point where we actually touch the FP regs.
+     */
+    bool fp_access_checked;
 #define TMP_A64_MAX 16
     int tmp_a64_count;
     TCGv_i64 tmp_a64[TMP_A64_MAX];