From patchwork Mon Nov 25 00:05:21 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Omair Javaid <omair.javaid@linaro.org>
X-Patchwork-Id: 21731
Message-ID: <52929441.4020103@linaro.org>
Date: Mon, 25 Nov 2013 05:05:21 +0500
From: Omair Javaid <omair.javaid@linaro.org>
To: gdb-patches@sourceware.org, Patch Tracking
Subject: [PATCH] VFP, SIMD and coprocessor instructions recording for arm*-linux* targets.

This patch adds support for recording VFP, Advanced SIMD and coprocessor
instructions on arm*-linux targets.  It also adds support for recording
structure load/store and extension register load/store instructions.

This patch is a follow-up to my last three in-progress patches, which improve
process record/replay and reverse debugging support on arm*-linux targets.
In order to apply and run this patch successfully, all previous arm process
record patches under review are required.

There are no significant improvements in testsuite failures, as this patch
deals with fairly uncommon instruction types.  However, around 30 new tests
pass in remote mode and 4 new tests pass in native mode when the gdb.reverse
testsuite is run after applying this patch along with the previous arm
process record patches.

gdb:

2013-11-25  Omair Javaid  <omair.javaid@linaro.org>

	* arm-tdep.c (arm_record_coproc_data_proc): Updated.
	(thumb2_record_coproc_insn): New function.
	(arm_record_asimd_vfp_coproc): New function.
	(arm_thumb_record_vfp_data_proc): New function.
	(arm_thumb_record_exreg_ld_st): New function.
	(arm_thumb_record_asimd_sld_st): New function.
	(arm_thumb_record_vdata_transfer): New function.
	(arm_thumb_record_unsupported_insn): New function.
---
 gdb/arm-tdep.c | 812 ++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 795 insertions(+), 17 deletions(-)

--
diff --git a/gdb/arm-tdep.c b/gdb/arm-tdep.c
index 8fc223d..16ac8da 100644
--- a/gdb/arm-tdep.c
+++ b/gdb/arm-tdep.c
@@ -10679,6 +10679,13 @@ typedef enum
   THUMB2_RECORD
 } record_type_t;
 
+/* Prototypes of process record joint arm and thumb mode handlers. */
+
+static int arm_thumb_record_exreg_ld_st (insn_decode_record *arm_insn_r);
+static int arm_thumb_record_asimd_sld_st (insn_decode_record *arm_insn_r);
+static int arm_thumb_record_vfp_data_proc (insn_decode_record *arm_insn_r);
+static int arm_thumb_record_vdata_transfer (insn_decode_record *arm_insn_r);
+static int arm_thumb_record_unsupported_insn (insn_decode_record *arm_insn_r);
 
 static int
 arm_record_strx (insn_decode_record *arm_insn_r, uint32_t *record_buf,
@@ -11919,13 +11926,60 @@ arm_record_b_bl (insn_decode_record *arm_insn_r)
 
 /* Handling opcode 110 insns.
*/ static int -arm_record_unsupported_insn (insn_decode_record *arm_insn_r) +arm_record_asimd_vfp_coproc (insn_decode_record *arm_insn_r) { - printf_unfiltered (_("Process record does not support instruction " - "0x%0x at address %s.\n"),arm_insn_r->arm_insn, - paddress (arm_insn_r->gdbarch, arm_insn_r->this_addr)); + uint32_t op, op1, op1_sbit, op1_ebit, coproc; - return -1; + coproc = bits (arm_insn_r->arm_insn, 8, 11); + op1 = bits (arm_insn_r->arm_insn, 20, 25); + op1_sbit = bit (arm_insn_r->arm_insn, 24); + op1_ebit = bit (arm_insn_r->arm_insn, 20); + op = bit (arm_insn_r->arm_insn, 4); + + /* Advanced SIMD, VFP instructions. */ + if ((coproc & 0x0e) == 0x0a) + { + /* Handle extension register ld/st instructions. */ + if (!(op1 & 0x20)) + return arm_thumb_record_exreg_ld_st(arm_insn_r); + + /* 64-bit transfers between arm core and extension registers. */ + if ((op1 & 0x3e) == 0x04) + return arm_thumb_record_exreg_ld_st(arm_insn_r); + } + else + { + if (!(op1 & 0x3a)) + { + /* Store coprocessor instructions. */ + if (!op1_ebit) + return arm_thumb_record_unsupported_insn(arm_insn_r); + else + { + /* Load coprocessor instructions. */ + return arm_thumb_record_unsupported_insn(arm_insn_r); + } + } + + /* Move to coprocessor from two arm core registers. */ + if (op1 == 0x4) + return arm_thumb_record_unsupported_insn(arm_insn_r); + + /* Move to two arm core registers from coprocessor. */ + if (op1 == 0x5) + { + uint32_t reg_t[2]; + + reg_t[0] = bits (arm_insn_r->arm_insn, 12, 15); + reg_t[1] = bits (arm_insn_r->arm_insn, 16, 19); + arm_insn_r->reg_rec_count = 2; + + REG_ALLOC (arm_insn_r->arm_regs, arm_insn_r->reg_rec_count, reg_t); + return 0; + } + } + + return arm_thumb_record_unsupported_insn(arm_insn_r); } /* Handling opcode 111 insns. */ @@ -11933,15 +11987,20 @@ arm_record_unsupported_insn (insn_decode_record *arm_insn_r) static int arm_record_coproc_data_proc (insn_decode_record *arm_insn_r) { + uint32_t op, op1_sbit, op1_ebit, coproc; struct gdbarch_tdep *tdep = gdbarch_tdep (arm_insn_r->gdbarch); struct regcache *reg_cache = arm_insn_r->regcache; uint32_t ret = 0; /* function return value: -1:record failure ; 0:success */ ULONGEST u_regval = 0; arm_insn_r->opcode = bits (arm_insn_r->arm_insn, 24, 27); + coproc = bits (arm_insn_r->arm_insn, 8, 11); + op1_sbit = bit (arm_insn_r->arm_insn, 24); + op1_ebit = bit (arm_insn_r->arm_insn, 20); + op = bit (arm_insn_r->arm_insn, 4); /* Handle arm SWI/SVC system call instructions. */ - if (15 == arm_insn_r->opcode) + if (op1_sbit) { if (tdep->arm_syscall_record != NULL) { @@ -11955,20 +12014,52 @@ arm_record_coproc_data_proc (insn_decode_record *arm_insn_r) regcache_raw_read_unsigned (reg_cache, 7, &svc_number); ret = tdep->arm_syscall_record (reg_cache, svc_number); + return ret; } else { printf_unfiltered (_("no syscall record support\n")); - ret = -1; + return -1; } } + + if ((coproc & 0x0e) == 0x0a) + { + /* VFP data-processing instructions. */ + if (!op1_sbit && !op) + return arm_thumb_record_vfp_data_proc(arm_insn_r); + + /* Advanced SIMD, VFP instructions. */ + if (!op1_sbit && op) + return arm_thumb_record_vdata_transfer(arm_insn_r); + } else { - arm_record_unsupported_insn(arm_insn_r); - ret = -1; + /* Coprocessor data operations. */ + if (!op1_sbit && !op) + return arm_thumb_record_unsupported_insn(arm_insn_r); + + /* Move to Coprocessor from ARM core register. */ + if (!op1_sbit && !op1_ebit && op) + return arm_thumb_record_unsupported_insn(arm_insn_r); + + /* Move to arm core register from coprocessor. 
*/ + if (!op1_sbit && op1_ebit && op) + { + uint32_t record_buf[1]; + + record_buf[0] = bits (arm_insn_r->arm_insn, 12, 15); + if (record_buf[0] == 15) + record_buf[0] = ARM_PS_REGNUM; + + arm_insn_r->reg_rec_count = 1; + REG_ALLOC (arm_insn_r->arm_regs, arm_insn_r->reg_rec_count, + record_buf); + return 0; + } } - return ret; + return arm_thumb_record_unsupported_insn(arm_insn_r); } /* Handling opcode 000 insns. */ @@ -12454,7 +12545,7 @@ thumb2_record_ld_st_multiple (insn_decode_record *thumb2_insn_r) else { /* Handle SRS instruction after reading banked SP. */ - return arm_record_unsupported_insn (thumb2_insn_r); + return arm_thumb_record_unsupported_insn (thumb2_insn_r); } } else if(1 == op || 2 == op) @@ -12710,7 +12801,7 @@ thumb2_record_branch_misc_cntrl (insn_decode_record *thumb2_insn_r) } else { - arm_record_unsupported_insn(thumb2_insn_r); + arm_thumb_record_unsupported_insn(thumb2_insn_r); return -1; } } @@ -12899,6 +12990,693 @@ thumb2_record_lmul_lmla_div (insn_decode_record *thumb2_insn_r) return ret; } +/* Record handler for thumb32 coprocessor instructions. */ + +static int +thumb2_record_coproc_insn (insn_decode_record *thumb2_insn_r) +{ + if (bit (thumb2_insn_r->arm_insn, 25)) + return arm_record_coproc_data_proc (thumb2_insn_r); + else + return arm_record_asimd_vfp_coproc (thumb2_insn_r); +} + +/* Record handler for arm/thumb mode advance SIMD and structure + load/store instructions. */ + +static int +arm_thumb_record_asimd_sld_st (insn_decode_record *thumb2_insn_r) +{ + struct regcache *reg_cache = thumb2_insn_r->regcache; + uint32_t l_bit, a_bit, b_bits; + uint32_t record_buf[128], record_buf_mem[128]; + uint32_t reg_rn, reg_vd, address, f_esize, f_elem; + uint32_t index_r = 0, index_e = 0, bf_regs = 0, index_m = 0, loop_t = 0; + uint8_t bf_align, f_alignment, f_ebytes; + + l_bit = bit (thumb2_insn_r->arm_insn, 21); + a_bit = bit (thumb2_insn_r->arm_insn, 23); + b_bits = bits (thumb2_insn_r->arm_insn, 8, 11); + bf_align = bits (thumb2_insn_r->arm_insn, 4, 5); + reg_rn = bits (thumb2_insn_r->arm_insn, 16, 19); + reg_vd = bits (thumb2_insn_r->arm_insn, 12, 15); + reg_vd = (bit (thumb2_insn_r->arm_insn, 22) << 4) | reg_vd; + f_ebytes = (1 << bits (thumb2_insn_r->arm_insn, 6, 7)); + f_esize = 8 * f_ebytes; + f_elem = 8 / f_ebytes; + + if (!l_bit) + { + ULONGEST u_regval = 0; + regcache_raw_read_unsigned (reg_cache, reg_rn, &u_regval); + address = u_regval; + + if (!a_bit) + { + /* Handle VST1. */ + if (b_bits == 0x02 || b_bits == 0x0a || (b_bits & 0x0e) == 0x06) + { + if (b_bits == 0x07) + bf_regs = 1; + else if (b_bits == 0x0a) + bf_regs = 2; + else if (b_bits == 0x06) + bf_regs = 3; + else if (b_bits == 0x02) + bf_regs = 4; + else + bf_regs = 0; + + for (index_r = 0; index_r < bf_regs; index_r++) + { + for (index_e = 0; index_e < f_elem; index_e++) + { + record_buf_mem[index_m++] = f_ebytes; + record_buf_mem[index_m++] = address; + address = address + f_ebytes; + thumb2_insn_r->mem_rec_count += 1; + } + } + } + /* Handle VST2. */ + else if (b_bits == 0x03 || (b_bits & 0x0e) == 0x08) + { + if (b_bits == 0x09 || b_bits == 0x08) + bf_regs = 1; + else if (b_bits == 0x03) + bf_regs = 2; + else + bf_regs = 0; + + for (index_r = 0; index_r < bf_regs; index_r++) + for (index_e = 0; index_e < f_elem; index_e++) + { + for (loop_t = 0; loop_t < 2; loop_t++) + { + record_buf_mem[index_m++] = f_ebytes; + record_buf_mem[index_m++] = address + (loop_t * f_ebytes); + thumb2_insn_r->mem_rec_count += 1; + } + address = address + (2 * f_ebytes); + } + } + /* Handle VST3. 
*/ + else if ((b_bits & 0x0e) == 0x04) + { + for (index_e = 0; index_e < f_elem; index_e++) + { + for (loop_t = 0; loop_t < 3; loop_t++) + { + record_buf_mem[index_m++] = f_ebytes; + record_buf_mem[index_m++] = address + (loop_t * f_ebytes); + thumb2_insn_r->mem_rec_count += 1; + } + address = address + (3 * f_ebytes); + } + } + /* Handle VST4. */ + else if (!(b_bits & 0x0e)) + { + for (index_e = 0; index_e < f_elem; index_e++) + { + for (loop_t = 0; loop_t < 4; loop_t++) + { + record_buf_mem[index_m++] = f_ebytes; + record_buf_mem[index_m++] = address + (loop_t * f_ebytes); + thumb2_insn_r->mem_rec_count += 1; + } + address = address + (4 * f_ebytes); + } + } + } + else + { + uint8_t bft_size = bits (thumb2_insn_r->arm_insn, 10, 11); + + if (bft_size == 0x00) + f_ebytes = 1; + else if (bft_size == 0x01) + f_ebytes = 2; + else if (bft_size == 0x02) + f_ebytes = 4; + else + f_ebytes = 0; + + /* Handle VST1. */ + if (!(b_bits & 0x0b) || b_bits == 0x08) + thumb2_insn_r->mem_rec_count = 1; + /* Handle VST2. */ + else if ((b_bits & 0x0b) == 0x01 || b_bits == 0x09) + thumb2_insn_r->mem_rec_count = 2; + /* Handle VST3. */ + else if ((b_bits & 0x0b) == 0x02 || b_bits == 0x0a) + thumb2_insn_r->mem_rec_count = 3; + /* Handle VST4. */ + else if ((b_bits & 0x0b) == 0x03 || b_bits == 0x0b) + thumb2_insn_r->mem_rec_count = 4; + + for (index_m = 0; index_m < thumb2_insn_r->mem_rec_count; index_m++) + { + record_buf_mem[index_m] = f_ebytes; + record_buf_mem[index_m] = address + (index_m * f_ebytes); + } + } + } + else + { + if (!a_bit) + { + /* Handle VLD1. */ + if (b_bits == 0x02 || b_bits == 0x0a || (b_bits & 0x0e) == 0x06) + thumb2_insn_r->reg_rec_count = 1; + /* Handle VLD2. */ + else if (b_bits == 0x03 || (b_bits & 0x0e) == 0x08) + thumb2_insn_r->reg_rec_count = 2; + /* Handle VLD3. */ + else if ((b_bits & 0x0e) == 0x04) + thumb2_insn_r->reg_rec_count = 3; + /* Handle VLD4. */ + else if (!(b_bits & 0x0e)) + thumb2_insn_r->reg_rec_count = 4; + } + else + { + /* Handle VLD1. */ + if (!(b_bits & 0x0b) || b_bits == 0x08 || b_bits == 0x0c) + thumb2_insn_r->reg_rec_count = 1; + /* Handle VLD2. */ + else if ((b_bits & 0x0b) == 0x01 || b_bits == 0x09 || b_bits == 0x0d) + thumb2_insn_r->reg_rec_count = 2; + /* Handle VLD3. */ + else if ((b_bits & 0x0b) == 0x02 || b_bits == 0x0a || b_bits == 0x0e) + thumb2_insn_r->reg_rec_count = 3; + /* Handle VLD4. */ + else if ((b_bits & 0x0b) == 0x03 || b_bits == 0x0b || b_bits == 0x0f) + thumb2_insn_r->reg_rec_count = 4; + + for (index_r = 0; index_r < thumb2_insn_r->reg_rec_count; index_r++) + { + record_buf[index_r] = reg_vd + ARM_D0_REGNUM + index_r; + } + } + } + + if (bits (thumb2_insn_r->arm_insn, 0, 3) != 15) + { + record_buf[index_r] = reg_rn; + thumb2_insn_r->reg_rec_count += 1; + } + + REG_ALLOC (thumb2_insn_r->arm_regs, thumb2_insn_r->reg_rec_count, + record_buf); + MEM_ALLOC (thumb2_insn_r->arm_mems, thumb2_insn_r->mem_rec_count, + record_buf_mem); + return 0; +} + +/* Record handler for unsupported arm and thumb instructions. */ + +static int +arm_thumb_record_unsupported_insn (insn_decode_record *arm_insn_r) +{ + printf_unfiltered (_("Process record does not support instruction " + "0x%0x at address %s.\n"),arm_insn_r->arm_insn, + paddress (arm_insn_r->gdbarch, arm_insn_r->this_addr)); + + return -1; +} + +/* Record handler for arm/thumb mode vector data transfer instructions. 
*/ + +static int +arm_thumb_record_vdata_transfer (insn_decode_record *arm_insn_r) +{ + uint32_t bits_a, bit_c, bit_l, reg_t, reg_v; + uint32_t record_buf[4]; + + const int num_regs = gdbarch_num_regs (arm_insn_r->gdbarch); + reg_t = bits (arm_insn_r->arm_insn, 12, 15); + reg_v = bits (arm_insn_r->arm_insn, 21, 23); + bits_a = bits (arm_insn_r->arm_insn, 21, 23); + bit_l = bit (arm_insn_r->arm_insn, 20); + bit_c = bit (arm_insn_r->arm_insn, 8); + + /* Handle VMOV instruction. */ + if (bit_l && bit_c) + { + record_buf[0] = reg_t; + arm_insn_r->reg_rec_count = 1; + } + else if (bit_l && !bit_c) + { + /* Handle VMOV instruction. */ + if (bits_a == 0x00) + { + if (bit (arm_insn_r->arm_insn, 20)) + record_buf[0] = reg_t; + else + record_buf[0] = num_regs + (bit (arm_insn_r->arm_insn, 7) | + (reg_v << 1)); + + arm_insn_r->reg_rec_count = 1; + } + /* Handle VMRS instruction. */ + else if (bits_a == 0x07) + { + if (reg_t == 15) + reg_t = ARM_PS_REGNUM; + + record_buf[0] = reg_t; + arm_insn_r->reg_rec_count = 1; + } + } + else if (!bit_l && !bit_c) + { + /* Handle VMOV instruction. */ + if (bits_a == 0x00) + { + if (bit (arm_insn_r->arm_insn, 20)) + record_buf[0] = reg_t; + else + record_buf[0] = num_regs + (bit (arm_insn_r->arm_insn, 7) | + (reg_v << 1)); + + arm_insn_r->reg_rec_count = 1; + } + /* Handle VMSR instruction. */ + else if (bits_a == 0x07) + { + record_buf[0] = ARM_FPSCR_REGNUM; + arm_insn_r->reg_rec_count = 1; + } + } + else if (!bit_l && bit_c) + { + /* Handle VMOV instruction. */ + if (!(bits_a & 0x04)) + { + record_buf[0] = (reg_v | (bit (arm_insn_r->arm_insn, 7) << 4)) + + ARM_D0_REGNUM; + arm_insn_r->reg_rec_count = 1; + } + /* Handle VDUP instruction. */ + else + { + if (bit (arm_insn_r->arm_insn, 21)) + { + reg_v = reg_v | (bit (arm_insn_r->arm_insn, 7) << 4); + record_buf[0] = reg_v + ARM_D0_REGNUM; + record_buf[1] = reg_v + ARM_D0_REGNUM + 1; + arm_insn_r->reg_rec_count = 2; + } + else + { + reg_v = reg_v | (bit (arm_insn_r->arm_insn, 7) << 4); + record_buf[0] = reg_v + ARM_D0_REGNUM; + arm_insn_r->reg_rec_count = 1; + } + } + } + + REG_ALLOC (arm_insn_r->arm_regs, arm_insn_r->reg_rec_count, record_buf); + return 0; +} + +/* Record handler for arm/thumb mode extension register load/store + instructions. */ + +static int +arm_thumb_record_exreg_ld_st (insn_decode_record *arm_insn_r) +{ + uint32_t opcode,single_reg; + uint8_t op_vldm_vstm, op_vldr_vstr; + uint32_t record_buf[8], record_buf_mem[128]; + ULONGEST u_regval = 0; + + struct regcache *reg_cache = arm_insn_r->regcache; + const int num_regs = gdbarch_num_regs (arm_insn_r->gdbarch); + + opcode = bits (arm_insn_r->arm_insn, 20, 24); + single_reg = bit (arm_insn_r->arm_insn, 8); + op_vldm_vstm = opcode & 0x1b; + + /* Handle VMOV instructions. */ + if ((opcode & 0x1e) == 0x04) + { + if (bit (arm_insn_r->arm_insn, 4)) + { + record_buf[0] = bits (arm_insn_r->arm_insn, 12, 15); + record_buf[1] = bits (arm_insn_r->arm_insn, 16, 19); + arm_insn_r->reg_rec_count = 2; + } + else + { + uint8_t reg_m = (bits (arm_insn_r->arm_insn, 0, 3) << 1) + | bit (arm_insn_r->arm_insn, 5); + + if (!single_reg) + { + record_buf[0] = num_regs + reg_m; + record_buf[1] = num_regs + reg_m + 1; + arm_insn_r->reg_rec_count = 2; + } + else + { + record_buf[0] = reg_m + ARM_D0_REGNUM; + arm_insn_r->reg_rec_count = 1; + } + } + } + /* Handle VSTM and VPUSH instructions. 
*/ + else if ( op_vldm_vstm == 0x08 || op_vldm_vstm == 0x0a + || op_vldm_vstm == 0x12 ) + { + uint32_t start_address, reg_rn, imm_off32, imm_off8, memory_count; + uint32_t memory_index = 0; + + reg_rn = bits (arm_insn_r->arm_insn, 16, 19); + regcache_raw_read_unsigned (reg_cache, reg_rn, &u_regval); + imm_off8 = bits (arm_insn_r->arm_insn, 0, 7); + imm_off32 = imm_off8 << 24; + memory_count = imm_off8; + + if (bit (arm_insn_r->arm_insn, 23)) + start_address = u_regval; + else + start_address = u_regval - imm_off32; + + if (bit (arm_insn_r->arm_insn, 21)) + { + record_buf[0] = reg_rn; + arm_insn_r->reg_rec_count = 1; + } + + while (memory_count) + { + if (!single_reg) + { + record_buf_mem[memory_index] = start_address; + record_buf_mem[memory_index + 1] = 4; + start_address = start_address + 4; + memory_index = memory_index + 2; + } + else + { + record_buf_mem[memory_index] = start_address; + record_buf_mem[memory_index + 1] = 4; + record_buf_mem[memory_index + 2] = start_address + 4; + record_buf_mem[memory_index + 3] = 4; + start_address = start_address + 8; + memory_index = memory_index + 4; + } + memory_count--; + } + } + /* Handle VLDM instructions. */ + else if ( op_vldm_vstm == 0x09 || op_vldm_vstm == 0x0b + || op_vldm_vstm == 0x13 ) + { + uint32_t reg_count, reg_vd; + uint32_t reg_index = 0; + + reg_vd = bits (arm_insn_r->arm_insn, 12, 15); + reg_count = bits (arm_insn_r->arm_insn, 0, 7); + + if (single_reg) + reg_vd = reg_vd | (bit (arm_insn_r->arm_insn, 0) << 4); + else + reg_vd = (reg_vd << 1) | bit (arm_insn_r->arm_insn, 0); + + if (bit (arm_insn_r->arm_insn, 21)) + record_buf[reg_index++] = bits (arm_insn_r->arm_insn, 16, 19); + + while (reg_count) + { + if (single_reg) + record_buf[reg_index++] = num_regs + reg_vd + reg_count - 1; + else + record_buf[reg_index++] = ARM_D0_REGNUM + reg_vd + reg_count - 1; + + reg_count--; + } + } + /* VSTR Vector store register. */ + else if ((opcode & 0x13) == 0x10) + { + uint32_t start_address, reg_rn, imm_off32, imm_off8, memory_count; + uint32_t memory_index = 0; + + reg_rn = bits (arm_insn_r->arm_insn, 16, 19); + regcache_raw_read_unsigned (reg_cache, reg_rn, &u_regval); + imm_off8 = bits (arm_insn_r->arm_insn, 0, 7); + imm_off32 = imm_off8 << 24; + memory_count = imm_off8; + + if (bit (arm_insn_r->arm_insn, 23)) + start_address = u_regval + imm_off32; + else + start_address = u_regval - imm_off32; + + if (single_reg) + { + record_buf_mem[memory_index] = start_address; + record_buf_mem[memory_index + 1] = 4; + } + else + { + record_buf_mem[memory_index] = start_address; + record_buf_mem[memory_index + 1] = 4; + record_buf_mem[memory_index + 2] = start_address + 4; + record_buf_mem[memory_index + 3] = 4; + } + } + /* VLDR Vector load register. */ + else if ((opcode & 0x13) == 0x11) + { + uint8_t single_reg = 0; + uint8_t special_case; + + record_buf[0] = 0; + record_buf[1] = 0; + arm_insn_r->reg_rec_count = 2; + } + + REG_ALLOC (arm_insn_r->arm_regs, arm_insn_r->reg_rec_count, record_buf); + MEM_ALLOC (arm_insn_r->arm_mems, arm_insn_r->mem_rec_count, record_buf_mem); + return 0; +} + +/* Record handler for arm/thumb mode VFP data processing instructions. 
*/ + +static int +arm_thumb_record_vfp_data_proc (insn_decode_record *arm_insn_r) +{ + uint32_t opc1, opc2, opc3, dp_op_sz, bit_d, reg_vd; + uint32_t record_buf[4]; + uint8_t insn_type = -1; + + const int num_regs = gdbarch_num_regs (arm_insn_r->gdbarch); + reg_vd = bits (arm_insn_r->arm_insn, 12, 15); + opc1 = bits (arm_insn_r->arm_insn, 20, 23); + opc2 = bits (arm_insn_r->arm_insn, 16, 19); + opc3 = bits (arm_insn_r->arm_insn, 6, 7); + dp_op_sz = bit (arm_insn_r->arm_insn, 8); + bit_d = bit (arm_insn_r->arm_insn, 22); + opc1 = opc1 & 0x04; + + /* Handle VMLA, VMLS. */ + if (opc1 == 0x00) + { + if (bit (arm_insn_r->arm_insn, 10)) + { + if (bit (arm_insn_r->arm_insn, 6)) + insn_type = 0; + else + insn_type = 1; + } + else + { + if (dp_op_sz) + insn_type = 1; + else + insn_type = 2; + } + } + /* Handle VNMLA, VNMLS, VNMUL. */ + else if (opc1 == 0x01) + { + if (dp_op_sz) + insn_type = 1; + else + insn_type = 2; + } + /* Handle VMUL. */ + else if (opc1 == 0x02 && !(opc3 & 0x01)) + { + if (bit (arm_insn_r->arm_insn, 10)) + { + if (bit (arm_insn_r->arm_insn, 6)) + insn_type = 0; + else + insn_type = 1; + } + else + { + if (dp_op_sz) + insn_type = 1; + else + insn_type = 2; + } + } + /* Handle VADD, VSUB. */ + else if (opc1 == 0x03) + { + if (!bit (arm_insn_r->arm_insn, 9)) + { + if (bit (arm_insn_r->arm_insn, 6)) + insn_type = 0; + else + insn_type = 1; + } + else + { + if (dp_op_sz) + insn_type = 1; + else + insn_type = 2; + } + } + /* Handle VDIV. */ + else if (opc1 == 0x0b) + { + if (dp_op_sz) + insn_type = 1; + else + insn_type = 2; + } + /* Handle all other vfp data processing instructions. */ + else if (opc1 == 0x0b) + { + /* Handle VMOV. */ + if (!(opc3 & 0x01) || (opc2 == 0x00 && opc3 == 0x01)) + { + if (bit (arm_insn_r->arm_insn, 4)) + { + if (bit (arm_insn_r->arm_insn, 6)) + insn_type = 0; + else + insn_type = 1; + } + else + { + if (dp_op_sz) + insn_type = 1; + else + insn_type = 2; + } + } + /* Handle VNEG and VABS. */ + else if ((opc2 == 0x01 && opc3 == 0x01) + || (opc2 == 0x00 && opc3 == 0x03)) + { + if (!bit (arm_insn_r->arm_insn, 11)) + { + if (bit (arm_insn_r->arm_insn, 6)) + insn_type = 0; + else + insn_type = 1; + } + else + { + if (dp_op_sz) + insn_type = 1; + else + insn_type = 2; + } + } + /* Handle VSQRT. */ + else if (opc2 == 0x01 && opc3 == 0x03) + { + if (dp_op_sz) + insn_type = 1; + else + insn_type = 2; + } + /* Handle VCVT (A8-584). */ + else if (opc2 == 0x07 && opc3 == 0x03) + { + if (!dp_op_sz) + insn_type = 1; + else + insn_type = 2; + } + else if (opc3 & 0x01) + { + /* Handle VCVT (A8-578). */ + if ((opc2 == 0x08) || (opc2 & 0x0e) == 0x0c) + { + if (!bit (arm_insn_r->arm_insn, 18)) + insn_type = 2; + else + { + if (dp_op_sz) + insn_type = 1; + else + insn_type = 2; + } + } + /* Handle VCVT (A8-582). */ + else if ((opc2 & 0x0e) == 0x0a || (opc2 & 0x0e) == 0x0e) + { + if (dp_op_sz) + insn_type = 1; + else + insn_type = 2; + } + /* Handle VCVTB, VCVTT (A8-588). */ + else if ((opc2 & 0x0e) == 0x02) + insn_type = 2; + /* Handle VCMP, VCMPE (A8-572). 
*/ + else if ((opc2 & 0x0e) == 0x04) + insn_type = 3; + } + } + + switch (insn_type) + { + case 0: + reg_vd = reg_vd | (bit_d << 4); + record_buf[0] = reg_vd + ARM_D0_REGNUM; + record_buf[1] = reg_vd + ARM_D0_REGNUM + 1; + arm_insn_r->reg_rec_count = 2; + break; + + case 1: + reg_vd = reg_vd | (bit_d << 4); + record_buf[0] = reg_vd + ARM_D0_REGNUM; + arm_insn_r->reg_rec_count = 1; + break; + + case 2: + reg_vd = (reg_vd << 1) | bit_d; + record_buf[0] = reg_vd + num_regs; + arm_insn_r->reg_rec_count = 1; + break; + + case 3: + record_buf[0] = ARM_FPSCR_REGNUM; + arm_insn_r->reg_rec_count = 1; + break; + + default: + gdb_assert_not_reached ("no decoding pattern found"); + break; + } + + REG_ALLOC (arm_insn_r->arm_regs, arm_insn_r->reg_rec_count, record_buf); + return 0; +} + /* Decodes thumb2 instruction type and return an instruction id. */ static unsigned int @@ -13042,7 +13820,7 @@ decode_insn (insn_decode_record *arm_record, record_type_t record_type, arm_record_ld_st_reg_offset, /* 011. */ arm_record_ld_st_multiple, /* 100. */ arm_record_b_bl, /* 101. */ - arm_record_unsupported_insn, /* 110. */ + arm_record_asimd_vfp_coproc, /* 110. */ arm_record_coproc_data_proc /* 111. */ }; @@ -13066,19 +13844,19 @@ decode_insn (insn_decode_record *arm_record, record_type_t record_type, thumb2_record_ld_st_multiple, /* 00. */ thumb2_record_ld_st_dual_ex_tbb, /* 01. */ thumb2_record_data_proc_sreg_mimm, /* 02. */ - arm_record_unsupported_insn, /* 03. */ + thumb2_record_coproc_insn, /* 03. */ thumb2_record_data_proc_sreg_mimm, /* 04. */ thumb2_record_ps_dest_generic, /* 05. */ thumb2_record_branch_misc_cntrl, /* 06. */ thumb2_record_str_single_data, /* 07. */ - arm_record_unsupported_insn, /* 08. */ + arm_thumb_record_asimd_sld_st, /* 08. */ thumb2_record_ld_mem_hints, /* 09. */ thumb2_record_ld_mem_hints, /* 10. */ thumb2_record_ld_word, /* 11. */ thumb2_record_ps_dest_generic, /* 12. */ thumb2_record_ps_dest_generic, /* 13. */ thumb2_record_lmul_lmla_div, /* 14. */ - arm_record_unsupported_insn /* 15. */ + thumb2_record_coproc_insn /* 15. */ }; uint32_t ret = 0; /* return value: negative:failure 0:success. */ @@ -13128,7 +13906,7 @@ decode_insn (insn_decode_record *arm_record, record_type_t record_type, ret = thumb2_handle_insn[insn_id] (arm_record); else { - arm_record_unsupported_insn(arm_record); + arm_thumb_record_unsupported_insn(arm_record); ret = -1; } }
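
Note (not part of the patch): for readers unfamiliar with the field-extraction
helpers used throughout the record handlers above, here is a minimal,
self-contained sketch of what bit () and bits () amount to.  Only the
inclusive bit-range semantics is taken from their use in arm-tdep.c; the
standalone definitions below, the main () driver and the sample opcode
0x0e080a10 are illustrative assumptions, not code from the patch.

/* Illustrative only: standalone equivalents of the bit ()/bits () helpers
   that the record handlers rely on.  The range passed to bits () is
   inclusive, matching how the handlers decode instruction fields.  */

#include <stdint.h>
#include <stdio.h>

/* Return bit number N of VAL.  */
static uint32_t
bit (uint32_t val, int n)
{
  return (val >> n) & 1u;
}

/* Return bits START..END (inclusive) of VAL.  */
static uint32_t
bits (uint32_t val, int start, int end)
{
  return (val >> start) & ((1u << (end - start + 1)) - 1u);
}

int
main (void)
{
  /* Hypothetical 32-bit opcode used only to exercise the helpers.  */
  uint32_t insn = 0x0e080a10;

  /* The same fields the coprocessor record handlers decode.  */
  uint32_t coproc   = bits (insn, 8, 11);	/* 0xa here: VFP/Advanced SIMD space.  */
  uint32_t op1      = bits (insn, 20, 25);
  uint32_t op1_sbit = bit (insn, 24);
  uint32_t op       = bit (insn, 4);

  printf ("coproc=%#x op1=%#x op1_sbit=%u op=%u\n", coproc, op1, op1_sbit, op);
  return 0;
}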