From patchwork Mon Jun 16 12:53:29 2014
X-Patchwork-Submitter: Venkataramanan Kumar
X-Patchwork-Id: 31959
Date: Mon, 16 Jun 2014 18:23:29 +0530
Subject:
[RFC][ARM]: Fix reload spill failure (PR 60617)
From: Venkataramanan Kumar <venkataramanan.kumar@linaro.org>
To: gcc-patches@gcc.gnu.org, Richard Earnshaw, Ramana Radhakrishnan, Marcus Shawcroft, Patch Tracking, Maxim Kuvyrkov

Hi Maintainers,

This patch fixes PR 60617, which shows up when the reload pass is run in
Thumb-2 mode.  It occurs for the pattern "*ior_scc_scc", which is generated
for the third argument of the function call below.

JIT::emitStoreInt32(dst, regT0, (op1 == dst || op2 == dst));

(----snip---)
(insn 634 633 635 27 (parallel [
            (set (reg:SI 3 r3)
                (ior:SI (eq:SI (reg/v:SI 110 [ dst ])   <== this operand gets assigned r5
                            (reg/v:SI 112 [ op2 ]))
                        (eq:SI (reg/v:SI 110 [ dst ])   <== this operand as well
                            (reg/v:SI 111 [ op1 ]))))
            (clobber (reg:CC 100 cc))
        ]) ../Source/JavaScriptCore/jit/JITArithmetic32_64.cpp:179 300 {*ior_scc_scc}
(----snip---)

The issue is that the above pattern demands five registers (LO_REGS).  But by
the time we are in reload, r0 holds the pointer to the class, r1 and r2 hold
the first and second arguments, and r7 is used as the frame pointer.  That
leaves only r3, r4, r5 and r6, while the pattern needs five LO_REGS, so we get
a spill failure when processing the last register operand of the pattern.

In the ARM port, TARGET_CLASS_LIKELY_SPILLED_P is defined for Thumb-1, and for
Thumb-2 mode there is only the comment below warning about the use of LO_REGS:

"Care should be taken to avoid adding thumb-2 patterns that require many low
registers"

So the conservative fix is to not allow this pattern in Thumb-2 mode.  I still
allow these patterns for Thumb-2 when the comparison operands are constants;
that keeps the target tests arm/thumb2-cond-cmp-1.c to thumb2-cond-cmp-4.c
passing.

Regression tested on the gcc 4.9 branch, since on trunk the bug is masked by
revision 209897.

Please provide your suggestions on this patch.

regards,
Venkat.
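For reference, below is a minimal C sketch of the shape of code that produces
"*ior_scc_scc"; the helper name store_int32 and the compile options mentioned
in the comment are made up for illustration and this is not the testcase from
the PR:

/* Reduced sketch, not the PR 60617 testcase.  Materialising the value of
   (op1 == dst || op2 == dst) in a register (rather than branching on it)
   lets combine form the ior:SI of two eq:SI operations that is matched by
   "*ior_scc_scc".  Building for Thumb-2, e.g. with something like
   -mthumb -march=armv7-a -O2 plus options such as -fno-omit-frame-pointer
   that keep r7 busy, should, as far as I can tell, recreate the register
   pressure described above.  */
extern void store_int32 (int dst, int payload, int flag);

void
emit_store (int dst, int op1, int op2, int payload)
{
  store_int32 (dst, payload, op1 == dst || op2 == dst);
}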
diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md
index 0284f95..e8fbb11 100644
--- a/gcc/config/arm/arm.md
+++ b/gcc/config/arm/arm.md
@@ -10654,7 +10654,7 @@
 		 [(match_operand:SI 4 "s_register_operand" "r")
 		  (match_operand:SI 5 "arm_add_operand" "rIL")])))
    (clobber (reg:CC CC_REGNUM))]
-  "TARGET_32BIT
+  "TARGET_ARM
    && (arm_select_dominance_cc_mode (operands[3], operands[6], DOM_CC_X_OR_Y)
        != CCmode)"
   "#"
@@ -10675,6 +10675,36 @@
    (set_attr "type" "multiple")]
 )
 
+(define_insn_and_split "*ior_scc_scc_imm"
+  [(set (match_operand:SI 0 "s_register_operand" "=Ts")
+	(ior:SI (match_operator:SI 3 "arm_comparison_operator"
+		 [(match_operand:SI 1 "s_register_operand" "r")
+		  (match_operand:SI 2 "arm_addimm_operand" "IL")])
+		(match_operator:SI 6 "arm_comparison_operator"
+		 [(match_operand:SI 4 "s_register_operand" "r")
+		  (match_operand:SI 5 "arm_addimm_operand" "IL")])))
+   (clobber (reg:CC CC_REGNUM))]
+  "TARGET_THUMB2
+   && (arm_select_dominance_cc_mode (operands[3], operands[6], DOM_CC_X_OR_Y)
+       != CCmode)"
+  "#"
+  "TARGET_THUMB2 && reload_completed"
+  [(set (match_dup 7)
+	(compare
+	 (ior:SI
+	  (match_op_dup 3 [(match_dup 1) (match_dup 2)])
+	  (match_op_dup 6 [(match_dup 4) (match_dup 5)]))
+	 (const_int 0)))
+   (set (match_dup 0) (ne:SI (match_dup 7) (const_int 0)))]
+  "operands[7]
+     = gen_rtx_REG (arm_select_dominance_cc_mode (operands[3], operands[6],
+						  DOM_CC_X_OR_Y),
+		    CC_REGNUM);"
+  [(set_attr "conds" "clob")
+   (set_attr "length" "16")
+   (set_attr "type" "multiple")]
+)
+
 ; If the above pattern is followed by a CMP insn, then the compare is
 ; redundant, since we can rework the conditional instruction that follows.
 (define_insn_and_split "*ior_scc_scc_cmp"
@@ -10714,7 +10744,7 @@
 		 [(match_operand:SI 4 "s_register_operand" "r")
 		  (match_operand:SI 5 "arm_add_operand" "rIL")])))
    (clobber (reg:CC CC_REGNUM))]
-  "TARGET_32BIT
+  "TARGET_ARM
    && (arm_select_dominance_cc_mode (operands[3], operands[6], DOM_CC_X_AND_Y)
       != CCmode)"
   "#"
@@ -10737,6 +10767,38 @@
    (set_attr "type" "multiple")]
 )
 
+(define_insn_and_split "*and_scc_scc_imm"
+  [(set (match_operand:SI 0 "s_register_operand" "=Ts")
+	(and:SI (match_operator:SI 3 "arm_comparison_operator"
+		 [(match_operand:SI 1 "s_register_operand" "r")
+		  (match_operand:SI 2 "arm_addimm_operand" "IL")])
+		(match_operator:SI 6 "arm_comparison_operator"
+		 [(match_operand:SI 4 "s_register_operand" "r")
+		  (match_operand:SI 5 "arm_addimm_operand" "IL")])))
+   (clobber (reg:CC CC_REGNUM))]
+  "TARGET_THUMB2
+   && (arm_select_dominance_cc_mode (operands[3], operands[6], DOM_CC_X_AND_Y)
+       != CCmode)"
+  "#"
+  "TARGET_THUMB2 && reload_completed
+   && (arm_select_dominance_cc_mode (operands[3], operands[6], DOM_CC_X_AND_Y)
+       != CCmode)"
+  [(set (match_dup 7)
+	(compare
+	 (and:SI
+	  (match_op_dup 3 [(match_dup 1) (match_dup 2)])
+	  (match_op_dup 6 [(match_dup 4) (match_dup 5)]))
+	 (const_int 0)))
+   (set (match_dup 0) (ne:SI (match_dup 7) (const_int 0)))]
+  "operands[7]
+     = gen_rtx_REG (arm_select_dominance_cc_mode (operands[3], operands[6],
+						  DOM_CC_X_AND_Y),
+		    CC_REGNUM);"
+  [(set_attr "conds" "clob")
+   (set_attr "length" "16")
+   (set_attr "type" "multiple")]
+)
+
 ; If the above pattern is followed by a CMP insn, then the compare is
 ; redundant, since we can rework the conditional instruction that follows.
 (define_insn_and_split "*and_scc_scc_cmp"