From patchwork Tue Jul 1 05:07:57 2014
X-Patchwork-Submitter: Zhenqiang Chen
X-Patchwork-Id: 32842
In-Reply-To: <53AADF87.5040705@arm.com>
References: <53AADF87.5040705@arm.com>
Date: Tue, 1 Jul 2014 13:07:57 +0800
Subject: Re: [PATCH, 2/10] prepare ccmp
From: Zhenqiang Chen
To: Richard Earnshaw
Cc: "gcc-patches@gcc.gnu.org"

On 25 June 2014 22:41, Richard Earnshaw wrote:
> On 23/06/14 07:57, Zhenqiang Chen wrote:
>> Hi,
>>
>> The patch makes several functions global, which will be used when
>> expanding ccmp instructions.
>>
>> The other change in this patch is to check CCMP when turning code into
>> jumpy sequence.
>>
>> OK for trunk?
>>
>
> This isn't a complete review.  In particular, I'd like one of the gimple
> experts to go over this again.
>
> However, some general issues do crop up.  I'll deal with each patch as I
> spot something.
>
>>       enum rtx_code code;
>> @@ -6503,6 +6503,12 @@ get_rtx_code (enum tree_code tcode, bool unsignedp)
>>        code = LTGT;
>>        break;
>>
>> +    case BIT_AND_EXPR:
>> +      code = AND;
>> +      break;
>> +    case BIT_IOR_EXPR:
>> +      code = IOR;
>> +      break;
>
> Blank lines between case alternatives.

Thanks. Patch is updated.

diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c
index e8cd87f..a32e1b3 100644
--- a/gcc/cfgexpand.c
+++ b/gcc/cfgexpand.c
@@ -2095,9 +2095,10 @@ expand_gimple_cond (basic_block bb, gimple stmt)
 	  op0 = gimple_assign_rhs1 (second);
 	  op1 = gimple_assign_rhs2 (second);
 	}
-      /* If jumps are cheap turn some more codes into
-	 jumpy sequences.  */
-      else if (BRANCH_COST (optimize_insn_for_speed_p (), false) < 4)
+      /* If jumps are cheap and the target does not support conditional
+	 compare, turn some more codes into jumpy sequences.  */
+      else if (BRANCH_COST (optimize_insn_for_speed_p (), false) < 4
+	       && (targetm.gen_ccmp_first == NULL))
 	{
 	  if ((code2 == BIT_AND_EXPR
 	       && TYPE_PRECISION (TREE_TYPE (op0)) == 1
diff --git a/gcc/expmed.c b/gcc/expmed.c
index e76b6fc..c8d63a9 100644
--- a/gcc/expmed.c
+++ b/gcc/expmed.c
@@ -5105,7 +5105,7 @@ expand_and (enum machine_mode mode, rtx op0, rtx op1, rtx target)
 }
 
 /* Helper function for emit_store_flag.  */
-static rtx
+rtx
 emit_cstore (rtx target, enum insn_code icode, enum rtx_code code,
 	     enum machine_mode mode, enum machine_mode compare_mode,
 	     int unsignedp, rtx x, rtx y, int normalizep,
diff --git a/gcc/expmed.h b/gcc/expmed.h
index 4d01d1f..a567bad 100644
--- a/gcc/expmed.h
+++ b/gcc/expmed.h
@@ -20,6 +20,8 @@ along with GCC; see the file COPYING3.  If not see
 #ifndef EXPMED_H
 #define EXPMED_H 1
 
+#include "insn-codes.h"
+
 enum alg_code {
   alg_unknown,
   alg_zero,
@@ -665,4 +667,9 @@ convert_cost (enum machine_mode to_mode, enum machine_mode from_mode,
 }
 
 extern int mult_by_coeff_cost (HOST_WIDE_INT, enum machine_mode, bool);
+
+extern rtx emit_cstore (rtx target, enum insn_code icode, enum rtx_code code,
+			enum machine_mode mode, enum machine_mode compare_mode,
+			int unsignedp, rtx x, rtx y, int normalizep,
+			enum machine_mode target_mode);
 #endif
diff --git a/gcc/expr.c b/gcc/expr.c
index 512c024..04cf56e 100644
--- a/gcc/expr.c
+++ b/gcc/expr.c
@@ -146,8 +146,6 @@ static rtx store_field (rtx, HOST_WIDE_INT, HOST_WIDE_INT,
 static unsigned HOST_WIDE_INT highest_pow2_factor_for_target (const_tree,
 							      const_tree);
 static int is_aligning_offset (const_tree, const_tree);
-static void expand_operands (tree, tree, rtx, rtx*, rtx*,
-			     enum expand_modifier);
 static rtx reduce_to_bit_field_precision (rtx, rtx, tree);
 static rtx do_store_flag (sepops, rtx, enum machine_mode);
 #ifdef PUSH_ROUNDING
@@ -7496,7 +7494,7 @@ convert_tree_comp_to_rtx (enum tree_code tcode, int unsignedp)
    The value may be stored in TARGET if TARGET is nonzero.  The
    MODIFIER argument is as documented by expand_expr.  */
 
-static void
+void
 expand_operands (tree exp0, tree exp1, rtx target, rtx *op0, rtx *op1,
 		 enum expand_modifier modifier)
 {
diff --git a/gcc/expr.h b/gcc/expr.h
index 6a1d3ab..66ca82f 100644
--- a/gcc/expr.h
+++ b/gcc/expr.h
@@ -787,4 +787,6 @@ extern bool categorize_ctor_elements (const_tree, HOST_WIDE_INT *,
    by EXP.  This does not include any offset in DECL_FIELD_BIT_OFFSET.  */
 extern tree component_ref_field_offset (tree);
 
+extern void expand_operands (tree, tree, rtx, rtx*, rtx*,
+			     enum expand_modifier);
 #endif /* GCC_EXPR_H */
diff --git a/gcc/optabs.c b/gcc/optabs.c
index ca1c194..0c3dae1 100644
--- a/gcc/optabs.c
+++ b/gcc/optabs.c
@@ -6453,7 +6453,7 @@ gen_cond_trap (enum rtx_code code, rtx op1, rtx op2, rtx tcode)
 
 /* Return rtx code for TCODE. Use UNSIGNEDP to select signed
    or unsigned operation code.  */
-static enum rtx_code
+enum rtx_code
 get_rtx_code (enum tree_code tcode, bool unsignedp)
 {
   enum rtx_code code;
@@ -6503,6 +6503,14 @@ get_rtx_code (enum tree_code tcode, bool unsignedp)
       code = LTGT;
       break;
 
+    case BIT_AND_EXPR:
+      code = AND;
+      break;
+
+    case BIT_IOR_EXPR:
+      code = IOR;
+      break;
+
     default:
       gcc_unreachable ();
     }
diff --git a/gcc/optabs.h b/gcc/optabs.h
index 089b15a..61be4e2 100644
--- a/gcc/optabs.h
+++ b/gcc/optabs.h
@@ -91,6 +91,7 @@ extern rtx expand_widen_pattern_expr (sepops ops, rtx op0, rtx op1, rtx wide_op,
 extern rtx expand_ternary_op (enum machine_mode mode, optab ternary_optab,
 			      rtx op0, rtx op1, rtx op2, rtx target,
 			      int unsignedp);
+extern enum rtx_code get_rtx_code (enum tree_code tcode, bool unsignedp);
 
 /* Expand a binary operation given optab and rtx operands.  */
 extern rtx expand_binop (enum machine_mode, optab, rtx, rtx, rtx, int,
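
For readers following the series, the reason these helpers go global is that the
ccmp expander added by a later patch has to translate GIMPLE comparisons into RTL
with the same machinery expand_gimple_cond and do_store_flag already use. The
sketch below is illustrative only and is not part of this patch: the function name
expand_ccmp_operand is made up for the example, and the gen_ccmp_first/gen_ccmp_next
hooks are assumed from the rest of the series, so they are only mentioned in a
comment rather than called.

/* Illustrative sketch only, not part of the patch: one way a ccmp
   expander could use the newly exported helpers.  */
static rtx
expand_ccmp_operand (gimple g)
{
  tree op0 = gimple_assign_rhs1 (g);
  tree op1 = gimple_assign_rhs2 (g);
  enum tree_code tcode = gimple_assign_rhs_code (g);

  /* get_rtx_code now also accepts BIT_AND_EXPR/BIT_IOR_EXPR, so whole
     AND/IOR chains of comparisons can be mapped to RTL codes.  */
  enum rtx_code rcode
    = get_rtx_code (tcode, TYPE_UNSIGNED (TREE_TYPE (op0)));
  rtx x, y;

  /* Expand both operands together, as do_store_flag does.  */
  expand_operands (op0, op1, NULL_RTX, &x, &y, EXPAND_NORMAL);

  /* A real expander would hand RCODE, X and Y to the target's
     gen_ccmp_first/gen_ccmp_next hooks; their signatures are defined in a
     later patch of the series, so only a bare comparison rtx is built here.  */
  return gen_rtx_fmt_ee (rcode, VOIDmode, x, y);
}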