From patchwork Tue Aug 9 22:51:32 2016
X-Patchwork-Submitter: Kugan Vivekanandarajah
X-Patchwork-Id: 73576
Subject: Re: [PR72835] Incorrect arithmetic optimization involving bitfield arguments
To: Jakub Jelinek
Cc: "gcc-patches@gcc.gnu.org", Richard Biener
From: kugan
Date: Wed, 10 Aug 2016 08:51:32 +1000

Hi Jakub,

On 10/08/16 07:55, Jakub Jelinek wrote:
> On Wed, Aug 10, 2016 at 07:51:08AM +1000, kugan wrote:
>> On 10/08/16 07:46, Jakub Jelinek wrote:
>>> On Wed, Aug 10, 2016 at 07:42:25AM +1000, kugan wrote:
>>>> There was no new regression while testing. I also moved the testcase from
>>>> gcc.dg/torture/pr72835.c to gcc.dg/tree-ssa/pr72835.c. Is this OK for trunk?
>>>
>>> This looks strange. The tree-ssa-reassoc.c code has been trying to never
>>> reuse SSA_NAMEs if they would hold a different value.
>>> So there should be no resetting of flow sensitive info needed.
>>
>> We are not reusing them but, if you look at the example change in reassoc:
>>
>> -  _5 = -_4;
>> -  _6 = _2 * _5;
>> +  _5 = _4;
>> +  _6 = _5 * _2;
>>
>> _5 and _6 will now have different value ranges because they compute
>> different values. Therefore I think we should reset (?).
>
> No.  We should not have reused _5 and _6 for the different values.
> It is harmful not just for the value ranges, but also for debug info.

I see it now. The problem is that, in the multiplication-by-negate case, we
decide what to pass as "changed" to rewrite_expr_tree only by looking for a
(-1) in the ops list. If the (-1) has already been combined away, as in the
testcase, it is no longer in the list and we pass changed=false to
rewrite_expr_tree. We should instead set "changed" based on what happens in
try_special_add_to_ops. The attached patch does this.

Bootstrap and regression testing are ongoing. Is this OK for trunk if there
are no regressions?

Thanks,
Kugan

gcc/testsuite/ChangeLog:

2016-08-10  Kugan Vivekanandarajah

	PR tree-optimization/72835
	* gcc.dg/tree-ssa/pr72835.c: New test.

gcc/ChangeLog:

2016-08-10  Kugan Vivekanandarajah

	PR tree-optimization/72835
	* tree-ssa-reassoc.c (try_special_add_to_ops): Return true if we
	changed the operands.
	(linearize_expr_tree): Return true if try_special_add_to_ops set
	changed to true.
	(reassociate_bb): Pass changed returned by linearize_expr_tree to
	rewrite_expr_tree.
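For review convenience, here is where the expected constant in the new test
comes from. This is a standalone check of my own (not part of the patch):
it uses the same field values that init () sets up and the same unsigned
32-bit wrap-around arithmetic as foo ().

/* Standalone check of the testcase's expected value, not part of the patch.
   The values match those assigned in init (); in unsigned 32-bit arithmetic
   -x is 2^32 - x, and everything wraps modulo 2^32.  */
#include <stdio.h>

int main (void)
{
  unsigned int m1 = 4, m2 = 0x7ca4b8, m3 = 24;
  unsigned short var_32 = 0x2d10;
  unsigned int c = m2 * (-m3) + var_32 * (-m1);

  printf ("%u\n", c);	/* prints 4098873984 */
  return c != 4098873984u;
}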
diff --git a/gcc/testsuite/gcc.dg/tree-ssa/pr72835.c b/gcc/testsuite/gcc.dg/tree-ssa/pr72835.c
index e69de29..d7d0a8d 100644
--- a/gcc/testsuite/gcc.dg/tree-ssa/pr72835.c
+++ b/gcc/testsuite/gcc.dg/tree-ssa/pr72835.c
@@ -0,0 +1,36 @@
+/* PR tree-optimization/72835 */
+/* { dg-do run } */
+/* { dg-options "-O2" } */
+
+struct struct_1 {
+  unsigned int m1 : 6 ;
+  unsigned int m2 : 24 ;
+  unsigned int m3 : 6 ;
+};
+
+unsigned short var_32 = 0x2d10;
+
+struct struct_1 s1;
+
+void init ()
+{
+  s1.m1 = 4;
+  s1.m2 = 0x7ca4b8;
+  s1.m3 = 24;
+}
+
+void foo ()
+{
+  unsigned int c =
+    ((unsigned int) s1.m2) * (-((unsigned int) s1.m3))
+    + (var_32) * (-((unsigned int) (s1.m1)));
+  if (c != 4098873984)
+    __builtin_abort ();
+}
+
+int main ()
+{
+  init ();
+  foo ();
+  return 0;
+}
diff --git a/gcc/tree-ssa-reassoc.c b/gcc/tree-ssa-reassoc.c
index 7fd7550..c5641fe 100644
--- a/gcc/tree-ssa-reassoc.c
+++ b/gcc/tree-ssa-reassoc.c
@@ -1038,7 +1038,7 @@ eliminate_using_constants (enum tree_code opcode,
 }
 
 
-static void linearize_expr_tree (vec<operand_entry *> *, gimple *,
+static bool linearize_expr_tree (vec<operand_entry *> *, gimple *,
 				  bool, bool);
 
 /* Structure for tracking and counting operands.  */
@@ -4456,12 +4456,16 @@ acceptable_pow_call (gcall *stmt, tree *base, HOST_WIDE_INT *exponent)
 }
 
 /* Try to derive and add operand entry for OP to *OPS.  Return false if
-   unsuccessful.  */
+   unsuccessful.  If we changed the operands such that the (intermediate)
+   results can be different (as in the case of NEGATE_EXPR converted to
+   multiplication by -1), set changed to true so that we will not reuse the
+   SSA (PR72835).  */
 
 static bool
 try_special_add_to_ops (vec<operand_entry *> *ops,
 			enum tree_code code,
-			tree op, gimple* def_stmt)
+			tree op, gimple* def_stmt,
+			bool *changed)
 {
   tree base = NULL_TREE;
   HOST_WIDE_INT exponent = 0;
@@ -4492,6 +4496,7 @@ try_special_add_to_ops (vec<operand_entry *> *ops,
       add_to_ops_vec (ops, rhs1);
       add_to_ops_vec (ops, cst);
       gimple_set_visited (def_stmt, true);
+      *changed = true;
       return true;
     }
 
@@ -4499,9 +4504,10 @@ try_special_add_to_ops (vec<operand_entry *> *ops,
 }
 
 /* Recursively linearize a binary expression that is the RHS of STMT.
-   Place the operands of the expression tree in the vector named OPS.  */
+   Place the operands of the expression tree in the vector named OPS.
+   Return TRUE if try_special_add_to_ops has set changed to TRUE.  */
 
-static void
+static bool
 linearize_expr_tree (vec<operand_entry *> *ops, gimple *stmt,
 		     bool is_associative, bool set_visited)
 {
@@ -4512,6 +4518,7 @@ linearize_expr_tree (vec<operand_entry *> *ops, gimple *stmt,
   bool binrhsisreassoc = false;
   enum tree_code rhscode = gimple_assign_rhs_code (stmt);
   struct loop *loop = loop_containing_stmt (stmt);
+  bool changed = false;
 
   if (set_visited)
     gimple_set_visited (stmt, true);
@@ -4542,18 +4549,20 @@ linearize_expr_tree (vec<operand_entry *> *ops, gimple *stmt,
       if (!is_associative)
 	{
 	  add_to_ops_vec (ops, binrhs);
-	  return;
+	  return changed;
 	}
 
       if (!binrhsisreassoc)
 	{
-	  if (!try_special_add_to_ops (ops, rhscode, binrhs, binrhsdef))
+	  if (!try_special_add_to_ops (ops, rhscode, binrhs,
+				       binrhsdef, &changed))
 	    add_to_ops_vec (ops, binrhs);
 
-	  if (!try_special_add_to_ops (ops, rhscode, binlhs, binlhsdef))
+	  if (!try_special_add_to_ops (ops, rhscode, binlhs,
+				       binlhsdef, &changed))
 	    add_to_ops_vec (ops, binlhs);
 
-	  return;
+	  return changed;
 	}
 
       if (dump_file && (dump_flags & TDF_DETAILS))
@@ -4587,11 +4596,13 @@ linearize_expr_tree (vec<operand_entry *> *ops, gimple *stmt,
   gcc_assert (TREE_CODE (binrhs) != SSA_NAME
 	      || !is_reassociable_op (SSA_NAME_DEF_STMT (binrhs),
 				      rhscode, loop));
-  linearize_expr_tree (ops, SSA_NAME_DEF_STMT (binlhs),
-		       is_associative, set_visited);
+  changed |= linearize_expr_tree (ops, SSA_NAME_DEF_STMT (binlhs),
+				  is_associative, set_visited);
 
-  if (!try_special_add_to_ops (ops, rhscode, binrhs, binrhsdef))
+  if (!try_special_add_to_ops (ops, rhscode, binrhs,
+			       binrhsdef, &changed))
     add_to_ops_vec (ops, binrhs);
+  return changed;
 }
 
 /* Repropagate the negates back into subtracts, since no other pass
@@ -5250,6 +5261,7 @@ reassociate_bb (basic_block bb)
   basic_block son;
   gimple *stmt = last_stmt (bb);
   bool cfg_cleanup_needed = false;
+  bool changed = false;
 
   if (stmt && !gimple_visited_p (stmt))
     cfg_cleanup_needed |= maybe_optimize_range_tests (stmt);
@@ -5323,7 +5335,7 @@ reassociate_bb (basic_block bb)
 	    continue;
 
 	  gimple_set_visited (stmt, true);
-	  linearize_expr_tree (&ops, stmt, true, true);
+	  changed = linearize_expr_tree (&ops, stmt, true, true);
 	  ops.qsort (sort_by_operand_rank);
 	  optimize_ops_list (rhs_code, &ops);
 	  if (undistribute_ops_list (rhs_code, &ops,
@@ -5415,7 +5427,7 @@ reassociate_bb (basic_block bb)
 
 		new_lhs = rewrite_expr_tree (stmt, 0, ops,
 					     powi_result != NULL
-					     || negate_result);
+					     || changed);
 	    }
 
 	  /* If we combined some repeated factors into a
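In case it helps review, the new test can be run on its own from the build
directory with the usual dg harness, e.g.:

  make check-gcc RUNTESTFLAGS="tree-ssa.exp=pr72835.c"

Without the fix the miscompiled foo () hits __builtin_abort () at -O2; with
the patch applied the test passes.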