From patchwork Fri Sep 2 08:09:17 2016
X-Patchwork-Submitter: Kugan Vivekanandarajah
X-Patchwork-Id: 75289
From: Kugan Vivekanandarajah
Date: Fri, 2 Sep 2016 18:09:17 +1000
Subject: Re: [PR72835] Incorrect arithmetic optimization involving bitfield arguments
To: Richard Biener
Cc: Jakub Jelinek, "gcc-patches@gcc.gnu.org"

Hi Richard,

On 25 August 2016 at 22:24, Richard Biener wrote:
> On Thu, Aug 11, 2016 at 1:09 AM, kugan wrote:
>> Hi,
>>
>> On 10/08/16 20:28, Richard Biener wrote:
>>> On Wed, Aug 10, 2016 at 10:57 AM, Jakub Jelinek wrote:
>>>> On Wed, Aug 10, 2016 at 08:51:32AM +1000, kugan wrote:
>>>>> I see it now.  The problem is that we are only looking at (-1) being in
>>>>> the ops list for passing "changed" to rewrite_expr_tree in the case of
>>>>> multiplication by negate.  If we have combined the (-1), as in the
>>>>> testcase, we will not have the (-1) and will pass changed=false to
>>>>> rewrite_expr_tree.
>>>>>
>>>>> We should set "changed" based on what happens in try_special_add_to_ops.
>>>>> The attached patch does this.  Bootstrap and regression testing are
>>>>> ongoing.  Is this OK for trunk if there is no regression?
>>>>
>>>> I think the bug is elsewhere, in particular in
>>>> undistribute_ops_list/zero_one_operation/decrement_power.
>>>> All of those look problematic in this regard: they change the RHS of
>>>> statements to something that holds a different value, while keeping the
>>>> LHS.  So, generally, you should instead just add a new stmt next to the
>>>> old one and adjust the data structures (replace the old SSA_NAME in some
>>>> ->op with the new one).  decrement_power might be a problem here; dunno
>>>> if all the builtins are const in all cases, so that DSE would kill the
>>>> old one.  Richard, any preferences for that?  Reset flow-sensitive info +
>>>> reset debug stmt uses, or something different?  Though, replacing the LHS
>>>> with a new anonymous SSA_NAME might be needed too, in case it is before
>>>> an SSA_NAME of a user var that doesn't yet have any debug stmts.
>>>
>>> I'd say replacing the LHS is the way to go, with calling the appropriate
>>> helper on the old stmt to generate a debug stmt for it / its uses (would
>>> need to look it up here).
>>
>> Here is an attempt to fix it.  The problem arises when, in
>> undistribute_ops_list, we linearize_expr_tree such that a NEGATE_EXPR is
>> added as (-1) MULT_EXPR (OP).  The real problem starts when we handle this
>> in zero_one_operation.  Unlike what was done earlier, we now change the
>> stmt (via propagate_op_to_single_use or directly) such that the value
>> computed by the stmt is no longer what it used to be.  Because of this,
>> what is computed in undistribute_ops_list and rewrite_expr_tree also
>> changes.
>>
>> undistribute_ops_list already expects this, but rewrite_expr_tree will not
>> if we don't pass "changed" as an argument.
>>
>> The way I am fixing this now is: in linearize_expr_tree, I set ops_changed
>> to true if we change a NEGATE_EXPR to (-1) MULT_EXPR (OP).  Then, when we
>> call zero_one_operation with ops_changed = true, I replace all the LHSs in
>> zero_one_operation with new SSA names and replace all their uses.  I also
>> call rewrite_expr_tree with changed = false in this case.
>>
>> Does this make sense?  Bootstrapped and regression tested for
>> x86_64-linux-gnu without any new regressions.
>
> I don't think this solves the issue.  zero_one_operation associates the
> chain starting at the first *def and it will change the intermediate values
> of _all_ of the stmts visited until the operation to be removed is found.
> Note that this is independent of whether try_special_add_to_ops did
> anything.
>
> Even for the regular undistribution cases we get this wrong.
>
> So we need to back-track in zero_one_operation, replacing each LHS and, in
> the end, the op in the opvector of the main chain.  That's basically the
> same as if we'd do a regular re-assoc operation on the sub-chains: take
> their subops, simulate zero_one_operation by appending the cancelling
> operation and optimizing the oplist, and then materialize the associated
> ops via rewrite_expr_tree.

Here is a draft patch which records the stmt chain in zero_one_operation and
then fixes it when OP is removed.  When we update *def, that will update the
ops vector.

Does this look sane?  Bootstrapped and regression tested on x86_64-linux-gnu
with no new regressions.

Thanks,
Kugan
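(For reference, the expression shape involved is roughly the sketch below; the
function and variable names in it are made up for illustration, and the actual
regression test is the one added by the patch.  Both products are negated, so
linearize_expr_tree rewrites each negate as a multiplication by (-1), and
zero_one_operation later has to cancel that (-1) out of each multiply chain;
previously it did so by rewriting the defining stmts in place while reusing
their LHS names.)

/* Illustrative sketch only, with hypothetical names, not the PR testcase.
   Each negate is linearized as a multiplication by (-1), and reassociation
   then cancels the common (-1) across the two multiply chains.  */
unsigned int
reassoc_shape (unsigned int a, unsigned int b, unsigned int c, unsigned int d)
{
  return a * -b + c * -d;
}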
diff --git a/gcc/testsuite/gcc.dg/tree-ssa/pr72835.c b/gcc/testsuite/gcc.dg/tree-ssa/pr72835.c
index e69de29..049eddc 100644
--- a/gcc/testsuite/gcc.dg/tree-ssa/pr72835.c
+++ b/gcc/testsuite/gcc.dg/tree-ssa/pr72835.c
@@ -0,0 +1,36 @@
+/* PR tree-optimization/72835.  */
+/* { dg-do run } */
+/* { dg-options "-O2" } */
+
+struct struct_1 {
+  unsigned int m1 : 6 ;
+  unsigned int m2 : 24 ;
+  unsigned int m3 : 6 ;
+};
+
+unsigned short var_32 = 0x2d10;
+
+struct struct_1 s1;
+
+void init ()
+{
+  s1.m1 = 4;
+  s1.m2 = 0x7ca4b8;
+  s1.m3 = 24;
+}
+
+void foo ()
+{
+  unsigned int c
+    = ((unsigned int) s1.m2) * (-((unsigned int) s1.m3))
+      + (var_32) * (-((unsigned int) (s1.m1)));
+  if (c != 4098873984)
+    __builtin_abort ();
+}
+
+int main ()
+{
+  init ();
+  foo ();
+  return 0;
+}
diff --git a/gcc/tree-ssa-reassoc.c b/gcc/tree-ssa-reassoc.c
index 7fd7550..c7f6a66 100644
--- a/gcc/tree-ssa-reassoc.c
+++ b/gcc/tree-ssa-reassoc.c
@@ -1148,6 +1148,49 @@ decrement_power (gimple *stmt)
     }
 }
 
+/* Replace SSA defined by STMT and replace all its uses with new
+   SSA.  Also return the new SSA.  */
+
+static tree
+make_new_ssa_for_def (gimple *stmt)
+{
+  gimple *use_stmt;
+  use_operand_p use;
+  imm_use_iterator iter;
+  tree new_lhs;
+  tree lhs = gimple_assign_lhs (stmt);
+  gcc_assert (has_single_use (lhs));
+
+  new_lhs = make_ssa_name (TREE_TYPE (lhs));
+  gimple_set_lhs (stmt, new_lhs);
+
+  /* Also need to update GIMPLE_DEBUGs.  */
+  FOR_EACH_IMM_USE_STMT (use_stmt, iter, lhs)
+    {
+      FOR_EACH_IMM_USE_ON_STMT (use, iter)
+        SET_USE (use, new_lhs);
+      update_stmt (use_stmt);
+    }
+  return new_lhs;
+}
+
+/* Replace all SSAs defined in STMTS_TO_FIX and replace its
+   uses with new SSAs.  Also do this for the stmt that defines DEF
+   if *DEF is not OP.  */
+
+static void
+make_new_ssa_for_all_defs (tree *def, tree op,
+                           auto_vec<gimple *> &stmts_to_fix)
+{
+  unsigned i;
+  gimple *stmt = SSA_NAME_DEF_STMT (*def);
+  if (*def != op
+      && gimple_code (stmt) != GIMPLE_NOP)
+    *def = make_new_ssa_for_def (stmt);
+  FOR_EACH_VEC_ELT (stmts_to_fix, i, stmt)
+    make_new_ssa_for_def (stmt);
+}
+
 /* Find the single immediate use of STMT's LHS, and replace it
    with OP.  Remove STMT.  If STMT's LHS is the same as *DEF,
    replace *DEF with OP as well. */
@@ -1186,6 +1229,9 @@ static void
 zero_one_operation (tree *def, enum tree_code opcode, tree op)
 {
   gimple *stmt = SSA_NAME_DEF_STMT (*def);
+  /* PR72835 - Record the stmt chain that has to be updated such that
+     we don't use the same LHS when the values computed are different.  */
+  auto_vec<gimple *> stmts_to_fix;
 
   do
     {
@@ -1195,6 +1241,7 @@ zero_one_operation (tree *def, enum tree_code opcode, tree op)
         {
           if (stmt_is_power_of_op (stmt, op))
             {
+              make_new_ssa_for_all_defs (def, op, stmts_to_fix);
               if (decrement_power (stmt) == 1)
                 propagate_op_to_single_use (op, stmt, def);
               return;
@@ -1204,6 +1251,7 @@ zero_one_operation (tree *def, enum tree_code opcode, tree op)
               if (gimple_assign_rhs1 (stmt) == op)
                 {
                   tree cst = build_minus_one_cst (TREE_TYPE (op));
+                  make_new_ssa_for_all_defs (def, op, stmts_to_fix);
                   propagate_op_to_single_use (cst, stmt, def);
                   return;
                 }
@@ -1212,6 +1260,7 @@ zero_one_operation (tree *def, enum tree_code opcode, tree op)
                 {
                   gimple_assign_set_rhs_code
                     (stmt, TREE_CODE (gimple_assign_rhs1 (stmt)));
+                  make_new_ssa_for_all_defs (def, op, stmts_to_fix);
                   return;
                 }
             }
@@ -1228,6 +1277,7 @@ zero_one_operation (tree *def, enum tree_code opcode, tree op)
         {
           if (name == op)
             name = gimple_assign_rhs2 (stmt);
+          make_new_ssa_for_all_defs (def, op, stmts_to_fix);
           propagate_op_to_single_use (name, stmt, def);
           return;
         }
@@ -1243,6 +1293,8 @@ zero_one_operation (tree *def, enum tree_code opcode, tree op)
           gimple *stmt2 = SSA_NAME_DEF_STMT (gimple_assign_rhs2 (stmt));
           if (stmt_is_power_of_op (stmt2, op))
             {
+              stmts_to_fix.safe_push (stmt2);
+              make_new_ssa_for_all_defs (def, op, stmts_to_fix);
               if (decrement_power (stmt2) == 1)
                 propagate_op_to_single_use (op, stmt2, def);
               return;
@@ -1253,14 +1305,18 @@ zero_one_operation (tree *def, enum tree_code opcode, tree op)
               if (gimple_assign_rhs1 (stmt2) == op)
                 {
                   tree cst = build_minus_one_cst (TREE_TYPE (op));
+                  stmts_to_fix.safe_push (stmt2);
+                  make_new_ssa_for_all_defs (def, op, stmts_to_fix);
                   propagate_op_to_single_use (cst, stmt2, def);
                   return;
                 }
               else if (integer_minus_onep (op)
                        || real_minus_onep (op))
                 {
+                  stmts_to_fix.safe_push (stmt2);
                   gimple_assign_set_rhs_code
                     (stmt2, TREE_CODE (gimple_assign_rhs1 (stmt2)));
+                  make_new_ssa_for_all_defs (def, op, stmts_to_fix);
                   return;
                 }
             }
@@ -1270,6 +1326,7 @@ zero_one_operation (tree *def, enum tree_code opcode, tree op)
       gcc_assert (name != op
                   && TREE_CODE (name) == SSA_NAME);
       stmt = SSA_NAME_DEF_STMT (name);
+      stmts_to_fix.safe_push (stmt);
     }
   while (1);
 }