From patchwork Tue Jan  9 18:32:32 2018
X-Patchwork-Submitter: Richard Sandiford
X-Patchwork-Id: 123977
From: Richard Sandiford
To: gcc-patches@gcc.gnu.org
Subject: Use poly_int rtx accessors instead of hwi accessors
Date: Tue, 09 Jan 2018 18:32:32 +0000
Message-ID: <87vagac27z.fsf@linaro.org>
This patch generalises various places that used hwi rtx accessors
so that they can handle poly_ints instead.  Earlier patches did this
while updating interfaces; this patch just mops up some left-over
pieces that weren't necessary to make things compile, but that still
make sense.  In many cases these changes are by inspection rather
than because something had shown them to be necessary.

Sorry for not posting this earlier.  I kept holding it back in case
more examples showed up.

Tested on aarch64-linux-gnu, x86_64-linux-gnu and powerpc64le-linux-gnu.
Also tested by comparing the before-and-after assembly output for at
least one target per CPU directory.  OK to install?

Richard


2018-01-09  Richard Sandiford

gcc/
        * poly-int.h (can_div_trunc_p): Add new overload in which all values
        are poly_ints.
        * alias.c (get_addr): Extend CONST_INT handling to poly_int_rtx_p.
        (memrefs_conflict_p): Likewise.
        (init_alias_analysis): Likewise.
        * cfgexpand.c (expand_debug_expr): Likewise.
        * combine.c (combine_simplify_rtx, force_int_to_mode): Likewise.
        * cse.c (fold_rtx): Likewise.
        * explow.c (adjust_stack, anti_adjust_stack): Likewise.
        * expr.c (emit_block_move_hints): Likewise.
        (clear_storage_hints, push_block, emit_push_insn): Likewise.
        (store_expr_with_bounds, reduce_to_bit_field_precision): Likewise.
        (emit_group_load_1): Use rtx_to_poly_int64 for group offsets.
        (emit_group_store): Likewise.
        (find_args_size_adjust): Use strip_offset.  Use rtx_to_poly_int64
        to read the PRE/POST_MODIFY increment.
        * calls.c (store_one_arg): Use strip_offset.
        * rtlanal.c (rtx_addr_can_trap_p_1): Extend CONST_INT handling to
        poly_int_rtx_p.
        (set_noop_p): Use rtx_to_poly_int64 for the elements selected
        by a VEC_SELECT.
        * simplify-rtx.c (avoid_constant_pool_reference): Use strip_offset.
        (simplify_binary_operation_1): Extend CONST_INT handling to
        poly_int_rtx_p.
        (simplify_plus_minus): Likewise.
        Remove use of neg_const_int and instead use HWI_COMPUTABLE_MODE_P
        and coeffs_in_range_p to test whether the negation gives a valid
        poly_int64.
        * var-tracking.c (compute_cfa_pointer): Take a poly_int64 rather
        than a HOST_WIDE_INT.
        (hard_frame_pointer_adjustment): Change from HOST_WIDE_INT to
        poly_int64.
        (adjust_mems, add_stores): Update accordingly.
        (vt_canonicalize_addr): Track polynomial offsets.
        (emit_note_insn_var_location): Likewise.
        (vt_add_function_parameter): Likewise.
        (vt_initialize): Likewise.

Index: gcc/poly-int.h
===================================================================
--- gcc/poly-int.h	2018-01-09 18:26:49.634702370 +0000
+++ gcc/poly-int.h	2018-01-09 18:26:49.870692974 +0000
@@ -2346,6 +2346,27 @@ can_div_trunc_p (const poly_int_pod
+template<unsigned int N, typename Ca, typename Cb, typename Cq>
+inline bool
+can_div_trunc_p (const poly_int_pod<N, Ca> &a,
+		 const poly_int_pod<N, Cb> &b,
+		 poly_int_pod<N, Cq> *quotient)
+{
+  if (b.is_constant ())
+    return can_div_trunc_p (a, b.coeffs[0], quotient);
+  if (!can_div_trunc_p (a, b, &quotient->coeffs[0]))
+    return false;
+  for (unsigned int i = 1; i < N; ++i)
+    quotient->coeffs[i] = 0;
+  return true;
+}
+
 /* Return true if there is some constant Q and polynomial r such that:

      (1) a = b * Q + r
Index: gcc/alias.c
===================================================================
--- gcc/alias.c	2018-01-09 18:26:49.634702370 +0000
+++ gcc/alias.c	2018-01-09 18:26:49.865693173 +0000
@@ -2247,9 +2247,10 @@ get_addr (rtx x)
 	  rtx op0 = get_addr (XEXP (x, 0));
 	  if (op0 != XEXP (x, 0))
 	    {
+	      poly_int64 c;
 	      if (GET_CODE (x) == PLUS
-		  && GET_CODE (XEXP (x, 1)) == CONST_INT)
-		return plus_constant (GET_MODE (x), op0, INTVAL (XEXP (x, 1)));
+		  && poly_int_rtx_p (XEXP (x, 1), &c))
+		return plus_constant (GET_MODE (x), op0, c);
 	      return simplify_gen_binary (GET_CODE (x), GET_MODE (x),
 					  op0, XEXP (x, 1));
 	    }
@@ -2536,10 +2537,11 @@ memrefs_conflict_p (poly_int64 xsize, rt
 	    return offset_overlap_p (c, xsize, ysize);

 	  /* Can't properly adjust our sizes.
 */
-	  if (!CONST_INT_P (x1)
-	      || !can_div_trunc_p (xsize, INTVAL (x1), &xsize)
-	      || !can_div_trunc_p (ysize, INTVAL (x1), &ysize)
-	      || !can_div_trunc_p (c, INTVAL (x1), &c))
+	  poly_int64 c1;
+	  if (!poly_int_rtx_p (x1, &c1)
+	      || !can_div_trunc_p (xsize, c1, &xsize)
+	      || !can_div_trunc_p (ysize, c1, &ysize)
+	      || !can_div_trunc_p (c, c1, &c))
 	    return -1;

 	  return memrefs_conflict_p (xsize, x0, ysize, y0, c);
 	}
@@ -3375,6 +3377,7 @@ init_alias_analysis (void)
 		  && DF_REG_DEF_COUNT (regno) != 1)
 		note = NULL_RTX;

+	      poly_int64 offset;
 	      if (note != NULL_RTX
 		  && GET_CODE (XEXP (note, 0)) != EXPR_LIST
 		  && ! rtx_varies_p (XEXP (note, 0), 1)
@@ -3389,10 +3392,9 @@ init_alias_analysis (void)
 		       && GET_CODE (src) == PLUS
 		       && REG_P (XEXP (src, 0))
 		       && (t = get_reg_known_value (REGNO (XEXP (src, 0))))
-		       && CONST_INT_P (XEXP (src, 1)))
+		       && poly_int_rtx_p (XEXP (src, 1), &offset))
 		{
-		  t = plus_constant (GET_MODE (src), t,
-				     INTVAL (XEXP (src, 1)));
+		  t = plus_constant (GET_MODE (src), t, offset);
 		  set_reg_known_value (regno, t);
 		  set_reg_known_equiv_p (regno, false);
 		}
Index: gcc/cfgexpand.c
===================================================================
--- gcc/cfgexpand.c	2018-01-09 18:26:49.634702370 +0000
+++ gcc/cfgexpand.c	2018-01-09 18:26:49.866693133 +0000
@@ -4430,10 +4430,11 @@ expand_debug_expr (tree exp)
 	    goto component_ref;

 	  op1 = expand_debug_expr (TREE_OPERAND (exp, 1));
-	  if (!op1 || !CONST_INT_P (op1))
+	  poly_int64 offset;
+	  if (!op1 || !poly_int_rtx_p (op1, &offset))
 	    return NULL;

-	  op0 = plus_constant (inner_mode, op0, INTVAL (op1));
+	  op0 = plus_constant (inner_mode, op0, offset);
 	}

       as = TYPE_ADDR_SPACE (TREE_TYPE (TREE_TYPE (TREE_OPERAND (exp, 0))));
@@ -4947,10 +4948,11 @@ expand_debug_expr (tree exp)
 	{
 	  op1 = expand_debug_expr (TREE_OPERAND (TREE_OPERAND (exp, 0), 1));
-	  if (!op1 || !CONST_INT_P (op1))
+	  poly_int64 offset;
+	  if (!op1 || !poly_int_rtx_p (op1, &offset))
 	    return NULL;

-	  return plus_constant (mode, op0, INTVAL (op1));
+	  return plus_constant (mode, op0, offset);
 	}
 }
Index: gcc/combine.c
===================================================================
--- gcc/combine.c	2018-01-09 18:26:49.634702370 +0000
+++ gcc/combine.c	2018-01-09 18:26:49.867693093 +0000
@@ -5963,8 +5963,11 @@ combine_simplify_rtx (rtx x, machine_mod
 					    GET_MODE_MASK (mode), 0));

       /* We can truncate a constant value and return it.  */
-      if (CONST_INT_P (XEXP (x, 0)))
-	return gen_int_mode (INTVAL (XEXP (x, 0)), mode);
+      {
+	poly_int64 c;
+	if (poly_int_rtx_p (XEXP (x, 0), &c))
+	  return gen_int_mode (c, mode);
+      }

       /* Similarly to what we do in simplify-rtx.c, a truncate of a register
 	 whose value is a comparison can be replaced with a subreg if
@@ -8685,6 +8688,7 @@ force_int_to_mode (rtx x, scalar_int_mod
   int next_select = just_select || code == XOR || code == NOT || code == NEG;
   unsigned HOST_WIDE_INT fuller_mask;
   rtx op0, op1, temp;
+  poly_int64 const_op0;

   /* When we have an arithmetic operation, or a shift whose count we
      do not know, we need to assume that all bits up to the highest-order
@@ -8808,8 +8812,8 @@ force_int_to_mode (rtx x, scalar_int_mod
     case MINUS:
       /* If X is (minus C Y) where C's least set bit is larger than any bit
 	 in the mask, then we may replace with (neg Y).  */
-      if (CONST_INT_P (XEXP (x, 0))
-	  && least_bit_hwi (UINTVAL (XEXP (x, 0))) > mask)
+      if (poly_int_rtx_p (XEXP (x, 0), &const_op0)
+	  && (unsigned HOST_WIDE_INT) known_alignment (const_op0) > mask)
 	{
 	  x = simplify_gen_unary (NEG, xmode, XEXP (x, 1), xmode);
 	  return force_to_mode (x, mode, mask, next_select);
Index: gcc/cse.c
===================================================================
--- gcc/cse.c	2018-01-09 18:26:49.634702370 +0000
+++ gcc/cse.c	2018-01-09 18:26:49.868693053 +0000
@@ -3112,6 +3112,7 @@ fold_rtx (rtx x, rtx_insn *insn)
   int i;
   rtx new_rtx = 0;
   int changed = 0;
+  poly_int64 xval;

   /* Operands of X.
 */

   /* Workaround -Wmaybe-uninitialized false positive during
@@ -3592,12 +3593,11 @@ fold_rtx (rtx x, rtx_insn *insn)
     case MINUS:
       /* If we have (MINUS Y C), see if Y is known to be (PLUS Z C2).
 	 If so, produce (PLUS Z C2-C).  */
-      if (const_arg1 != 0 && CONST_INT_P (const_arg1))
+      if (const_arg1 != 0 && poly_int_rtx_p (const_arg1, &xval))
 	{
 	  rtx y = lookup_as_function (XEXP (x, 0), PLUS);
-	  if (y && CONST_INT_P (XEXP (y, 1)))
-	    return fold_rtx (plus_constant (mode, copy_rtx (y),
-					    -INTVAL (const_arg1)),
+	  if (y && poly_int_rtx_p (XEXP (y, 1)))
+	    return fold_rtx (plus_constant (mode, copy_rtx (y), -xval),
 			     NULL);
 	}
Index: gcc/explow.c
===================================================================
--- gcc/explow.c	2018-01-09 18:26:49.634702370 +0000
+++ gcc/explow.c	2018-01-09 18:26:49.868693053 +0000
@@ -955,8 +955,9 @@ adjust_stack (rtx adjust)

   /* We expect all variable sized adjustments to be multiple of
      PREFERRED_STACK_BOUNDARY.  */
-  if (CONST_INT_P (adjust))
-    stack_pointer_delta -= INTVAL (adjust);
+  poly_int64 const_adjust;
+  if (poly_int_rtx_p (adjust, &const_adjust))
+    stack_pointer_delta -= const_adjust;

   adjust_stack_1 (adjust, false);
 }
@@ -972,8 +973,9 @@ anti_adjust_stack (rtx adjust)

   /* We expect all variable sized adjustments to be multiple of
      PREFERRED_STACK_BOUNDARY.  */
-  if (CONST_INT_P (adjust))
-    stack_pointer_delta += INTVAL (adjust);
+  poly_int64 const_adjust;
+  if (poly_int_rtx_p (adjust, &const_adjust))
+    stack_pointer_delta += const_adjust;

   adjust_stack_1 (adjust, true);
 }
Index: gcc/expr.c
===================================================================
--- gcc/expr.c	2018-01-09 18:26:49.634702370 +0000
+++ gcc/expr.c	2018-01-09 18:26:49.869693013 +0000
@@ -1607,12 +1607,13 @@ emit_block_move_hints (rtx x, rtx y, rtx

   /* Set MEM_SIZE as appropriate for this block copy.  The main place this
      can be incorrect is coming from __builtin_memcpy.
 */
-  if (CONST_INT_P (size))
+  poly_int64 const_size;
+  if (poly_int_rtx_p (size, &const_size))
     {
       x = shallow_copy_rtx (x);
       y = shallow_copy_rtx (y);
-      set_mem_size (x, INTVAL (size));
-      set_mem_size (y, INTVAL (size));
+      set_mem_size (x, const_size);
+      set_mem_size (y, const_size);
     }

   if (CONST_INT_P (size) && can_move_by_pieces (INTVAL (size), align))
@@ -2138,7 +2139,7 @@ emit_group_load_1 (rtx *tmps, rtx dst, r
   for (i = start; i < XVECLEN (dst, 0); i++)
     {
       machine_mode mode = GET_MODE (XEXP (XVECEXP (dst, 0, i), 0));
-      poly_int64 bytepos = INTVAL (XEXP (XVECEXP (dst, 0, i), 1));
+      poly_int64 bytepos = rtx_to_poly_int64 (XEXP (XVECEXP (dst, 0, i), 1));
       poly_int64 bytelen = GET_MODE_SIZE (mode);
       poly_int64 shift = 0;

@@ -2469,7 +2470,8 @@ emit_group_store (rtx orig_dst, rtx src,
 	{
 	  inner = GET_MODE (tmps[start]);
 	  bytepos = subreg_lowpart_offset (inner, outer);
-	  if (known_eq (INTVAL (XEXP (XVECEXP (src, 0, start), 1)), bytepos))
+	  if (known_eq (rtx_to_poly_int64 (XEXP (XVECEXP (src, 0, start), 1)),
+			bytepos))
 	    {
 	      temp = simplify_gen_subreg (outer, tmps[start], inner, 0);
@@ -2488,7 +2490,8 @@ emit_group_store (rtx orig_dst, rtx src,
 	{
 	  inner = GET_MODE (tmps[finish - 1]);
 	  bytepos = subreg_lowpart_offset (inner, outer);
-	  if (known_eq (INTVAL (XEXP (XVECEXP (src, 0, finish - 1), 1)),
+	  if (known_eq (rtx_to_poly_int64 (XEXP (XVECEXP (src, 0,
+							  finish - 1), 1)),
 			bytepos))
 	    {
 	      temp = simplify_gen_subreg (outer, tmps[finish - 1],
@@ -2510,7 +2513,7 @@ emit_group_store (rtx orig_dst, rtx src,
   /* Process the pieces.  */
   for (i = start; i < finish; i++)
     {
-      poly_int64 bytepos = INTVAL (XEXP (XVECEXP (src, 0, i), 1));
+      poly_int64 bytepos = rtx_to_poly_int64 (XEXP (XVECEXP (src, 0, i), 1));
       machine_mode mode = GET_MODE (tmps[i]);
       poly_int64 bytelen = GET_MODE_SIZE (mode);
       poly_uint64 adj_bytelen;
@@ -2968,9 +2971,10 @@ clear_storage_hints (rtx object, rtx siz
   /* If OBJECT is not BLKmode and SIZE is the same size as its mode,
      just move a zero.  Otherwise, do this a piece at a time.
 */
+  poly_int64 size_val;
   if (mode != BLKmode
-      && CONST_INT_P (size)
-      && known_eq (INTVAL (size), GET_MODE_SIZE (mode)))
+      && poly_int_rtx_p (size, &size_val)
+      && known_eq (size_val, GET_MODE_SIZE (mode)))
     {
       rtx zero = CONST0_RTX (mode);
       if (zero != NULL)
@@ -3906,9 +3910,10 @@ push_block (rtx size, poly_int64 extra,
     }
   else
     {
-      if (CONST_INT_P (size))
+      poly_int64 csize;
+      if (poly_int_rtx_p (size, &csize))
 	temp = plus_constant (Pmode, virtual_outgoing_args_rtx,
-			      -INTVAL (size) - (below ? 0 : extra));
+			      -csize - (below ? 0 : extra));
       else if (maybe_ne (extra, 0) && !below)
 	temp = gen_rtx_PLUS (Pmode, virtual_outgoing_args_rtx,
 			     negate_rtx (Pmode, plus_constant (Pmode, size,
@@ -4028,11 +4033,10 @@ find_args_size_adjust (rtx_insn *insn)
       /* Look for a trivial adjustment, otherwise assume nothing.  */
       /* Note that the SPU restore_stack_block pattern refers to
 	 the stack pointer in V4SImode.  Consider that non-trivial.  */
+      poly_int64 offset;
       if (SCALAR_INT_MODE_P (GET_MODE (dest))
-	  && GET_CODE (SET_SRC (set)) == PLUS
-	  && XEXP (SET_SRC (set), 0) == stack_pointer_rtx
-	  && CONST_INT_P (XEXP (SET_SRC (set), 1)))
-	return INTVAL (XEXP (SET_SRC (set), 1));
+	  && strip_offset (SET_SRC (set), &offset) == stack_pointer_rtx)
+	return offset;
       /* ??? Reload can generate no-op moves, which will be cleaned
 	 up later.  Recognize it and continue searching.  */
       else if (rtx_equal_p (dest, SET_SRC (set)))
@@ -4070,8 +4074,7 @@ find_args_size_adjust (rtx_insn *insn)
 	    addr = XEXP (addr, 1);
 	  gcc_assert (GET_CODE (addr) == PLUS);
 	  gcc_assert (XEXP (addr, 0) == stack_pointer_rtx);
-	  gcc_assert (CONST_INT_P (XEXP (addr, 1)));
-	  return INTVAL (XEXP (addr, 1));
+	  return rtx_to_poly_int64 (XEXP (addr, 1));
 	default:
 	  gcc_unreachable ();
 	}
@@ -4413,15 +4416,16 @@ emit_push_insn (rtx x, machine_mode mode
 	  /* Get the address of the stack space.
 	     In this case, we do not deal with EXTRA separately.
 	     A single stack adjust will do.  */
+	  poly_int64 offset;
 	  if (! args_addr)
 	    {
 	      temp = push_block (size, extra, where_pad == PAD_DOWNWARD);
 	      extra = 0;
 	    }
-	  else if (CONST_INT_P (args_so_far))
+	  else if (poly_int_rtx_p (args_so_far, &offset))
 	    temp = memory_address (BLKmode,
 				   plus_constant (Pmode, args_addr,
-						  skip + INTVAL (args_so_far)));
+						  skip + offset));
 	  else
 	    temp = memory_address (BLKmode,
 				   plus_constant (Pmode,
@@ -5761,12 +5765,11 @@ store_expr_with_bounds (tree exp, rtx ta

 	  /* Figure out how much is left in TARGET that we have to clear.
 	     Do all calculations in pointer_mode.  */
-	  if (CONST_INT_P (copy_size_rtx))
+	  poly_int64 const_copy_size;
+	  if (poly_int_rtx_p (copy_size_rtx, &const_copy_size))
 	    {
-	      size = plus_constant (address_mode, size,
-				    -INTVAL (copy_size_rtx));
-	      target = adjust_address (target, BLKmode,
-				       INTVAL (copy_size_rtx));
+	      size = plus_constant (address_mode, size, -const_copy_size);
+	      target = adjust_address (target, BLKmode, const_copy_size);
 	    }
 	  else
 	    {
@@ -11284,10 +11287,10 @@ reduce_to_bit_field_precision (rtx exp,
   if (target && GET_MODE (target) != GET_MODE (exp))
     target = 0;
   /* For constant values, reduce using build_int_cst_type.
 */
-  if (CONST_INT_P (exp))
+  poly_int64 const_exp;
+  if (poly_int_rtx_p (exp, &const_exp))
     {
-      HOST_WIDE_INT value = INTVAL (exp);
-      tree t = build_int_cst_type (type, value);
+      tree t = build_int_cst_type (type, const_exp);
       return expand_expr (t, target, VOIDmode, EXPAND_NORMAL);
     }
   else if (TYPE_UNSIGNED (type))
Index: gcc/calls.c
===================================================================
--- gcc/calls.c	2018-01-09 18:26:49.634702370 +0000
+++ gcc/calls.c	2018-01-09 18:26:49.866693133 +0000
@@ -5781,15 +5781,9 @@ store_one_arg (struct arg_data *arg, rtx
 	  rtx x = arg->value;
 	  poly_int64 i = 0;

-	  if (XEXP (x, 0) == crtl->args.internal_arg_pointer
-	      || (GET_CODE (XEXP (x, 0)) == PLUS
-		  && XEXP (XEXP (x, 0), 0) ==
-		     crtl->args.internal_arg_pointer
-		  && CONST_INT_P (XEXP (XEXP (x, 0), 1))))
+	  if (strip_offset (XEXP (x, 0), &i)
+	      == crtl->args.internal_arg_pointer)
 	    {
-	      if (XEXP (x, 0) != crtl->args.internal_arg_pointer)
-		i = rtx_to_poly_int64 (XEXP (XEXP (x, 0), 1));
-
 	      /* arg.locate doesn't contain the pretend_args_size offset,
 		 it's part of argblock.  Ensure we don't count it in I.  */
 	      if (STACK_GROWS_DOWNWARD)
Index: gcc/rtlanal.c
===================================================================
--- gcc/rtlanal.c	2018-01-09 18:26:49.634702370 +0000
+++ gcc/rtlanal.c	2018-01-09 18:26:49.870692974 +0000
@@ -462,6 +462,7 @@ rtx_addr_can_trap_p_1 (const_rtx x, poly
 {
   enum rtx_code code = GET_CODE (x);
   gcc_checking_assert (mode == BLKmode || known_size_p (size));
+  poly_int64 const_x1;

   /* The offset must be a multiple of the mode size if we are considering
      unaligned memory references on strict alignment machines.  */
@@ -653,8 +654,8 @@ rtx_addr_can_trap_p_1 (const_rtx x, poly
 	return 0;

       /* - or it is an address that can't trap plus a constant integer.
 */
-      if (CONST_INT_P (XEXP (x, 1))
-	  && !rtx_addr_can_trap_p_1 (XEXP (x, 0), offset + INTVAL (XEXP (x, 1)),
+      if (poly_int_rtx_p (XEXP (x, 1), &const_x1)
+	  && !rtx_addr_can_trap_p_1 (XEXP (x, 0), offset + const_x1,
 				     size, mode, unaligned_mems))
 	return 0;

@@ -1613,11 +1614,11 @@ set_noop_p (const_rtx set)
       int i;
       rtx par = XEXP (src, 1);
       rtx src0 = XEXP (src, 0);
-      int c0 = INTVAL (XVECEXP (par, 0, 0));
-      HOST_WIDE_INT offset = GET_MODE_UNIT_SIZE (GET_MODE (src0)) * c0;
+      poly_int64 c0 = rtx_to_poly_int64 (XVECEXP (par, 0, 0));
+      poly_int64 offset = GET_MODE_UNIT_SIZE (GET_MODE (src0)) * c0;

       for (i = 1; i < XVECLEN (par, 0); i++)
-	if (INTVAL (XVECEXP (par, 0, i)) != c0 + i)
+	if (maybe_ne (rtx_to_poly_int64 (XVECEXP (par, 0, i)), c0 + i))
 	  return 0;
       return
 	simplify_subreg_regno (REGNO (src0), GET_MODE (src0),
Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c	2018-01-09 18:26:49.634702370 +0000
+++ gcc/simplify-rtx.c	2018-01-09 18:26:49.871692934 +0000
@@ -210,7 +210,7 @@ avoid_constant_pool_reference (rtx x)
 {
   rtx c, tmp, addr;
   machine_mode cmode;
-  HOST_WIDE_INT offset = 0;
+  poly_int64 offset = 0;

   switch (GET_CODE (x))
     {
@@ -239,13 +239,7 @@ avoid_constant_pool_reference (rtx x)
   addr = targetm.delegitimize_address (addr);

   /* Split the address into a base and integer offset.  */
-  if (GET_CODE (addr) == CONST
-      && GET_CODE (XEXP (addr, 0)) == PLUS
-      && CONST_INT_P (XEXP (XEXP (addr, 0), 1)))
-    {
-      offset = INTVAL (XEXP (XEXP (addr, 0), 1));
-      addr = XEXP (XEXP (addr, 0), 0);
-    }
+  addr = strip_offset (addr, &offset);

   if (GET_CODE (addr) == LO_SUM)
     addr = XEXP (addr, 1);
@@ -261,7 +255,7 @@ avoid_constant_pool_reference (rtx x)
   /* If we're accessing the constant in a different mode than it was
      originally stored, attempt to fix that up via subreg simplifications.
      If that fails we have no choice but to return the original memory.
 */
-  if (offset == 0 && cmode == GET_MODE (x))
+  if (known_eq (offset, 0) && cmode == GET_MODE (x))
     return c;
   else if (known_in_range_p (offset, 0, GET_MODE_SIZE (cmode)))
     {
@@ -2270,13 +2264,13 @@ simplify_binary_operation_1 (enum rtx_co
       if ((GET_CODE (op0) == CONST
 	   || GET_CODE (op0) == SYMBOL_REF
 	   || GET_CODE (op0) == LABEL_REF)
-	  && CONST_INT_P (op1))
-	return plus_constant (mode, op0, INTVAL (op1));
+	  && poly_int_rtx_p (op1, &offset))
+	return plus_constant (mode, op0, offset);
       else if ((GET_CODE (op1) == CONST
 		|| GET_CODE (op1) == SYMBOL_REF
 		|| GET_CODE (op1) == LABEL_REF)
-	       && CONST_INT_P (op0))
-	return plus_constant (mode, op1, INTVAL (op0));
+	       && poly_int_rtx_p (op0, &offset))
+	return plus_constant (mode, op1, offset);

       /* See if this is something like X * C - X or vice versa or
 	 if the multiplication is written as a shift.  If so, we can
@@ -4462,6 +4456,7 @@ simplify_plus_minus (enum rtx_code code,
   int n_ops = 2;
   int changed, n_constants, canonicalized = 0;
   int i, j;
+  poly_int64 value;

   memset (ops, 0, sizeof ops);

@@ -4736,18 +4731,15 @@ simplify_plus_minus (enum rtx_code code,
      in the array and that any other constant will be next-to-last.  */

   if (n_ops > 1
-      && CONST_INT_P (ops[n_ops - 1].op)
+      && poly_int_rtx_p (ops[n_ops - 1].op, &value)
+      && (HWI_COMPUTABLE_MODE_P (mode)
+	  || coeffs_in_range_p (value, -HOST_WIDE_INT_MAX, HOST_WIDE_INT_MAX))
      && CONSTANT_P (ops[n_ops - 2].op))
    {
-      rtx value = ops[n_ops - 1].op;
       if (ops[n_ops - 1].neg ^ ops[n_ops - 2].neg)
-	value = neg_const_int (mode, value);
-      if (CONST_INT_P (value))
-	{
-	  ops[n_ops - 2].op = plus_constant (mode, ops[n_ops - 2].op,
-					     INTVAL (value));
-	  n_ops--;
-	}
+	value = trunc_int_for_mode (-value, mode);
+      ops[n_ops - 2].op = plus_constant (mode, ops[n_ops - 2].op, value);
+      n_ops--;
    }

   /* Put a non-negated operand first, if possible.
 */
Index: gcc/var-tracking.c
===================================================================
--- gcc/var-tracking.c	2018-01-09 18:26:49.634702370 +0000
+++ gcc/var-tracking.c	2018-01-09 18:26:49.872692894 +0000
@@ -917,14 +917,14 @@ vt_stack_adjustments (void)
    or hard_frame_pointer_rtx.  */

 static inline rtx
-compute_cfa_pointer (HOST_WIDE_INT adjustment)
+compute_cfa_pointer (poly_int64 adjustment)
 {
   return plus_constant (Pmode, cfa_base_rtx, adjustment + cfa_base_offset);
 }

 /* Adjustment for hard_frame_pointer_rtx to cfa base reg,
    or -1 if the replacement shouldn't be done.  */
-static HOST_WIDE_INT hard_frame_pointer_adjustment = -1;
+static poly_int64 hard_frame_pointer_adjustment = -1;

 /* Data for adjust_mems callback.  */

@@ -1030,7 +1030,7 @@ adjust_mems (rtx loc, const_rtx old_rtx,
 	return compute_cfa_pointer (amd->stack_adjust);
       else if (loc == hard_frame_pointer_rtx
 	       && frame_pointer_needed
-	       && hard_frame_pointer_adjustment != -1
+	       && maybe_ne (hard_frame_pointer_adjustment, -1)
 	       && cfa_base_rtx)
 	return compute_cfa_pointer (hard_frame_pointer_adjustment);
       gcc_checking_assert (loc != virtual_incoming_args_rtx);
@@ -2156,7 +2156,7 @@ get_addr_from_local_cache (dataflow_set
 static rtx
 vt_canonicalize_addr (dataflow_set *set, rtx oloc)
 {
-  HOST_WIDE_INT ofst = 0;
+  poly_int64 ofst = 0, term;
   machine_mode mode = GET_MODE (oloc);
   rtx loc = oloc;
   rtx x;
@@ -2165,9 +2165,9 @@ vt_canonicalize_addr (dataflow_set *set,
   while (retry)
     {
       while (GET_CODE (loc) == PLUS
-	     && GET_CODE (XEXP (loc, 1)) == CONST_INT)
+	     && poly_int_rtx_p (XEXP (loc, 1), &term))
 	{
-	  ofst += INTVAL (XEXP (loc, 1));
+	  ofst += term;
 	  loc = XEXP (loc, 0);
 	}

@@ -2192,10 +2192,11 @@ vt_canonicalize_addr (dataflow_set *set,
 	loc = get_addr_from_global_cache (loc);

       /* Consolidate plus_constants.
 */
-      while (ofst && GET_CODE (loc) == PLUS
-	     && GET_CODE (XEXP (loc, 1)) == CONST_INT)
+      while (maybe_ne (ofst, 0)
+	     && GET_CODE (loc) == PLUS
+	     && poly_int_rtx_p (XEXP (loc, 1), &term))
 	{
-	  ofst += INTVAL (XEXP (loc, 1));
+	  ofst += term;
 	  loc = XEXP (loc, 0);
 	}

@@ -2211,12 +2212,10 @@ vt_canonicalize_addr (dataflow_set *set,
     }

   /* Add OFST back in.  */
-  if (ofst)
+  if (maybe_ne (ofst, 0))
     {
       /* Don't build new RTL if we can help it.  */
-      if (GET_CODE (oloc) == PLUS
-	  && XEXP (oloc, 0) == loc
-	  && INTVAL (XEXP (oloc, 1)) == ofst)
+      if (strip_offset (oloc, &term) == loc && known_eq (term, ofst))
 	return oloc;

       loc = plus_constant (mode, loc, ofst);
@@ -6092,7 +6091,7 @@ add_stores (rtx loc, const_rtx expr, voi
     }

   if (loc == stack_pointer_rtx
-      && hard_frame_pointer_adjustment != -1
+      && maybe_ne (hard_frame_pointer_adjustment, -1)
      && preserve)
    cselib_set_value_sp_based (v);

@@ -8764,6 +8763,7 @@ emit_note_insn_var_location (variable **
 	  && GET_CODE (loc[n_var_parts]) == GET_CODE (loc2))
 	{
 	  rtx new_loc = NULL;
+	  poly_int64 offset;

 	  if (REG_P (loc[n_var_parts])
 	      && hard_regno_nregs (REGNO (loc[n_var_parts]), mode) * 2
@@ -8788,18 +8788,12 @@ emit_note_insn_var_location (variable **
 	  else if (MEM_P (loc[n_var_parts])
 		   && GET_CODE (XEXP (loc2, 0)) == PLUS
 		   && REG_P (XEXP (XEXP (loc2, 0), 0))
-		   && CONST_INT_P (XEXP (XEXP (loc2, 0), 1)))
+		   && poly_int_rtx_p (XEXP (XEXP (loc2, 0), 1), &offset))
 	    {
-	      if ((REG_P (XEXP (loc[n_var_parts], 0))
-		   && rtx_equal_p (XEXP (loc[n_var_parts], 0),
-				   XEXP (XEXP (loc2, 0), 0))
-		   && INTVAL (XEXP (XEXP (loc2, 0), 1)) == size)
-		  || (GET_CODE (XEXP (loc[n_var_parts], 0)) == PLUS
-		      && CONST_INT_P (XEXP (XEXP (loc[n_var_parts], 0), 1))
-		      && rtx_equal_p (XEXP (XEXP (loc[n_var_parts], 0), 0),
-				      XEXP (XEXP (loc2, 0), 0))
-		      && INTVAL (XEXP (XEXP (loc[n_var_parts], 0), 1)) + size
-			 == INTVAL (XEXP (XEXP (loc2, 0), 1))))
+	      poly_int64 offset2;
+	      rtx base2 = strip_offset (XEXP (loc[n_var_parts], 0), &offset2);
+	      if (rtx_equal_p (base2, XEXP (XEXP (loc2, 0), 0))
+		  && known_eq (offset2 + size, offset))
 		new_loc = adjust_address_nv (loc[n_var_parts],
 					     wider_mode, 0);
 	    }
@@ -9667,20 +9661,17 @@ vt_add_function_parameter (tree parm)
      rewrite the incoming location of parameters passed on the stack
      into MEMs based on the argument pointer, so that incoming doesn't
      depend on a pseudo.  */
+  poly_int64 offset2 = 0;
   if (MEM_P (incoming)
-      && (XEXP (incoming, 0) == crtl->args.internal_arg_pointer
-	  || (GET_CODE (XEXP (incoming, 0)) == PLUS
-	      && XEXP (XEXP (incoming, 0), 0)
-		 == crtl->args.internal_arg_pointer
-	      && CONST_INT_P (XEXP (XEXP (incoming, 0), 1)))))
+      && (strip_offset (XEXP (incoming, 0), &offset2)
+	  == crtl->args.internal_arg_pointer))
     {
       HOST_WIDE_INT off = -FIRST_PARM_OFFSET (current_function_decl);
-      if (GET_CODE (XEXP (incoming, 0)) == PLUS)
-	off += INTVAL (XEXP (XEXP (incoming, 0), 1));
       incoming
 	= replace_equiv_address_nv (incoming,
 				    plus_constant (Pmode,
-						   arg_pointer_rtx, off));
+						   arg_pointer_rtx,
+						   off + offset2));
     }

 #ifdef HAVE_window_save
@@ -9982,7 +9973,7 @@ reemit_marker_as_note (rtx_insn *insn)
 vt_initialize (void)
 {
   basic_block bb;
-  HOST_WIDE_INT fp_cfa_offset = -1;
+  poly_int64 fp_cfa_offset = -1;

   alloc_aux_for_blocks (sizeof (variable_tracking_info));

@@ -10097,7 +10088,7 @@ vt_initialize (void)
 	{
 	  if (GET_CODE (elim) == PLUS)
 	    {
-	      fp_cfa_offset -= INTVAL (XEXP (elim, 1));
+	      fp_cfa_offset -= rtx_to_poly_int64 (XEXP (elim, 1));
 	      elim = XEXP (elim, 0);
 	    }
 	  if (elim != hard_frame_pointer_rtx)
@@ -10230,8 +10221,8 @@ vt_initialize (void)
 	      VTI (bb)->out.stack_adjust += post;
 	    }

-	  if (fp_cfa_offset != -1
-	      && hard_frame_pointer_adjustment == -1
+	  if (maybe_ne (fp_cfa_offset, -1)
+	      && known_eq (hard_frame_pointer_adjustment, -1)
 	      && fp_setter_insn (insn))
 	    {
 	      vt_init_cfa_base ();
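[For readers unfamiliar with poly_int: the shape of the new can_div_trunc_p
overload can be illustrated with a deliberately simplified two-coefficient
model.  `poly2` below is a hypothetical stand-in, not GCC code, and its
constant-divisor rule (every indeterminate coefficient must divide exactly)
is stricter than the real poly-int.h semantics; the real overload also
handles non-constant divisors by trying for a constant quotient.]

```cpp
#include <array>
#include <cstdint>

// Toy stand-in for poly_int_pod<2, C>: value = coeffs[0] + coeffs[1] * X,
// where X is an unknown runtime quantity (e.g. a scalable vector length).
struct poly2
{
  std::array<int64_t, 2> coeffs;
  bool is_constant () const { return coeffs[1] == 0; }
};

// Simplified truncating division by a compile-time constant B.  It succeeds
// only when the indeterminate coefficient divides exactly, so the quotient
// is again a poly2 for every value of X.
static bool
can_div_trunc_p (const poly2 &a, int64_t b, poly2 *quotient)
{
  if (b == 0 || a.coeffs[1] % b != 0)
    return false;
  quotient->coeffs[0] = a.coeffs[0] / b;  // truncates toward zero
  quotient->coeffs[1] = a.coeffs[1] / b;
  return true;
}

// Shape of the overload the patch adds: a poly-by-poly division that defers
// to the constant-divisor case when the divisor happens to be constant.
// (The real function additionally tries a constant quotient when the
// divisor is non-constant, then zeroes the remaining coefficients.)
static bool
can_div_trunc_p (const poly2 &a, const poly2 &b, poly2 *quotient)
{
  if (b.is_constant ())
    return can_div_trunc_p (a, b.coeffs[0], quotient);
  return false;  // non-constant divisor path omitted in this sketch
}
```

For example, (10 + 4X) / 2 gives 5 + 2X, while (10 + 4X) / 3 fails because
4X/3 is not a poly2 for all X.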