From patchwork Mon Oct 23 17:42:27 2017
X-Patchwork-Submitter: Richard Sandiford
X-Patchwork-Id: 116849
From: Richard Sandiford
To: gcc-patches@gcc.gnu.org
Mail-Followup-To: gcc-patches@gcc.gnu.org, richard.sandiford@linaro.org
Subject: [104/nnn] poly_int: GET_MODE_PRECISION
References: <871sltvm7r.fsf@linaro.org>
Date: Mon, 23 Oct 2017 18:42:27 +0100
In-Reply-To: <871sltvm7r.fsf@linaro.org> (Richard Sandiford's message of "Mon, 23 Oct 2017 17:54:32 +0100")
Message-ID: <87bmkxdam4.fsf@linaro.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/25.2 (gnu/linux)
MIME-Version: 1.0

This patch changes GET_MODE_PRECISION from an unsigned short
to a poly_uint16.

2017-10-23  Richard Sandiford
	    Alan Hayward
	    David Sherwood

gcc/
	* machmode.h (mode_precision): Change from unsigned short to
	poly_uint16_pod.
	(mode_to_precision): Return a poly_uint16 rather than an unsigned
	short.
	(GET_MODE_PRECISION): Return a constant if ONLY_FIXED_SIZE_MODES,
	or if measurement_type is not polynomial.
	(HWI_COMPUTABLE_MODE_P): Turn into a function.  Optimize the case
	in which the mode is already known to be a scalar_int_mode.
	* genmodes.c (emit_mode_precision): Change the type of mode_precision
	from unsigned short to poly_uint16_pod.  Use ZERO_COEFFS for the
	initializer.
	* lto-streamer-in.c (lto_input_mode_table): Use bp_unpack_poly_value
	for GET_MODE_PRECISION.
	* lto-streamer-out.c (lto_write_mode_table): Use bp_pack_poly_value
	for GET_MODE_PRECISION.
	* combine.c (update_rsp_from_reg_equal): Treat GET_MODE_PRECISION
	as polynomial.
	(try_combine, find_split_point, combine_simplify_rtx): Likewise.
	(expand_field_assignment, make_extraction): Likewise.
	(make_compound_operation_int, record_dead_and_set_regs_1): Likewise.
	(get_last_value): Likewise.
	* convert.c (convert_to_integer_1): Likewise.
	* cse.c (cse_insn): Likewise.
	* expr.c (expand_expr_real_1): Likewise.
	* lra-constraints.c (simplify_operand_subreg): Likewise.
	* optabs-query.c (can_atomic_load_p): Likewise.
	* optabs.c (expand_atomic_load): Likewise.
	(expand_atomic_store): Likewise.
	* ree.c (combine_reaching_defs): Likewise.
	* rtl.h (partial_subreg_p, paradoxical_subreg_p): Likewise.
	* rtlanal.c (nonzero_bits1, lsb_bitfield_op_p): Likewise.
	* tree.h (type_has_mode_precision_p): Likewise.
	* ubsan.c (instrument_si_overflow): Likewise.

gcc/ada/
	* gcc-interface/misc.c (enumerate_modes): Treat GET_MODE_PRECISION
	as polynomial.
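For reference, the hypothetical helper below (an illustration only, not part of the patch) shows the three idioms the patch uses at call sites now that GET_MODE_PRECISION yields a poly_uint16 for a plain machine_mode: the conservative must_* form, the optimistic may_* form, and is_constant for code that genuinely needs a compile-time value.  The predicates themselves come from the poly_int infrastructure introduced earlier in the series.

/* Illustrative sketch only, not part of the patch.  example_uses is a
   made-up function; must_le, may_ne and is_constant are the poly_int
   interfaces used throughout this series.  */

static bool
example_uses (machine_mode mode1, machine_mode mode2)
{
  /* "Known to be no wider than a word": conservative must_ form.  */
  if (!must_le (GET_MODE_PRECISION (mode1), BITS_PER_WORD))
    return false;

  /* "Might differ": optimistic may_ form, used where a transformation
     is only valid when the precisions are equal.  */
  if (may_ne (GET_MODE_PRECISION (mode1), GET_MODE_PRECISION (mode2)))
    return false;

  /* Force a compile-time constant where the code really needs one.  */
  int prec;
  if (!GET_MODE_PRECISION (mode1).is_constant (&prec))
    return false;

  return prec != 0;
}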
Index: gcc/machmode.h
===================================================================
--- gcc/machmode.h	2017-10-23 17:25:48.620492005 +0100
+++ gcc/machmode.h	2017-10-23 17:25:54.180292158 +0100
@@ -23,7 +23,7 @@ #define HAVE_MACHINE_MODES
 typedef opt_mode<machine_mode> opt_machine_mode;
 
 extern CONST_MODE_SIZE unsigned short mode_size[NUM_MACHINE_MODES];
-extern const unsigned short mode_precision[NUM_MACHINE_MODES];
+extern const poly_uint16_pod mode_precision[NUM_MACHINE_MODES];
 extern const unsigned char mode_inner[NUM_MACHINE_MODES];
 extern const poly_uint16_pod mode_nunits[NUM_MACHINE_MODES];
 extern CONST_MODE_UNIT_SIZE unsigned char mode_unit_size[NUM_MACHINE_MODES];
@@ -535,7 +535,7 @@ mode_to_bits (machine_mode mode)
 
 /* Return the base GET_MODE_PRECISION value for MODE.  */
 
-ALWAYS_INLINE unsigned short
+ALWAYS_INLINE poly_uint16
 mode_to_precision (machine_mode mode)
 {
   return mode_precision[mode];
@@ -604,7 +604,30 @@ #define GET_MODE_BITSIZE(MODE) (mode_to_
 
 /* Get the number of value bits of an object of mode MODE.  */
 
-#define GET_MODE_PRECISION(MODE) (mode_to_precision (MODE))
+#if ONLY_FIXED_SIZE_MODES
+#define GET_MODE_PRECISION(MODE) \
+  ((unsigned short) mode_to_precision (MODE).coeffs[0])
+#else
+ALWAYS_INLINE poly_uint16
+GET_MODE_PRECISION (machine_mode mode)
+{
+  return mode_to_precision (mode);
+}
+
+template<typename T>
+ALWAYS_INLINE typename if_poly<typename T::measurement_type>::t
+GET_MODE_PRECISION (const T &mode)
+{
+  return mode_to_precision (mode);
+}
+
+template<typename T>
+ALWAYS_INLINE typename if_nonpoly<typename T::measurement_type>::t
+GET_MODE_PRECISION (const T &mode)
+{
+  return mode_to_precision (mode).coeffs[0];
+}
+#endif
 
 /* Get the number of integral bits of an object of mode MODE.  */
 extern CONST_MODE_IBIT unsigned char mode_ibit[NUM_MACHINE_MODES];
@@ -863,9 +886,22 @@ #define TRULY_NOOP_TRUNCATION_MODES_P(MO
   (targetm.truly_noop_truncation (GET_MODE_PRECISION (MODE1), \
				   GET_MODE_PRECISION (MODE2)))
 
-#define HWI_COMPUTABLE_MODE_P(MODE) \
-  (SCALAR_INT_MODE_P (MODE) \
-   && GET_MODE_PRECISION (MODE) <= HOST_BITS_PER_WIDE_INT)
+/* Return true if MODE is a scalar integer mode that fits in a
+   HOST_WIDE_INT.  */
+
+inline bool
+HWI_COMPUTABLE_MODE_P (machine_mode mode)
+{
+  machine_mode mme = mode;
+  return (SCALAR_INT_MODE_P (mme)
+	  && mode_to_precision (mme).coeffs[0] <= HOST_BITS_PER_WIDE_INT);
+}
+
+inline bool
+HWI_COMPUTABLE_MODE_P (scalar_int_mode mode)
+{
+  return GET_MODE_PRECISION (mode) <= HOST_BITS_PER_WIDE_INT;
+}
 
 struct int_n_data_t {
   /* These parts are initailized by genmodes output */
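To illustrate the new overloads above (again only an illustration, not part of the patch): callers that only have a machine_mode get a poly_uint16 back and go through the may_*/must_* predicates, while callers that have already committed to a scalar_int_mode keep getting an ordinary integer, just as the two HWI_COMPUTABLE_MODE_P overloads do.

/* Sketch only, not part of the patch.  narrower_than_word_p is a made-up
   helper that assumes the poly_int machinery from earlier in the series.  */

static bool
narrower_than_word_p (machine_mode mode)
{
  /* machine_mode may name a variable-sized mode, so the precision is a
     poly_uint16 and has to be compared with must_lt.  */
  return must_lt (GET_MODE_PRECISION (mode), BITS_PER_WORD);
}

static bool
narrower_than_word_p (scalar_int_mode mode)
{
  /* scalar_int_mode is always fixed-size (its measurement_type is not
     polynomial), so this overload of GET_MODE_PRECISION returns an
     ordinary unsigned short and the usual operators still work.  */
  return GET_MODE_PRECISION (mode) < BITS_PER_WORD;
}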
Index: gcc/genmodes.c
===================================================================
--- gcc/genmodes.c	2017-10-23 17:25:48.618492077 +0100
+++ gcc/genmodes.c	2017-10-23 17:25:54.178292230 +0100
@@ -1358,13 +1358,14 @@ emit_mode_precision (void)
   int c;
   struct mode_data *m;
 
-  print_decl ("unsigned short", "mode_precision", "NUM_MACHINE_MODES");
+  print_decl ("poly_uint16_pod", "mode_precision", "NUM_MACHINE_MODES");
 
   for_all_modes (c, m)
     if (m->precision != (unsigned int)-1)
-      tagged_printf ("%u", m->precision, m->name);
+      tagged_printf ("{ %u" ZERO_COEFFS " }", m->precision, m->name);
     else
-      tagged_printf ("%u*BITS_PER_UNIT", m->bytesize, m->name);
+      tagged_printf ("{ %u * BITS_PER_UNIT" ZERO_COEFFS " }",
+		     m->bytesize, m->name);
 
   print_closer ();
 }
Index: gcc/lto-streamer-in.c
===================================================================
--- gcc/lto-streamer-in.c	2017-10-23 17:25:48.619492041 +0100
+++ gcc/lto-streamer-in.c	2017-10-23 17:25:54.179292194 +0100
@@ -1605,7 +1605,7 @@ lto_input_mode_table (struct lto_file_de
       enum mode_class mclass
	 = bp_unpack_enum (&bp, mode_class, MAX_MODE_CLASS);
       unsigned int size = bp_unpack_value (&bp, 8);
-      unsigned int prec = bp_unpack_value (&bp, 16);
+      poly_uint16 prec = bp_unpack_poly_value (&bp, 16);
       machine_mode inner = (machine_mode) bp_unpack_value (&bp, 8);
       poly_uint16 nunits = bp_unpack_poly_value (&bp, 16);
       unsigned int ibit = 0, fbit = 0;
@@ -1639,7 +1639,7 @@ lto_input_mode_table (struct lto_file_de
	   : mr = GET_MODE_WIDER_MODE (mr).else_void ())
	if (GET_MODE_CLASS (mr) != mclass
	    || GET_MODE_SIZE (mr) != size
-	    || GET_MODE_PRECISION (mr) != prec
+	    || may_ne (GET_MODE_PRECISION (mr), prec)
	    || (inner == m
		? GET_MODE_INNER (mr) != mr
		: GET_MODE_INNER (mr) != table[(int) inner])
Index: gcc/lto-streamer-out.c
===================================================================
--- gcc/lto-streamer-out.c	2017-10-23 17:25:48.620492005 +0100
+++ gcc/lto-streamer-out.c	2017-10-23 17:25:54.180292158 +0100
@@ -2773,7 +2773,7 @@ lto_write_mode_table (void)
       bp_pack_value (&bp, m, 8);
       bp_pack_enum (&bp, mode_class, MAX_MODE_CLASS, GET_MODE_CLASS (m));
       bp_pack_value (&bp, GET_MODE_SIZE (m), 8);
-      bp_pack_value (&bp, GET_MODE_PRECISION (m), 16);
+      bp_pack_poly_value (&bp, GET_MODE_PRECISION (m), 16);
       bp_pack_value (&bp, GET_MODE_INNER (m), 8);
       bp_pack_poly_value (&bp, GET_MODE_NUNITS (m), 16);
       switch (GET_MODE_CLASS (m))
Index: gcc/combine.c
===================================================================
--- gcc/combine.c	2017-10-23 17:25:30.702136080 +0100
+++ gcc/combine.c	2017-10-23 17:25:54.176292301 +0100
@@ -1703,7 +1703,7 @@ update_rsp_from_reg_equal (reg_stat_type
   if (rsp->sign_bit_copies != 1)
     {
       num = num_sign_bit_copies (SET_SRC (set), GET_MODE (x));
-      if (reg_equal && num != GET_MODE_PRECISION (GET_MODE (x)))
+      if (reg_equal && may_ne (num, GET_MODE_PRECISION (GET_MODE (x))))
	{
	  unsigned int numeq = num_sign_bit_copies (reg_equal, GET_MODE (x));
	  if (num == 0 || numeq > num)
@@ -3938,16 +3938,20 @@ try_combine (rtx_insn *i3, rtx_insn *i2,
	      && ! (temp_expr = SET_DEST (XVECEXP (newpat, 0, 1)),
		    (REG_P (temp_expr)
		     && reg_stat[REGNO (temp_expr)].nonzero_bits != 0
-		     && GET_MODE_PRECISION (GET_MODE (temp_expr)) < BITS_PER_WORD
-		     && GET_MODE_PRECISION (GET_MODE (temp_expr)) < HOST_BITS_PER_INT
+		     && must_lt (GET_MODE_PRECISION (GET_MODE (temp_expr)),
+				 BITS_PER_WORD)
+		     && must_lt (GET_MODE_PRECISION (GET_MODE (temp_expr)),
+				 HOST_BITS_PER_INT)
		     && (reg_stat[REGNO (temp_expr)].nonzero_bits
			 != GET_MODE_MASK (word_mode))))
	      && ! (GET_CODE (SET_DEST (XVECEXP (newpat, 0, 1))) == SUBREG
		    && (temp_expr = SUBREG_REG (SET_DEST (XVECEXP (newpat, 0, 1))),
			(REG_P (temp_expr)
			 && reg_stat[REGNO (temp_expr)].nonzero_bits != 0
-			 && GET_MODE_PRECISION (GET_MODE (temp_expr)) < BITS_PER_WORD
-			 && GET_MODE_PRECISION (GET_MODE (temp_expr)) < HOST_BITS_PER_INT
+			 && must_lt (GET_MODE_PRECISION (GET_MODE (temp_expr)),
+				     BITS_PER_WORD)
+			 && must_lt (GET_MODE_PRECISION (GET_MODE (temp_expr)),
+				     HOST_BITS_PER_INT)
			 && (reg_stat[REGNO (temp_expr)].nonzero_bits
			     != GET_MODE_MASK (word_mode)))))
	      && ! reg_overlap_mentioned_p (SET_DEST (XVECEXP (newpat, 0, 1)),
@@ -5115,8 +5119,9 @@ find_split_point (rtx *loc, rtx_insn *in
	  break;
	}
 
-      if (len && pos >= 0
-	  && pos + len <= GET_MODE_PRECISION (GET_MODE (inner))
+      if (len
+	  && known_subrange_p (pos, len,
+			       0, GET_MODE_PRECISION (GET_MODE (inner)))
	  && is_a <scalar_int_mode> (GET_MODE (SET_SRC (x)), &mode))
	{
	  /* For unsigned, we have a choice of a shift followed by an
@@ -5982,8 +5987,9 @@ combine_simplify_rtx (rtx x, machine_mod
		  && (UINTVAL (XEXP (XEXP (XEXP (x, 0), 0), 1))
		      == (HOST_WIDE_INT_1U << (i + 1)) - 1))
	      || (GET_CODE (XEXP (XEXP (x, 0), 0)) == ZERO_EXTEND
-		  && (GET_MODE_PRECISION (GET_MODE (XEXP (XEXP (XEXP (x, 0), 0), 0)))
-		      == (unsigned int) i + 1))))
+		  && must_eq ((GET_MODE_PRECISION
+			       (GET_MODE (XEXP (XEXP (XEXP (x, 0), 0), 0)))),
+			      (unsigned int) i + 1))))
	return simplify_shift_const
	  (NULL_RTX, ASHIFTRT, int_mode,
	   simplify_shift_const (NULL_RTX, ASHIFT, int_mode,
@@ -7314,7 +7320,7 @@ expand_field_assignment (const_rtx x)
 {
   rtx inner;
   rtx pos;			/* Always counts from low bit.  */
-  int len;
+  int len, inner_len;
   rtx mask, cleared, masked;
   scalar_int_mode compute_mode;
 
@@ -7324,8 +7330,10 @@ expand_field_assignment (const_rtx x)
       if (GET_CODE (SET_DEST (x)) == STRICT_LOW_PART
	  && GET_CODE (XEXP (SET_DEST (x), 0)) == SUBREG)
	{
+	  rtx x0 = XEXP (SET_DEST (x), 0);
+	  if (!GET_MODE_PRECISION (GET_MODE (x0)).is_constant (&len))
+	    break;
	  inner = SUBREG_REG (XEXP (SET_DEST (x), 0));
-	  len = GET_MODE_PRECISION (GET_MODE (XEXP (SET_DEST (x), 0)));
	  pos = gen_int_mode (subreg_lsb (XEXP (SET_DEST (x), 0)),
			      MAX_MODE_INT);
	}
@@ -7333,33 +7341,30 @@ expand_field_assignment (const_rtx x)
	       && CONST_INT_P (XEXP (SET_DEST (x), 1)))
	{
	  inner = XEXP (SET_DEST (x), 0);
+	  if (!GET_MODE_PRECISION (GET_MODE (inner)).is_constant (&inner_len))
+	    break;
+
	  len = INTVAL (XEXP (SET_DEST (x), 1));
	  pos = XEXP (SET_DEST (x), 2);
 
	  /* A constant position should stay within the width of INNER.  */
-	  if (CONST_INT_P (pos)
-	      && INTVAL (pos) + len > GET_MODE_PRECISION (GET_MODE (inner)))
+	  if (CONST_INT_P (pos) && INTVAL (pos) + len > inner_len)
	    break;
 
	  if (BITS_BIG_ENDIAN)
	    {
	      if (CONST_INT_P (pos))
-		pos = GEN_INT (GET_MODE_PRECISION (GET_MODE (inner)) - len
-			       - INTVAL (pos));
+		pos = GEN_INT (inner_len - len - INTVAL (pos));
	      else if (GET_CODE (pos) == MINUS
		       && CONST_INT_P (XEXP (pos, 1))
-		       && (INTVAL (XEXP (pos, 1))
-			   == GET_MODE_PRECISION (GET_MODE (inner)) - len))
+		       && INTVAL (XEXP (pos, 1)) == inner_len - len)
		/* If position is ADJUST - X, new position is X.  */
		pos = XEXP (pos, 0);
	      else
-		{
-		  HOST_WIDE_INT prec = GET_MODE_PRECISION (GET_MODE (inner));
-		  pos = simplify_gen_binary (MINUS, GET_MODE (pos),
-					     gen_int_mode (prec - len,
-							   GET_MODE (pos)),
-					     pos);
-		}
+		pos = simplify_gen_binary (MINUS, GET_MODE (pos),
+					   gen_int_mode (inner_len - len,
+							 GET_MODE (pos)),
+					   pos);
	    }
	}
 
@@ -7479,7 +7484,7 @@ make_extraction (machine_mode mode, rtx
	     bits outside of is_mode, don't look through
	     non-paradoxical SUBREGs.  See PR82192.  */
	  || (pos_rtx == NULL_RTX
-	      && pos + len <= GET_MODE_PRECISION (is_mode))))
+	      && must_le (pos + len, GET_MODE_PRECISION (is_mode)))))
     {
       /* If going from (subreg:SI (mem:QI ...)) to (mem:QI ...),
	  consider just the QI as the memory to extract from.
@@ -7510,7 +7515,7 @@ make_extraction (machine_mode mode, rtx
	     bits outside of is_mode, don't look through
	     TRUNCATE.  See PR82192.  */
	  && pos_rtx == NULL_RTX
-	  && pos + len <= GET_MODE_PRECISION (is_mode))
+	  && must_le (pos + len, GET_MODE_PRECISION (is_mode)))
     inner = XEXP (inner, 0);
 
   inner_mode = GET_MODE (inner);
@@ -7557,11 +7562,12 @@ make_extraction (machine_mode mode, rtx
 
   if (MEM_P (inner))
     {
-      HOST_WIDE_INT offset;
+      poly_int64 offset;
 
       /* POS counts from lsb, but make OFFSET count in memory order.  */
       if (BYTES_BIG_ENDIAN)
-	offset = (GET_MODE_PRECISION (is_mode) - len - pos) / BITS_PER_UNIT;
+	offset = bits_to_bytes_round_down (GET_MODE_PRECISION (is_mode)
+					   - len - pos);
       else
	offset = pos / BITS_PER_UNIT;
 
@@ -7653,7 +7659,7 @@ make_extraction (machine_mode mode, rtx
     other cases, we would only be going outside our object in cases when
     an original shift would have been undefined.  */
   if (MEM_P (inner)
-      && ((pos_rtx == 0 && pos + len > GET_MODE_PRECISION (is_mode))
+      && ((pos_rtx == 0 && may_gt (pos + len, GET_MODE_PRECISION (is_mode)))
	  || (pos_rtx != 0 && len != 1)))
     return 0;
 
@@ -8132,8 +8138,10 @@ make_compound_operation_int (scalar_int_
	  sub = XEXP (XEXP (x, 0), 0);
	  machine_mode sub_mode = GET_MODE (sub);
+	  int sub_width;
	  if ((REG_P (sub) || MEM_P (sub))
-	      && GET_MODE_PRECISION (sub_mode) < mode_width)
+	      && GET_MODE_PRECISION (sub_mode).is_constant (&sub_width)
+	      && sub_width < mode_width)
	    {
	      unsigned HOST_WIDE_INT mode_mask = GET_MODE_MASK (sub_mode);
	      unsigned HOST_WIDE_INT mask;
 
@@ -8143,8 +8151,7 @@ make_compound_operation_int (scalar_int_
	      if ((mask & mode_mask) == mode_mask)
		{
		  new_rtx = make_compound_operation (sub, next_code);
-		  new_rtx = make_extraction (mode, new_rtx, 0, 0,
-					     GET_MODE_PRECISION (sub_mode),
+		  new_rtx = make_extraction (mode, new_rtx, 0, 0, sub_width,
					     1, 0, in_code == COMPARE);
		}
	    }
@@ -13215,7 +13222,7 @@ record_dead_and_set_regs_1 (rtx dest, co
   else if (GET_CODE (setter) == SET
	   && GET_CODE (SET_DEST (setter)) == SUBREG
	   && SUBREG_REG (SET_DEST (setter)) == dest
-	   && GET_MODE_PRECISION (GET_MODE (dest)) <= BITS_PER_WORD
+	   && must_le (GET_MODE_PRECISION (GET_MODE (dest)), BITS_PER_WORD)
	   && subreg_lowpart_p (SET_DEST (setter)))
     record_value_for_reg (dest, record_dead_insn,
			   gen_lowpart (GET_MODE (dest),
@@ -13617,8 +13624,8 @@ get_last_value (const_rtx x)
 
   /* If fewer bits were set than what we are asked for now, we cannot use
      the value.  */
-  if (GET_MODE_PRECISION (rsp->last_set_mode)
-      < GET_MODE_PRECISION (GET_MODE (x)))
+  if (may_lt (GET_MODE_PRECISION (rsp->last_set_mode),
+	      GET_MODE_PRECISION (GET_MODE (x))))
     return 0;
 
   /* If the value has all its registers valid, return it.  */
Index: gcc/convert.c
===================================================================
--- gcc/convert.c	2017-09-15 14:47:33.181331910 +0100
+++ gcc/convert.c	2017-10-23 17:25:54.176292301 +0100
@@ -731,7 +731,7 @@ convert_to_integer_1 (tree type, tree ex
	 type corresponding to its mode, then do a nop conversion
	 to TYPE.  */
       else if (TREE_CODE (type) == ENUMERAL_TYPE
-	       || outprec != GET_MODE_PRECISION (TYPE_MODE (type)))
+	       || may_ne (outprec, GET_MODE_PRECISION (TYPE_MODE (type))))
	{
	  expr = convert (lang_hooks.types.type_for_mode
			  (TYPE_MODE (type), TYPE_UNSIGNED (type)), expr);
Index: gcc/cse.c
===================================================================
--- gcc/cse.c	2017-10-23 17:16:50.359529762 +0100
+++ gcc/cse.c	2017-10-23 17:25:54.177292265 +0100
@@ -5231,8 +5231,9 @@ cse_insn (rtx_insn *insn)
	  && CONST_INT_P (XEXP (SET_DEST (sets[i].rtl), 1))
	  && CONST_INT_P (XEXP (SET_DEST (sets[i].rtl), 2))
	  && REG_P (XEXP (SET_DEST (sets[i].rtl), 0))
-	  && (GET_MODE_PRECISION (GET_MODE (SET_DEST (sets[i].rtl)))
-	      >= INTVAL (XEXP (SET_DEST (sets[i].rtl), 1)))
+	  && (must_ge
+	      (GET_MODE_PRECISION (GET_MODE (SET_DEST (sets[i].rtl))),
+	       INTVAL (XEXP (SET_DEST (sets[i].rtl), 1))))
	  && ((unsigned) INTVAL (XEXP (SET_DEST (sets[i].rtl), 1))
	      + (unsigned) INTVAL (XEXP (SET_DEST (sets[i].rtl), 2))
	      <= HOST_BITS_PER_WIDE_INT))
Index: gcc/expr.c
===================================================================
--- gcc/expr.c	2017-10-23 17:25:51.740379860 +0100
+++ gcc/expr.c	2017-10-23 17:25:54.178292230 +0100
@@ -11034,9 +11034,10 @@ expand_expr_real_1 (tree exp, rtx target
	;
       /* If neither mode is BLKmode, and both modes are the same size
	  then we can use gen_lowpart.  */
-      else if (mode != BLKmode && GET_MODE (op0) != BLKmode
-	       && (GET_MODE_PRECISION (mode)
-		   == GET_MODE_PRECISION (GET_MODE (op0)))
+      else if (mode != BLKmode
+	       && GET_MODE (op0) != BLKmode
+	       && must_eq (GET_MODE_PRECISION (mode),
+			   GET_MODE_PRECISION (GET_MODE (op0)))
	       && !COMPLEX_MODE_P (GET_MODE (op0)))
	{
	  if (GET_CODE (op0) == SUBREG)
Index: gcc/lra-constraints.c
===================================================================
--- gcc/lra-constraints.c	2017-10-23 17:25:42.597708494 +0100
+++ gcc/lra-constraints.c	2017-10-23 17:25:54.179292194 +0100
@@ -1555,7 +1555,8 @@ simplify_operand_subreg (int nop, machin
	 missing important data from memory when the inner is wider than
	 outer.  This rule only applies to modes that are no wider than
	 a word.  */
-      if (!(GET_MODE_PRECISION (mode) != GET_MODE_PRECISION (innermode)
+      if (!(may_ne (GET_MODE_PRECISION (mode),
+		    GET_MODE_PRECISION (innermode))
	    && GET_MODE_SIZE (mode) <= UNITS_PER_WORD
	    && GET_MODE_SIZE (innermode) <= UNITS_PER_WORD
	    && WORD_REGISTER_OPERATIONS)
Index: gcc/optabs-query.c
===================================================================
--- gcc/optabs-query.c	2017-10-23 17:25:48.620492005 +0100
+++ gcc/optabs-query.c	2017-10-23 17:25:54.180292158 +0100
@@ -592,7 +592,7 @@ can_atomic_load_p (machine_mode mode)
   /* If the size of the object is greater than word size on this target,
      then we assume that a load will not be atomic.  Also see
      expand_atomic_load.  */
-  return GET_MODE_PRECISION (mode) <= BITS_PER_WORD;
+  return must_le (GET_MODE_PRECISION (mode), BITS_PER_WORD);
 }
 
 /* Determine whether "1 << x" is relatively cheap in word_mode.  */
Index: gcc/optabs.c
===================================================================
--- gcc/optabs.c	2017-10-23 17:25:48.621491969 +0100
+++ gcc/optabs.c	2017-10-23 17:25:54.181292122 +0100
@@ -6416,7 +6416,7 @@ expand_atomic_load (rtx target, rtx mem,
     emulate a load with a compare-and-swap operation, but the store that
     doing this could result in would be incorrect if this is a volatile
     atomic load or targetting read-only-mapped memory.  */
-  if (GET_MODE_PRECISION (mode) > BITS_PER_WORD)
+  if (may_gt (GET_MODE_PRECISION (mode), BITS_PER_WORD))
     /* If there is no atomic load, leave the library call.  */
     return NULL_RTX;
 
@@ -6490,7 +6490,7 @@ expand_atomic_store (rtx mem, rtx val, e
 
   /* If the size of the object is greater than word size on this target,
      a default store will not be atomic.  */
-  if (GET_MODE_PRECISION (mode) > BITS_PER_WORD)
+  if (may_gt (GET_MODE_PRECISION (mode), BITS_PER_WORD))
     {
       /* If loads are atomic or we are called to provide a __sync builtin,
	  we can try a atomic_exchange and throw away the result.  Otherwise,
Index: gcc/ree.c
===================================================================
--- gcc/ree.c	2017-10-23 11:41:25.865934266 +0100
+++ gcc/ree.c	2017-10-23 17:25:54.181292122 +0100
@@ -860,9 +860,9 @@ combine_reaching_defs (ext_cand *cand, c
	 as destination register will not affect its reaching uses, which may
	 read its value in a larger mode because DEF_INSN implicitly sets it
	 in word mode.  */
-      const unsigned int prec
+      poly_int64 prec
	= GET_MODE_PRECISION (GET_MODE (SET_DEST (*dest_sub_rtx)));
-      if (WORD_REGISTER_OPERATIONS && prec < BITS_PER_WORD)
+      if (WORD_REGISTER_OPERATIONS && must_lt (prec, BITS_PER_WORD))
	{
	  struct df_link *uses = get_uses (def_insn, src_reg);
	  if (!uses)
Index: gcc/rtl.h
===================================================================
--- gcc/rtl.h	2017-10-23 17:18:57.862160702 +0100
+++ gcc/rtl.h	2017-10-23 17:25:54.182292086 +0100
@@ -3033,7 +3033,12 @@ extern poly_uint64 subreg_size_lowpart_o
 inline bool
 partial_subreg_p (machine_mode outermode, machine_mode innermode)
 {
-  return GET_MODE_PRECISION (outermode) < GET_MODE_PRECISION (innermode);
+  /* Modes involved in a subreg must be ordered.  In particular, we must
+     always know at compile time whether the subreg is paradoxical.  */
+  poly_int64 outer_prec = GET_MODE_PRECISION (outermode);
+  poly_int64 inner_prec = GET_MODE_PRECISION (innermode);
+  gcc_checking_assert (ordered_p (outer_prec, inner_prec));
+  return may_lt (outer_prec, inner_prec);
 }
 
 /* Likewise return true if X is a subreg that is smaller than the inner
@@ -3054,7 +3059,12 @@ partial_subreg_p (const_rtx x)
 inline bool
 paradoxical_subreg_p (machine_mode outermode, machine_mode innermode)
 {
-  return GET_MODE_PRECISION (outermode) > GET_MODE_PRECISION (innermode);
+  /* Modes involved in a subreg must be ordered.  In particular, we must
+     always know at compile time whether the subreg is paradoxical.  */
+  poly_int64 outer_prec = GET_MODE_PRECISION (outermode);
+  poly_int64 inner_prec = GET_MODE_PRECISION (innermode);
+  gcc_checking_assert (ordered_p (outer_prec, inner_prec));
+  return may_gt (outer_prec, inner_prec);
 }
 
 /* Return true if X is a paradoxical subreg, false otherwise.  */
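A usage note on the asserts above (illustration only, not part of the patch): with variable-sized modes two precisions need not be ordered at compile time (for example a fixed 64 bits against a hypothetical 32 + 32x bits), so partial_subreg_p and paradoxical_subreg_p insist on ordered_p and then answer with may_lt/may_gt.  Code that only wants a conservative answer can ask must_lt directly, as in this made-up helper:

/* Sketch only, not part of the patch: true only if OUTERMODE is known to be
   narrower than INNERMODE for all runtime values of the polynomial
   indeterminates.  */

static bool
definitely_narrower_p (machine_mode outermode, machine_mode innermode)
{
  return must_lt (GET_MODE_PRECISION (outermode),
		  GET_MODE_PRECISION (innermode));
}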
Index: gcc/rtlanal.c
===================================================================
--- gcc/rtlanal.c	2017-10-23 17:25:48.622491933 +0100
+++ gcc/rtlanal.c	2017-10-23 17:25:54.182292086 +0100
@@ -4431,6 +4431,7 @@ nonzero_bits1 (const_rtx x, scalar_int_m
   unsigned HOST_WIDE_INT inner_nz;
   enum rtx_code code;
   machine_mode inner_mode;
+  unsigned int inner_width;
   scalar_int_mode xmode;
 
   unsigned int mode_width = GET_MODE_PRECISION (mode);
@@ -4735,8 +4736,9 @@ nonzero_bits1 (const_rtx x, scalar_int_m
	 machines, we can compute this from which bits of the inner
	 object might be nonzero.  */
       inner_mode = GET_MODE (SUBREG_REG (x));
-      if (GET_MODE_PRECISION (inner_mode) <= BITS_PER_WORD
-	  && GET_MODE_PRECISION (inner_mode) <= HOST_BITS_PER_WIDE_INT)
+      if (GET_MODE_PRECISION (inner_mode).is_constant (&inner_width)
+	  && inner_width <= BITS_PER_WORD
+	  && inner_width <= HOST_BITS_PER_WIDE_INT)
	{
	  nonzero &= cached_nonzero_bits (SUBREG_REG (x), mode,
					  known_x, known_mode, known_ret);
@@ -4752,8 +4754,9 @@ nonzero_bits1 (const_rtx x, scalar_int_m
		   ? val_signbit_known_set_p (inner_mode, nonzero)
		   : extend_op != ZERO_EXTEND)
	       || (!MEM_P (SUBREG_REG (x)) && !REG_P (SUBREG_REG (x))))
-	      && xmode_width > GET_MODE_PRECISION (inner_mode))
-	    nonzero |= (GET_MODE_MASK (xmode) & ~GET_MODE_MASK (inner_mode));
+	      && xmode_width > inner_width)
+	    nonzero
+	      |= (GET_MODE_MASK (GET_MODE (x)) & ~GET_MODE_MASK (inner_mode));
	}
       break;
 
@@ -6068,8 +6071,9 @@ lsb_bitfield_op_p (rtx x)
       machine_mode mode = GET_MODE (XEXP (x, 0));
       HOST_WIDE_INT len = INTVAL (XEXP (x, 1));
       HOST_WIDE_INT pos = INTVAL (XEXP (x, 2));
+      poly_int64 remaining_bits = GET_MODE_PRECISION (mode) - len;
 
-      return (pos == (BITS_BIG_ENDIAN ? GET_MODE_PRECISION (mode) - len : 0));
+      return must_eq (pos, BITS_BIG_ENDIAN ? remaining_bits : 0);
     }
   return false;
 }
Index: gcc/tree.h
===================================================================
--- gcc/tree.h	2017-10-23 17:25:51.773378674 +0100
+++ gcc/tree.h	2017-10-23 17:25:54.183292050 +0100
@@ -5773,7 +5773,7 @@ struct builtin_structptr_type
 inline bool
 type_has_mode_precision_p (const_tree t)
 {
-  return TYPE_PRECISION (t) == GET_MODE_PRECISION (TYPE_MODE (t));
+  return must_eq (TYPE_PRECISION (t), GET_MODE_PRECISION (TYPE_MODE (t)));
 }
 
 #endif  /* GCC_TREE_H  */
Index: gcc/ubsan.c
===================================================================
--- gcc/ubsan.c	2017-10-23 17:18:47.669056745 +0100
+++ gcc/ubsan.c	2017-10-23 17:25:54.183292050 +0100
@@ -1583,7 +1583,8 @@ instrument_si_overflow (gimple_stmt_iter
     Also punt on bit-fields.  */
   if (!INTEGRAL_TYPE_P (lhsinner)
       || TYPE_OVERFLOW_WRAPS (lhsinner)
-      || GET_MODE_BITSIZE (TYPE_MODE (lhsinner)) != TYPE_PRECISION (lhsinner))
+      || may_ne (GET_MODE_BITSIZE (TYPE_MODE (lhsinner)),
+		 TYPE_PRECISION (lhsinner)))
     return;
 
   switch (code)
Index: gcc/ada/gcc-interface/misc.c
===================================================================
--- gcc/ada/gcc-interface/misc.c	2017-10-23 17:25:48.617492113 +0100
+++ gcc/ada/gcc-interface/misc.c	2017-10-23 17:25:54.174292373 +0100
@@ -1298,11 +1298,13 @@ enumerate_modes (void (*f) (const char *
	}
 
       /* If no predefined C types were found, register the mode itself.  */
-      int nunits;
-      if (!skip_p && GET_MODE_NUNITS (i).is_constant (&nunits))
+      int nunits, precision;
+      if (!skip_p
+	  && GET_MODE_NUNITS (i).is_constant (&nunits)
+	  && GET_MODE_PRECISION (i).is_constant (&precision))
	f (GET_MODE_NAME (i), digs, complex_p,
	   vector_p ? nunits : 0, float_rep,
-	   GET_MODE_PRECISION (i), GET_MODE_BITSIZE (i),
+	   precision, GET_MODE_BITSIZE (i),
	   GET_MODE_ALIGNMENT (i));
     }
 }