From: Richard Sandiford
To: gcc-patches@gcc.gnu.org
Subject: [051/nnn] poly_int: emit_group_load/store
Date: Mon, 23 Oct 2017 18:21:41 +0100
In-Reply-To: <871sltvm7r.fsf@linaro.org>
Message-ID: <87r2ttlqze.fsf@linaro.org>

This patch changes the sizes passed to emit_group_load and
emit_group_store from int to poly_int64.

2017-10-23  Richard Sandiford
	    Alan Hayward
	    David Sherwood

gcc/
	* expr.h (emit_group_load, emit_group_load_into_temps)
	(emit_group_store): Take the size as a poly_int64 rather than
	an int.
	* expr.c (emit_group_load_1, emit_group_load): Likewise.
	(emit_group_load_into_temps, emit_group_store): Likewise.

Index: gcc/expr.h
===================================================================
--- gcc/expr.h	2017-10-23 17:18:56.434286222 +0100
+++ gcc/expr.h	2017-10-23 17:20:49.571719793 +0100
@@ -128,10 +128,10 @@ extern rtx gen_group_rtx (rtx);
 
 /* Load a BLKmode value into non-consecutive registers represented
    by a PARALLEL.  */
-extern void emit_group_load (rtx, rtx, tree, int);
+extern void emit_group_load (rtx, rtx, tree, poly_int64);
 
 /* Similarly, but load into new temporaries.  */
-extern rtx emit_group_load_into_temps (rtx, rtx, tree, int);
+extern rtx emit_group_load_into_temps (rtx, rtx, tree, poly_int64);
 
 /* Move a non-consecutive group of registers represented by a PARALLEL into
    a non-consecutive group of registers represented by a PARALLEL.  */
@@ -142,7 +142,7 @@ extern rtx emit_group_move_into_temps (r
 
 /* Store a BLKmode value from non-consecutive registers represented
    by a PARALLEL.  */
-extern void emit_group_store (rtx, rtx, tree, int);
+extern void emit_group_store (rtx, rtx, tree, poly_int64);
 
 extern rtx maybe_emit_group_store (rtx, tree);
Index: gcc/expr.c
===================================================================
--- gcc/expr.c	2017-10-23 17:18:57.860160878 +0100
+++ gcc/expr.c	2017-10-23 17:20:49.571719793 +0100
@@ -2095,7 +2095,8 @@ gen_group_rtx (rtx orig)
    into corresponding XEXP (XVECEXP (DST, 0, i), 0) element.  */
 
 static void
-emit_group_load_1 (rtx *tmps, rtx dst, rtx orig_src, tree type, int ssize)
+emit_group_load_1 (rtx *tmps, rtx dst, rtx orig_src, tree type,
+		   poly_int64 ssize)
 {
   rtx src;
   int start, i;
@@ -2134,12 +2135,16 @@ emit_group_load_1 (rtx *tmps, rtx dst, r
   for (i = start; i < XVECLEN (dst, 0); i++)
     {
       machine_mode mode = GET_MODE (XEXP (XVECEXP (dst, 0, i), 0));
-      HOST_WIDE_INT bytepos = INTVAL (XEXP (XVECEXP (dst, 0, i), 1));
-      unsigned int bytelen = GET_MODE_SIZE (mode);
-      int shift = 0;
-
-      /* Handle trailing fragments that run over the size of the struct.  */
-      if (ssize >= 0 && bytepos + (HOST_WIDE_INT) bytelen > ssize)
+      poly_int64 bytepos = INTVAL (XEXP (XVECEXP (dst, 0, i), 1));
+      poly_int64 bytelen = GET_MODE_SIZE (mode);
+      poly_int64 shift = 0;
+
+      /* Handle trailing fragments that run over the size of the struct.
+	 It's the target's responsibility to make sure that the fragment
+	 cannot be strictly smaller in some cases and strictly larger
+	 in others.  */
+      gcc_checking_assert (ordered_p (bytepos + bytelen, ssize));
+      if (known_size_p (ssize) && may_gt (bytepos + bytelen, ssize))
 	{
 	  /* Arrange to shift the fragment to where it belongs.
 	     extract_bit_field loads to the lsb of the reg.  */
@@ -2153,7 +2158,7 @@ emit_group_load_1 (rtx *tmps, rtx dst, r
 	      )
 	    shift = (bytelen - (ssize - bytepos)) * BITS_PER_UNIT;
 	  bytelen = ssize - bytepos;
-	  gcc_assert (bytelen > 0);
+	  gcc_assert (may_gt (bytelen, 0));
 	}
 
       /* If we won't be loading directly from memory, protect the real source
@@ -2177,33 +2182,34 @@ emit_group_load_1 (rtx *tmps, rtx dst, r
       if (MEM_P (src)
	  && (! targetm.slow_unaligned_access (mode, MEM_ALIGN (src))
	      || MEM_ALIGN (src) >= GET_MODE_ALIGNMENT (mode))
-	  && bytepos * BITS_PER_UNIT % GET_MODE_ALIGNMENT (mode) == 0
-	  && bytelen == GET_MODE_SIZE (mode))
+	  && multiple_p (bytepos * BITS_PER_UNIT, GET_MODE_ALIGNMENT (mode))
+	  && must_eq (bytelen, GET_MODE_SIZE (mode)))
	{
	  tmps[i] = gen_reg_rtx (mode);
	  emit_move_insn (tmps[i], adjust_address (src, mode, bytepos));
	}
       else if (COMPLEX_MODE_P (mode)
	       && GET_MODE (src) == mode
-	       && bytelen == GET_MODE_SIZE (mode))
+	       && must_eq (bytelen, GET_MODE_SIZE (mode)))
	/* Let emit_move_complex do the bulk of the work.  */
	tmps[i] = src;
       else if (GET_CODE (src) == CONCAT)
	{
-	  unsigned int slen = GET_MODE_SIZE (GET_MODE (src));
-	  unsigned int slen0 = GET_MODE_SIZE (GET_MODE (XEXP (src, 0)));
-	  unsigned int elt = bytepos / slen0;
-	  unsigned int subpos = bytepos % slen0;
+	  poly_int64 slen = GET_MODE_SIZE (GET_MODE (src));
+	  poly_int64 slen0 = GET_MODE_SIZE (GET_MODE (XEXP (src, 0)));
+	  unsigned int elt;
+	  poly_int64 subpos;
 
-	  if (subpos + bytelen <= slen0)
+	  if (can_div_trunc_p (bytepos, slen0, &elt, &subpos)
+	      && must_le (subpos + bytelen, slen0))
	    {
	      /* The following assumes that the concatenated objects all
		 have the same size.  In this case, a simple calculation
		 can be used to determine the object and the bit field
		 to be extracted.  */
	      tmps[i] = XEXP (src, elt);
-	      if (subpos != 0
-		  || subpos + bytelen != slen0
+	      if (maybe_nonzero (subpos)
+		  || may_ne (subpos + bytelen, slen0)
		  || (!CONSTANT_P (tmps[i])
		      && (!REG_P (tmps[i]) || GET_MODE (tmps[i]) != mode)))
		tmps[i] = extract_bit_field (tmps[i], bytelen * BITS_PER_UNIT,
@@ -2215,7 +2221,7 @@ emit_group_load_1 (rtx *tmps, rtx dst, r
	    {
	      rtx mem;
 
-	      gcc_assert (!bytepos);
+	      gcc_assert (known_zero (bytepos));
	      mem = assign_stack_temp (GET_MODE (src), slen);
	      emit_move_insn (mem, src);
	      tmps[i] = extract_bit_field (mem, bytelen * BITS_PER_UNIT,
@@ -2234,23 +2240,21 @@ emit_group_load_1 (rtx *tmps, rtx dst, r
 
	  mem = assign_stack_temp (GET_MODE (src), slen);
	  emit_move_insn (mem, src);
-	  tmps[i] = adjust_address (mem, mode, (int) bytepos);
+	  tmps[i] = adjust_address (mem, mode, bytepos);
	}
       else if (CONSTANT_P (src) && GET_MODE (dst) != BLKmode
	       && XVECLEN (dst, 0) > 1)
	tmps[i] = simplify_gen_subreg (mode, src, GET_MODE (dst), bytepos);
       else if (CONSTANT_P (src))
	{
-	  HOST_WIDE_INT len = (HOST_WIDE_INT) bytelen;
-
-	  if (len == ssize)
+	  if (must_eq (bytelen, ssize))
	    tmps[i] = src;
	  else
	    {
	      rtx first, second;
 
	      /* TODO: const_wide_int can have sizes other than this...  */
-	      gcc_assert (2 * len == ssize);
+	      gcc_assert (must_eq (2 * bytelen, ssize));
	      split_double (src, &first, &second);
	      if (i)
		tmps[i] = second;
@@ -2265,7 +2269,7 @@ emit_group_load_1 (rtx *tmps, rtx dst, r
					     bytepos * BITS_PER_UNIT, 1, NULL_RTX,
					     mode, mode, false, NULL);
 
-      if (shift)
+      if (maybe_nonzero (shift))
	tmps[i] = expand_shift (LSHIFT_EXPR, mode, tmps[i], shift,
				tmps[i], 0);
     }
@@ -2277,7 +2281,7 @@ emit_group_load_1 (rtx *tmps, rtx dst, r
    if not known.  */
 
 void
-emit_group_load (rtx dst, rtx src, tree type, int ssize)
+emit_group_load (rtx dst, rtx src, tree type, poly_int64 ssize)
 {
   rtx *tmps;
   int i;
@@ -2300,7 +2304,7 @@ emit_group_load (rtx dst, rtx src, tree
    in the right place.  */
 
 rtx
-emit_group_load_into_temps (rtx parallel, rtx src, tree type, int ssize)
+emit_group_load_into_temps (rtx parallel, rtx src, tree type, poly_int64 ssize)
 {
   rtvec vec;
   int i;
@@ -2371,7 +2375,8 @@ emit_group_move_into_temps (rtx src)
    known.  */
 
 void
-emit_group_store (rtx orig_dst, rtx src, tree type ATTRIBUTE_UNUSED, int ssize)
+emit_group_store (rtx orig_dst, rtx src, tree type ATTRIBUTE_UNUSED,
+		  poly_int64 ssize)
 {
   rtx *tmps, dst;
   int start, finish, i;
@@ -2502,24 +2507,28 @@ emit_group_store (rtx orig_dst, rtx src,
   /* Process the pieces.  */
   for (i = start; i < finish; i++)
     {
-      HOST_WIDE_INT bytepos = INTVAL (XEXP (XVECEXP (src, 0, i), 1));
+      poly_int64 bytepos = INTVAL (XEXP (XVECEXP (src, 0, i), 1));
       machine_mode mode = GET_MODE (tmps[i]);
-      unsigned int bytelen = GET_MODE_SIZE (mode);
-      unsigned int adj_bytelen;
+      poly_int64 bytelen = GET_MODE_SIZE (mode);
+      poly_uint64 adj_bytelen;
       rtx dest = dst;
 
-      /* Handle trailing fragments that run over the size of the struct.  */
-      if (ssize >= 0 && bytepos + (HOST_WIDE_INT) bytelen > ssize)
+      /* Handle trailing fragments that run over the size of the struct.
+	 It's the target's responsibility to make sure that the fragment
+	 cannot be strictly smaller in some cases and strictly larger
+	 in others.  */
+      gcc_checking_assert (ordered_p (bytepos + bytelen, ssize));
+      if (known_size_p (ssize) && may_gt (bytepos + bytelen, ssize))
	adj_bytelen = ssize - bytepos;
       else
	adj_bytelen = bytelen;
 
       if (GET_CODE (dst) == CONCAT)
	{
-	  if (bytepos + adj_bytelen
-	      <= GET_MODE_SIZE (GET_MODE (XEXP (dst, 0))))
+	  if (must_le (bytepos + adj_bytelen,
+		       GET_MODE_SIZE (GET_MODE (XEXP (dst, 0)))))
	    dest = XEXP (dst, 0);
-	  else if (bytepos >= GET_MODE_SIZE (GET_MODE (XEXP (dst, 0))))
+	  else if (must_ge (bytepos, GET_MODE_SIZE (GET_MODE (XEXP (dst, 0)))))
	    {
	      bytepos -= GET_MODE_SIZE (GET_MODE (XEXP (dst, 0)));
	      dest = XEXP (dst, 1);
@@ -2529,7 +2538,7 @@ emit_group_store (rtx orig_dst, rtx src,
	      machine_mode dest_mode = GET_MODE (dest);
	      machine_mode tmp_mode = GET_MODE (tmps[i]);
 
-	      gcc_assert (bytepos == 0 && XVECLEN (src, 0));
+	      gcc_assert (known_zero (bytepos) && XVECLEN (src, 0));
 
	      if (GET_MODE_ALIGNMENT (dest_mode)
		  >= GET_MODE_ALIGNMENT (tmp_mode))
@@ -2554,7 +2563,7 @@ emit_group_store (rtx orig_dst, rtx src,
	}
 
       /* Handle trailing fragments that run over the size of the struct.  */
-      if (ssize >= 0 && bytepos + (HOST_WIDE_INT) bytelen > ssize)
+      if (known_size_p (ssize) && may_gt (bytepos + bytelen, ssize))
	{
	  /* store_bit_field always takes its value from the lsb.
	     Move the fragment to the lsb if it's not already there.  */
@@ -2567,7 +2576,7 @@ emit_group_store (rtx orig_dst, rtx src,
 #endif
	      )
	    {
-	      int shift = (bytelen - (ssize - bytepos)) * BITS_PER_UNIT;
+	      poly_int64 shift = (bytelen - (ssize - bytepos)) * BITS_PER_UNIT;
	      tmps[i] = expand_shift (RSHIFT_EXPR, mode, tmps[i], shift,
				      tmps[i], 0);
	    }
@@ -2583,8 +2592,9 @@ emit_group_store (rtx orig_dst, rtx src,
       else if (MEM_P (dest)
	       && (!targetm.slow_unaligned_access (mode, MEM_ALIGN (dest))
		   || MEM_ALIGN (dest) >= GET_MODE_ALIGNMENT (mode))
-	       && bytepos * BITS_PER_UNIT % GET_MODE_ALIGNMENT (mode) == 0
-	       && bytelen == GET_MODE_SIZE (mode))
+	       && multiple_p (bytepos * BITS_PER_UNIT,
+			      GET_MODE_ALIGNMENT (mode))
+	       && must_eq (bytelen, GET_MODE_SIZE (mode)))
	emit_move_insn (adjust_address (dest, mode, bytepos), tmps[i]);
       else
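
For readers unfamiliar with the poly_int interface, here is a minimal
standalone sketch of why the plain ==/>/>= tests above become
must_eq/may_gt/known_size_p style queries.  It is not taken from the patch
and is much simpler than GCC's real poly_int64: the type and helper names
below (poly_size, must_gt, may_gt, N) are illustrative stand-ins.  The point
is that a size of the form A + B*N, where N is only known at run time, is
only partially ordered, so each comparison has to say whether it needs the
relation to hold for every N or merely for some N.

/* Standalone illustration only -- not GCC's poly_int implementation.
   A "size" here is A + B * N, where N is a runtime-only parameter
   (think of the number of vector chunks beyond the minimum, N >= 0).  */

#include <cassert>
#include <cstdint>

struct poly_size
{
  int64_t a;	/* constant term */
  int64_t b;	/* coefficient of the runtime parameter N */
};

/* True if X > Y for every permitted value of N (N >= 0).  */
static bool
must_gt (poly_size x, poly_size y)
{
  return x.a > y.a && x.b >= y.b;
}

/* True if X > Y for at least one permitted value of N.  */
static bool
may_gt (poly_size x, poly_size y)
{
  return x.a > y.a || x.b > y.b;
}

int
main ()
{
  poly_size frag = { 16, 16 };	/* bytepos + bytelen: 16 + 16N bytes */
  poly_size size = { 0, 32 };	/* ssize: 32N bytes */

  /* For N == 0 the fragment runs past the struct, while for N >= 1 it
     does not, so only the "may" form of the comparison holds.  */
  assert (may_gt (frag, size));
  assert (!must_gt (frag, size));
  return 0;
}

Read this way, the new gcc_checking_assert (ordered_p (bytepos + bytelen,
ssize)) calls in the patch simply enforce the comment above them: targets
must not hand emit_group_load/emit_group_store a fragment whose relation to
the total size flips between runtime vector lengths.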