From patchwork Thu Jun 22 11:30:50 2017
X-Patchwork-Submitter: Richard Sandiford
X-Patchwork-Id: 106200
From: Richard Sandiford
To: gcc-patches@gcc.gnu.org
Subject: PR81136: ICE from inconsistent DR_MISALIGNMENTs
Date: Thu, 22 Jun 2017 12:30:50 +0100
Message-ID: <87k244z2c5.fsf@linaro.org>

The test case triggered this assert in vect_update_misalignment_for_peel:

      gcc_assert (DR_MISALIGNMENT (dr) / dr_size ==
                  DR_MISALIGNMENT (dr_peel) / dr_peel_size);

We knew that the two DRs had the same misalignment at runtime, but when
considered in isolation, one data reference guaranteed a higher
compile-time base alignment than the other.

In the test case this looks like a missed opportunity.  Both references
are unconditional, so it should be possible to use the highest of the
available base alignment guarantees when analyzing each reference.
The patch does this.

However, as the comment in the patch says, the base alignment guarantees
provided by a conditional reference only apply if the reference occurs
at least once.  In this case it would be legitimate for two references
to have the same runtime misalignment and for one reference to provide a
stronger compile-time guarantee than the other about what the misalignment
actually is.  The patch therefore relaxes the assert to handle that case.
(A compilable version of the example from that comment is included after
the ChangeLog below.)

Tested on powerpc64-linux-gnu, aarch64-linux-gnu and x86_64-linux-gnu.
OK to install?

Richard


2017-06-22  Richard Sandiford

gcc/
	PR tree-optimization/81136
	* tree-vectorizer.h: Include tree-hash-traits.h.
	(vec_base_alignments): New typedef.
	(vec_info): Add a base_alignments field.
	(vect_compute_base_alignments): Declare.
	* tree-data-ref.h (data_reference): Add an is_conditional field.
	(DR_IS_CONDITIONAL): New macro.
	(create_data_ref): Add an is_conditional argument.
	* tree-data-ref.c (create_data_ref): Likewise.  Use it to initialize
	the is_conditional field.
	(data_ref_loc): Add an is_conditional field.
	(get_references_in_stmt): Set the is_conditional field.
	(find_data_references_in_stmt): Update call to create_data_ref.
	(graphite_find_data_references_in_stmt): Likewise.
	* tree-ssa-loop-prefetch.c (determine_loop_nest_reuse): Likewise.
	* tree-vect-data-refs.c (vect_analyze_data_refs): Likewise.
	(vect_get_base_address): New function.
	(vect_compute_base_alignments): Likewise.
	(vect_compute_base_alignment): Likewise, split out from...
	(vect_compute_data_ref_alignment): ...here.  Use precomputed
	base alignments.  Only compute a new base alignment here if the
	reference is conditional.
	(vect_update_misalignment_for_peel): Allow the compile-time
	DR_MISALIGNMENTs of two references with the same runtime alignment
	to be different if one of the references is conditional.
	(vect_find_same_alignment_drs): Compare base addresses instead
	of base objects.
	(vect_analyze_data_refs_alignment): Call vect_compute_base_alignments.
	* tree-vect-slp.c (vect_slp_analyze_bb_1): Likewise.
	(new_bb_vec_info): Initialize base_alignments.
	* tree-vect-loop.c (new_loop_vec_info): Likewise.
	* tree-vectorizer.c (vect_destroy_datarefs): Release base_alignments.

gcc/testsuite/
	PR tree-optimization/81136
	* gcc.dg/vect/pr81136.c: New test.
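
For reference, the example from the new comment in
vect_update_misalignment_for_peel can be written out as a small
self-contained function.  This is only an illustration of the
conditional/unconditional distinction, not an additional test case,
and the function and parameter names here are made up:

#define N 100

/* Accesses through "struct s" promise a 32-byte-aligned base.  */
struct __attribute__((aligned (32))) s {
  int misaligner;
  int array[N];
};

void
f (int *ptr, int *c, int n)
{
  for (int i = 0; i < n; ++i)
    /* The read through "struct s" only happens when c[i] is nonzero,
       so its stronger base alignment guarantee holds only if the read
       actually occurs.  The write to ptr[i] is unconditional, so its
       guarantee can always be pooled.  */
    ptr[i] = c[i] ? ((struct s *) (ptr - 1))->array[i] : 0;
}
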
Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2017-06-08 08:51:43.347264181 +0100
+++ gcc/tree-vectorizer.h	2017-06-22 12:23:21.288421018 +0100
@@ -22,6 +22,7 @@ Software Foundation; either version 3, o
 #define GCC_TREE_VECTORIZER_H
 
 #include "tree-data-ref.h"
+#include "tree-hash-traits.h"
 #include "target.h"
 
 /* Used for naming of new temporaries.  */
@@ -84,6 +85,10 @@ struct stmt_info_for_cost {
 
 typedef vec<stmt_info_for_cost> stmt_vector_for_cost;
 
+/* Maps base addresses to the largest alignment that we've been able
+   to calculate for them.  */
+typedef hash_map<tree_operand_hash, unsigned int> vec_base_alignments;
+
 /************************************************************************
   SLP
  ************************************************************************/
@@ -156,6 +161,10 @@ struct vec_info {
   /* All data references.  */
   vec<data_reference_p> datarefs;
 
+  /* Maps the base addresses of all data references in DATAREFS to the
+     largest alignment that we've been able to calculate for them.  */
+  vec_base_alignments base_alignments;
+
   /* All data dependences.  */
   vec<ddr_p> ddrs;
 
@@ -1117,6 +1126,7 @@ extern bool vect_prune_runtime_alias_tes
 extern bool vect_check_gather_scatter (gimple *, loop_vec_info,
                                        gather_scatter_info *);
 extern bool vect_analyze_data_refs (vec_info *, int *);
+extern void vect_compute_base_alignments (vec_info *);
 extern tree vect_create_data_ref_ptr (gimple *, tree, struct loop *, tree,
                                       tree *, gimple_stmt_iterator *,
                                       gimple **, bool, bool *,
Index: gcc/tree-data-ref.h
===================================================================
--- gcc/tree-data-ref.h	2017-06-08 08:51:43.349263895 +0100
+++ gcc/tree-data-ref.h	2017-06-22 12:23:21.285421180 +0100
@@ -119,6 +119,10 @@ struct data_reference
   /* True when the data reference is in RHS of a stmt.  */
   bool is_read;
 
+  /* True when the data reference is conditional, i.e. if it might not
+     occur even when the statement runs to completion.  */
+  bool is_conditional;
+
   /* Behavior of the memory reference in the innermost loop.  */
   struct innermost_loop_behavior innermost;
 
@@ -138,6 +142,7 @@ #define DR_ACCESS_FN(DR, I) DR_AC
 #define DR_NUM_DIMENSIONS(DR)      DR_ACCESS_FNS (DR).length ()
 #define DR_IS_READ(DR)             (DR)->is_read
 #define DR_IS_WRITE(DR)            (!DR_IS_READ (DR))
+#define DR_IS_CONDITIONAL(DR)      (DR)->is_conditional
 #define DR_BASE_ADDRESS(DR)        (DR)->innermost.base_address
 #define DR_OFFSET(DR)              (DR)->innermost.offset
 #define DR_INIT(DR)                (DR)->innermost.init
@@ -350,7 +355,8 @@ extern bool graphite_find_data_reference
                                                    vec<data_reference_p> *);
 tree find_data_references_in_loop (struct loop *, vec<data_reference_p> *);
 bool loop_nest_has_data_refs (loop_p loop);
-struct data_reference *create_data_ref (loop_p, loop_p, tree, gimple *, bool);
+struct data_reference *create_data_ref (loop_p, loop_p, tree, gimple *, bool,
+                                        bool);
 extern bool find_loop_nest (struct loop *, vec<loop_p> *);
 extern struct data_dependence_relation *initialize_data_dependence_relation
      (struct data_reference *, struct data_reference *, vec<loop_p>);
Index: gcc/tree-data-ref.c
===================================================================
--- gcc/tree-data-ref.c	2017-06-08 08:51:43.349263895 +0100
+++ gcc/tree-data-ref.c	2017-06-22 12:23:21.284421233 +0100
@@ -1053,15 +1053,18 @@ free_data_ref (data_reference_p dr)
   free (dr);
 }
 
-/* Analyzes memory reference MEMREF accessed in STMT.  The reference
-   is read if IS_READ is true, write otherwise.  Returns the
-   data_reference description of MEMREF.  NEST is the outermost loop
-   in which the reference should be instantiated, LOOP is the loop in
-   which the data reference should be analyzed.  */
+/* Analyze memory reference MEMREF, which is accessed in STMT.  The reference
+   is a read if IS_READ is true, otherwise it is a write.  IS_CONDITIONAL
+   indicates that the reference is conditional, i.e. that it might not
+   occur every time that STMT runs to completion.
+
+   Return the data_reference description of MEMREF.  NEST is the outermost
+   loop in which the reference should be instantiated, LOOP is the loop
+   in which the data reference should be analyzed.  */
 
 struct data_reference *
 create_data_ref (loop_p nest, loop_p loop, tree memref, gimple *stmt,
-                 bool is_read)
+                 bool is_read, bool is_conditional)
 {
   struct data_reference *dr;
 
@@ -1076,6 +1079,7 @@ create_data_ref (loop_p nest, loop_p loo
   DR_STMT (dr) = stmt;
   DR_REF (dr) = memref;
   DR_IS_READ (dr) = is_read;
+  DR_IS_CONDITIONAL (dr) = is_conditional;
 
   dr_analyze_innermost (dr, nest);
   dr_analyze_indices (dr, nest, loop);
@@ -4446,6 +4450,10 @@ struct data_ref_loc
 
   /* True if the memory reference is read.  */
   bool is_read;
+
+  /* True if the data reference is conditional, i.e. if it might not
+     occur even when the statement runs to completion.  */
+  bool is_conditional;
 };
 
 
@@ -4512,6 +4520,7 @@ get_references_in_stmt (gimple *stmt, ve
         {
           ref.ref = op1;
           ref.is_read = true;
+          ref.is_conditional = false;
           references->safe_push (ref);
         }
     }
@@ -4539,6 +4548,7 @@ get_references_in_stmt (gimple *stmt, ve
           type = TREE_TYPE (gimple_call_arg (stmt, 3));
           if (TYPE_ALIGN (type) != align)
             type = build_aligned_type (type, align);
+          ref.is_conditional = true;
           ref.ref = fold_build2 (MEM_REF, type,
                                  gimple_call_arg (stmt, 0), ptr);
           references->safe_push (ref);
@@ -4558,6 +4568,7 @@ get_references_in_stmt (gimple *stmt, ve
         {
           ref.ref = op1;
           ref.is_read = true;
+          ref.is_conditional = false;
          references->safe_push (ref);
        }
    }
@@ -4571,6 +4582,7 @@ get_references_in_stmt (gimple *stmt, ve
     {
       ref.ref = op0;
       ref.is_read = false;
+      ref.is_conditional = false;
       references->safe_push (ref);
     }
   return clobbers_memory;
@@ -4635,8 +4647,8 @@ find_data_references_in_stmt (struct loo
 
   FOR_EACH_VEC_ELT (references, i, ref)
     {
-      dr = create_data_ref (nest, loop_containing_stmt (stmt),
-                            ref->ref, stmt, ref->is_read);
+      dr = create_data_ref (nest, loop_containing_stmt (stmt), ref->ref,
+                            stmt, ref->is_read, ref->is_conditional);
       gcc_assert (dr != NULL);
       datarefs->safe_push (dr);
     }
@@ -4665,7 +4677,8 @@ graphite_find_data_references_in_stmt (l
 
   FOR_EACH_VEC_ELT (references, i, ref)
     {
-      dr = create_data_ref (nest, loop, ref->ref, stmt, ref->is_read);
+      dr = create_data_ref (nest, loop, ref->ref, stmt, ref->is_read,
+                            ref->is_conditional);
       gcc_assert (dr != NULL);
       datarefs->safe_push (dr);
     }
Index: gcc/tree-ssa-loop-prefetch.c
===================================================================
--- gcc/tree-ssa-loop-prefetch.c	2017-06-07 21:58:55.928557601 +0100
+++ gcc/tree-ssa-loop-prefetch.c	2017-06-22 12:23:21.285421180 +0100
@@ -1633,7 +1633,7 @@ determine_loop_nest_reuse (struct loop *
     for (ref = gr->refs; ref; ref = ref->next)
       {
         dr = create_data_ref (nest, loop_containing_stmt (ref->stmt),
-                              ref->mem, ref->stmt, !ref->write_p);
+                              ref->mem, ref->stmt, !ref->write_p, false);
 
         if (dr)
           {
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2017-06-08 08:51:43.350263752 +0100
+++ gcc/tree-vect-data-refs.c	2017-06-22 12:23:21.286421126 +0100
@@ -646,6 +646,102 @@ vect_slp_analyze_instance_dependence (sl
   return res;
 }
 
+/* If DR is nested in a loop that is being vectorized, return the base
+   address in the context of the vectorized loop (rather than the
+   nested loop).  Otherwise return the base address in the context
+   of the containing statement.  */
+
+static tree
+vect_get_base_address (data_reference *dr)
+{
+  gimple *stmt = DR_STMT (dr);
+  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
+  struct loop *loop = loop_vinfo != NULL ? LOOP_VINFO_LOOP (loop_vinfo) : NULL;
+  if (loop && nested_in_vect_loop_p (loop, stmt))
+    return STMT_VINFO_DR_BASE_ADDRESS (stmt_info);
+  else
+    return DR_BASE_ADDRESS (dr);
+}
+
+/* Compute and return the alignment of base address BASE_ADDR in DR.  */
+
+static unsigned int
+vect_compute_base_alignment (data_reference *dr, tree base_addr)
+{
+  /* To look at the alignment of the base we have to preserve an inner
+     MEM_REF as that carries the alignment information of the actual
+     access.  */
+  tree base = DR_REF (dr);
+  while (handled_component_p (base))
+    base = TREE_OPERAND (base, 0);
+  unsigned int base_alignment = 0;
+  unsigned HOST_WIDE_INT base_bitpos;
+  get_object_alignment_1 (base, &base_alignment, &base_bitpos);
+
+  /* As data-ref analysis strips the MEM_REF down to its base operand
+     to form DR_BASE_ADDRESS and adds the offset to DR_INIT we have to
+     adjust things to make base_alignment valid as the alignment of
+     DR_BASE_ADDRESS.  */
+  if (TREE_CODE (base) == MEM_REF)
+    {
+      /* Note all this only works if DR_BASE_ADDRESS is the same as
+         MEM_REF operand zero, otherwise DR/SCEV analysis might have factored
+         in other offsets.  We need to rework DR to compute the alingment
+         of DR_BASE_ADDRESS as long as all information is still available.  */
+      if (operand_equal_p (TREE_OPERAND (base, 0), base_addr, 0))
+        {
+          base_bitpos -= mem_ref_offset (base).to_short_addr () * BITS_PER_UNIT;
+          base_bitpos &= (base_alignment - 1);
+        }
+      else
+        base_bitpos = BITS_PER_UNIT;
+    }
+  if (base_bitpos != 0)
+    base_alignment = base_bitpos & -base_bitpos;
+
+  /* Also look at the alignment of the base address DR analysis
+     computed.  */
+  unsigned int base_addr_alignment = get_pointer_alignment (base_addr);
+  if (base_addr_alignment > base_alignment)
+    base_alignment = base_addr_alignment;
+
+  return base_alignment;
+}
+
+/* Compute alignments for the base addresses of all datarefs in VINFO.  */
+
+void
+vect_compute_base_alignments (vec_info *vinfo)
+{
+  /* If the region we're going to vectorize is reached, all unconditional
+     data references occur at least once.  We can therefore pool the base
+     alignment guarantees from each unconditional reference.  */
+  data_reference *dr;
+  unsigned int i;
+  FOR_EACH_VEC_ELT (vinfo->datarefs, i, dr)
+    if (!DR_IS_CONDITIONAL (dr))
+      {
+        tree base_addr = vect_get_base_address (dr);
+        unsigned int alignment = vect_compute_base_alignment (dr, base_addr);
+        bool existed;
+        unsigned int &entry
+          = vinfo->base_alignments.get_or_insert (base_addr, &existed);
+        if (!existed || entry < alignment)
+          {
+            entry = alignment;
+            if (dump_enabled_p ())
+              {
+                dump_printf_loc (MSG_NOTE, vect_location,
                                 "setting base alignment for ");
+                dump_generic_expr (MSG_NOTE, TDF_SLIM, base_addr);
+                dump_printf (MSG_NOTE, " to %d, based on ", alignment);
+                dump_gimple_stmt (MSG_NOTE, TDF_SLIM, DR_STMT (dr), 0);
+              }
+          }
+      }
+}
+
 /* Function vect_compute_data_ref_alignment
 
    Compute the misalignment of the data reference DR.
@@ -663,6 +759,7 @@ vect_compute_data_ref_alignment (struct
 {
   gimple *stmt = DR_STMT (dr);
   stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  vec_base_alignments *base_alignments = &stmt_info->vinfo->base_alignments;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = NULL;
   tree ref = DR_REF (dr);
@@ -699,6 +796,8 @@ vect_compute_data_ref_alignment (struct
     {
       tree step = DR_STEP (dr);
 
+      base_addr = STMT_VINFO_DR_BASE_ADDRESS (stmt_info);
+      aligned_to = STMT_VINFO_DR_ALIGNED_TO (stmt_info);
       if (tree_fits_shwi_p (step)
           && tree_to_shwi (step) % GET_MODE_SIZE (TYPE_MODE (vectype)) == 0)
         {
@@ -706,8 +805,6 @@ vect_compute_data_ref_alignment (struct
             dump_printf_loc (MSG_NOTE, vect_location,
                              "inner step divides the vector-size.\n");
           misalign = STMT_VINFO_DR_INIT (stmt_info);
-          aligned_to = STMT_VINFO_DR_ALIGNED_TO (stmt_info);
-          base_addr = STMT_VINFO_DR_BASE_ADDRESS (stmt_info);
         }
       else
         {
@@ -738,39 +835,15 @@ vect_compute_data_ref_alignment (struct
         }
     }
 
-  /* To look at alignment of the base we have to preserve an inner MEM_REF
-     as that carries alignment information of the actual access.  */
-  base = ref;
-  while (handled_component_p (base))
-    base = TREE_OPERAND (base, 0);
+  /* Calculate the maximum of the pooled base address alignment and the
+     alignment that we can compute for DR itself.  The latter should
+     already be included in the former for unconditional references.  */
   unsigned int base_alignment = 0;
-  unsigned HOST_WIDE_INT base_bitpos;
-  get_object_alignment_1 (base, &base_alignment, &base_bitpos);
-  /* As data-ref analysis strips the MEM_REF down to its base operand
-     to form DR_BASE_ADDRESS and adds the offset to DR_INIT we have to
-     adjust things to make base_alignment valid as the alignment of
-     DR_BASE_ADDRESS.  */
-  if (TREE_CODE (base) == MEM_REF)
-    {
-      /* Note all this only works if DR_BASE_ADDRESS is the same as
-         MEM_REF operand zero, otherwise DR/SCEV analysis might have factored
-         in other offsets.  We need to rework DR to compute the alingment
-         of DR_BASE_ADDRESS as long as all information is still available.  */
-      if (operand_equal_p (TREE_OPERAND (base, 0), base_addr, 0))
-        {
-          base_bitpos -= mem_ref_offset (base).to_short_addr () * BITS_PER_UNIT;
-          base_bitpos &= (base_alignment - 1);
-        }
-      else
-        base_bitpos = BITS_PER_UNIT;
-    }
-  if (base_bitpos != 0)
-    base_alignment = base_bitpos & -base_bitpos;
-  /* Also look at the alignment of the base address DR analysis
-     computed.  */
-  unsigned int base_addr_alignment = get_pointer_alignment (base_addr);
-  if (base_addr_alignment > base_alignment)
-    base_alignment = base_addr_alignment;
+  if (DR_IS_CONDITIONAL (dr))
+    base_alignment = vect_compute_base_alignment (dr, base_addr);
+  if (unsigned int *entry = base_alignments->get (base_addr))
+    base_alignment = MAX (base_alignment, *entry);
+  gcc_assert (base_alignment != 0);
 
   if (base_alignment >= TYPE_ALIGN (TREE_TYPE (vectype)))
     DR_VECT_AUX (dr)->base_element_aligned = true;
@@ -906,8 +979,29 @@ vect_update_misalignment_for_peel (struc
     {
      if (current_dr != dr)
        continue;
-      gcc_assert (DR_MISALIGNMENT (dr) / dr_size ==
-                  DR_MISALIGNMENT (dr_peel) / dr_peel_size);
+      /* Any alignment guarantees provided by a reference only apply if
+         the reference actually occurs.  For example, in:
+
+            struct s __attribute__((aligned(32))) {
+              int misaligner;
+              int array[N];
+            };
+
+            int *ptr;
+            for (int i = 0; i < n; ++i)
+              ptr[i] = c[i] ? ((struct s *) (ptr - 1))->array[i] : 0;
+
+         we can only assume that ptr is part of a struct s if at least one
+         c[i] is true.  This in turn means that we have a higher base
+         alignment guarantee for the read from ptr (if it occurs) than for
+         the write to ptr, and we cannot unconditionally carry the former
+         over to the latter.  We still know that the two address values
+         have the same misalignment, so if peeling has forced one of them
+         to be aligned, the other must be too.  */
+      gcc_assert (DR_IS_CONDITIONAL (dr_peel)
+                  || DR_IS_CONDITIONAL (dr)
+                  || (DR_MISALIGNMENT (dr) / dr_size
+                      == DR_MISALIGNMENT (dr_peel) / dr_peel_size));
       SET_DR_MISALIGNMENT (dr, 0);
       return;
     }
@@ -2117,8 +2211,7 @@ vect_find_same_alignment_drs (struct dat
   if (dra == drb)
     return;
 
-  if (!operand_equal_p (DR_BASE_OBJECT (dra), DR_BASE_OBJECT (drb),
-                        OEP_ADDRESS_OF)
+  if (!operand_equal_p (DR_BASE_ADDRESS (dra), DR_BASE_ADDRESS (drb), 0)
      || !operand_equal_p (DR_OFFSET (dra), DR_OFFSET (drb), 0)
      || !operand_equal_p (DR_STEP (dra), DR_STEP (drb), 0))
    return;
@@ -2176,6 +2269,7 @@ vect_analyze_data_refs_alignment (loop_v
   vec<data_reference_p> datarefs = vinfo->datarefs;
   struct data_reference *dr;
 
+  vect_compute_base_alignments (vinfo);
   FOR_EACH_VEC_ELT (datarefs, i, dr)
     {
       stmt_vec_info stmt_info = vinfo_for_stmt (DR_STMT (dr));
@@ -3374,7 +3468,8 @@ vect_analyze_data_refs (vec_info *vinfo,
         {
           struct data_reference *newdr
            = create_data_ref (NULL, loop_containing_stmt (stmt),
-                               DR_REF (dr), stmt, maybe_scatter ? false : true);
+                               DR_REF (dr), stmt, !maybe_scatter,
+                               DR_IS_CONDITIONAL (dr));
          gcc_assert (newdr != NULL && DR_REF (newdr));
          if (DR_BASE_ADDRESS (newdr)
              && DR_OFFSET (newdr)
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2017-06-07 21:58:56.336475882 +0100
+++ gcc/tree-vect-slp.c	2017-06-22 12:23:21.288421018 +0100
@@ -2367,6 +2367,7 @@ new_bb_vec_info (gimple_stmt_iterator re
   gimple_stmt_iterator gsi;
 
   res = (bb_vec_info) xcalloc (1, sizeof (struct _bb_vec_info));
+  new (&res->base_alignments) vec_base_alignments ();
   res->kind = vec_info::bb;
   BB_VINFO_BB (res) = bb;
   res->region_begin = region_begin;
@@ -2741,6 +2742,8 @@ vect_slp_analyze_bb_1 (gimple_stmt_itera
       return NULL;
     }
 
+  vect_compute_base_alignments (bb_vinfo);
+
   /* Analyze and verify the alignment of data references and the
      dependence in the SLP instances.  */
   for (i = 0; BB_VINFO_SLP_INSTANCES (bb_vinfo).iterate (i, &instance); )
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2017-06-22 12:22:57.734313143 +0100
+++ gcc/tree-vect-loop.c	2017-06-22 12:23:21.287421072 +0100
@@ -1157,6 +1157,7 @@ new_loop_vec_info (struct loop *loop)
   LOOP_VINFO_VECT_FACTOR (res) = 0;
   LOOP_VINFO_LOOP_NEST (res) = vNULL;
   LOOP_VINFO_DATAREFS (res) = vNULL;
+  new (&res->base_alignments) vec_base_alignments ();
   LOOP_VINFO_DDRS (res) = vNULL;
   LOOP_VINFO_UNALIGNED_DR (res) = NULL;
   LOOP_VINFO_MAY_MISALIGN_STMTS (res) = vNULL;
Index: gcc/tree-vectorizer.c
===================================================================
--- gcc/tree-vectorizer.c	2017-06-22 12:22:57.732313220 +0100
+++ gcc/tree-vectorizer.c	2017-06-22 12:23:21.288421018 +0100
@@ -370,6 +370,8 @@ vect_destroy_datarefs (vec_info *vinfo)
       }
 
   free_data_refs (vinfo->datarefs);
+
+  vinfo->base_alignments.~vec_base_alignments ();
 }
 
 /* A helper function to free scev and LOOP niter information, as well as
Index: gcc/testsuite/gcc.dg/vect/pr81136.c
===================================================================
--- /dev/null	2017-06-22 07:43:14.805493307 +0100
+++ gcc/testsuite/gcc.dg/vect/pr81136.c	2017-06-22 12:23:21.283421287 +0100
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+
+struct __attribute__((aligned (32)))
+{
+  char misaligner;
+  int foo[100];
+  int bar[100];
+} *a;
+
+void
+fn1 (int n)
+{
+  int *b = a->foo;
+  for (int i = 0; i < n; i++)
+    a->bar[i] = b[i];
+}