From patchwork Thu Nov 17 16:17:15 2016
X-Patchwork-Submitter: Eric Botcazou
X-Patchwork-Id: 82739
From: Eric Botcazou
To: gcc-patches@gcc.gnu.org
Subject: Fix PR rtl-optimization/78355
Date: Thu, 17 Nov 2016 17:17:15 +0100
Message-ID: <2037118.9NJdoatkLr@polaris>
As diagnosed by the submitter, LRA can generate unaligned accesses when
SLOW_UNALIGNED_ACCESS is 1, for example when STRICT_ALIGNMENT is 1, because
simplify_operand_subreg, when reloading SUBREGs of MEMs, can wrongly reject
aligned accesses in favor of unaligned ones.  The fix is to add the missing
guard on the alignment before invoking SLOW_UNALIGNED_ACCESS (as in every
other use of SLOW_UNALIGNED_ACCESS in the compiler).

Tested on arm-eabi, approved by Vladimir, applied on the mainline.


2016-11-17  Pip Cet
	    Eric Botcazou

	PR rtl-optimization/78355
	* doc/tm.texi.in (SLOW_UNALIGNED_ACCESS): Document that the macro
	only needs to deal with unaligned accesses.
	* doc/tm.texi: Regenerate.
	* lra-constraints.c (simplify_operand_subreg): Only invoke
	SLOW_UNALIGNED_ACCESS on innermode if the MEM is not aligned
	enough.

-- 
Eric Botcazou

Index: doc/tm.texi.in
===================================================================
--- doc/tm.texi.in	(revision 242377)
+++ doc/tm.texi.in	(working copy)
@@ -4654,7 +4654,8 @@ other fields in the same word of the str
 Define this macro to be the value 1 if memory accesses described by the
 @var{mode} and @var{alignment} parameters have a cost many times greater
 than aligned accesses, for example if they are emulated in a trap
-handler.
+handler.  This macro is invoked only for unaligned accesses, i.e. when
+@code{@var{alignment} < GET_MODE_ALIGNMENT (@var{mode})}.
 
 When this macro is nonzero, the compiler will act as if
 @code{STRICT_ALIGNMENT} were nonzero when generating code for block
Index: lra-constraints.c
===================================================================
--- lra-constraints.c	(revision 242377)
+++ lra-constraints.c	(working copy)
@@ -1486,9 +1486,10 @@ simplify_operand_subreg (int nop, machin
 	 equivalences in function lra_constraints) and because for spilled
 	 pseudos we allocate stack memory enough for the biggest
 	 corresponding paradoxical subreg.  */
-      if (!SLOW_UNALIGNED_ACCESS (mode, MEM_ALIGN (reg))
-	  || SLOW_UNALIGNED_ACCESS (innermode, MEM_ALIGN (reg))
-	  || MEM_ALIGN (reg) >= GET_MODE_ALIGNMENT (mode))
+      if (!(MEM_ALIGN (reg) < GET_MODE_ALIGNMENT (mode)
+	    && SLOW_UNALIGNED_ACCESS (mode, MEM_ALIGN (reg)))
+	  || (MEM_ALIGN (reg) < GET_MODE_ALIGNMENT (innermode)
+	      && SLOW_UNALIGNED_ACCESS (innermode, MEM_ALIGN (reg))))
 	return true;
 
       /* INNERMODE is fast, MODE slow.  Reload the mem in INNERMODE.  */
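
For illustration only (not part of the patch): below is a minimal,
self-contained C sketch of the corrected predicate.  The function
keep_subreg_p, the slow_unaligned_access stub and the use of plain
alignment values in place of machine modes are all hypothetical
stand-ins for the simplify_operand_subreg logic, MEM_ALIGN,
GET_MODE_ALIGNMENT and the target's SLOW_UNALIGNED_ACCESS macro.

/* Sketch of the fixed condition in simplify_operand_subreg.  */
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for SLOW_UNALIGNED_ACCESS on a target where
   the macro is simply 1 (e.g. defined to STRICT_ALIGNMENT): it
   reports every access as slow, which is why callers must check the
   alignment before invoking it.  */
static bool
slow_unaligned_access (unsigned mode_align, unsigned mem_align)
{
  (void) mode_align;
  (void) mem_align;
  return true;
}

/* Return true if the SUBREG of the MEM can be left as is; false means
   the mem should be reloaded in INNERMODE.  After the fix, each mode's
   SLOW_UNALIGNED_ACCESS query is guarded by a check that the MEM is
   really unaligned for that mode.  */
static bool
keep_subreg_p (unsigned mem_align, unsigned mode_align,
	       unsigned innermode_align)
{
  return (!(mem_align < mode_align
	    && slow_unaligned_access (mode_align, mem_align))
	  || (mem_align < innermode_align
	      && slow_unaligned_access (innermode_align, mem_align)));
}

int
main (void)
{
  /* PR 78355 scenario: the MEM is aligned to 32 bits, MODE needs 64,
     INNERMODE needs 32.  The INNERMODE access is aligned, so the mem
     must be reloaded in INNERMODE (prints 0); the old unguarded
     SLOW_UNALIGNED_ACCESS (innermode, ...) check wrongly kept the
     subreg here, yielding an unaligned access in MODE.  */
  printf ("keep: %d\n", keep_subreg_p (32, 64, 32));

  /* The MEM is aligned enough for MODE itself: keep the subreg
     (prints 1), no reload needed.  */
  printf ("keep: %d\n", keep_subreg_p (64, 64, 32));
  return 0;
}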