From patchwork Wed Sep 18 01:58:14 2019
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 173934
From: Richard Henderson <richard.henderson@linaro.org>
To: gcc-patches@gcc.gnu.org
Cc: Wilco.Dijkstra@arm.com, kyrylo.tkachov@foss.arm.com,
	Marcus.Shawcroft@arm.com, James.Greenhalgh@arm.com
Subject: [PATCH, AArch64 v4 3/6] aarch64: Tidy aarch64_split_compare_and_swap
Date: Tue, 17 Sep 2019 18:58:14 -0700
Message-Id: <20190918015817.24408-4-richard.henderson@linaro.org>
In-Reply-To: <20190918015817.24408-1-richard.henderson@linaro.org>
References: <20190918015817.24408-1-richard.henderson@linaro.org>

With aarch64_track_speculation, we had extra code to do exactly what the
!strong_zero_p path already did.  The rest of the change reduces code
duplication.

	* config/aarch64/aarch64.c (aarch64_split_compare_and_swap):
	Disable strong_zero_p for aarch64_track_speculation; unify some
	code paths; use aarch64_gen_compare_reg instead of open-coding.
---
 gcc/config/aarch64/aarch64.c | 50 ++++++++++--------------------------
 1 file changed, 14 insertions(+), 36 deletions(-)

-- 
2.17.1

diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index a5c4f55627d..b937514e6f8 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -16955,13 +16955,11 @@ aarch64_emit_post_barrier (enum memmodel model)
 void
 aarch64_split_compare_and_swap (rtx operands[])
 {
-  rtx rval, mem, oldval, newval, scratch;
+  rtx rval, mem, oldval, newval, scratch, x, model_rtx;
   machine_mode mode;
   bool is_weak;
   rtx_code_label *label1, *label2;
-  rtx x, cond;
   enum memmodel model;
-  rtx model_rtx;
 
   rval = operands[0];
   mem = operands[1];
@@ -16982,7 +16980,8 @@ aarch64_split_compare_and_swap (rtx operands[])
      CBNZ    scratch, .label1
     .label2:
      CMP     rval, 0.  */
-  bool strong_zero_p = !is_weak && oldval == const0_rtx && mode != TImode;
+  bool strong_zero_p = (!is_weak && !aarch64_track_speculation &&
+			oldval == const0_rtx && mode != TImode);
 
   label1 = NULL;
   if (!is_weak)
@@ -16995,35 +16994,20 @@ aarch64_split_compare_and_swap (rtx operands[])
   /* The initial load can be relaxed for a __sync operation since a final
      barrier will be emitted to stop code hoisting.  */
   if (is_mm_sync (model))
-    aarch64_emit_load_exclusive (mode, rval, mem,
-				 GEN_INT (MEMMODEL_RELAXED));
+    aarch64_emit_load_exclusive (mode, rval, mem, GEN_INT (MEMMODEL_RELAXED));
   else
     aarch64_emit_load_exclusive (mode, rval, mem, model_rtx);
 
   if (strong_zero_p)
-    {
-      if (aarch64_track_speculation)
-	{
-	  /* Emit an explicit compare instruction, so that we can correctly
-	     track the condition codes.  */
-	  rtx cc_reg = aarch64_gen_compare_reg (NE, rval, const0_rtx);
-	  x = gen_rtx_NE (GET_MODE (cc_reg), cc_reg, const0_rtx);
-	}
-      else
-	x = gen_rtx_NE (VOIDmode, rval, const0_rtx);
-
-      x = gen_rtx_IF_THEN_ELSE (VOIDmode, x,
-				gen_rtx_LABEL_REF (Pmode, label2), pc_rtx);
-      aarch64_emit_unlikely_jump (gen_rtx_SET (pc_rtx, x));
-    }
+    x = gen_rtx_NE (VOIDmode, rval, const0_rtx);
   else
     {
-      cond = aarch64_gen_compare_reg_maybe_ze (NE, rval, oldval, mode);
-      x = gen_rtx_NE (VOIDmode, cond, const0_rtx);
-      x = gen_rtx_IF_THEN_ELSE (VOIDmode, x,
-				gen_rtx_LABEL_REF (Pmode, label2), pc_rtx);
-      aarch64_emit_unlikely_jump (gen_rtx_SET (pc_rtx, x));
+      rtx cc_reg = aarch64_gen_compare_reg_maybe_ze (NE, rval, oldval, mode);
+      x = gen_rtx_NE (VOIDmode, cc_reg, const0_rtx);
     }
+  x = gen_rtx_IF_THEN_ELSE (VOIDmode, x,
+			    gen_rtx_LABEL_REF (Pmode, label2), pc_rtx);
+  aarch64_emit_unlikely_jump (gen_rtx_SET (pc_rtx, x));
 
   aarch64_emit_store_exclusive (mode, scratch, mem, newval, model_rtx);
 
@@ -17044,22 +17028,16 @@ aarch64_split_compare_and_swap (rtx operands[])
       aarch64_emit_unlikely_jump (gen_rtx_SET (pc_rtx, x));
     }
   else
-    {
-      cond = gen_rtx_REG (CCmode, CC_REGNUM);
-      x = gen_rtx_COMPARE (CCmode, scratch, const0_rtx);
-      emit_insn (gen_rtx_SET (cond, x));
-    }
+    aarch64_gen_compare_reg (NE, scratch, const0_rtx);
 
   emit_label (label2);
+
   /* If we used a CBNZ in the exchange loop emit an explicit compare
      with RVAL to set the condition flags.  If this is not used it will
      be removed by later passes.  */
   if (strong_zero_p)
-    {
-      cond = gen_rtx_REG (CCmode, CC_REGNUM);
-      x = gen_rtx_COMPARE (CCmode, rval, const0_rtx);
-      emit_insn (gen_rtx_SET (cond, x));
-    }
+    aarch64_gen_compare_reg (NE, rval, const0_rtx);
+
   /* Emit any final barrier needed for a __sync operation.  */
   if (is_mm_sync (model))
     aarch64_emit_post_barrier (model);
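
Note for readers less familiar with the RTL expanders: the "open-coding" the
ChangeLog entry refers to is the manual construction of the CC-register
compare that the hunks above delete.  A minimal sketch of the equivalence the
patch relies on, assuming aarch64_gen_compare_reg behaves as a helper that
emits the compare of its two operands and returns the CC register (a
paraphrase, not a quote of the GCC source):

    /* Open-coded form removed by the patch: build the CC register and the
       COMPARE rtx by hand, then emit the SET that updates the flags.  */
    rtx cond = gen_rtx_REG (CCmode, CC_REGNUM);
    rtx x = gen_rtx_COMPARE (CCmode, rval, const0_rtx);
    emit_insn (gen_rtx_SET (cond, x));

    /* Helper call used instead: emits an equivalent compare against zero
       and returns the CC register rtx, which is unused here.  */
    aarch64_gen_compare_reg (NE, rval, const0_rtx);

As the in-code comment notes, if the resulting compare is not consumed it is
deleted by later passes, which is what makes the unconditional helper call in
the strong_zero_p path safe.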