From patchwork Tue Dec 24 20:05:02 2024
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 853252
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: Pierrick Bouvier
Subject: [PULL 53/72] tcg/optimize: Move fold_bitsel_vec into alphabetic sort
Date: Tue, 24 Dec 2024 12:05:02 -0800
Message-ID: <20241224200521.310066-54-richard.henderson@linaro.org>
In-Reply-To: <20241224200521.310066-1-richard.henderson@linaro.org>
References: <20241224200521.310066-1-richard.henderson@linaro.org>

The big comment just above says functions should be sorted.
Add forward declarations as needed.

Reviewed-by: Pierrick Bouvier
Signed-off-by: Richard Henderson
---
 tcg/optimize.c | 114 +++++++++++++++++++++++++------------------------
 1 file changed, 59 insertions(+), 55 deletions(-)

diff --git a/tcg/optimize.c b/tcg/optimize.c
index 182be7e63c..1df61378ea 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -1193,6 +1193,10 @@ static bool fold_xx_to_x(OptContext *ctx, TCGOp *op)
  * 3) those that produce information about the result value.
  */
 
+static bool fold_or(OptContext *ctx, TCGOp *op);
+static bool fold_orc(OptContext *ctx, TCGOp *op);
+static bool fold_xor(OptContext *ctx, TCGOp *op);
+
 static bool fold_add(OptContext *ctx, TCGOp *op)
 {
     if (fold_const2_commutative(ctx, op) ||
@@ -1356,6 +1360,61 @@ static bool fold_andc(OptContext *ctx, TCGOp *op)
     return fold_masks_zs(ctx, op, z_mask, s_mask);
 }
 
+static bool fold_bitsel_vec(OptContext *ctx, TCGOp *op)
+{
+    /* If true and false values are the same, eliminate the cmp. */
+    if (args_are_copies(op->args[2], op->args[3])) {
+        return tcg_opt_gen_mov(ctx, op, op->args[0], op->args[2]);
+    }
+
+    if (arg_is_const(op->args[2]) && arg_is_const(op->args[3])) {
+        uint64_t tv = arg_info(op->args[2])->val;
+        uint64_t fv = arg_info(op->args[3])->val;
+
+        if (tv == -1 && fv == 0) {
+            return tcg_opt_gen_mov(ctx, op, op->args[0], op->args[1]);
+        }
+        if (tv == 0 && fv == -1) {
+            if (TCG_TARGET_HAS_not_vec) {
+                op->opc = INDEX_op_not_vec;
+                return fold_not(ctx, op);
+            } else {
+                op->opc = INDEX_op_xor_vec;
+                op->args[2] = arg_new_constant(ctx, -1);
+                return fold_xor(ctx, op);
+            }
+        }
+    }
+    if (arg_is_const(op->args[2])) {
+        uint64_t tv = arg_info(op->args[2])->val;
+        if (tv == -1) {
+            op->opc = INDEX_op_or_vec;
+            op->args[2] = op->args[3];
+            return fold_or(ctx, op);
+        }
+        if (tv == 0 && TCG_TARGET_HAS_andc_vec) {
+            op->opc = INDEX_op_andc_vec;
+            op->args[2] = op->args[1];
+            op->args[1] = op->args[3];
+            return fold_andc(ctx, op);
+        }
+    }
+    if (arg_is_const(op->args[3])) {
+        uint64_t fv = arg_info(op->args[3])->val;
+        if (fv == 0) {
+            op->opc = INDEX_op_and_vec;
+            return fold_and(ctx, op);
+        }
+        if (fv == -1 && TCG_TARGET_HAS_orc_vec) {
+            op->opc = INDEX_op_orc_vec;
+            op->args[2] = op->args[1];
+            op->args[1] = op->args[3];
+            return fold_orc(ctx, op);
+        }
+    }
+    return finish_folding(ctx, op);
+}
+
 static bool fold_brcond(OptContext *ctx, TCGOp *op)
 {
     int i = do_constant_folding_cond1(ctx, op, NO_DEST, &op->args[0],
@@ -2772,61 +2831,6 @@ static bool fold_xor(OptContext *ctx, TCGOp *op)
     return fold_masks_zs(ctx, op, z_mask, s_mask);
 }
 
-static bool fold_bitsel_vec(OptContext *ctx, TCGOp *op)
-{
-    /* If true and false values are the same, eliminate the cmp. */
-    if (args_are_copies(op->args[2], op->args[3])) {
-        return tcg_opt_gen_mov(ctx, op, op->args[0], op->args[2]);
-    }
-
-    if (arg_is_const(op->args[2]) && arg_is_const(op->args[3])) {
-        uint64_t tv = arg_info(op->args[2])->val;
-        uint64_t fv = arg_info(op->args[3])->val;
-
-        if (tv == -1 && fv == 0) {
-            return tcg_opt_gen_mov(ctx, op, op->args[0], op->args[1]);
-        }
-        if (tv == 0 && fv == -1) {
-            if (TCG_TARGET_HAS_not_vec) {
-                op->opc = INDEX_op_not_vec;
-                return fold_not(ctx, op);
-            } else {
-                op->opc = INDEX_op_xor_vec;
-                op->args[2] = arg_new_constant(ctx, -1);
-                return fold_xor(ctx, op);
-            }
-        }
-    }
-    if (arg_is_const(op->args[2])) {
-        uint64_t tv = arg_info(op->args[2])->val;
-        if (tv == -1) {
-            op->opc = INDEX_op_or_vec;
-            op->args[2] = op->args[3];
-            return fold_or(ctx, op);
-        }
-        if (tv == 0 && TCG_TARGET_HAS_andc_vec) {
-            op->opc = INDEX_op_andc_vec;
-            op->args[2] = op->args[1];
-            op->args[1] = op->args[3];
-            return fold_andc(ctx, op);
-        }
-    }
-    if (arg_is_const(op->args[3])) {
-        uint64_t fv = arg_info(op->args[3])->val;
-        if (fv == 0) {
-            op->opc = INDEX_op_and_vec;
-            return fold_and(ctx, op);
-        }
-        if (fv == -1 && TCG_TARGET_HAS_orc_vec) {
-            op->opc = INDEX_op_orc_vec;
-            op->args[2] = op->args[1];
-            op->args[1] = op->args[3];
-            return fold_orc(ctx, op);
-        }
-    }
-    return finish_folding(ctx, op);
-}
-
 /* Propagate constants and copies, fold constant expressions. */
 void tcg_optimize(TCGContext *s)
 {
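
[Editor's note] For readers unfamiliar with the folds being relocated above: fold_bitsel_vec rewrites a bitwise select when one or both data operands are constant. The standalone C sketch below (illustrative only, not QEMU code; the scalar bitsel() helper is a stand-in for one vector lane) checks the Boolean identities each rewrite relies on, assuming the documented bitsel_vec semantics of (t & c) | (f & ~c) per lane.

/*
 * Sketch: verify the identities used by fold_bitsel_vec, using 64-bit
 * scalars in place of vector lanes.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Reference semantics: take t-bits where c is 1, f-bits where c is 0. */
static uint64_t bitsel(uint64_t c, uint64_t t, uint64_t f)
{
    return (t & c) | (f & ~c);
}

int main(void)
{
    uint64_t c = 0xf0f0f0f0f0f0f0f0ull;   /* selector */
    uint64_t t = 0x123456789abcdef0ull;   /* "true" value */
    uint64_t f = 0x0fedcba987654321ull;   /* "false" value */

    assert(bitsel(c, t, t) == t);            /* t == f: plain mov of t         */
    assert(bitsel(c, -1ull, 0) == c);        /* t=-1, f=0: mov of the selector */
    assert(bitsel(c, 0, -1ull) == ~c);       /* t=0, f=-1: not (or xor -1)     */
    assert(bitsel(c, -1ull, f) == (c | f));  /* t=-1: or                       */
    assert(bitsel(c, 0, f) == (f & ~c));     /* t=0: andc                      */
    assert(bitsel(c, t, 0) == (c & t));      /* f=0: and                       */
    assert(bitsel(c, t, -1ull) == (t | ~c)); /* f=-1: orc                      */

    printf("all bitsel folding identities hold\n");
    return 0;
}

Each assertion corresponds to one branch of fold_bitsel_vec; the patch itself only moves that function into alphabetic order and does not change these rewrites.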