From patchwork Wed May 3 07:06:00 2023
X-Patchwork-Submitter: Richard Henderson <richard.henderson@linaro.org>
X-Patchwork-Id: 678617
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
 qemu-riscv@nongnu.org, qemu-s390x@nongnu.org, Alex Bennée
Subject: [PATCH v4 01/57] include/exec/memop: Add bits describing atomicity
Date: Wed, 3 May 2023 08:06:00 +0100
Message-Id: <20230503070656.1746170-2-richard.henderson@linaro.org>
In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org>
References: <20230503070656.1746170-1-richard.henderson@linaro.org>

These bits may be used to describe the precise atomicity requirements
of the guest, which may then be used to constrain the methods by which
it may be emulated by the host.

For instance, the AArch64 LDP (32-bit) instruction changes semantics
with ARMv8.4 LSE2, from

   MO_64 | MO_ATMAX_4 | MO_ATOM_IFALIGN
   (64-bits, single-copy atomic only on 4 byte units,
    nonatomic if not aligned by 4),

to

   MO_64 | MO_ATMAX_SIZE | MO_ATOM_WITHIN16
   (64-bits, single-copy atomic within a 16 byte block)

The former may be implemented with two 4 byte loads, or a single 8 byte
load if that happens to be efficient on the host.  The latter may not,
and may also require a helper when misaligned.

Reviewed-by: Alex Bennée
Reviewed-by: Philippe Mathieu-Daudé
Signed-off-by: Richard Henderson
Reviewed-by: Peter Maydell
---
 include/exec/memop.h | 36 ++++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/include/exec/memop.h b/include/exec/memop.h
index 25d027434a..04e4048f0b 100644
--- a/include/exec/memop.h
+++ b/include/exec/memop.h
@@ -81,6 +81,42 @@ typedef enum MemOp {
     MO_ALIGN_32 = 5 << MO_ASHIFT,
     MO_ALIGN_64 = 6 << MO_ASHIFT,
 
+    /*
+     * MO_ATOM_* describes the atomicity requirements of the operation:
+     * MO_ATOM_IFALIGN: the operation must be single-copy atomic if and
+     *     only if it is aligned; if unaligned there is no atomicity.
+     * MO_ATOM_NONE: the operation has no atomicity requirements.
+     * MO_ATOM_SUBALIGN: the operation is single-copy atomic by parts
+     *     by the alignment.  E.g. if the address is 0 mod 4, then each
+     *     4-byte subobject is single-copy atomic.
+     *     This is the atomicity of IBM Power and S390X processors.
+     * MO_ATOM_WITHIN16: the operation is single-copy atomic, even if it
+     *     is unaligned, so long as it does not cross a 16-byte boundary;
+     *     if it crosses a 16-byte boundary there is no atomicity.
+     *     This is the atomicity of Arm FEAT_LSE2.
+     *
+     * MO_ATMAX_* describes the maximum atomicity unit required:
+     *     MO_ATMAX_SIZE: the entire operation, i.e. MO_SIZE.
+     *     MO_ATMAX_[248]: units of N bytes.
+     *
+     * Note the default (i.e. 0) values are single-copy atomic to the
+     * size of the operation, if aligned.  This retains the behaviour
+     * from before these were introduced.
+     */
+    MO_ATOM_SHIFT = 8,
+    MO_ATOM_MASK = 0x3 << MO_ATOM_SHIFT,
+    MO_ATOM_IFALIGN = 0 << MO_ATOM_SHIFT,
+    MO_ATOM_NONE = 1 << MO_ATOM_SHIFT,
+    MO_ATOM_SUBALIGN = 2 << MO_ATOM_SHIFT,
+    MO_ATOM_WITHIN16 = 3 << MO_ATOM_SHIFT,
+
+    MO_ATMAX_SHIFT = 10,
+    MO_ATMAX_MASK = 0x3 << MO_ATMAX_SHIFT,
+    MO_ATMAX_SIZE = 0 << MO_ATMAX_SHIFT,
+    MO_ATMAX_2 = 1 << MO_ATMAX_SHIFT,
+    MO_ATMAX_4 = 2 << MO_ATMAX_SHIFT,
+    MO_ATMAX_8 = 3 << MO_ATMAX_SHIFT,
+
     /* Combinations of the above, for ease of use. */
     MO_UB = MO_8,
     MO_UW = MO_16,
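To make the new bits concrete: below is a minimal sketch of how a translator
front end could choose a MemOp for the AArch64 LDP (32-bit) case described in
the commit message.  The function name ldp32_memop and the have_lse2 flag are
illustrative only; they are not part of this patch or of target/arm.

#include <stdbool.h>
#include "exec/memop.h"   /* assumes the hunk above has been applied */

static MemOp ldp32_memop(bool have_lse2)
{
    if (have_lse2) {
        /* FEAT_LSE2: the whole 8-byte access is single-copy atomic,
         * provided it does not cross a 16-byte boundary. */
        return MO_64 | MO_ATMAX_SIZE | MO_ATOM_WITHIN16;
    }
    /* Pre-LSE2: only each aligned 4-byte half is single-copy atomic. */
    return MO_64 | MO_ATMAX_4 | MO_ATOM_IFALIGN;
}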
From patchwork Wed May 3 07:06:01 2023
X-Patchwork-Submitter: Richard Henderson <richard.henderson@linaro.org>
X-Patchwork-Id: 678601
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
 qemu-riscv@nongnu.org, qemu-s390x@nongnu.org, Alex Bennée
Subject: [PATCH v4 02/57] accel/tcg: Add cpu_in_serial_context
Date: Wed, 3 May 2023 08:06:01 +0100
Message-Id: <20230503070656.1746170-3-richard.henderson@linaro.org>
In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org>
References: <20230503070656.1746170-1-richard.henderson@linaro.org>

Like cpu_in_exclusive_context, but also true if there is no other
cpu against which we could race.

Use it in tb_flush as a direct replacement.

Use it in cpu_loop_exit_atomic to ensure that there is no loop
against cpu_exec_step_atomic.

Reviewed-by: Alex Bennée
Reviewed-by: Philippe Mathieu-Daudé
Signed-off-by: Richard Henderson
---
 accel/tcg/internal.h        | 5 +++++
 accel/tcg/cpu-exec-common.c | 3 +++
 accel/tcg/tb-maint.c        | 2 +-
 3 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index 7bb0fdbe14..8ca24420ea 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -64,6 +64,11 @@ static inline target_ulong log_pc(CPUState *cpu, const TranslationBlock *tb)
     }
 }
 
+static inline bool cpu_in_serial_context(CPUState *cs)
+{
+    return !(cs->tcg_cflags & CF_PARALLEL) || cpu_in_exclusive_context(cs);
+}
+
 extern int64_t max_delay;
 extern int64_t max_advance;
 
diff --git a/accel/tcg/cpu-exec-common.c b/accel/tcg/cpu-exec-common.c
index e7962c9348..9a5fabf625 100644
--- a/accel/tcg/cpu-exec-common.c
+++ b/accel/tcg/cpu-exec-common.c
@@ -22,6 +22,7 @@
 #include "sysemu/tcg.h"
 #include "exec/exec-all.h"
 #include "qemu/plugin.h"
+#include "internal.h"
 
 bool tcg_allowed;
 
@@ -81,6 +82,8 @@ void cpu_loop_exit_restore(CPUState *cpu, uintptr_t pc)
 
 void cpu_loop_exit_atomic(CPUState *cpu, uintptr_t pc)
 {
+    /* Prevent looping if already executing in a serial context. */
+    g_assert(!cpu_in_serial_context(cpu));
     cpu->exception_index = EXCP_ATOMIC;
     cpu_loop_exit_restore(cpu, pc);
 }
 
diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index cb1f806f00..7d613d36d2 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -760,7 +760,7 @@ void tb_flush(CPUState *cpu)
     if (tcg_enabled()) {
         unsigned tb_flush_count = qatomic_mb_read(&tb_ctx.tb_flush_count);
 
-        if (cpu_in_exclusive_context(cpu)) {
+        if (cpu_in_serial_context(cpu)) {
             do_tb_flush(cpu, RUN_ON_CPU_HOST_INT(tb_flush_count));
         } else {
             async_safe_run_on_cpu(cpu, do_tb_flush,
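For context on why the new assertion cannot fire legitimately: EXCP_ATOMIC is
serviced by re-executing the offending code with all other vCPUs parked
(cpu_exec_step_atomic), which is exactly a serial context.  The sketch below
is illustrative only, not the real handler; it assumes QEMU's
start_exclusive()/end_exclusive() API.

/*
 * Illustrative sketch, not the actual cpu_exec_step_atomic().  While
 * the exclusive section is held, cpu_in_serial_context() is true, so
 * reaching cpu_loop_exit_atomic() again would loop forever; the new
 * g_assert() turns that into an immediate failure instead.
 */
static void excp_atomic_sketch(CPUState *cpu)
{
    start_exclusive();      /* park all other vCPUs */
    /* ... translate and execute one translation block serially ... */
    end_exclusive();
}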
From patchwork Wed May 3 07:06:02 2023
X-Patchwork-Submitter: Richard Henderson <richard.henderson@linaro.org>
X-Patchwork-Id: 678667
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
 qemu-riscv@nongnu.org, qemu-s390x@nongnu.org, Alex Bennée
Subject: [PATCH v4 03/57] accel/tcg: Introduce tlb_read_idx
Date: Wed, 3 May 2023 08:06:02 +0100
Message-Id: <20230503070656.1746170-4-richard.henderson@linaro.org>
In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org>
References: <20230503070656.1746170-1-richard.henderson@linaro.org>

Instead of playing with offsetof in various places, use MMUAccessType
to index an array.  This is easily defined instead of the previous
dummy padding array in the union.

Reviewed-by: Alex Bennée
Reviewed-by: Philippe Mathieu-Daudé
Signed-off-by: Richard Henderson
Reviewed-by: Peter Maydell
---
 include/exec/cpu-defs.h |   7 ++-
 include/exec/cpu_ldst.h |  26 ++++++++--
 accel/tcg/cputlb.c      | 104 +++++++++++++---------------------------
 3 files changed, 59 insertions(+), 78 deletions(-)

diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index e1c498ef4b..a6e0cf1812 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -111,8 +111,11 @@ typedef struct CPUTLBEntry {
            use the corresponding iotlb value. */
         uintptr_t addend;
     };
-    /* padding to get a power of two size */
-    uint8_t dummy[1 << CPU_TLB_ENTRY_BITS];
+    /*
+     * Padding to get a power of two size, as well as index
+     * access to addr_{read,write,code}.
+     */
+    target_ulong addr_idx[(1 << CPU_TLB_ENTRY_BITS) / TARGET_LONG_SIZE];
     };
 } CPUTLBEntry;
 
diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
index c141f0394f..7c867c94c3 100644
--- a/include/exec/cpu_ldst.h
+++ b/include/exec/cpu_ldst.h
@@ -360,13 +360,29 @@ static inline void clear_helper_retaddr(void)
 /* Needed for TCG_OVERSIZED_GUEST */
 #include "tcg/tcg.h"
 
+static inline target_ulong tlb_read_idx(const CPUTLBEntry *entry,
+                                        MMUAccessType access_type)
+{
+    /* Do not rearrange the CPUTLBEntry structure members. */
+    QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addr_read) !=
+                      MMU_DATA_LOAD * TARGET_LONG_SIZE);
+    QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addr_write) !=
+                      MMU_DATA_STORE * TARGET_LONG_SIZE);
+    QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addr_code) !=
+                      MMU_INST_FETCH * TARGET_LONG_SIZE);
+
+    const target_ulong *ptr = &entry->addr_idx[access_type];
+#if TCG_OVERSIZED_GUEST
+    return *ptr;
+#else
+    /* ofs might correspond to .addr_write, so use qatomic_read */
+    return qatomic_read(ptr);
+#endif
+}
+
 static inline target_ulong tlb_addr_write(const CPUTLBEntry *entry)
 {
-#if TCG_OVERSIZED_GUEST
-    return entry->addr_write;
-#else
-    return qatomic_read(&entry->addr_write);
-#endif
+    return tlb_read_idx(entry, MMU_DATA_STORE);
 }
 
 /* Find the TLB index corresponding to the mmu_idx + address pair.
*/ diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index 3117886af1..5051244c67 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -1441,34 +1441,17 @@ static void io_writex(CPUArchState *env, CPUTLBEntryFull *full, } } -static inline target_ulong tlb_read_ofs(CPUTLBEntry *entry, size_t ofs) -{ -#if TCG_OVERSIZED_GUEST - return *(target_ulong *)((uintptr_t)entry + ofs); -#else - /* ofs might correspond to .addr_write, so use qatomic_read */ - return qatomic_read((target_ulong *)((uintptr_t)entry + ofs)); -#endif -} - /* Return true if ADDR is present in the victim tlb, and has been copied back to the main tlb. */ static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index, - size_t elt_ofs, target_ulong page) + MMUAccessType access_type, target_ulong page) { size_t vidx; assert_cpu_is_self(env_cpu(env)); for (vidx = 0; vidx < CPU_VTLB_SIZE; ++vidx) { CPUTLBEntry *vtlb = &env_tlb(env)->d[mmu_idx].vtable[vidx]; - target_ulong cmp; - - /* elt_ofs might correspond to .addr_write, so use qatomic_read */ -#if TCG_OVERSIZED_GUEST - cmp = *(target_ulong *)((uintptr_t)vtlb + elt_ofs); -#else - cmp = qatomic_read((target_ulong *)((uintptr_t)vtlb + elt_ofs)); -#endif + target_ulong cmp = tlb_read_idx(vtlb, access_type); if (cmp == page) { /* Found entry in victim tlb, swap tlb and iotlb. */ @@ -1490,11 +1473,6 @@ static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index, return false; } -/* Macro to call the above, with local variables from the use context. */ -#define VICTIM_TLB_HIT(TY, ADDR) \ - victim_tlb_hit(env, mmu_idx, index, offsetof(CPUTLBEntry, TY), \ - (ADDR) & TARGET_PAGE_MASK) - static void notdirty_write(CPUState *cpu, vaddr mem_vaddr, unsigned size, CPUTLBEntryFull *full, uintptr_t retaddr) { @@ -1527,29 +1505,12 @@ static int probe_access_internal(CPUArchState *env, target_ulong addr, { uintptr_t index = tlb_index(env, mmu_idx, addr); CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr); - target_ulong tlb_addr, page_addr; - size_t elt_ofs; - int flags; + target_ulong tlb_addr = tlb_read_idx(entry, access_type); + target_ulong page_addr = addr & TARGET_PAGE_MASK; + int flags = TLB_FLAGS_MASK; - switch (access_type) { - case MMU_DATA_LOAD: - elt_ofs = offsetof(CPUTLBEntry, addr_read); - break; - case MMU_DATA_STORE: - elt_ofs = offsetof(CPUTLBEntry, addr_write); - break; - case MMU_INST_FETCH: - elt_ofs = offsetof(CPUTLBEntry, addr_code); - break; - default: - g_assert_not_reached(); - } - tlb_addr = tlb_read_ofs(entry, elt_ofs); - - flags = TLB_FLAGS_MASK; - page_addr = addr & TARGET_PAGE_MASK; if (!tlb_hit_page(tlb_addr, page_addr)) { - if (!victim_tlb_hit(env, mmu_idx, index, elt_ofs, page_addr)) { + if (!victim_tlb_hit(env, mmu_idx, index, access_type, page_addr)) { CPUState *cs = env_cpu(env); if (!cs->cc->tcg_ops->tlb_fill(cs, addr, fault_size, access_type, @@ -1571,7 +1532,7 @@ static int probe_access_internal(CPUArchState *env, target_ulong addr, */ flags &= ~TLB_INVALID_MASK; } - tlb_addr = tlb_read_ofs(entry, elt_ofs); + tlb_addr = tlb_read_idx(entry, access_type); } flags &= tlb_addr; @@ -1802,7 +1763,8 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr, if (prot & PAGE_WRITE) { tlb_addr = tlb_addr_write(tlbe); if (!tlb_hit(tlb_addr, addr)) { - if (!VICTIM_TLB_HIT(addr_write, addr)) { + if (!victim_tlb_hit(env, mmu_idx, index, MMU_DATA_STORE, + addr & TARGET_PAGE_MASK)) { tlb_fill(env_cpu(env), addr, size, MMU_DATA_STORE, mmu_idx, retaddr); index = tlb_index(env, mmu_idx, addr); @@ -1835,7 +1797,8 @@ static void 
*atomic_mmu_lookup(CPUArchState *env, target_ulong addr, } else /* if (prot & PAGE_READ) */ { tlb_addr = tlbe->addr_read; if (!tlb_hit(tlb_addr, addr)) { - if (!VICTIM_TLB_HIT(addr_write, addr)) { + if (!victim_tlb_hit(env, mmu_idx, index, MMU_DATA_LOAD, + addr & TARGET_PAGE_MASK)) { tlb_fill(env_cpu(env), addr, size, MMU_DATA_LOAD, mmu_idx, retaddr); index = tlb_index(env, mmu_idx, addr); @@ -1929,13 +1892,9 @@ load_memop(const void *haddr, MemOp op) static inline uint64_t QEMU_ALWAYS_INLINE load_helper(CPUArchState *env, target_ulong addr, MemOpIdx oi, - uintptr_t retaddr, MemOp op, bool code_read, + uintptr_t retaddr, MemOp op, MMUAccessType access_type, FullLoadHelper *full_load) { - const size_t tlb_off = code_read ? - offsetof(CPUTLBEntry, addr_code) : offsetof(CPUTLBEntry, addr_read); - const MMUAccessType access_type = - code_read ? MMU_INST_FETCH : MMU_DATA_LOAD; const unsigned a_bits = get_alignment_bits(get_memop(oi)); const size_t size = memop_size(op); uintptr_t mmu_idx = get_mmuidx(oi); @@ -1955,18 +1914,18 @@ load_helper(CPUArchState *env, target_ulong addr, MemOpIdx oi, index = tlb_index(env, mmu_idx, addr); entry = tlb_entry(env, mmu_idx, addr); - tlb_addr = code_read ? entry->addr_code : entry->addr_read; + tlb_addr = tlb_read_idx(entry, access_type); /* If the TLB entry is for a different page, reload and try again. */ if (!tlb_hit(tlb_addr, addr)) { - if (!victim_tlb_hit(env, mmu_idx, index, tlb_off, + if (!victim_tlb_hit(env, mmu_idx, index, access_type, addr & TARGET_PAGE_MASK)) { tlb_fill(env_cpu(env), addr, size, access_type, mmu_idx, retaddr); index = tlb_index(env, mmu_idx, addr); entry = tlb_entry(env, mmu_idx, addr); } - tlb_addr = code_read ? entry->addr_code : entry->addr_read; + tlb_addr = tlb_read_idx(entry, access_type); tlb_addr &= ~TLB_INVALID_MASK; } @@ -2052,7 +2011,8 @@ static uint64_t full_ldub_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { validate_memop(oi, MO_UB); - return load_helper(env, addr, oi, retaddr, MO_UB, false, full_ldub_mmu); + return load_helper(env, addr, oi, retaddr, MO_UB, MMU_DATA_LOAD, + full_ldub_mmu); } tcg_target_ulong helper_ret_ldub_mmu(CPUArchState *env, target_ulong addr, @@ -2065,7 +2025,7 @@ static uint64_t full_le_lduw_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { validate_memop(oi, MO_LEUW); - return load_helper(env, addr, oi, retaddr, MO_LEUW, false, + return load_helper(env, addr, oi, retaddr, MO_LEUW, MMU_DATA_LOAD, full_le_lduw_mmu); } @@ -2079,7 +2039,7 @@ static uint64_t full_be_lduw_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { validate_memop(oi, MO_BEUW); - return load_helper(env, addr, oi, retaddr, MO_BEUW, false, + return load_helper(env, addr, oi, retaddr, MO_BEUW, MMU_DATA_LOAD, full_be_lduw_mmu); } @@ -2093,7 +2053,7 @@ static uint64_t full_le_ldul_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { validate_memop(oi, MO_LEUL); - return load_helper(env, addr, oi, retaddr, MO_LEUL, false, + return load_helper(env, addr, oi, retaddr, MO_LEUL, MMU_DATA_LOAD, full_le_ldul_mmu); } @@ -2107,7 +2067,7 @@ static uint64_t full_be_ldul_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { validate_memop(oi, MO_BEUL); - return load_helper(env, addr, oi, retaddr, MO_BEUL, false, + return load_helper(env, addr, oi, retaddr, MO_BEUL, MMU_DATA_LOAD, full_be_ldul_mmu); } @@ -2121,7 +2081,7 @@ uint64_t helper_le_ldq_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { 
validate_memop(oi, MO_LEUQ); - return load_helper(env, addr, oi, retaddr, MO_LEUQ, false, + return load_helper(env, addr, oi, retaddr, MO_LEUQ, MMU_DATA_LOAD, helper_le_ldq_mmu); } @@ -2129,7 +2089,7 @@ uint64_t helper_be_ldq_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { validate_memop(oi, MO_BEUQ); - return load_helper(env, addr, oi, retaddr, MO_BEUQ, false, + return load_helper(env, addr, oi, retaddr, MO_BEUQ, MMU_DATA_LOAD, helper_be_ldq_mmu); } @@ -2325,7 +2285,6 @@ store_helper_unaligned(CPUArchState *env, target_ulong addr, uint64_t val, uintptr_t retaddr, size_t size, uintptr_t mmu_idx, bool big_endian) { - const size_t tlb_off = offsetof(CPUTLBEntry, addr_write); uintptr_t index, index2; CPUTLBEntry *entry, *entry2; target_ulong page1, page2, tlb_addr, tlb_addr2; @@ -2347,7 +2306,7 @@ store_helper_unaligned(CPUArchState *env, target_ulong addr, uint64_t val, tlb_addr2 = tlb_addr_write(entry2); if (page1 != page2 && !tlb_hit_page(tlb_addr2, page2)) { - if (!victim_tlb_hit(env, mmu_idx, index2, tlb_off, page2)) { + if (!victim_tlb_hit(env, mmu_idx, index2, MMU_DATA_STORE, page2)) { tlb_fill(env_cpu(env), page2, size2, MMU_DATA_STORE, mmu_idx, retaddr); index2 = tlb_index(env, mmu_idx, page2); @@ -2400,7 +2359,6 @@ static inline void QEMU_ALWAYS_INLINE store_helper(CPUArchState *env, target_ulong addr, uint64_t val, MemOpIdx oi, uintptr_t retaddr, MemOp op) { - const size_t tlb_off = offsetof(CPUTLBEntry, addr_write); const unsigned a_bits = get_alignment_bits(get_memop(oi)); const size_t size = memop_size(op); uintptr_t mmu_idx = get_mmuidx(oi); @@ -2423,7 +2381,7 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val, /* If the TLB entry is for a different page, reload and try again. */ if (!tlb_hit(tlb_addr, addr)) { - if (!victim_tlb_hit(env, mmu_idx, index, tlb_off, + if (!victim_tlb_hit(env, mmu_idx, index, MMU_DATA_STORE, addr & TARGET_PAGE_MASK)) { tlb_fill(env_cpu(env), addr, size, MMU_DATA_STORE, mmu_idx, retaddr); @@ -2729,7 +2687,8 @@ void cpu_st16_le_mmu(CPUArchState *env, abi_ptr addr, Int128 val, static uint64_t full_ldub_code(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { - return load_helper(env, addr, oi, retaddr, MO_8, true, full_ldub_code); + return load_helper(env, addr, oi, retaddr, MO_8, + MMU_INST_FETCH, full_ldub_code); } uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr addr) @@ -2741,7 +2700,8 @@ uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr addr) static uint64_t full_lduw_code(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { - return load_helper(env, addr, oi, retaddr, MO_TEUW, true, full_lduw_code); + return load_helper(env, addr, oi, retaddr, MO_TEUW, + MMU_INST_FETCH, full_lduw_code); } uint32_t cpu_lduw_code(CPUArchState *env, abi_ptr addr) @@ -2753,7 +2713,8 @@ uint32_t cpu_lduw_code(CPUArchState *env, abi_ptr addr) static uint64_t full_ldl_code(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { - return load_helper(env, addr, oi, retaddr, MO_TEUL, true, full_ldl_code); + return load_helper(env, addr, oi, retaddr, MO_TEUL, + MMU_INST_FETCH, full_ldl_code); } uint32_t cpu_ldl_code(CPUArchState *env, abi_ptr addr) @@ -2765,7 +2726,8 @@ uint32_t cpu_ldl_code(CPUArchState *env, abi_ptr addr) static uint64_t full_ldq_code(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { - return load_helper(env, addr, oi, retaddr, MO_TEUQ, true, full_ldq_code); + return load_helper(env, addr, oi, retaddr, MO_TEUQ, + MMU_INST_FETCH, 
full_ldq_code); } uint64_t cpu_ldq_code(CPUArchState *env, abi_ptr addr)
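The layout trick behind tlb_read_idx can be shown in isolation.  The
standalone sketch below uses hypothetical names (AccessTypeSketch,
TLBEntrySketch, tlb_read_idx_sketch) rather than QEMU's types: the named
members and the index array alias the same storage, which is why the patch
pins the member order against the MMUAccessType values with build-time
asserts.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins for MMUAccessType and CPUTLBEntry. */
typedef enum { MMU_DATA_LOAD, MMU_DATA_STORE, MMU_INST_FETCH } AccessTypeSketch;

typedef union {
    struct {
        uint64_t addr_read;
        uint64_t addr_write;
        uint64_t addr_code;
        uint64_t addend;
    };
    uint64_t addr_idx[4];   /* aliases the members above; doubles as padding */
} TLBEntrySketch;

static uint64_t tlb_read_idx_sketch(const TLBEntrySketch *entry,
                                    AccessTypeSketch type)
{
    /* Same idea as the QEMU_BUILD_BUG_ON checks in the patch. */
    _Static_assert(offsetof(TLBEntrySketch, addr_write) ==
                   MMU_DATA_STORE * sizeof(uint64_t), "member order");
    return entry->addr_idx[type];
}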
From patchwork Wed May 3 07:06:03 2023
X-Patchwork-Submitter: Richard Henderson <richard.henderson@linaro.org>
X-Patchwork-Id: 678639
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
 qemu-riscv@nongnu.org, qemu-s390x@nongnu.org, Alex Bennée
Subject: [PATCH v4 04/57] accel/tcg: Reorg system mode load helpers
Date: Wed, 3 May 2023 08:06:03 +0100
Message-Id: <20230503070656.1746170-5-richard.henderson@linaro.org>
In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org>
References: <20230503070656.1746170-1-richard.henderson@linaro.org>

Instead of trying to unify all operations on uint64_t, pull out
mmu_lookup() to perform the basic tlb hit and resolution.
Create individual functions to handle access by size.

Reviewed-by: Alex Bennée
Signed-off-by: Richard Henderson
Reviewed-by: Peter Maydell
---
 accel/tcg/cputlb.c | 644 +++++++++++++++++++++++++++++----------------
 1 file changed, 423 insertions(+), 221 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 5051244c67..dd68514260 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1716,6 +1716,178 @@ bool tlb_plugin_lookup(CPUState *cpu, target_ulong addr, int mmu_idx,
 
 #endif
 
+/*
+ * Probe for a load/store operation.
+ * Return the host address and into @flags.
+ */
+
+typedef struct MMULookupPageData {
+    CPUTLBEntryFull *full;
+    void *haddr;
+    target_ulong addr;
+    int flags;
+    int size;
+} MMULookupPageData;
+
+typedef struct MMULookupLocals {
+    MMULookupPageData page[2];
+    MemOp memop;
+    int mmu_idx;
+} MMULookupLocals;
+
+/**
+ * mmu_lookup1: translate one page
+ * @env: cpu context
+ * @data: lookup parameters
+ * @mmu_idx: virtual address context
+ * @access_type: load/store/code
+ * @ra: return address into tcg generated code, or 0
+ *
+ * Resolve the translation for the one page at @data.addr, filling in
+ * the rest of @data with the results.  If the translation fails,
+ * tlb_fill will longjmp out.  Return true if the softmmu tlb for
+ * @mmu_idx may have resized.
+ */
+static bool mmu_lookup1(CPUArchState *env, MMULookupPageData *data,
+                        int mmu_idx, MMUAccessType access_type, uintptr_t ra)
+{
+    target_ulong addr = data->addr;
+    uintptr_t index = tlb_index(env, mmu_idx, addr);
+    CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
+    target_ulong tlb_addr = tlb_read_idx(entry, access_type);
+    bool maybe_resized = false;
+
+    /* If the TLB entry is for a different page, reload and try again.
*/ + if (!tlb_hit(tlb_addr, addr)) { + if (!victim_tlb_hit(env, mmu_idx, index, access_type, + addr & TARGET_PAGE_MASK)) { + tlb_fill(env_cpu(env), addr, data->size, access_type, mmu_idx, ra); + maybe_resized = true; + index = tlb_index(env, mmu_idx, addr); + entry = tlb_entry(env, mmu_idx, addr); + } + tlb_addr = tlb_read_idx(entry, access_type) & ~TLB_INVALID_MASK; + } + + data->flags = tlb_addr & TLB_FLAGS_MASK; + data->full = &env_tlb(env)->d[mmu_idx].fulltlb[index]; + /* Compute haddr speculatively; depending on flags it might be invalid. */ + data->haddr = (void *)((uintptr_t)addr + entry->addend); + + return maybe_resized; +} + +/** + * mmu_watch_or_dirty + * @env: cpu context + * @data: lookup parameters + * @access_type: load/store/code + * @ra: return address into tcg generated code, or 0 + * + * Trigger watchpoints for @data.addr:@data.size; + * record writes to protected clean pages. + */ +static void mmu_watch_or_dirty(CPUArchState *env, MMULookupPageData *data, + MMUAccessType access_type, uintptr_t ra) +{ + CPUTLBEntryFull *full = data->full; + target_ulong addr = data->addr; + int flags = data->flags; + int size = data->size; + + /* On watchpoint hit, this will longjmp out. */ + if (flags & TLB_WATCHPOINT) { + int wp = access_type == MMU_DATA_STORE ? BP_MEM_WRITE : BP_MEM_READ; + cpu_check_watchpoint(env_cpu(env), addr, size, full->attrs, wp, ra); + flags &= ~TLB_WATCHPOINT; + } + + if (flags & TLB_NOTDIRTY) { + notdirty_write(env_cpu(env), addr, size, full, ra); + flags &= ~TLB_NOTDIRTY; + } + data->flags = flags; +} + +/** + * mmu_lookup: translate page(s) + * @env: cpu context + * @addr: virtual address + * @oi: combined mmu_idx and MemOp + * @ra: return address into tcg generated code, or 0 + * @access_type: load/store/code + * @l: output result + * + * Resolve the translation for the page(s) beginning at @addr, for MemOp.size + * bytes. Return true if the lookup crosses a page boundary. + */ +static bool mmu_lookup(CPUArchState *env, target_ulong addr, MemOpIdx oi, + uintptr_t ra, MMUAccessType type, MMULookupLocals *l) +{ + unsigned a_bits; + bool crosspage; + int flags; + + l->memop = get_memop(oi); + l->mmu_idx = get_mmuidx(oi); + + tcg_debug_assert(l->mmu_idx < NB_MMU_MODES); + + /* Handle CPU specific unaligned behaviour */ + a_bits = get_alignment_bits(l->memop); + if (addr & ((1 << a_bits) - 1)) { + cpu_unaligned_access(env_cpu(env), addr, type, l->mmu_idx, ra); + } + + l->page[0].addr = addr; + l->page[0].size = memop_size(l->memop); + l->page[1].addr = (addr + l->page[0].size - 1) & TARGET_PAGE_MASK; + l->page[1].size = 0; + crosspage = (addr ^ l->page[1].addr) & TARGET_PAGE_MASK; + + if (likely(!crosspage)) { + mmu_lookup1(env, &l->page[0], l->mmu_idx, type, ra); + + flags = l->page[0].flags; + if (unlikely(flags & (TLB_WATCHPOINT | TLB_NOTDIRTY))) { + mmu_watch_or_dirty(env, &l->page[0], type, ra); + } + if (unlikely(flags & TLB_BSWAP)) { + l->memop ^= MO_BSWAP; + } + } else { + /* Finish compute of page crossing. */ + int size1 = l->page[1].addr - addr; + l->page[1].size = l->page[0].size - size1; + l->page[0].size = size1; + + /* + * Lookup both pages, recognizing exceptions from either. If the + * second lookup potentially resized, refresh first CPUTLBEntryFull. 
+ */ + mmu_lookup1(env, &l->page[0], l->mmu_idx, type, ra); + if (mmu_lookup1(env, &l->page[1], l->mmu_idx, type, ra)) { + uintptr_t index = tlb_index(env, l->mmu_idx, addr); + l->page[0].full = &env_tlb(env)->d[l->mmu_idx].fulltlb[index]; + } + + flags = l->page[0].flags | l->page[1].flags; + if (unlikely(flags & (TLB_WATCHPOINT | TLB_NOTDIRTY))) { + mmu_watch_or_dirty(env, &l->page[0], type, ra); + mmu_watch_or_dirty(env, &l->page[1], type, ra); + } + + /* + * Since target/sparc is the only user of TLB_BSWAP, and all + * Sparc accesses are aligned, any treatment across two pages + * would be arbitrary. Refuse it until there's a use. + */ + tcg_debug_assert((flags & TLB_BSWAP) == 0); + } + + return crosspage; +} + /* * Probe for an atomic operation. Do not allow unaligned operations, * or io operations to proceed. Return the host address. @@ -1890,113 +2062,6 @@ load_memop(const void *haddr, MemOp op) } } -static inline uint64_t QEMU_ALWAYS_INLINE -load_helper(CPUArchState *env, target_ulong addr, MemOpIdx oi, - uintptr_t retaddr, MemOp op, MMUAccessType access_type, - FullLoadHelper *full_load) -{ - const unsigned a_bits = get_alignment_bits(get_memop(oi)); - const size_t size = memop_size(op); - uintptr_t mmu_idx = get_mmuidx(oi); - uintptr_t index; - CPUTLBEntry *entry; - target_ulong tlb_addr; - void *haddr; - uint64_t res; - - tcg_debug_assert(mmu_idx < NB_MMU_MODES); - - /* Handle CPU specific unaligned behaviour */ - if (addr & ((1 << a_bits) - 1)) { - cpu_unaligned_access(env_cpu(env), addr, access_type, - mmu_idx, retaddr); - } - - index = tlb_index(env, mmu_idx, addr); - entry = tlb_entry(env, mmu_idx, addr); - tlb_addr = tlb_read_idx(entry, access_type); - - /* If the TLB entry is for a different page, reload and try again. */ - if (!tlb_hit(tlb_addr, addr)) { - if (!victim_tlb_hit(env, mmu_idx, index, access_type, - addr & TARGET_PAGE_MASK)) { - tlb_fill(env_cpu(env), addr, size, - access_type, mmu_idx, retaddr); - index = tlb_index(env, mmu_idx, addr); - entry = tlb_entry(env, mmu_idx, addr); - } - tlb_addr = tlb_read_idx(entry, access_type); - tlb_addr &= ~TLB_INVALID_MASK; - } - - /* Handle anything that isn't just a straight memory access. */ - if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) { - CPUTLBEntryFull *full; - bool need_swap; - - /* For anything that is unaligned, recurse through full_load. */ - if ((addr & (size - 1)) != 0) { - goto do_unaligned_access; - } - - full = &env_tlb(env)->d[mmu_idx].fulltlb[index]; - - /* Handle watchpoints. */ - if (unlikely(tlb_addr & TLB_WATCHPOINT)) { - /* On watchpoint hit, this will longjmp out. */ - cpu_check_watchpoint(env_cpu(env), addr, size, - full->attrs, BP_MEM_READ, retaddr); - } - - need_swap = size > 1 && (tlb_addr & TLB_BSWAP); - - /* Handle I/O access. */ - if (likely(tlb_addr & TLB_MMIO)) { - return io_readx(env, full, mmu_idx, addr, retaddr, - access_type, op ^ (need_swap * MO_BSWAP)); - } - - haddr = (void *)((uintptr_t)addr + entry->addend); - - /* - * Keep these two load_memop separate to ensure that the compiler - * is able to fold the entire function to a single instruction. - * There is a build-time assert inside to remind you of this. ;-) - */ - if (unlikely(need_swap)) { - return load_memop(haddr, op ^ MO_BSWAP); - } - return load_memop(haddr, op); - } - - /* Handle slow unaligned access (it spans two pages or IO). 
*/ - if (size > 1 - && unlikely((addr & ~TARGET_PAGE_MASK) + size - 1 - >= TARGET_PAGE_SIZE)) { - target_ulong addr1, addr2; - uint64_t r1, r2; - unsigned shift; - do_unaligned_access: - addr1 = addr & ~((target_ulong)size - 1); - addr2 = addr1 + size; - r1 = full_load(env, addr1, oi, retaddr); - r2 = full_load(env, addr2, oi, retaddr); - shift = (addr & (size - 1)) * 8; - - if (memop_big_endian(op)) { - /* Big-endian combine. */ - res = (r1 << shift) | (r2 >> ((size * 8) - shift)); - } else { - /* Little-endian combine. */ - res = (r1 >> shift) | (r2 << ((size * 8) - shift)); - } - return res & MAKE_64BIT_MASK(0, size * 8); - } - - haddr = (void *)((uintptr_t)addr + entry->addend); - return load_memop(haddr, op); -} - /* * For the benefit of TCG generated code, we want to avoid the * complication of ABI-specific return type promotion and always @@ -2007,90 +2072,250 @@ load_helper(CPUArchState *env, target_ulong addr, MemOpIdx oi, * We don't bother with this widened value for SOFTMMU_CODE_ACCESS. */ -static uint64_t full_ldub_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) +/** + * do_ld_mmio_beN: + * @env: cpu context + * @p: translation parameters + * @ret_be: accumulated data + * @mmu_idx: virtual address context + * @ra: return address into tcg generated code, or 0 + * + * Load @p->size bytes from @p->addr, which is memory-mapped i/o. + * The bytes are concatenated with in big-endian order with @ret_be. + */ +static uint64_t do_ld_mmio_beN(CPUArchState *env, MMULookupPageData *p, + uint64_t ret_be, int mmu_idx, + MMUAccessType type, uintptr_t ra) { - validate_memop(oi, MO_UB); - return load_helper(env, addr, oi, retaddr, MO_UB, MMU_DATA_LOAD, - full_ldub_mmu); + CPUTLBEntryFull *full = p->full; + target_ulong addr = p->addr; + int i, size = p->size; + + QEMU_IOTHREAD_LOCK_GUARD(); + for (i = 0; i < size; i++) { + uint8_t x = io_readx(env, full, mmu_idx, addr + i, ra, type, MO_UB); + ret_be = (ret_be << 8) | x; + } + return ret_be; +} + +/** + * do_ld_bytes_beN + * @p: translation parameters + * @ret_be: accumulated data + * + * Load @p->size bytes from @p->haddr, which is RAM. + * The bytes to concatenated in big-endian order with @ret_be. + */ +static uint64_t do_ld_bytes_beN(MMULookupPageData *p, uint64_t ret_be) +{ + uint8_t *haddr = p->haddr; + int i, size = p->size; + + for (i = 0; i < size; i++) { + ret_be = (ret_be << 8) | haddr[i]; + } + return ret_be; +} + +/* + * Wrapper for the above. + */ +static uint64_t do_ld_beN(CPUArchState *env, MMULookupPageData *p, + uint64_t ret_be, int mmu_idx, + MMUAccessType type, uintptr_t ra) +{ + if (unlikely(p->flags & TLB_MMIO)) { + return do_ld_mmio_beN(env, p, ret_be, mmu_idx, type, ra); + } else { + return do_ld_bytes_beN(p, ret_be); + } +} + +static uint8_t do_ld_1(CPUArchState *env, MMULookupPageData *p, int mmu_idx, + MMUAccessType type, uintptr_t ra) +{ + if (unlikely(p->flags & TLB_MMIO)) { + return io_readx(env, p->full, mmu_idx, p->addr, ra, type, MO_UB); + } else { + return *(uint8_t *)p->haddr; + } +} + +static uint16_t do_ld_2(CPUArchState *env, MMULookupPageData *p, int mmu_idx, + MMUAccessType type, MemOp memop, uintptr_t ra) +{ + uint64_t ret; + + if (unlikely(p->flags & TLB_MMIO)) { + return io_readx(env, p->full, mmu_idx, p->addr, ra, type, memop); + } + + /* Perform the load host endian, then swap if necessary. 
*/ + ret = load_memop(p->haddr, MO_UW); + if (memop & MO_BSWAP) { + ret = bswap16(ret); + } + return ret; +} + +static uint32_t do_ld_4(CPUArchState *env, MMULookupPageData *p, int mmu_idx, + MMUAccessType type, MemOp memop, uintptr_t ra) +{ + uint32_t ret; + + if (unlikely(p->flags & TLB_MMIO)) { + return io_readx(env, p->full, mmu_idx, p->addr, ra, type, memop); + } + + /* Perform the load host endian. */ + ret = load_memop(p->haddr, MO_UL); + if (memop & MO_BSWAP) { + ret = bswap32(ret); + } + return ret; +} + +static uint64_t do_ld_8(CPUArchState *env, MMULookupPageData *p, int mmu_idx, + MMUAccessType type, MemOp memop, uintptr_t ra) +{ + uint64_t ret; + + if (unlikely(p->flags & TLB_MMIO)) { + return io_readx(env, p->full, mmu_idx, p->addr, ra, type, memop); + } + + /* Perform the load host endian. */ + ret = load_memop(p->haddr, MO_UQ); + if (memop & MO_BSWAP) { + ret = bswap64(ret); + } + return ret; +} + +static uint8_t do_ld1_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, + uintptr_t ra, MMUAccessType access_type) +{ + MMULookupLocals l; + bool crosspage; + + crosspage = mmu_lookup(env, addr, oi, ra, access_type, &l); + tcg_debug_assert(!crosspage); + + return do_ld_1(env, &l.page[0], l.mmu_idx, access_type, ra); } tcg_target_ulong helper_ret_ldub_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { - return full_ldub_mmu(env, addr, oi, retaddr); + validate_memop(oi, MO_UB); + return do_ld1_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); } -static uint64_t full_le_lduw_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) +static uint16_t do_ld2_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, + uintptr_t ra, MMUAccessType access_type) { - validate_memop(oi, MO_LEUW); - return load_helper(env, addr, oi, retaddr, MO_LEUW, MMU_DATA_LOAD, - full_le_lduw_mmu); + MMULookupLocals l; + bool crosspage; + uint16_t ret; + uint8_t a, b; + + crosspage = mmu_lookup(env, addr, oi, ra, access_type, &l); + if (likely(!crosspage)) { + return do_ld_2(env, &l.page[0], l.mmu_idx, access_type, l.memop, ra); + } + + a = do_ld_1(env, &l.page[0], l.mmu_idx, access_type, ra); + b = do_ld_1(env, &l.page[1], l.mmu_idx, access_type, ra); + + if ((l.memop & MO_BSWAP) == MO_LE) { + ret = a | (b << 8); + } else { + ret = b | (a << 8); + } + return ret; } tcg_target_ulong helper_le_lduw_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { - return full_le_lduw_mmu(env, addr, oi, retaddr); -} - -static uint64_t full_be_lduw_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) -{ - validate_memop(oi, MO_BEUW); - return load_helper(env, addr, oi, retaddr, MO_BEUW, MMU_DATA_LOAD, - full_be_lduw_mmu); + validate_memop(oi, MO_LEUW); + return do_ld2_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); } tcg_target_ulong helper_be_lduw_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { - return full_be_lduw_mmu(env, addr, oi, retaddr); + validate_memop(oi, MO_BEUW); + return do_ld2_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); } -static uint64_t full_le_ldul_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) +static uint32_t do_ld4_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, + uintptr_t ra, MMUAccessType access_type) { - validate_memop(oi, MO_LEUL); - return load_helper(env, addr, oi, retaddr, MO_LEUL, MMU_DATA_LOAD, - full_le_ldul_mmu); + MMULookupLocals l; + bool crosspage; + uint32_t ret; + + crosspage = mmu_lookup(env, addr, oi, ra, access_type, &l); + if 
(likely(!crosspage)) { + return do_ld_4(env, &l.page[0], l.mmu_idx, access_type, l.memop, ra); + } + + ret = do_ld_beN(env, &l.page[0], 0, l.mmu_idx, access_type, ra); + ret = do_ld_beN(env, &l.page[1], ret, l.mmu_idx, access_type, ra); + if ((l.memop & MO_BSWAP) == MO_LE) { + ret = bswap32(ret); + } + return ret; } tcg_target_ulong helper_le_ldul_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { - return full_le_ldul_mmu(env, addr, oi, retaddr); -} - -static uint64_t full_be_ldul_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) -{ - validate_memop(oi, MO_BEUL); - return load_helper(env, addr, oi, retaddr, MO_BEUL, MMU_DATA_LOAD, - full_be_ldul_mmu); + validate_memop(oi, MO_LEUL); + return do_ld4_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); } tcg_target_ulong helper_be_ldul_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { - return full_be_ldul_mmu(env, addr, oi, retaddr); + validate_memop(oi, MO_BEUL); + return do_ld4_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); +} + +static uint64_t do_ld8_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, + uintptr_t ra, MMUAccessType access_type) +{ + MMULookupLocals l; + bool crosspage; + uint64_t ret; + + crosspage = mmu_lookup(env, addr, oi, ra, access_type, &l); + if (likely(!crosspage)) { + return do_ld_8(env, &l.page[0], l.mmu_idx, access_type, l.memop, ra); + } + + ret = do_ld_beN(env, &l.page[0], 0, l.mmu_idx, access_type, ra); + ret = do_ld_beN(env, &l.page[1], ret, l.mmu_idx, access_type, ra); + if ((l.memop & MO_BSWAP) == MO_LE) { + ret = bswap64(ret); + } + return ret; } uint64_t helper_le_ldq_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { validate_memop(oi, MO_LEUQ); - return load_helper(env, addr, oi, retaddr, MO_LEUQ, MMU_DATA_LOAD, - helper_le_ldq_mmu); + return do_ld8_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); } uint64_t helper_be_ldq_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { validate_memop(oi, MO_BEUQ); - return load_helper(env, addr, oi, retaddr, MO_BEUQ, MMU_DATA_LOAD, - helper_be_ldq_mmu); + return do_ld8_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); } /* @@ -2133,56 +2358,85 @@ tcg_target_ulong helper_be_ldsl_mmu(CPUArchState *env, target_ulong addr, * Load helpers for cpu_ldst.h. 
*/ -static inline uint64_t cpu_load_helper(CPUArchState *env, abi_ptr addr, - MemOpIdx oi, uintptr_t retaddr, - FullLoadHelper *full_load) +static void plugin_load_cb(CPUArchState *env, abi_ptr addr, MemOpIdx oi) { - uint64_t ret; - - ret = full_load(env, addr, oi, retaddr); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); - return ret; } uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - return cpu_load_helper(env, addr, oi, ra, full_ldub_mmu); + uint8_t ret; + + validate_memop(oi, MO_UB); + ret = do_ld1_mmu(env, addr, oi, ra, MMU_DATA_LOAD); + plugin_load_cb(env, addr, oi); + return ret; } uint16_t cpu_ldw_be_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - return cpu_load_helper(env, addr, oi, ra, full_be_lduw_mmu); + uint16_t ret; + + validate_memop(oi, MO_BEUW); + ret = do_ld2_mmu(env, addr, oi, ra, MMU_DATA_LOAD); + plugin_load_cb(env, addr, oi); + return ret; } uint32_t cpu_ldl_be_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - return cpu_load_helper(env, addr, oi, ra, full_be_ldul_mmu); + uint32_t ret; + + validate_memop(oi, MO_BEUL); + ret = do_ld4_mmu(env, addr, oi, ra, MMU_DATA_LOAD); + plugin_load_cb(env, addr, oi); + return ret; } uint64_t cpu_ldq_be_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - return cpu_load_helper(env, addr, oi, ra, helper_be_ldq_mmu); + uint64_t ret; + + validate_memop(oi, MO_BEUQ); + ret = do_ld8_mmu(env, addr, oi, ra, MMU_DATA_LOAD); + plugin_load_cb(env, addr, oi); + return ret; } uint16_t cpu_ldw_le_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - return cpu_load_helper(env, addr, oi, ra, full_le_lduw_mmu); + uint16_t ret; + + validate_memop(oi, MO_LEUW); + ret = do_ld2_mmu(env, addr, oi, ra, MMU_DATA_LOAD); + plugin_load_cb(env, addr, oi); + return ret; } uint32_t cpu_ldl_le_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - return cpu_load_helper(env, addr, oi, ra, full_le_ldul_mmu); + uint32_t ret; + + validate_memop(oi, MO_LEUL); + ret = do_ld4_mmu(env, addr, oi, ra, MMU_DATA_LOAD); + plugin_load_cb(env, addr, oi); + return ret; } uint64_t cpu_ldq_le_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - return cpu_load_helper(env, addr, oi, ra, helper_le_ldq_mmu); + uint64_t ret; + + validate_memop(oi, MO_LEUQ); + ret = do_ld8_mmu(env, addr, oi, ra, MMU_DATA_LOAD); + plugin_load_cb(env, addr, oi); + return ret; } Int128 cpu_ld16_be_mmu(CPUArchState *env, abi_ptr addr, @@ -2684,102 +2938,50 @@ void cpu_st16_le_mmu(CPUArchState *env, abi_ptr addr, Int128 val, /* Code access functions. 
*/ -static uint64_t full_ldub_code(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) -{ - return load_helper(env, addr, oi, retaddr, MO_8, - MMU_INST_FETCH, full_ldub_code); -} - uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr addr) { MemOpIdx oi = make_memop_idx(MO_UB, cpu_mmu_index(env, true)); - return full_ldub_code(env, addr, oi, 0); -} - -static uint64_t full_lduw_code(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) -{ - return load_helper(env, addr, oi, retaddr, MO_TEUW, - MMU_INST_FETCH, full_lduw_code); + return do_ld1_mmu(env, addr, oi, 0, MMU_INST_FETCH); } uint32_t cpu_lduw_code(CPUArchState *env, abi_ptr addr) { MemOpIdx oi = make_memop_idx(MO_TEUW, cpu_mmu_index(env, true)); - return full_lduw_code(env, addr, oi, 0); -} - -static uint64_t full_ldl_code(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) -{ - return load_helper(env, addr, oi, retaddr, MO_TEUL, - MMU_INST_FETCH, full_ldl_code); + return do_ld2_mmu(env, addr, oi, 0, MMU_INST_FETCH); } uint32_t cpu_ldl_code(CPUArchState *env, abi_ptr addr) { MemOpIdx oi = make_memop_idx(MO_TEUL, cpu_mmu_index(env, true)); - return full_ldl_code(env, addr, oi, 0); -} - -static uint64_t full_ldq_code(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) -{ - return load_helper(env, addr, oi, retaddr, MO_TEUQ, - MMU_INST_FETCH, full_ldq_code); + return do_ld4_mmu(env, addr, oi, 0, MMU_INST_FETCH); } uint64_t cpu_ldq_code(CPUArchState *env, abi_ptr addr) { MemOpIdx oi = make_memop_idx(MO_TEUQ, cpu_mmu_index(env, true)); - return full_ldq_code(env, addr, oi, 0); + return do_ld8_mmu(env, addr, oi, 0, MMU_INST_FETCH); } uint8_t cpu_ldb_code_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t retaddr) { - return full_ldub_code(env, addr, oi, retaddr); + return do_ld1_mmu(env, addr, oi, retaddr, MMU_INST_FETCH); } uint16_t cpu_ldw_code_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t retaddr) { - MemOp mop = get_memop(oi); - int idx = get_mmuidx(oi); - uint16_t ret; - - ret = full_lduw_code(env, addr, make_memop_idx(MO_TEUW, idx), retaddr); - if ((mop & MO_BSWAP) != MO_TE) { - ret = bswap16(ret); - } - return ret; + return do_ld2_mmu(env, addr, oi, retaddr, MMU_INST_FETCH); } uint32_t cpu_ldl_code_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t retaddr) { - MemOp mop = get_memop(oi); - int idx = get_mmuidx(oi); - uint32_t ret; - - ret = full_ldl_code(env, addr, make_memop_idx(MO_TEUL, idx), retaddr); - if ((mop & MO_BSWAP) != MO_TE) { - ret = bswap32(ret); - } - return ret; + return do_ld4_mmu(env, addr, oi, retaddr, MMU_INST_FETCH); } uint64_t cpu_ldq_code_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t retaddr) { - MemOp mop = get_memop(oi); - int idx = get_mmuidx(oi); - uint64_t ret; - - ret = full_ldq_code(env, addr, make_memop_idx(MO_TEUQ, idx), retaddr); - if ((mop & MO_BSWAP) != MO_TE) { - ret = bswap64(ret); - } - return ret; + return do_ld8_mmu(env, addr, oi, retaddr, MMU_INST_FETCH); } From patchwork Wed May 3 07:06:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678727 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp913574wrs; Wed, 3 May 2023 00:39:04 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ40b3k9EY2oRER2sSt0yLfqtLDD4t5tYvvDpGUXGJdENxhl0Ihv9GW8c2aQXa6+VjKnuaV9 X-Received: by 2002:a05:622a:612:b0:3e3:86cb:1b9b with SMTP id 
z18-20020a05622a061200b003e386cb1b9bmr29652436qta.13.1683099544285; Wed, 03 May 2023 00:39:04 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683099544; cv=none; d=google.com; s=arc-20160816; b=ZMBO4hsMJEgQZA0r7J9mjHizcuXd4zZYjcbyq4D9nH3+FWW/SZ09bFGCXO+ZJnQHDN BGG8942PEtDjLGlYDXj+gtuJR+zFh2YqULQfrjiWbxC0TZ8yXKU8oYWaDzepXWqdle8t 85mbU1SxsOjDw0LTrfR5YeCmIiwUZ86b3wOUGjdc+uNqDVAuEpWEgQtH1tvW+QcE4Ldj TJ/GCMkb+qheMjLoyX7IeeWGWT37vavBDVJMxCqByuAy+eMJx4OQsuuAzNqO7AGzrftI LpydVEFKyZOIcIxjdOjbG6U6o2vzkGU7FiK9KU5T1WVGVrxPRY87QtkR99MHhnNmoxUW 1FVg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=Qm6MoMk9ybdgfvVzjFlg+kTYG6DqCqNHtfSUbvJxM98=; b=cW4Ngy2wBv8ur++gvAE4RMBlhzURGAQU/L4qX8yuOV4fB4QmC0GoIMulFJUGtX7BOb dF4QFtIeHHjHW1tRXKwliWv4pb7lE7F0z/5ZZiOAq36MNxPwA8/NI2A2X7D1wePZXX7v mXn9EjzS0huPeQzWHdCmsio6dv5FcY9Lfw5qWz1UYZIrijfOdjEYC+TBK0IBjN1GJ6aJ PJLKa9leT0qz2k4AZLusMyQwqBkZyn4cGT4qqaxun6xUrxObyPLr/uF/oQ+7hXDaSQHX ZgdDnH5U+IxreZTN+eBP8VJwVO56rcBXsdXfRie0vCbMIacBruSvxLvPA7vKmA1J31mI m93Q== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=UO8Ew9fN; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. [209.51.188.17]) by mx.google.com with ESMTPS id 20-20020a370b14000000b0074d8ca56e08si18091056qkl.477.2023.05.03.00.39.04 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Wed, 03 May 2023 00:39:04 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=UO8Ew9fN; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1pu6aP-0006OJ-G7; Wed, 03 May 2023 03:07:37 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1pu6a1-00056j-8H for qemu-devel@nongnu.org; Wed, 03 May 2023 03:07:15 -0400 Received: from mail-wm1-x333.google.com ([2a00:1450:4864:20::333]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1pu6Zr-0005ac-Fu for qemu-devel@nongnu.org; Wed, 03 May 2023 03:07:11 -0400 Received: by mail-wm1-x333.google.com with SMTP id 5b1f17b1804b1-3f1e2555b5aso31678215e9.0 for ; Wed, 03 May 2023 00:07:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1683097622; x=1685689622; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Qm6MoMk9ybdgfvVzjFlg+kTYG6DqCqNHtfSUbvJxM98=; 
b=UO8Ew9fNxdwEiELKQEVLpVBMpnShCvKozwOBysMIkVYCsO+vsYL+UX7dIzMzyKvjBa HBE3cYJMPGBDsYioECZXl4WGyP2Og1/1Zy45HsAIL/tLn3bhmStW77av3PaE1972K+Ld NNrRWIFvBDaaQHAxG5hVW2ekksA/5OSLCD7jDDuVDO/2mz5oL8sWfuQmIdqvC42SIEGs BGl0P6+97Bknmnkmnh37j3dja/6XmRPXxhshJYXTuDSj9CXF35w/lBXvciEPGNe0we4x GgcrTYW8QEUMxCxZZAkC+HQmBh5QQ9SyVp+w4tVv8BUO/Fg5liJjIJAeg38lHPjG2Lyt 7n9Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1683097622; x=1685689622; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Qm6MoMk9ybdgfvVzjFlg+kTYG6DqCqNHtfSUbvJxM98=; b=HPaaTdyoATd4ic706z+1e9yeAbHcKEVL1YyvAI5Tlev+cQah49PZqKb8+O8sOTx/6e kSpgbSNmKO7qE2wa5QG8TrWeqRjVyZnVSRRz/sshcrzSLY1xmHNYAUW0iy1PaSCKhDo2 NBLv7Gi6OCQbXqDBdt7M5KynQfjGCMXp91OTJSo01MxJYqgUIvUXj7ccNEY4HFeKYSUb VPTKpvfzcPW0PFLoSCbdbIxRFi6eC9nAtt0NSxdhWQ9QTYKWj3IhtPESiU2qctH+ZkLl BTIwKLot52fy15BATya+pFi3hicvR4JkgNyF3yCw7kbpS+JKi4tUUblTXn4/a3c0Jdeu GBXg== X-Gm-Message-State: AC+VfDz23F7EuemZMCexq1Jcez8E2GsqHxva3fHP8aU7bsjHZRRjvS9w DMr8CjtQV3X9XHy4m9yrc/2L7GAND05ozPp7Q3fIxg== X-Received: by 2002:a1c:7c19:0:b0:3f1:8c5f:dfc5 with SMTP id x25-20020a1c7c19000000b003f18c5fdfc5mr14421084wmc.39.1683097621715; Wed, 03 May 2023 00:07:01 -0700 (PDT) Received: from stoup.Home ([2a02:c7c:74db:8d00:c01d:9d74:b630:9087]) by smtp.gmail.com with ESMTPSA id q3-20020a1cf303000000b003f3157988f8sm921184wmq.26.2023.05.03.00.07.01 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 03 May 2023 00:07:01 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org, qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 05/57] accel/tcg: Reorg system mode store helpers Date: Wed, 3 May 2023 08:06:04 +0100 Message-Id: <20230503070656.1746170-6-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::333; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x333.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Instead of trying to unify all operations on uint64_t, use mmu_lookup() to perform the basic tlb hit and resolution. Create individual functions to handle access by size. 
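As an illustration of that per-size split, a minimal standalone sketch (not part of this patch; all names below are illustrative, and the real helpers operate on TLB pages rather than plain arrays) shows the cross-page path of a 2-byte store reduced to choosing a byte order and issuing two 1-byte stores, mirroring the MO_BSWAP test in do_st2_mmu() below:

/*
 * Standalone sketch of the cross-page 2-byte store split.
 * Build with: cc -o st2 st2.c && ./st2
 */
#include <stdint.h>
#include <stdio.h>

static uint8_t page0[1], page1[1];   /* stand-ins for the two pages */

static void store_byte(int page, uint8_t val)
{
    (page ? page1 : page0)[0] = val;
}

static void store2_crosspage(uint16_t val, int big_endian)
{
    uint8_t a, b;

    /* Pick byte order: 'a' always goes to the first page. */
    if (!big_endian) {
        a = val;            /* little-endian: low byte first */
        b = val >> 8;
    } else {
        b = val;
        a = val >> 8;       /* big-endian: high byte first */
    }
    store_byte(0, a);
    store_byte(1, b);
}

int main(void)
{
    store2_crosspage(0x1234, 1);
    printf("big-endian split: %02x %02x\n", page0[0], page1[0]);
    return 0;
}

The same pattern scales to the wider accesses: do_st4_mmu() and do_st8_mmu() below swap the value to little-endian once and then store the leading bytes to the first page and the remainder to the second via do_st_leN().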
Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- accel/tcg/cputlb.c | 408 +++++++++++++++++++++------------------------ 1 file changed, 193 insertions(+), 215 deletions(-) diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index dd68514260..f52c7e6da0 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -2531,322 +2531,300 @@ store_memop(void *haddr, uint64_t val, MemOp op) } } -static void full_stb_mmu(CPUArchState *env, target_ulong addr, uint64_t val, - MemOpIdx oi, uintptr_t retaddr); - -static void __attribute__((noinline)) -store_helper_unaligned(CPUArchState *env, target_ulong addr, uint64_t val, - uintptr_t retaddr, size_t size, uintptr_t mmu_idx, - bool big_endian) +/** + * do_st_mmio_leN: + * @env: cpu context + * @p: translation parameters + * @val_le: data to store + * @mmu_idx: virtual address context + * @ra: return address into tcg generated code, or 0 + * + * Store @p->size bytes at @p->addr, which is memory-mapped i/o. + * The bytes to store are extracted in little-endian order from @val_le; + * return the bytes of @val_le beyond @p->size that have not been stored. + */ +static uint64_t do_st_mmio_leN(CPUArchState *env, MMULookupPageData *p, + uint64_t val_le, int mmu_idx, uintptr_t ra) { - uintptr_t index, index2; - CPUTLBEntry *entry, *entry2; - target_ulong page1, page2, tlb_addr, tlb_addr2; - MemOpIdx oi; - size_t size2; - int i; + CPUTLBEntryFull *full = p->full; + target_ulong addr = p->addr; + int i, size = p->size; - /* - * Ensure the second page is in the TLB. Note that the first page - * is already guaranteed to be filled, and that the second page - * cannot evict the first. An exception to this rule is PAGE_WRITE_INV - * handling: the first page could have evicted itself. - */ - page1 = addr & TARGET_PAGE_MASK; - page2 = (addr + size) & TARGET_PAGE_MASK; - size2 = (addr + size) & ~TARGET_PAGE_MASK; - index2 = tlb_index(env, mmu_idx, page2); - entry2 = tlb_entry(env, mmu_idx, page2); - - tlb_addr2 = tlb_addr_write(entry2); - if (page1 != page2 && !tlb_hit_page(tlb_addr2, page2)) { - if (!victim_tlb_hit(env, mmu_idx, index2, MMU_DATA_STORE, page2)) { - tlb_fill(env_cpu(env), page2, size2, MMU_DATA_STORE, - mmu_idx, retaddr); - index2 = tlb_index(env, mmu_idx, page2); - entry2 = tlb_entry(env, mmu_idx, page2); - } - tlb_addr2 = tlb_addr_write(entry2); + QEMU_IOTHREAD_LOCK_GUARD(); + for (i = 0; i < size; i++, val_le >>= 8) { + io_writex(env, full, mmu_idx, val_le, addr + i, ra, MO_UB); } + return val_le; +} - index = tlb_index(env, mmu_idx, addr); - entry = tlb_entry(env, mmu_idx, addr); - tlb_addr = tlb_addr_write(entry); +/** + * do_st_bytes_leN: + * @p: translation parameters + * @val_le: data to store + * + * Store @p->size bytes at @p->haddr, which is RAM. + * The bytes to store are extracted in little-endian order from @val_le; + * return the bytes of @val_le beyond @p->size that have not been stored. + */ +static uint64_t do_st_bytes_leN(MMULookupPageData *p, uint64_t val_le) +{ + uint8_t *haddr = p->haddr; + int i, size = p->size; - /* - * Handle watchpoints. Since this may trap, all checks - * must happen before any store. 
- */ - if (unlikely(tlb_addr & TLB_WATCHPOINT)) { - cpu_check_watchpoint(env_cpu(env), addr, size - size2, - env_tlb(env)->d[mmu_idx].fulltlb[index].attrs, - BP_MEM_WRITE, retaddr); - } - if (unlikely(tlb_addr2 & TLB_WATCHPOINT)) { - cpu_check_watchpoint(env_cpu(env), page2, size2, - env_tlb(env)->d[mmu_idx].fulltlb[index2].attrs, - BP_MEM_WRITE, retaddr); + for (i = 0; i < size; i++, val_le >>= 8) { + haddr[i] = val_le; } + return val_le; +} - /* - * XXX: not efficient, but simple. - * This loop must go in the forward direction to avoid issues - * with self-modifying code in Windows 64-bit. - */ - oi = make_memop_idx(MO_UB, mmu_idx); - if (big_endian) { - for (i = 0; i < size; ++i) { - /* Big-endian extract. */ - uint8_t val8 = val >> (((size - 1) * 8) - (i * 8)); - full_stb_mmu(env, addr + i, val8, oi, retaddr); - } +/* + * Wrapper for the above. + */ +static uint64_t do_st_leN(CPUArchState *env, MMULookupPageData *p, + uint64_t val_le, int mmu_idx, uintptr_t ra) +{ + if (unlikely(p->flags & TLB_MMIO)) { + return do_st_mmio_leN(env, p, val_le, mmu_idx, ra); + } else if (unlikely(p->flags & TLB_DISCARD_WRITE)) { + return val_le >> (p->size * 8); } else { - for (i = 0; i < size; ++i) { - /* Little-endian extract. */ - uint8_t val8 = val >> (i * 8); - full_stb_mmu(env, addr + i, val8, oi, retaddr); - } + return do_st_bytes_leN(p, val_le); } } -static inline void QEMU_ALWAYS_INLINE -store_helper(CPUArchState *env, target_ulong addr, uint64_t val, - MemOpIdx oi, uintptr_t retaddr, MemOp op) +static void do_st_1(CPUArchState *env, MMULookupPageData *p, uint8_t val, + int mmu_idx, uintptr_t ra) { - const unsigned a_bits = get_alignment_bits(get_memop(oi)); - const size_t size = memop_size(op); - uintptr_t mmu_idx = get_mmuidx(oi); - uintptr_t index; - CPUTLBEntry *entry; - target_ulong tlb_addr; - void *haddr; - - tcg_debug_assert(mmu_idx < NB_MMU_MODES); - - /* Handle CPU specific unaligned behaviour */ - if (addr & ((1 << a_bits) - 1)) { - cpu_unaligned_access(env_cpu(env), addr, MMU_DATA_STORE, - mmu_idx, retaddr); + if (unlikely(p->flags & TLB_MMIO)) { + io_writex(env, p->full, mmu_idx, val, p->addr, ra, MO_UB); + } else if (unlikely(p->flags & TLB_DISCARD_WRITE)) { + /* nothing */ + } else { + *(uint8_t *)p->haddr = val; } - - index = tlb_index(env, mmu_idx, addr); - entry = tlb_entry(env, mmu_idx, addr); - tlb_addr = tlb_addr_write(entry); - - /* If the TLB entry is for a different page, reload and try again. */ - if (!tlb_hit(tlb_addr, addr)) { - if (!victim_tlb_hit(env, mmu_idx, index, MMU_DATA_STORE, - addr & TARGET_PAGE_MASK)) { - tlb_fill(env_cpu(env), addr, size, MMU_DATA_STORE, - mmu_idx, retaddr); - index = tlb_index(env, mmu_idx, addr); - entry = tlb_entry(env, mmu_idx, addr); - } - tlb_addr = tlb_addr_write(entry) & ~TLB_INVALID_MASK; - } - - /* Handle anything that isn't just a straight memory access. */ - if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) { - CPUTLBEntryFull *full; - bool need_swap; - - /* For anything that is unaligned, recurse through byte stores. */ - if ((addr & (size - 1)) != 0) { - goto do_unaligned_access; - } - - full = &env_tlb(env)->d[mmu_idx].fulltlb[index]; - - /* Handle watchpoints. */ - if (unlikely(tlb_addr & TLB_WATCHPOINT)) { - /* On watchpoint hit, this will longjmp out. */ - cpu_check_watchpoint(env_cpu(env), addr, size, - full->attrs, BP_MEM_WRITE, retaddr); - } - - need_swap = size > 1 && (tlb_addr & TLB_BSWAP); - - /* Handle I/O access. 
*/ - if (tlb_addr & TLB_MMIO) { - io_writex(env, full, mmu_idx, val, addr, retaddr, - op ^ (need_swap * MO_BSWAP)); - return; - } - - /* Ignore writes to ROM. */ - if (unlikely(tlb_addr & TLB_DISCARD_WRITE)) { - return; - } - - /* Handle clean RAM pages. */ - if (tlb_addr & TLB_NOTDIRTY) { - notdirty_write(env_cpu(env), addr, size, full, retaddr); - } - - haddr = (void *)((uintptr_t)addr + entry->addend); - - /* - * Keep these two store_memop separate to ensure that the compiler - * is able to fold the entire function to a single instruction. - * There is a build-time assert inside to remind you of this. ;-) - */ - if (unlikely(need_swap)) { - store_memop(haddr, val, op ^ MO_BSWAP); - } else { - store_memop(haddr, val, op); - } - return; - } - - /* Handle slow unaligned access (it spans two pages or IO). */ - if (size > 1 - && unlikely((addr & ~TARGET_PAGE_MASK) + size - 1 - >= TARGET_PAGE_SIZE)) { - do_unaligned_access: - store_helper_unaligned(env, addr, val, retaddr, size, - mmu_idx, memop_big_endian(op)); - return; - } - - haddr = (void *)((uintptr_t)addr + entry->addend); - store_memop(haddr, val, op); } -static void __attribute__((noinline)) -full_stb_mmu(CPUArchState *env, target_ulong addr, uint64_t val, - MemOpIdx oi, uintptr_t retaddr) +static void do_st_2(CPUArchState *env, MMULookupPageData *p, uint16_t val, + int mmu_idx, MemOp memop, uintptr_t ra) { - validate_memop(oi, MO_UB); - store_helper(env, addr, val, oi, retaddr, MO_UB); + if (unlikely(p->flags & TLB_MMIO)) { + io_writex(env, p->full, mmu_idx, val, p->addr, ra, memop); + } else if (unlikely(p->flags & TLB_DISCARD_WRITE)) { + /* nothing */ + } else { + /* Swap to host endian if necessary, then store. */ + if (memop & MO_BSWAP) { + val = bswap16(val); + } + store_memop(p->haddr, val, MO_UW); + } +} + +static void do_st_4(CPUArchState *env, MMULookupPageData *p, uint32_t val, + int mmu_idx, MemOp memop, uintptr_t ra) +{ + if (unlikely(p->flags & TLB_MMIO)) { + io_writex(env, p->full, mmu_idx, val, p->addr, ra, memop); + } else if (unlikely(p->flags & TLB_DISCARD_WRITE)) { + /* nothing */ + } else { + /* Swap to host endian if necessary, then store. */ + if (memop & MO_BSWAP) { + val = bswap32(val); + } + store_memop(p->haddr, val, MO_UL); + } +} + +static void do_st_8(CPUArchState *env, MMULookupPageData *p, uint64_t val, + int mmu_idx, MemOp memop, uintptr_t ra) +{ + if (unlikely(p->flags & TLB_MMIO)) { + io_writex(env, p->full, mmu_idx, val, p->addr, ra, memop); + } else if (unlikely(p->flags & TLB_DISCARD_WRITE)) { + /* nothing */ + } else { + /* Swap to host endian if necessary, then store. 
*/ + if (memop & MO_BSWAP) { + val = bswap64(val); + } + store_memop(p->haddr, val, MO_UQ); + } } void helper_ret_stb_mmu(CPUArchState *env, target_ulong addr, uint32_t val, - MemOpIdx oi, uintptr_t retaddr) + MemOpIdx oi, uintptr_t ra) { - full_stb_mmu(env, addr, val, oi, retaddr); + MMULookupLocals l; + bool crosspage; + + validate_memop(oi, MO_UB); + crosspage = mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE, &l); + tcg_debug_assert(!crosspage); + + do_st_1(env, &l.page[0], val, l.mmu_idx, ra); } -static void full_le_stw_mmu(CPUArchState *env, target_ulong addr, uint64_t val, - MemOpIdx oi, uintptr_t retaddr) +static void do_st2_mmu(CPUArchState *env, target_ulong addr, uint16_t val, + MemOpIdx oi, uintptr_t ra) { - validate_memop(oi, MO_LEUW); - store_helper(env, addr, val, oi, retaddr, MO_LEUW); + MMULookupLocals l; + bool crosspage; + uint8_t a, b; + + crosspage = mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE, &l); + if (likely(!crosspage)) { + do_st_2(env, &l.page[0], val, l.mmu_idx, l.memop, ra); + return; + } + + if ((l.memop & MO_BSWAP) == MO_LE) { + a = val, b = val >> 8; + } else { + b = val, a = val >> 8; + } + do_st_1(env, &l.page[0], a, l.mmu_idx, ra); + do_st_1(env, &l.page[1], b, l.mmu_idx, ra); } void helper_le_stw_mmu(CPUArchState *env, target_ulong addr, uint32_t val, MemOpIdx oi, uintptr_t retaddr) { - full_le_stw_mmu(env, addr, val, oi, retaddr); -} - -static void full_be_stw_mmu(CPUArchState *env, target_ulong addr, uint64_t val, - MemOpIdx oi, uintptr_t retaddr) -{ - validate_memop(oi, MO_BEUW); - store_helper(env, addr, val, oi, retaddr, MO_BEUW); + validate_memop(oi, MO_LEUW); + do_st2_mmu(env, addr, val, oi, retaddr); } void helper_be_stw_mmu(CPUArchState *env, target_ulong addr, uint32_t val, MemOpIdx oi, uintptr_t retaddr) { - full_be_stw_mmu(env, addr, val, oi, retaddr); + validate_memop(oi, MO_BEUW); + do_st2_mmu(env, addr, val, oi, retaddr); } -static void full_le_stl_mmu(CPUArchState *env, target_ulong addr, uint64_t val, - MemOpIdx oi, uintptr_t retaddr) +static void do_st4_mmu(CPUArchState *env, target_ulong addr, uint32_t val, + MemOpIdx oi, uintptr_t ra) { - validate_memop(oi, MO_LEUL); - store_helper(env, addr, val, oi, retaddr, MO_LEUL); + MMULookupLocals l; + bool crosspage; + + crosspage = mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE, &l); + if (likely(!crosspage)) { + do_st_4(env, &l.page[0], val, l.mmu_idx, l.memop, ra); + return; + } + + /* Swap to little endian for simplicity, then store by bytes. 
*/ + if ((l.memop & MO_BSWAP) != MO_LE) { + val = bswap32(val); + } + val = do_st_leN(env, &l.page[0], val, l.mmu_idx, ra); + (void) do_st_leN(env, &l.page[1], val, l.mmu_idx, ra); } void helper_le_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, MemOpIdx oi, uintptr_t retaddr) { - full_le_stl_mmu(env, addr, val, oi, retaddr); -} - -static void full_be_stl_mmu(CPUArchState *env, target_ulong addr, uint64_t val, - MemOpIdx oi, uintptr_t retaddr) -{ - validate_memop(oi, MO_BEUL); - store_helper(env, addr, val, oi, retaddr, MO_BEUL); + validate_memop(oi, MO_LEUL); + do_st4_mmu(env, addr, val, oi, retaddr); } void helper_be_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, MemOpIdx oi, uintptr_t retaddr) { - full_be_stl_mmu(env, addr, val, oi, retaddr); + validate_memop(oi, MO_BEUL); + do_st4_mmu(env, addr, val, oi, retaddr); +} + +static void do_st8_mmu(CPUArchState *env, target_ulong addr, uint64_t val, + MemOpIdx oi, uintptr_t ra) +{ + MMULookupLocals l; + bool crosspage; + + crosspage = mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE, &l); + if (likely(!crosspage)) { + do_st_8(env, &l.page[0], val, l.mmu_idx, l.memop, ra); + return; + } + + /* Swap to little endian for simplicity, then store by bytes. */ + if ((l.memop & MO_BSWAP) != MO_LE) { + val = bswap64(val); + } + val = do_st_leN(env, &l.page[0], val, l.mmu_idx, ra); + (void) do_st_leN(env, &l.page[1], val, l.mmu_idx, ra); } void helper_le_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, MemOpIdx oi, uintptr_t retaddr) { validate_memop(oi, MO_LEUQ); - store_helper(env, addr, val, oi, retaddr, MO_LEUQ); + do_st8_mmu(env, addr, val, oi, retaddr); } void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, MemOpIdx oi, uintptr_t retaddr) { validate_memop(oi, MO_BEUQ); - store_helper(env, addr, val, oi, retaddr, MO_BEUQ); + do_st8_mmu(env, addr, val, oi, retaddr); } /* * Store Helpers for cpu_ldst.h */ -typedef void FullStoreHelper(CPUArchState *env, target_ulong addr, - uint64_t val, MemOpIdx oi, uintptr_t retaddr); - -static inline void cpu_store_helper(CPUArchState *env, target_ulong addr, - uint64_t val, MemOpIdx oi, uintptr_t ra, - FullStoreHelper *full_store) +static void plugin_store_cb(CPUArchState *env, abi_ptr addr, MemOpIdx oi) { - full_store(env, addr, val, oi, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } void cpu_stb_mmu(CPUArchState *env, target_ulong addr, uint8_t val, MemOpIdx oi, uintptr_t retaddr) { - cpu_store_helper(env, addr, val, oi, retaddr, full_stb_mmu); + helper_ret_stb_mmu(env, addr, val, oi, retaddr); + plugin_store_cb(env, addr, oi); } void cpu_stw_be_mmu(CPUArchState *env, target_ulong addr, uint16_t val, MemOpIdx oi, uintptr_t retaddr) { - cpu_store_helper(env, addr, val, oi, retaddr, full_be_stw_mmu); + helper_be_stw_mmu(env, addr, val, oi, retaddr); + plugin_store_cb(env, addr, oi); } void cpu_stl_be_mmu(CPUArchState *env, target_ulong addr, uint32_t val, MemOpIdx oi, uintptr_t retaddr) { - cpu_store_helper(env, addr, val, oi, retaddr, full_be_stl_mmu); + helper_be_stl_mmu(env, addr, val, oi, retaddr); + plugin_store_cb(env, addr, oi); } void cpu_stq_be_mmu(CPUArchState *env, target_ulong addr, uint64_t val, MemOpIdx oi, uintptr_t retaddr) { - cpu_store_helper(env, addr, val, oi, retaddr, helper_be_stq_mmu); + helper_be_stq_mmu(env, addr, val, oi, retaddr); + plugin_store_cb(env, addr, oi); } void cpu_stw_le_mmu(CPUArchState *env, target_ulong addr, uint16_t val, MemOpIdx oi, uintptr_t retaddr) { - cpu_store_helper(env, addr, val, 
oi, retaddr, full_le_stw_mmu); + helper_le_stw_mmu(env, addr, val, oi, retaddr); + plugin_store_cb(env, addr, oi); } void cpu_stl_le_mmu(CPUArchState *env, target_ulong addr, uint32_t val, MemOpIdx oi, uintptr_t retaddr) { - cpu_store_helper(env, addr, val, oi, retaddr, full_le_stl_mmu); + helper_le_stl_mmu(env, addr, val, oi, retaddr); + plugin_store_cb(env, addr, oi); } void cpu_stq_le_mmu(CPUArchState *env, target_ulong addr, uint64_t val, MemOpIdx oi, uintptr_t retaddr) { - cpu_store_helper(env, addr, val, oi, retaddr, helper_le_stq_mmu); + helper_le_stq_mmu(env, addr, val, oi, retaddr); + plugin_store_cb(env, addr, oi); } void cpu_st16_be_mmu(CPUArchState *env, abi_ptr addr, Int128 val, From patchwork Wed May 3 07:06:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678625 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp905054wrs; Wed, 3 May 2023 00:12:54 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ5ne2lrvacq16ndFJzKPKt3jIxpKDK5s1TwyIwEBTqbwZqa6KieXKjTecePRQ9wtEazdRaU X-Received: by 2002:a05:622a:4c:b0:3f2:4c09:268c with SMTP id y12-20020a05622a004c00b003f24c09268cmr1606988qtw.1.1683097974169; Wed, 03 May 2023 00:12:54 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683097974; cv=none; d=google.com; s=arc-20160816; b=js5w8lpxyChVhPGZHnzguCi6Y8+PF+5pZWAE2Q16QVu3VDGHagOW2U/oY3OasNEwHI jG+DX0LwWz3UyWN/CvvDUdQjMLSPMIyEbWFFeFYP4za6SolP0y14x+NlEW+yYvb2PwFw E+AP0DR47Azwb4ibt/V/Phutj9/qWEQcWG6IjvuKNEOZpc2RnJPR22ugPQXwIz27fG7U yvtymFjnOsOO77D0ncQ6FaTlhqA6H8WL1pD2QEP6OeNamOYw94Rl/C04KuDEwryLxuFz 9r3GUUW9WK9EVp+gF2XvIIvTWqCLLzEpf8LxJUuYIxea6zrC7mcXcF7oh89hrlalQD9j aE8A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=rwkXuUi47qVfBu99/RAr50vJCMSTIZ6BKHmogLuf/B8=; b=PVBlqi2hqSJIZyjUkXSqeu8udWd8Hr+f+CrtNxreVy2wViHFCg2opJd0qgQrRzTI+0 EMYUky8FIL+okbtgLC4e3lNObEPnuZRLc7xYYOLi8iWf7YcFWxhGGGdDbRDJs4uodzlJ A3lYeNFn54TXWQKBwJo6tIdGJH9hDj0M+Bb4yqulIFW3dk1eWVBphgB9JyJJyxmXjPu/ X2V9/E0JPtPgu57jjSBVLvmZanAwgwgaSFuKbkioQ+U2k5zKVQdCLD7pAJ+0SIZSI48I 9dFpq7jhqqunaHVzRA2sn+g6ea0K5z6o0ri4qN55HuzNvvGrBPFpG8Og9EcVq1MdZnvV wu0Q== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=tLVqna7I; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
[209.51.188.17]) by mx.google.com with ESMTPS id l12-20020a05622a050c00b003ed4808b5cesi18290274qtx.687.2023.05.03.00.12.53 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Wed, 03 May 2023 00:12:54 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=tLVqna7I; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1pu6aO-0006JO-OG; Wed, 03 May 2023 03:07:36 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1pu6a1-00056k-9N for qemu-devel@nongnu.org; Wed, 03 May 2023 03:07:15 -0400 Received: from mail-wm1-x330.google.com ([2a00:1450:4864:20::330]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1pu6Zs-0005bb-1j for qemu-devel@nongnu.org; Wed, 03 May 2023 03:07:12 -0400 Received: by mail-wm1-x330.google.com with SMTP id 5b1f17b1804b1-3f4000ec74aso79245e9.3 for ; Wed, 03 May 2023 00:07:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1683097622; x=1685689622; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=rwkXuUi47qVfBu99/RAr50vJCMSTIZ6BKHmogLuf/B8=; b=tLVqna7IvW8ErYi3Nv76yAK3NtvdcyzTpy7HU5OhImKrf1oGrsflX3rQPR7GDM3/It J3bl4mzqRksQx431PFiIZHQq7uOymtvsOM4Tz6eZZXTrJvnemZvs8gyka7HS5jZmKljx O2d6EzhNuQP0drqUhTnT+RFithrkFVOrQTth9a3mDMIpHflftBvwx/rLU0sfCF4JfON0 qyLcRIBhFbz7XEhIx2YfpfL8ytDMh4W0xmamsIbRIc7vd879VtU+v1+aiizNhi9WO9U2 kQnNmhMQi+8ud1+ApK3fy4lZ/wWv1+41FdN8P7TpZ6evl46s8K6dKNsLZVdNESlx8sn7 o4Bw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1683097622; x=1685689622; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=rwkXuUi47qVfBu99/RAr50vJCMSTIZ6BKHmogLuf/B8=; b=c9CT93hlyrv5T6JXl8fpiXEtJtGAMoBlTDwmvy91WIWFtLMd7aS+rgsYT6K7rvTYBB FfDZGbhFbISRgWzpVoq4S5+l2fG6CZi+n9HSt3UCvXoBHTtHvNw7JUduUIIjHVLWTnGU GYjTXq4wlnqntGdN2O0rCD3ztH2XWX2Paat+3ehJpXITt0iQ/FOx+H6S1iiRCLJrkxRM ikAOKLrexy+31MDXHjz3fg1daB64wgpwCO4k92E+bs8tQG8KeKUPN7wSb2qr9ai1aZLf Q7mnWDw64tLm0qAicN0VdcBui3D5iyxNWuBRM35dMqxtljE++wgQYbor2I818b1eVPxk 1A8w== X-Gm-Message-State: AC+VfDx0bVMkNdn2cDdE7Gv6C8zzlmCgRpg4EeHyGPsW4dWNrne/xK14 h2J9nVfNnR+93kQmz4XLXAlxND3ffH2ohdNS0TAO4g== X-Received: by 2002:a7b:c8c3:0:b0:3f1:6f53:7207 with SMTP id f3-20020a7bc8c3000000b003f16f537207mr13401800wml.17.1683097622372; Wed, 03 May 2023 00:07:02 -0700 (PDT) Received: from stoup.Home ([2a02:c7c:74db:8d00:c01d:9d74:b630:9087]) by smtp.gmail.com with ESMTPSA id q3-20020a1cf303000000b003f3157988f8sm921184wmq.26.2023.05.03.00.07.01 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 03 May 2023 00:07:02 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org, 
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= Subject: [PATCH v4 06/57] accel/tcg: Honor atomicity of loads Date: Wed, 3 May 2023 08:06:05 +0100 Message-Id: <20230503070656.1746170-7-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::330; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x330.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Create ldst_atomicity.c.inc. Not required for user-only code loads, because we've ensured that the page is read-only before beginning to translate code. Reviewed-by: Alex Bennée Signed-off-by: Richard Henderson --- accel/tcg/cputlb.c | 170 +++++++--- accel/tcg/user-exec.c | 26 +- accel/tcg/ldst_atomicity.c.inc | 550 +++++++++++++++++++++++++++++++++ 3 files changed, 695 insertions(+), 51 deletions(-) create mode 100644 accel/tcg/ldst_atomicity.c.inc diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index f52c7e6da0..6f3a419fe8 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -1668,6 +1668,9 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, target_ulong addr, return qemu_ram_addr_from_host_nofail(p); } +/* Load/store with atomicity primitives. */ +#include "ldst_atomicity.c.inc" + #ifdef CONFIG_PLUGIN /* * Perform a TLB lookup and populate the qemu_plugin_hwaddr structure. @@ -2034,35 +2037,7 @@ static void validate_memop(MemOpIdx oi, MemOp expected) * specifically for reading instructions from system memory. It is * called by the translation loop and in some helpers where the code * is disassembled. It shouldn't be called directly by guest code. - */ - -typedef uint64_t FullLoadHelper(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); - -static inline uint64_t QEMU_ALWAYS_INLINE -load_memop(const void *haddr, MemOp op) -{ - switch (op) { - case MO_UB: - return ldub_p(haddr); - case MO_BEUW: - return lduw_be_p(haddr); - case MO_LEUW: - return lduw_le_p(haddr); - case MO_BEUL: - return (uint32_t)ldl_be_p(haddr); - case MO_LEUL: - return (uint32_t)ldl_le_p(haddr); - case MO_BEUQ: - return ldq_be_p(haddr); - case MO_LEUQ: - return ldq_le_p(haddr); - default: - qemu_build_not_reached(); - } -} - -/* + * * For the benefit of TCG generated code, we want to avoid the * complication of ABI-specific return type promotion and always * return a value extended to the register size of the host. This is @@ -2118,17 +2093,134 @@ static uint64_t do_ld_bytes_beN(MMULookupPageData *p, uint64_t ret_be) return ret_be; } +/** + * do_ld_parts_beN + * @p: translation parameters + * @ret_be: accumulated data + * + * As do_ld_bytes_beN, but atomically on each aligned part. 
+ */ +static uint64_t do_ld_parts_beN(MMULookupPageData *p, uint64_t ret_be) +{ + void *haddr = p->haddr; + int size = p->size; + + do { + uint64_t x; + int n; + + /* + * Find minimum of alignment and size. + * This is slightly stronger than required by MO_ATOM_SUBALIGN, which + * would have only checked the low bits of addr|size once at the start, + * but is just as easy. + */ + switch (((uintptr_t)haddr | size) & 7) { + case 4: + x = cpu_to_be32(load_atomic4(haddr)); + ret_be = (ret_be << 32) | x; + n = 4; + break; + case 2: + case 6: + x = cpu_to_be16(load_atomic2(haddr)); + ret_be = (ret_be << 16) | x; + n = 2; + break; + default: + x = *(uint8_t *)haddr; + ret_be = (ret_be << 8) | x; + n = 1; + break; + case 0: + g_assert_not_reached(); + } + haddr += n; + size -= n; + } while (size != 0); + return ret_be; +} + +/** + * do_ld_parts_be4 + * @p: translation parameters + * @ret_be: accumulated data + * + * As do_ld_bytes_beN, but with one atomic load. + * Four aligned bytes are guaranteed to cover the load. + */ +static uint64_t do_ld_whole_be4(MMULookupPageData *p, uint64_t ret_be) +{ + int o = p->addr & 3; + uint32_t x = load_atomic4(p->haddr - o); + + x = cpu_to_be32(x); + x <<= o * 8; + x >>= (4 - p->size) * 8; + return (ret_be << (p->size * 8)) | x; +} + +/** + * do_ld_parts_be8 + * @p: translation parameters + * @ret_be: accumulated data + * + * As do_ld_bytes_beN, but with one atomic load. + * Eight aligned bytes are guaranteed to cover the load. + */ +static uint64_t do_ld_whole_be8(CPUArchState *env, uintptr_t ra, + MMULookupPageData *p, uint64_t ret_be) +{ + int o = p->addr & 7; + uint64_t x = load_atomic8_or_exit(env, ra, p->haddr - o); + + x = cpu_to_be64(x); + x <<= o * 8; + x >>= (8 - p->size) * 8; + return (ret_be << (p->size * 8)) | x; +} + /* * Wrapper for the above. */ static uint64_t do_ld_beN(CPUArchState *env, MMULookupPageData *p, - uint64_t ret_be, int mmu_idx, - MMUAccessType type, uintptr_t ra) + uint64_t ret_be, int mmu_idx, MMUAccessType type, + MemOp mop, uintptr_t ra) { + MemOp atmax; + if (unlikely(p->flags & TLB_MMIO)) { return do_ld_mmio_beN(env, p, ret_be, mmu_idx, type, ra); - } else { + } + + switch (mop & MO_ATOM_MASK) { + case MO_ATOM_WITHIN16: + /* + * It is a given that we cross a page and therefore there is no + * atomicity for the load as a whole, but there may be a subobject + * as defined by ATMAX which does not cross a 16-byte boundary. + */ + atmax = mop & MO_ATMAX_MASK; + if (atmax == MO_ATMAX_SIZE) { + atmax = mop & MO_SIZE; + } else { + atmax >>= MO_ATMAX_SHIFT; + } + if (unlikely(p->size >= (1 << atmax))) { + if (!HAVE_al8_fast && p->size < 4) { + return do_ld_whole_be4(p, ret_be); + } else { + return do_ld_whole_be8(env, ra, p, ret_be); + } + } + /* fall through */ + case MO_ATOM_IFALIGN: + case MO_ATOM_NONE: return do_ld_bytes_beN(p, ret_be); + case MO_ATOM_SUBALIGN: + return do_ld_parts_beN(p, ret_be); + default: + g_assert_not_reached(); } } @@ -2152,7 +2244,7 @@ static uint16_t do_ld_2(CPUArchState *env, MMULookupPageData *p, int mmu_idx, } /* Perform the load host endian, then swap if necessary. */ - ret = load_memop(p->haddr, MO_UW); + ret = load_atom_2(env, ra, p->haddr, memop); if (memop & MO_BSWAP) { ret = bswap16(ret); } @@ -2169,7 +2261,7 @@ static uint32_t do_ld_4(CPUArchState *env, MMULookupPageData *p, int mmu_idx, } /* Perform the load host endian. 
*/ - ret = load_memop(p->haddr, MO_UL); + ret = load_atom_4(env, ra, p->haddr, memop); if (memop & MO_BSWAP) { ret = bswap32(ret); } @@ -2186,7 +2278,7 @@ static uint64_t do_ld_8(CPUArchState *env, MMULookupPageData *p, int mmu_idx, } /* Perform the load host endian. */ - ret = load_memop(p->haddr, MO_UQ); + ret = load_atom_8(env, ra, p->haddr, memop); if (memop & MO_BSWAP) { ret = bswap64(ret); } @@ -2262,8 +2354,8 @@ static uint32_t do_ld4_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, return do_ld_4(env, &l.page[0], l.mmu_idx, access_type, l.memop, ra); } - ret = do_ld_beN(env, &l.page[0], 0, l.mmu_idx, access_type, ra); - ret = do_ld_beN(env, &l.page[1], ret, l.mmu_idx, access_type, ra); + ret = do_ld_beN(env, &l.page[0], 0, l.mmu_idx, access_type, l.memop, ra); + ret = do_ld_beN(env, &l.page[1], ret, l.mmu_idx, access_type, l.memop, ra); if ((l.memop & MO_BSWAP) == MO_LE) { ret = bswap32(ret); } @@ -2296,8 +2388,8 @@ static uint64_t do_ld8_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, return do_ld_8(env, &l.page[0], l.mmu_idx, access_type, l.memop, ra); } - ret = do_ld_beN(env, &l.page[0], 0, l.mmu_idx, access_type, ra); - ret = do_ld_beN(env, &l.page[1], ret, l.mmu_idx, access_type, ra); + ret = do_ld_beN(env, &l.page[0], 0, l.mmu_idx, access_type, l.memop, ra); + ret = do_ld_beN(env, &l.page[1], ret, l.mmu_idx, access_type, l.memop, ra); if ((l.memop & MO_BSWAP) == MO_LE) { ret = bswap64(ret); } diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c index fc597a010d..fefc83cc8c 100644 --- a/accel/tcg/user-exec.c +++ b/accel/tcg/user-exec.c @@ -931,6 +931,8 @@ static void *cpu_mmu_lookup(CPUArchState *env, target_ulong addr, return ret; } +#include "ldst_atomicity.c.inc" + uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { @@ -953,10 +955,10 @@ uint16_t cpu_ldw_be_mmu(CPUArchState *env, abi_ptr addr, validate_memop(oi, MO_BEUW); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - ret = lduw_be_p(haddr); + ret = load_atom_2(env, ra, haddr, get_memop(oi)); clear_helper_retaddr(); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); - return ret; + return cpu_to_be16(ret); } uint32_t cpu_ldl_be_mmu(CPUArchState *env, abi_ptr addr, @@ -967,10 +969,10 @@ uint32_t cpu_ldl_be_mmu(CPUArchState *env, abi_ptr addr, validate_memop(oi, MO_BEUL); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - ret = ldl_be_p(haddr); + ret = load_atom_4(env, ra, haddr, get_memop(oi)); clear_helper_retaddr(); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); - return ret; + return cpu_to_be32(ret); } uint64_t cpu_ldq_be_mmu(CPUArchState *env, abi_ptr addr, @@ -981,10 +983,10 @@ uint64_t cpu_ldq_be_mmu(CPUArchState *env, abi_ptr addr, validate_memop(oi, MO_BEUQ); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - ret = ldq_be_p(haddr); + ret = load_atom_8(env, ra, haddr, get_memop(oi)); clear_helper_retaddr(); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); - return ret; + return cpu_to_be64(ret); } uint16_t cpu_ldw_le_mmu(CPUArchState *env, abi_ptr addr, @@ -995,10 +997,10 @@ uint16_t cpu_ldw_le_mmu(CPUArchState *env, abi_ptr addr, validate_memop(oi, MO_LEUW); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - ret = lduw_le_p(haddr); + ret = load_atom_2(env, ra, haddr, get_memop(oi)); clear_helper_retaddr(); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); - return ret; + return cpu_to_le16(ret); } uint32_t cpu_ldl_le_mmu(CPUArchState *env, abi_ptr addr, 
@@ -1009,10 +1011,10 @@ uint32_t cpu_ldl_le_mmu(CPUArchState *env, abi_ptr addr, validate_memop(oi, MO_LEUL); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - ret = ldl_le_p(haddr); + ret = load_atom_4(env, ra, haddr, get_memop(oi)); clear_helper_retaddr(); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); - return ret; + return cpu_to_le32(ret); } uint64_t cpu_ldq_le_mmu(CPUArchState *env, abi_ptr addr, @@ -1023,10 +1025,10 @@ uint64_t cpu_ldq_le_mmu(CPUArchState *env, abi_ptr addr, validate_memop(oi, MO_LEUQ); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - ret = ldq_le_p(haddr); + ret = load_atom_8(env, ra, haddr, get_memop(oi)); clear_helper_retaddr(); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); - return ret; + return cpu_to_le64(ret); } Int128 cpu_ld16_be_mmu(CPUArchState *env, abi_ptr addr, diff --git a/accel/tcg/ldst_atomicity.c.inc b/accel/tcg/ldst_atomicity.c.inc new file mode 100644 index 0000000000..5169073431 --- /dev/null +++ b/accel/tcg/ldst_atomicity.c.inc @@ -0,0 +1,550 @@ +/* + * Routines common to user and system emulation of load/store. + * + * Copyright (c) 2022 Linaro, Ltd. + * + * SPDX-License-Identifier: GPL-2.0-or-later + * + * This work is licensed under the terms of the GNU GPL, version 2 or later. + * See the COPYING file in the top-level directory. + */ + +#ifdef CONFIG_ATOMIC64 +# define HAVE_al8 true +#else +# define HAVE_al8 false +#endif +#define HAVE_al8_fast (ATOMIC_REG_SIZE >= 8) + +#if defined(CONFIG_ATOMIC128) +# define HAVE_al16_fast true +#else +# define HAVE_al16_fast false +#endif + +/** + * required_atomicity: + * + * Return the lg2 bytes of atomicity required by @memop for @p. + * If the operation must be split into two operations to be + * examined separately for atomicity, return -lg2. + */ +static int required_atomicity(CPUArchState *env, uintptr_t p, MemOp memop) +{ + int atmax = memop & MO_ATMAX_MASK; + int size = memop & MO_SIZE; + unsigned tmp; + + if (atmax == MO_ATMAX_SIZE) { + atmax = size; + } else { + atmax >>= MO_ATMAX_SHIFT; + } + + switch (memop & MO_ATOM_MASK) { + case MO_ATOM_IFALIGN: + tmp = (1 << atmax) - 1; + if (p & tmp) { + return MO_8; + } + break; + case MO_ATOM_NONE: + return MO_8; + case MO_ATOM_SUBALIGN: + tmp = p & -p; + if (tmp != 0 && tmp < atmax) { + atmax = tmp; + } + break; + case MO_ATOM_WITHIN16: + tmp = p & 15; + if (tmp + (1 << size) <= 16) { + atmax = size; + } else if (atmax == size) { + return MO_8; + } else if (tmp + (1 << atmax) != 16) { + /* + * Paired load/store, where the pairs aren't aligned. + * One of the two must still be handled atomically. + */ + atmax = -atmax; + } + break; + default: + g_assert_not_reached(); + } + + /* + * Here we have the architectural atomicity of the operation. + * However, when executing in a serial context, we need no extra + * host atomicity in order to avoid racing. This reduction + * avoids looping with cpu_loop_exit_atomic. + */ + if (cpu_in_serial_context(env_cpu(env))) { + return MO_8; + } + return atmax; +} + +/** + * load_atomic2: + * @pv: host address + * + * Atomically load 2 aligned bytes from @pv. + */ +static inline uint16_t load_atomic2(void *pv) +{ + uint16_t *p = __builtin_assume_aligned(pv, 2); + return qatomic_read(p); +} + +/** + * load_atomic4: + * @pv: host address + * + * Atomically load 4 aligned bytes from @pv. 
+ */ +static inline uint32_t load_atomic4(void *pv) +{ + uint32_t *p = __builtin_assume_aligned(pv, 4); + return qatomic_read(p); +} + +/** + * load_atomic8: + * @pv: host address + * + * Atomically load 8 aligned bytes from @pv. + */ +static inline uint64_t load_atomic8(void *pv) +{ + uint64_t *p = __builtin_assume_aligned(pv, 8); + + qemu_build_assert(HAVE_al8); + return qatomic_read__nocheck(p); +} + +/** + * load_atomic16: + * @pv: host address + * + * Atomically load 16 aligned bytes from @pv. + */ +static inline Int128 load_atomic16(void *pv) +{ +#ifdef CONFIG_ATOMIC128 + __uint128_t *p = __builtin_assume_aligned(pv, 16); + Int128Alias r; + + r.u = qatomic_read__nocheck(p); + return r.s; +#else + qemu_build_not_reached(); +#endif +} + +/** + * load_atomic8_or_exit: + * @env: cpu context + * @ra: host unwind address + * @pv: host address + * + * Atomically load 8 aligned bytes from @pv. + * If this is not possible, longjmp out to restart serially. + */ +static uint64_t load_atomic8_or_exit(CPUArchState *env, uintptr_t ra, void *pv) +{ + if (HAVE_al8) { + return load_atomic8(pv); + } + +#ifdef CONFIG_USER_ONLY + /* + * If the page is not writable, then assume the value is immutable + * and requires no locking. This ignores the case of MAP_SHARED with + * another process, because the fallback start_exclusive solution + * provides no protection across processes. + */ + if (!page_check_range(h2g(pv), 8, PAGE_WRITE)) { + uint64_t *p = __builtin_assume_aligned(pv, 8); + return *p; + } +#endif + + /* Ultimate fallback: re-execute in serial context. */ + cpu_loop_exit_atomic(env_cpu(env), ra); +} + +/** + * load_atomic16_or_exit: + * @env: cpu context + * @ra: host unwind address + * @pv: host address + * + * Atomically load 16 aligned bytes from @pv. + * If this is not possible, longjmp out to restart serially. + */ +static Int128 load_atomic16_or_exit(CPUArchState *env, uintptr_t ra, void *pv) +{ + Int128 *p = __builtin_assume_aligned(pv, 16); + + if (HAVE_al16_fast) { + return load_atomic16(p); + } + +#ifdef CONFIG_USER_ONLY + /* + * We can only use cmpxchg to emulate a load if the page is writable. + * If the page is not writable, then assume the value is immutable + * and requires no locking. This ignores the case of MAP_SHARED with + * another process, because the fallback start_exclusive solution + * provides no protection across processes. + */ + if (!page_check_range(h2g(p), 16, PAGE_WRITE)) { + return *p; + } +#endif + + /* + * In system mode all guest pages are writable, and for user-only + * we have just checked writability. Try cmpxchg. + */ +#if defined(CONFIG_CMPXCHG128) + /* Swap 0 with 0, with the side-effect of returning the old value. */ + { + Int128Alias r; + r.u = __sync_val_compare_and_swap_16((__uint128_t *)p, 0, 0); + return r.s; + } +#endif + + /* Ultimate fallback: re-execute in serial context. */ + cpu_loop_exit_atomic(env_cpu(env), ra); +} + +/** + * load_atom_extract_al4x2: + * @pv: host address + * + * Load 4 bytes from @p, from two sequential atomic 4-byte loads. + */ +static uint32_t load_atom_extract_al4x2(void *pv) +{ + uintptr_t pi = (uintptr_t)pv; + int sh = (pi & 3) * 8; + uint32_t a, b; + + pv = (void *)(pi & ~3); + a = load_atomic4(pv); + b = load_atomic4(pv + 4); + + if (HOST_BIG_ENDIAN) { + return (a << sh) | (b >> (-sh & 31)); + } else { + return (a >> sh) | (b << (-sh & 31)); + } +} + +/** + * load_atom_extract_al8x2: + * @pv: host address + * + * Load 8 bytes from @p, from two sequential atomic 8-byte loads. 
+ */ +static uint64_t load_atom_extract_al8x2(void *pv) +{ + uintptr_t pi = (uintptr_t)pv; + int sh = (pi & 7) * 8; + uint64_t a, b; + + pv = (void *)(pi & ~7); + a = load_atomic8(pv); + b = load_atomic8(pv + 8); + + if (HOST_BIG_ENDIAN) { + return (a << sh) | (b >> (-sh & 63)); + } else { + return (a >> sh) | (b << (-sh & 63)); + } +} + +/** + * load_atom_extract_al8_or_exit: + * @env: cpu context + * @ra: host unwind address + * @pv: host address + * @s: object size in bytes, @s <= 4. + * + * Atomically load @s bytes from @p, when p % s != 0, and [p, p+s-1] does + * not cross an 8-byte boundary. This means that we can perform an atomic + * 8-byte load and extract. + * The value is returned in the low bits of a uint32_t. + */ +static uint32_t load_atom_extract_al8_or_exit(CPUArchState *env, uintptr_t ra, + void *pv, int s) +{ + uintptr_t pi = (uintptr_t)pv; + int o = pi & 7; + int shr = (HOST_BIG_ENDIAN ? 8 - s - o : o) * 8; + + pv = (void *)(pi & ~7); + return load_atomic8_or_exit(env, ra, pv) >> shr; +} + +/** + * load_atom_extract_al16_or_exit: + * @env: cpu context + * @ra: host unwind address + * @p: host address + * @s: object size in bytes, @s <= 8. + * + * Atomically load @s bytes from @p, when p % 16 < 8 + * and p % 16 + s > 8. I.e. does not cross a 16-byte + * boundary, but *does* cross an 8-byte boundary. + * This is the slow version, so we must have eliminated + * any faster load_atom_extract_al8_or_exit case. + * + * If this is not possible, longjmp out to restart serially. + */ +static uint64_t load_atom_extract_al16_or_exit(CPUArchState *env, uintptr_t ra, + void *pv, int s) +{ + uintptr_t pi = (uintptr_t)pv; + int o = pi & 7; + int shr = (HOST_BIG_ENDIAN ? 16 - s - o : o) * 8; + Int128 r; + + /* + * Note constraints above: p & 8 must be clear. + * Provoke SIGBUS if possible otherwise. + */ + pv = (void *)(pi & ~7); + r = load_atomic16_or_exit(env, ra, pv); + + r = int128_urshift(r, shr); + return int128_getlo(r); +} + +/** + * load_atom_extract_al16_or_al8: + * @p: host address + * @s: object size in bytes, @s <= 8. + * + * Load @s bytes from @p, when p % s != 0. If [p, p+s-1] does not + * cross an 16-byte boundary then the access must be 16-byte atomic, + * otherwise the access must be 8-byte atomic. + */ +static inline uint64_t load_atom_extract_al16_or_al8(void *pv, int s) +{ +#if defined(CONFIG_ATOMIC128) + uintptr_t pi = (uintptr_t)pv; + int o = pi & 7; + int shr = (HOST_BIG_ENDIAN ? 16 - s - o : o) * 8; + __uint128_t r; + + pv = (void *)(pi & ~7); + if (pi & 8) { + uint64_t *p8 = __builtin_assume_aligned(pv, 16, 8); + uint64_t a = qatomic_read__nocheck(p8); + uint64_t b = qatomic_read__nocheck(p8 + 1); + + if (HOST_BIG_ENDIAN) { + r = ((__uint128_t)a << 64) | b; + } else { + r = ((__uint128_t)b << 64) | a; + } + } else { + __uint128_t *p16 = __builtin_assume_aligned(pv, 16, 0); + r = qatomic_read__nocheck(p16); + } + return r >> shr; +#else + qemu_build_not_reached(); +#endif +} + +/** + * load_atom_4_by_2: + * @pv: host address + * + * Load 4 bytes from @pv, with two 2-byte atomic loads. + */ +static inline uint32_t load_atom_4_by_2(void *pv) +{ + uint32_t a = load_atomic2(pv); + uint32_t b = load_atomic2(pv + 2); + + if (HOST_BIG_ENDIAN) { + return (a << 16) | b; + } else { + return (b << 16) | a; + } +} + +/** + * load_atom_8_by_2: + * @pv: host address + * + * Load 8 bytes from @pv, with four 2-byte atomic loads. 
+ */ +static inline uint64_t load_atom_8_by_2(void *pv) +{ + uint32_t a = load_atom_4_by_2(pv); + uint32_t b = load_atom_4_by_2(pv + 4); + + if (HOST_BIG_ENDIAN) { + return ((uint64_t)a << 32) | b; + } else { + return ((uint64_t)b << 32) | a; + } +} + +/** + * load_atom_8_by_4: + * @pv: host address + * + * Load 8 bytes from @pv, with two 4-byte atomic loads. + */ +static inline uint64_t load_atom_8_by_4(void *pv) +{ + uint32_t a = load_atomic4(pv); + uint32_t b = load_atomic4(pv + 4); + + if (HOST_BIG_ENDIAN) { + return ((uint64_t)a << 32) | b; + } else { + return ((uint64_t)b << 32) | a; + } +} + +/** + * load_atom_2: + * @p: host address + * @memop: the full memory op + * + * Load 2 bytes from @p, honoring the atomicity of @memop. + */ +static uint16_t load_atom_2(CPUArchState *env, uintptr_t ra, + void *pv, MemOp memop) +{ + uintptr_t pi = (uintptr_t)pv; + int atmax; + + if (likely((pi & 1) == 0)) { + return load_atomic2(pv); + } + if (HAVE_al16_fast) { + return load_atom_extract_al16_or_al8(pv, 2); + } + + atmax = required_atomicity(env, pi, memop); + switch (atmax) { + case MO_8: + return lduw_he_p(pv); + case MO_16: + /* The only case remaining is MO_ATOM_WITHIN16. */ + if (!HAVE_al8_fast && (pi & 3) == 1) { + /* Big or little endian, we want the middle two bytes. */ + return load_atomic4(pv - 1) >> 8; + } + if (unlikely((pi & 15) != 7)) { + return load_atom_extract_al8_or_exit(env, ra, pv, 2); + } + return load_atom_extract_al16_or_exit(env, ra, pv, 2); + default: + g_assert_not_reached(); + } +} + +/** + * load_atom_4: + * @p: host address + * @memop: the full memory op + * + * Load 4 bytes from @p, honoring the atomicity of @memop. + */ +static uint32_t load_atom_4(CPUArchState *env, uintptr_t ra, + void *pv, MemOp memop) +{ + uintptr_t pi = (uintptr_t)pv; + int atmax; + + if (likely((pi & 3) == 0)) { + return load_atomic4(pv); + } + if (HAVE_al16_fast) { + return load_atom_extract_al16_or_al8(pv, 4); + } + + atmax = required_atomicity(env, pi, memop); + switch (atmax) { + case MO_8: + case MO_16: + case -MO_16: + /* + * For MO_ATOM_IFALIGN, this is more atomicity than required, + * but it's trivially supported on all hosts, better than 4 + * individual byte loads (when the host requires alignment), + * and overlaps with the MO_ATOM_SUBALIGN case of p % 2 == 0. + */ + return load_atom_extract_al4x2(pv); + case MO_32: + if (!(pi & 4)) { + return load_atom_extract_al8_or_exit(env, ra, pv, 4); + } + return load_atom_extract_al16_or_exit(env, ra, pv, 4); + default: + g_assert_not_reached(); + } +} + +/** + * load_atom_8: + * @p: host address + * @memop: the full memory op + * + * Load 8 bytes from @p, honoring the atomicity of @memop. + */ +static uint64_t load_atom_8(CPUArchState *env, uintptr_t ra, + void *pv, MemOp memop) +{ + uintptr_t pi = (uintptr_t)pv; + int atmax; + + /* + * If the host does not support 8-byte atomics, wait until we have + * examined the atomicity parameters below. 
+ */ + if (HAVE_al8 && likely((pi & 7) == 0)) { + return load_atomic8(pv); + } + if (HAVE_al16_fast) { + return load_atom_extract_al16_or_al8(pv, 8); + } + + atmax = required_atomicity(env, pi, memop); + if (atmax == MO_64) { + if (!HAVE_al8 && (pi & 7) == 0) { + load_atomic8_or_exit(env, ra, pv); + } + return load_atom_extract_al16_or_exit(env, ra, pv, 8); + } + if (HAVE_al8_fast) { + return load_atom_extract_al8x2(pv); + } + switch (atmax) { + case MO_8: + return ldq_he_p(pv); + case MO_16: + return load_atom_8_by_2(pv); + case MO_32: + return load_atom_8_by_4(pv); + case -MO_32: + if (HAVE_al8) { + return load_atom_extract_al8x2(pv); + } + cpu_loop_exit_atomic(env_cpu(env), ra); + default: + g_assert_not_reached(); + } +}
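An aside on the extraction helpers above: load_atom_extract_al4x2() and load_atom_extract_al8x2() recover an unaligned value from two overlapping naturally aligned loads plus a shift/merge. Below is a minimal standalone sketch of that arithmetic, not part of the patch: it assumes a little-endian host, the function name and test data are invented for illustration, and plain memcpy stands in for the qatomic_read() calls that give each covering word its single-copy atomicity in the real code.

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Recover the 4 bytes at buf+off (off % 4 != 0) from the two aligned
     * 4-byte words that cover them; little-endian host assumed. */
    static uint32_t extract_unaligned4_le(const uint8_t *buf, size_t off)
    {
        size_t base = off & ~(size_t)3;
        int sh = (off & 3) * 8;
        uint32_t lo, hi;

        memcpy(&lo, buf + base, 4);      /* stands in for load_atomic4() */
        memcpy(&hi, buf + base + 4, 4);  /* second covering aligned word */
        return (lo >> sh) | (hi << (-sh & 31));
    }

    int main(void)
    {
        uint8_t buf[8] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77 };

        /* Bytes 1..4 are 11 22 33 44, i.e. 0x44332211 read little-endian. */
        assert(extract_unaligned4_le(buf, 1) == 0x44332211);
        assert(extract_unaligned4_le(buf, 3) == 0x66554433);
        return 0;
    }

The same shift/merge, widened to 8-byte words, is what load_atom_extract_al8x2() does when each covering word can be loaded with a single atomic access.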
From patchwork Wed May 3 07:06:06 2023
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 678666
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org,
qemu-arm@nongnu.org, qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 07/57] accel/tcg: Honor atomicity of stores Date: Wed, 3 May 2023 08:06:06 +0100 Message-Id: <20230503070656.1746170-8-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::334; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x334.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson --- accel/tcg/cputlb.c | 103 +++---- accel/tcg/user-exec.c | 12 +- accel/tcg/ldst_atomicity.c.inc | 491 +++++++++++++++++++++++++++++++++ 3 files changed, 540 insertions(+), 66 deletions(-) diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index 6f3a419fe8..6b8b472a11 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -2593,36 +2593,6 @@ Int128 cpu_ld16_le_mmu(CPUArchState *env, abi_ptr addr, * Store Helpers */ -static inline void QEMU_ALWAYS_INLINE -store_memop(void *haddr, uint64_t val, MemOp op) -{ - switch (op) { - case MO_UB: - stb_p(haddr, val); - break; - case MO_BEUW: - stw_be_p(haddr, val); - break; - case MO_LEUW: - stw_le_p(haddr, val); - break; - case MO_BEUL: - stl_be_p(haddr, val); - break; - case MO_LEUL: - stl_le_p(haddr, val); - break; - case MO_BEUQ: - stq_be_p(haddr, val); - break; - case MO_LEUQ: - stq_le_p(haddr, val); - break; - default: - qemu_build_not_reached(); - } -} - /** * do_st_mmio_leN: * @env: cpu context @@ -2649,38 +2619,51 @@ static uint64_t do_st_mmio_leN(CPUArchState *env, MMULookupPageData *p, return val_le; } -/** - * do_st_bytes_leN: - * @p: translation parameters - * @val_le: data to store - * - * Store @p->size bytes at @p->haddr, which is RAM. - * The bytes to store are extracted in little-endian order from @val_le; - * return the bytes of @val_le beyond @p->size that have not been stored. - */ -static uint64_t do_st_bytes_leN(MMULookupPageData *p, uint64_t val_le) -{ - uint8_t *haddr = p->haddr; - int i, size = p->size; - - for (i = 0; i < size; i++, val_le >>= 8) { - haddr[i] = val_le; - } - return val_le; -} - /* * Wrapper for the above. */ static uint64_t do_st_leN(CPUArchState *env, MMULookupPageData *p, - uint64_t val_le, int mmu_idx, uintptr_t ra) + uint64_t val_le, int mmu_idx, + MemOp mop, uintptr_t ra) { + MemOp atmax; + if (unlikely(p->flags & TLB_MMIO)) { return do_st_mmio_leN(env, p, val_le, mmu_idx, ra); } else if (unlikely(p->flags & TLB_DISCARD_WRITE)) { return val_le >> (p->size * 8); - } else { - return do_st_bytes_leN(p, val_le); + } + + switch (mop & MO_ATOM_MASK) { + case MO_ATOM_WITHIN16: + /* + * It is a given that we cross a page and therefore there is no + * atomicity for the load as a whole, but there may be a subobject + * as defined by ATMAX which does not cross a 16-byte boundary. 
+ */ + atmax = mop & MO_ATMAX_MASK; + if (atmax == MO_ATMAX_SIZE) { + atmax = mop & MO_SIZE; + } else { + atmax >>= MO_ATMAX_SHIFT; + } + if (unlikely(p->size >= (1 << atmax))) { + if (!HAVE_al8_fast && p->size <= 4) { + return store_whole_le4(p->haddr, p->size, val_le); + } else if (HAVE_al8) { + return store_whole_le8(p->haddr, p->size, val_le); + } else { + cpu_loop_exit_atomic(env_cpu(env), ra); + } + } + /* fall through */ + case MO_ATOM_IFALIGN: + case MO_ATOM_NONE: + return store_bytes_leN(p->haddr, p->size, val_le); + case MO_ATOM_SUBALIGN: + return store_parts_leN(p->haddr, p->size, val_le); + default: + g_assert_not_reached(); } } @@ -2708,7 +2691,7 @@ static void do_st_2(CPUArchState *env, MMULookupPageData *p, uint16_t val, if (memop & MO_BSWAP) { val = bswap16(val); } - store_memop(p->haddr, val, MO_UW); + store_atom_2(env, ra, p->haddr, memop, val); } } @@ -2724,7 +2707,7 @@ static void do_st_4(CPUArchState *env, MMULookupPageData *p, uint32_t val, if (memop & MO_BSWAP) { val = bswap32(val); } - store_memop(p->haddr, val, MO_UL); + store_atom_4(env, ra, p->haddr, memop, val); } } @@ -2740,7 +2723,7 @@ static void do_st_8(CPUArchState *env, MMULookupPageData *p, uint64_t val, if (memop & MO_BSWAP) { val = bswap64(val); } - store_memop(p->haddr, val, MO_UQ); + store_atom_8(env, ra, p->haddr, memop, val); } } @@ -2809,8 +2792,8 @@ static void do_st4_mmu(CPUArchState *env, target_ulong addr, uint32_t val, if ((l.memop & MO_BSWAP) != MO_LE) { val = bswap32(val); } - val = do_st_leN(env, &l.page[0], val, l.mmu_idx, ra); - (void) do_st_leN(env, &l.page[1], val, l.mmu_idx, ra); + val = do_st_leN(env, &l.page[0], val, l.mmu_idx, l.memop, ra); + (void) do_st_leN(env, &l.page[1], val, l.mmu_idx, l.memop, ra); } void helper_le_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, @@ -2843,8 +2826,8 @@ static void do_st8_mmu(CPUArchState *env, target_ulong addr, uint64_t val, if ((l.memop & MO_BSWAP) != MO_LE) { val = bswap64(val); } - val = do_st_leN(env, &l.page[0], val, l.mmu_idx, ra); - (void) do_st_leN(env, &l.page[1], val, l.mmu_idx, ra); + val = do_st_leN(env, &l.page[0], val, l.mmu_idx, l.memop, ra); + (void) do_st_leN(env, &l.page[1], val, l.mmu_idx, l.memop, ra); } void helper_le_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c index fefc83cc8c..b89fa35a83 100644 --- a/accel/tcg/user-exec.c +++ b/accel/tcg/user-exec.c @@ -1086,7 +1086,7 @@ void cpu_stw_be_mmu(CPUArchState *env, abi_ptr addr, uint16_t val, validate_memop(oi, MO_BEUW); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); - stw_be_p(haddr, val); + store_atom_2(env, ra, haddr, get_memop(oi), be16_to_cpu(val)); clear_helper_retaddr(); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } @@ -1098,7 +1098,7 @@ void cpu_stl_be_mmu(CPUArchState *env, abi_ptr addr, uint32_t val, validate_memop(oi, MO_BEUL); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); - stl_be_p(haddr, val); + store_atom_4(env, ra, haddr, get_memop(oi), be32_to_cpu(val)); clear_helper_retaddr(); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } @@ -1110,7 +1110,7 @@ void cpu_stq_be_mmu(CPUArchState *env, abi_ptr addr, uint64_t val, validate_memop(oi, MO_BEUQ); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); - stq_be_p(haddr, val); + store_atom_8(env, ra, haddr, get_memop(oi), be64_to_cpu(val)); clear_helper_retaddr(); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } @@ -1122,7 +1122,7 @@ void 
cpu_stw_le_mmu(CPUArchState *env, abi_ptr addr, uint16_t val, validate_memop(oi, MO_LEUW); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); - stw_le_p(haddr, val); + store_atom_2(env, ra, haddr, get_memop(oi), le16_to_cpu(val)); clear_helper_retaddr(); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } @@ -1134,7 +1134,7 @@ void cpu_stl_le_mmu(CPUArchState *env, abi_ptr addr, uint32_t val, validate_memop(oi, MO_LEUL); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); - stl_le_p(haddr, val); + store_atom_4(env, ra, haddr, get_memop(oi), le32_to_cpu(val)); clear_helper_retaddr(); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } @@ -1146,7 +1146,7 @@ void cpu_stq_le_mmu(CPUArchState *env, abi_ptr addr, uint64_t val, validate_memop(oi, MO_LEUQ); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); - stq_le_p(haddr, val); + store_atom_8(env, ra, haddr, get_memop(oi), le64_to_cpu(val)); clear_helper_retaddr(); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } diff --git a/accel/tcg/ldst_atomicity.c.inc b/accel/tcg/ldst_atomicity.c.inc index 5169073431..07abbdee3f 100644 --- a/accel/tcg/ldst_atomicity.c.inc +++ b/accel/tcg/ldst_atomicity.c.inc @@ -21,6 +21,12 @@ #else # define HAVE_al16_fast false #endif +#if defined(CONFIG_ATOMIC128) || defined(CONFIG_CMPXCHG128) +# define HAVE_al16 true +#else +# define HAVE_al16 false +#endif + /** * required_atomicity: @@ -548,3 +554,488 @@ static uint64_t load_atom_8(CPUArchState *env, uintptr_t ra, g_assert_not_reached(); } } + +/** + * store_atomic2: + * @pv: host address + * @val: value to store + * + * Atomically store 2 aligned bytes to @pv. + */ +static inline void store_atomic2(void *pv, uint16_t val) +{ + uint16_t *p = __builtin_assume_aligned(pv, 2); + qatomic_set(p, val); +} + +/** + * store_atomic4: + * @pv: host address + * @val: value to store + * + * Atomically store 4 aligned bytes to @pv. + */ +static inline void store_atomic4(void *pv, uint32_t val) +{ + uint32_t *p = __builtin_assume_aligned(pv, 4); + qatomic_set(p, val); +} + +/** + * store_atomic8: + * @pv: host address + * @val: value to store + * + * Atomically store 8 aligned bytes to @pv. + */ +static inline void store_atomic8(void *pv, uint64_t val) +{ + uint64_t *p = __builtin_assume_aligned(pv, 8); + + qemu_build_assert(HAVE_al8); + qatomic_set__nocheck(p, val); +} + +/** + * store_atom_4x2 + */ +static inline void store_atom_4_by_2(void *pv, uint32_t val) +{ + store_atomic2(pv, val >> (HOST_BIG_ENDIAN ? 16 : 0)); + store_atomic2(pv + 2, val >> (HOST_BIG_ENDIAN ? 0 : 16)); +} + +/** + * store_atom_8_by_2 + */ +static inline void store_atom_8_by_2(void *pv, uint64_t val) +{ + store_atom_4_by_2(pv, val >> (HOST_BIG_ENDIAN ? 32 : 0)); + store_atom_4_by_2(pv + 4, val >> (HOST_BIG_ENDIAN ? 0 : 32)); +} + +/** + * store_atom_8_by_4 + */ +static inline void store_atom_8_by_4(void *pv, uint64_t val) +{ + store_atomic4(pv, val >> (HOST_BIG_ENDIAN ? 32 : 0)); + store_atomic4(pv + 4, val >> (HOST_BIG_ENDIAN ? 0 : 32)); +} + +/** + * store_atom_insert_al4: + * @p: host address + * @val: shifted value to store + * @msk: mask for value to store + * + * Atomically store @val to @p, masked by @msk. 
+ */ +static void store_atom_insert_al4(uint32_t *p, uint32_t val, uint32_t msk) +{ + uint32_t old, new; + + p = __builtin_assume_aligned(p, 4); + old = qatomic_read(p); + do { + new = (old & ~msk) | val; + } while (!__atomic_compare_exchange_n(p, &old, new, true, + __ATOMIC_RELAXED, __ATOMIC_RELAXED)); +} + +/** + * store_atom_insert_al8: + * @p: host address + * @val: shifted value to store + * @msk: mask for value to store + * + * Atomically store @val to @p masked by @msk. + */ +static void store_atom_insert_al8(uint64_t *p, uint64_t val, uint64_t msk) +{ + uint64_t old, new; + + qemu_build_assert(HAVE_al8); + p = __builtin_assume_aligned(p, 8); + old = qatomic_read__nocheck(p); + do { + new = (old & ~msk) | val; + } while (!__atomic_compare_exchange_n(p, &old, new, true, + __ATOMIC_RELAXED, __ATOMIC_RELAXED)); +} + +/** + * store_atom_insert_al16: + * @p: host address + * @val: shifted value to store + * @msk: mask for value to store + * + * Atomically store @val to @p masked by @msk. + */ +static void store_atom_insert_al16(Int128 *ps, Int128Alias val, Int128Alias msk) +{ +#if defined(CONFIG_ATOMIC128) + __uint128_t *pu, old, new; + + /* With CONFIG_ATOMIC128, we can avoid the memory barriers. */ + pu = __builtin_assume_aligned(ps, 16); + old = *pu; + do { + new = (old & ~msk.u) | val.u; + } while (!__atomic_compare_exchange_n(pu, &old, new, true, + __ATOMIC_RELAXED, __ATOMIC_RELAXED)); +#elif defined(CONFIG_CMPXCHG128) + __uint128_t *pu, old, new; + + /* + * Without CONFIG_ATOMIC128, __atomic_compare_exchange_n will always + * defer to libatomic, so we must use __sync_val_compare_and_swap_16 + * and accept the sequential consistency that comes with it. + */ + pu = __builtin_assume_aligned(ps, 16); + do { + old = *pu; + new = (old & ~msk.u) | val.u; + } while (!__sync_bool_compare_and_swap_16(pu, old, new)); +#else + qemu_build_not_reached(); +#endif +} + +/** + * store_bytes_leN: + * @pv: host address + * @size: number of bytes to store + * @val_le: data to store + * + * Store @size bytes at @p. The bytes to store are extracted in little-endian order + * from @val_le; return the bytes of @val_le beyond @size that have not been stored. + */ +static uint64_t store_bytes_leN(void *pv, int size, uint64_t val_le) +{ + uint8_t *p = pv; + for (int i = 0; i < size; i++, val_le >>= 8) { + p[i] = val_le; + } + return val_le; +} + +/** + * store_parts_leN + * @pv: host address + * @size: number of bytes to store + * @val_le: data to store + * + * As store_bytes_leN, but atomically on each aligned part. + */ +G_GNUC_UNUSED +static uint64_t store_parts_leN(void *pv, int size, uint64_t val_le) +{ + do { + int n; + + /* Find minimum of alignment and size */ + switch (((uintptr_t)pv | size) & 7) { + case 4: + store_atomic4(pv, le32_to_cpu(val_le)); + val_le >>= 32; + n = 4; + break; + case 2: + case 6: + store_atomic2(pv, le16_to_cpu(val_le)); + val_le >>= 16; + n = 2; + break; + default: + *(uint8_t *)pv = val_le; + val_le >>= 8; + n = 1; + break; + case 0: + g_assert_not_reached(); + } + pv += n; + size -= n; + } while (size != 0); + + return val_le; +} + +/** + * store_whole_le4 + * @pv: host address + * @size: number of bytes to store + * @val_le: data to store + * + * As store_bytes_leN, but atomically as a whole. + * Four aligned bytes are guaranteed to cover the store. 
+ */ +static uint64_t store_whole_le4(void *pv, int size, uint64_t val_le) +{ + int sz = size * 8; + int o = (uintptr_t)pv & 3; + int sh = o * 8; + uint32_t m = MAKE_64BIT_MASK(0, sz); + uint32_t v; + + if (HOST_BIG_ENDIAN) { + v = bswap32(val_le) >> sh; + m = bswap32(m) >> sh; + } else { + v = val_le << sh; + m <<= sh; + } + store_atom_insert_al4(pv - o, v, m); + return val_le >> sz; +} + +/** + * store_whole_le8 + * @pv: host address + * @size: number of bytes to store + * @val_le: data to store + * + * As store_bytes_leN, but atomically as a whole. + * Eight aligned bytes are guaranteed to cover the store. + */ +static uint64_t store_whole_le8(void *pv, int size, uint64_t val_le) +{ + int sz = size * 8; + int o = (uintptr_t)pv & 7; + int sh = o * 8; + uint64_t m = MAKE_64BIT_MASK(0, sz); + uint64_t v; + + qemu_build_assert(HAVE_al8); + if (HOST_BIG_ENDIAN) { + v = bswap64(val_le) >> sh; + m = bswap64(m) >> sh; + } else { + v = val_le << sh; + m <<= sh; + } + store_atom_insert_al8(pv - o, v, m); + return val_le >> sz; +} + +/** + * store_whole_le16 + * @pv: host address + * @size: number of bytes to store + * @val_le: data to store + * + * As store_bytes_leN, but atomically as a whole. + * 16 aligned bytes are guaranteed to cover the store. + */ +static uint64_t store_whole_le16(void *pv, int size, Int128 val_le) +{ + int sz = size * 8; + int o = (uintptr_t)pv & 15; + int sh = o * 8; + Int128 m, v; + + qemu_build_assert(HAVE_al16); + + /* Like MAKE_64BIT_MASK(0, sz), but larger. */ + if (sz <= 64) { + m = int128_make64(MAKE_64BIT_MASK(0, sz)); + } else { + m = int128_make128(-1, MAKE_64BIT_MASK(0, sz - 64)); + } + + if (HOST_BIG_ENDIAN) { + v = int128_urshift(bswap128(val_le), sh); + m = int128_urshift(bswap128(m), sh); + } else { + v = int128_lshift(val_le, sh); + m = int128_lshift(m, sh); + } + store_atom_insert_al16(pv - o, v, m); + + /* Unused if sz <= 64. */ + return int128_gethi(val_le) >> (sz - 64); +} + +/** + * store_atom_2: + * @p: host address + * @val: the value to store + * @memop: the full memory op + * + * Store 2 bytes to @p, honoring the atomicity of @memop. + */ +static void store_atom_2(CPUArchState *env, uintptr_t ra, + void *pv, MemOp memop, uint16_t val) +{ + uintptr_t pi = (uintptr_t)pv; + int atmax; + + if (likely((pi & 1) == 0)) { + store_atomic2(pv, val); + return; + } + + atmax = required_atomicity(env, pi, memop); + if (atmax == MO_8) { + stw_he_p(pv, val); + return; + } + + /* + * The only case remaining is MO_ATOM_WITHIN16. + * Big or little endian, we want the middle two bytes in each test. + */ + if ((pi & 3) == 1) { + store_atom_insert_al4(pv - 1, (uint32_t)val << 8, MAKE_64BIT_MASK(8, 16)); + return; + } else if ((pi & 7) == 3) { + if (HAVE_al8) { + store_atom_insert_al8(pv - 3, (uint64_t)val << 24, MAKE_64BIT_MASK(24, 16)); + return; + } + } else if ((pi & 15) == 7) { + if (HAVE_al16) { + Int128 v = int128_lshift(int128_make64(val), 56); + Int128 m = int128_lshift(int128_make64(0xffff), 56); + store_atom_insert_al16(pv - 7, v, m); + return; + } + } else { + g_assert_not_reached(); + } + + cpu_loop_exit_atomic(env_cpu(env), ra); +} + +/** + * store_atom_4: + * @p: host address + * @val: the value to store + * @memop: the full memory op + * + * Store 4 bytes to @p, honoring the atomicity of @memop. 
+ */ +static void store_atom_4(CPUArchState *env, uintptr_t ra, + void *pv, MemOp memop, uint32_t val) +{ + uintptr_t pi = (uintptr_t)pv; + int atmax; + + if (likely((pi & 3) == 0)) { + store_atomic4(pv, val); + return; + } + + atmax = required_atomicity(env, pi, memop); + switch (atmax) { + case MO_8: + stl_he_p(pv, val); + return; + case MO_16: + store_atom_4_by_2(pv, val); + return; + case -MO_16: + { + uint32_t val_le = cpu_to_le32(val); + int s2 = pi & 3; + int s1 = 4 - s2; + + switch (s2) { + case 1: + val_le = store_whole_le4(pv, s1, val_le); + *(uint8_t *)(pv + 3) = val_le; + break; + case 3: + *(uint8_t *)pv = val_le; + store_whole_le4(pv + 1, s2, val_le >> 8); + break; + case 0: /* aligned */ + case 2: /* atmax MO_16 */ + default: + g_assert_not_reached(); + } + } + return; + case MO_32: + if ((pi & 7) < 4) { + if (HAVE_al8) { + store_whole_le8(pv, 4, cpu_to_le32(val)); + return; + } + } else { + if (HAVE_al16) { + store_whole_le16(pv, 4, int128_make64(cpu_to_le32(val))); + return; + } + } + cpu_loop_exit_atomic(env_cpu(env), ra); + default: + g_assert_not_reached(); + } +} + +/** + * store_atom_8: + * @p: host address + * @val: the value to store + * @memop: the full memory op + * + * Store 8 bytes to @p, honoring the atomicity of @memop. + */ +static void store_atom_8(CPUArchState *env, uintptr_t ra, + void *pv, MemOp memop, uint64_t val) +{ + uintptr_t pi = (uintptr_t)pv; + int atmax; + + if (HAVE_al8 && likely((pi & 7) == 0)) { + store_atomic8(pv, val); + return; + } + + atmax = required_atomicity(env, pi, memop); + switch (atmax) { + case MO_8: + stq_he_p(pv, val); + return; + case MO_16: + store_atom_8_by_2(pv, val); + return; + case MO_32: + store_atom_8_by_4(pv, val); + return; + case -MO_32: + if (HAVE_al8) { + uint64_t val_le = cpu_to_le64(val); + int s2 = pi & 7; + int s1 = 8 - s2; + + switch (s2) { + case 1 ... 3: + val_le = store_whole_le8(pv, s1, val_le); + store_bytes_leN(pv + s1, s2, val_le); + break; + case 5 ... 
7: + val_le = store_bytes_leN(pv, s1, val_le); + store_whole_le8(pv + s1, s2, val_le); + break; + case 0: /* aligned */ + case 4: /* atmax MO_32 */ + default: + g_assert_not_reached(); + } + return; + } + break; + case MO_64: + if (HAVE_al16) { + store_whole_le16(pv, 8, int128_make64(cpu_to_le64(val))); + return; + } + break; + default: + g_assert_not_reached(); + } + cpu_loop_exit_atomic(env_cpu(env), ra); +}
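One pattern worth calling out from the new store helpers: store_atom_insert_al4/al8/al16 all perform a masked read-modify-write, i.e. load the covering aligned word, splice the sub-word bytes in under a mask, and retry with compare-and-swap until no concurrent writer intervened. Below is a standalone sketch of that pattern in plain C11 atomics, not part of the patch: the function name and test values are invented for the example, and the patch itself uses QEMU's qatomic_*/__atomic_compare_exchange_n wrappers rather than <stdatomic.h>.

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Atomically replace the bytes selected by @msk within the aligned
     * 32-bit word at @p with the corresponding bits of @val. */
    static void insert_masked_u32(_Atomic uint32_t *p, uint32_t val, uint32_t msk)
    {
        uint32_t old = atomic_load_explicit(p, memory_order_relaxed);
        uint32_t new;

        do {
            new = (old & ~msk) | (val & msk);
        } while (!atomic_compare_exchange_weak_explicit(p, &old, new,
                                                        memory_order_relaxed,
                                                        memory_order_relaxed));
    }

    int main(void)
    {
        _Atomic uint32_t word = 0x11223344;

        /* A misaligned 2-byte guest store that lands in the middle of this
         * aligned host word only touches the two middle bytes. */
        insert_masked_u32(&word, 0x00aabb00, 0x00ffff00);
        printf("0x%08x\n", (unsigned)atomic_load(&word));  /* 0x11aabb44 */
        return 0;
    }

The weak compare-and-swap may fail spuriously, which is why both the sketch and the patch retry in a loop.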
From patchwork Wed May 3 07:06:07 2023
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 678621
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
    qemu-riscv@nongnu.org, qemu-s390x@nongnu.org
Subject: [PATCH v4 08/57] target/loongarch: Do not include tcg-ldst.h
Date: Wed, 3 May 2023 08:06:07 +0100
Message-Id: <20230503070656.1746170-9-richard.henderson@linaro.org>
In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org>
References: <20230503070656.1746170-1-richard.henderson@linaro.org>

This header is supposed to be private to tcg and in fact does not need to be included here at all.

Reviewed-by: Song Gao
Signed-off-by: Richard Henderson
Reviewed-by: Peter Maydell
---
 target/loongarch/csr_helper.c   | 1 -
 target/loongarch/iocsr_helper.c | 1 -
 2 files changed, 2 deletions(-)

diff --git a/target/loongarch/csr_helper.c b/target/loongarch/csr_helper.c index 7e02787895..6526367946 100644 --- a/target/loongarch/csr_helper.c +++ b/target/loongarch/csr_helper.c @@ -15,7 +15,6 @@ #include "exec/cpu_ldst.h" #include "hw/irq.h" #include "cpu-csr.h" -#include "tcg/tcg-ldst.h" target_ulong helper_csrrd_pgd(CPULoongArchState *env) { diff --git a/target/loongarch/iocsr_helper.c b/target/loongarch/iocsr_helper.c index 505853e17b..dda9845d6c 100644 --- a/target/loongarch/iocsr_helper.c +++ b/target/loongarch/iocsr_helper.c @@ -12,7 +12,6 @@ #include "exec/helper-proto.h" #include "exec/exec-all.h" #include "exec/cpu_ldst.h" -#include "tcg/tcg-ldst.h" #define GET_MEMTXATTRS(cas) \ ((MemTxAttrs){.requester_id = env_cpu(cas)->cpu_index})
From patchwork Wed May 3 07:06:08 2023
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 678664
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
    qemu-riscv@nongnu.org, qemu-s390x@nongnu.org
Subject: [PATCH v4 09/57] tcg: Unify helper_{be,le}_{ld,st}*
Date: Wed, 3 May 2023 08:06:08 +0100
Message-Id: <20230503070656.1746170-10-richard.henderson@linaro.org>
In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org>
References: <20230503070656.1746170-1-richard.henderson@linaro.org>

With the current structure of cputlb.c, there is no difference between the little-endian and big-endian entry points, aside from the assert. Unify the pairs of functions. Hoist the qemu_{ld,st}_helpers arrays to tcg.c.

Reviewed-by: Philippe Mathieu-Daudé
Signed-off-by: Richard Henderson
Reviewed-by: Peter Maydell
---
 include/tcg/tcg-ldst.h           |  60 ++++------
 accel/tcg/cputlb.c               | 190 ++++++++++--------------------
 tcg/tcg.c                        |  21 ++++
 tcg/tci.c                        |  61 ++++------
 docs/devel/loads-stores.rst      |  36 ++----
 tcg/aarch64/tcg-target.c.inc     |  33 ------
 tcg/arm/tcg-target.c.inc         |  37 ------
 tcg/i386/tcg-target.c.inc        |  30 +----
 tcg/loongarch64/tcg-target.c.inc |  23 ----
 tcg/mips/tcg-target.c.inc        |  31 -----
 tcg/ppc/tcg-target.c.inc         |  30 +----
 tcg/riscv/tcg-target.c.inc       |  42 -------
 tcg/s390x/tcg-target.c.inc       |  31 +----
 tcg/sparc64/tcg-target.c.inc     |  32 +-----
 14 files changed, 146 insertions(+), 511 deletions(-)

diff --git a/include/tcg/tcg-ldst.h b/include/tcg/tcg-ldst.h index 684e394b06..3d897ca942 100644 --- a/include/tcg/tcg-ldst.h +++ b/include/tcg/tcg-ldst.h @@ -28,51 +28,35 @@ #ifdef CONFIG_SOFTMMU /* Value zero-extended to tcg register size.
*/ -tcg_target_ulong helper_ret_ldub_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); -tcg_target_ulong helper_le_lduw_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); -tcg_target_ulong helper_le_ldul_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); -uint64_t helper_le_ldq_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); -tcg_target_ulong helper_be_lduw_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); -tcg_target_ulong helper_be_ldul_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); -uint64_t helper_be_ldq_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); +tcg_target_ulong helper_ldub_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr); +tcg_target_ulong helper_lduw_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr); +tcg_target_ulong helper_ldul_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr); +uint64_t helper_ldq_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr); /* Value sign-extended to tcg register size. */ -tcg_target_ulong helper_ret_ldsb_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); -tcg_target_ulong helper_le_ldsw_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); -tcg_target_ulong helper_le_ldsl_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); -tcg_target_ulong helper_be_ldsw_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); -tcg_target_ulong helper_be_ldsl_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); +tcg_target_ulong helper_ldsb_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr); +tcg_target_ulong helper_ldsw_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr); +tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr); /* * Value extended to at least uint32_t, so that some ABIs do not require * zero-extension from uint8_t or uint16_t. 
*/ -void helper_ret_stb_mmu(CPUArchState *env, target_ulong addr, uint32_t val, - MemOpIdx oi, uintptr_t retaddr); -void helper_le_stw_mmu(CPUArchState *env, target_ulong addr, uint32_t val, - MemOpIdx oi, uintptr_t retaddr); -void helper_le_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, - MemOpIdx oi, uintptr_t retaddr); -void helper_le_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, - MemOpIdx oi, uintptr_t retaddr); -void helper_be_stw_mmu(CPUArchState *env, target_ulong addr, uint32_t val, - MemOpIdx oi, uintptr_t retaddr); -void helper_be_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, - MemOpIdx oi, uintptr_t retaddr); -void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, - MemOpIdx oi, uintptr_t retaddr); +void helper_stb_mmu(CPUArchState *env, target_ulong addr, uint32_t val, + MemOpIdx oi, uintptr_t retaddr); +void helper_stw_mmu(CPUArchState *env, target_ulong addr, uint32_t val, + MemOpIdx oi, uintptr_t retaddr); +void helper_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, + MemOpIdx oi, uintptr_t retaddr); +void helper_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, + MemOpIdx oi, uintptr_t retaddr); #else diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index 6b8b472a11..566cf8311b 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -2011,25 +2011,6 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr, cpu_loop_exit_atomic(env_cpu(env), retaddr); } -/* - * Verify that we have passed the correct MemOp to the correct function. - * - * In the case of the helper_*_mmu functions, we will have done this by - * using the MemOp to look up the helper during code generation. - * - * In the case of the cpu_*_mmu functions, this is up to the caller. - * We could present one function to target code, and dispatch based on - * the MemOp, but so far we have worked hard to avoid an indirect function - * call along the memory path. 
- */ -static void validate_memop(MemOpIdx oi, MemOp expected) -{ -#ifdef CONFIG_DEBUG_TCG - MemOp have = get_memop(oi) & (MO_SIZE | MO_BSWAP); - assert(have == expected); -#endif -} - /* * Load Helpers * @@ -2297,10 +2278,10 @@ static uint8_t do_ld1_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, return do_ld_1(env, &l.page[0], l.mmu_idx, access_type, ra); } -tcg_target_ulong helper_ret_ldub_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) +tcg_target_ulong helper_ldub_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr) { - validate_memop(oi, MO_UB); + tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8); return do_ld1_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); } @@ -2328,17 +2309,10 @@ static uint16_t do_ld2_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, return ret; } -tcg_target_ulong helper_le_lduw_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) +tcg_target_ulong helper_lduw_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr) { - validate_memop(oi, MO_LEUW); - return do_ld2_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); -} - -tcg_target_ulong helper_be_lduw_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) -{ - validate_memop(oi, MO_BEUW); + tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16); return do_ld2_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); } @@ -2362,17 +2336,10 @@ static uint32_t do_ld4_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, return ret; } -tcg_target_ulong helper_le_ldul_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) +tcg_target_ulong helper_ldul_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr) { - validate_memop(oi, MO_LEUL); - return do_ld4_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); -} - -tcg_target_ulong helper_be_ldul_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) -{ - validate_memop(oi, MO_BEUL); + tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32); return do_ld4_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); } @@ -2396,17 +2363,10 @@ static uint64_t do_ld8_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, return ret; } -uint64_t helper_le_ldq_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) +uint64_t helper_ldq_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr) { - validate_memop(oi, MO_LEUQ); - return do_ld8_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); -} - -uint64_t helper_be_ldq_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) -{ - validate_memop(oi, MO_BEUQ); + tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64); return do_ld8_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); } @@ -2415,35 +2375,22 @@ uint64_t helper_be_ldq_mmu(CPUArchState *env, target_ulong addr, * avoid this for 64-bit data, or for 32-bit data on 32-bit host. 
*/ - -tcg_target_ulong helper_ret_ldsb_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) +tcg_target_ulong helper_ldsb_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr) { - return (int8_t)helper_ret_ldub_mmu(env, addr, oi, retaddr); + return (int8_t)helper_ldub_mmu(env, addr, oi, retaddr); } -tcg_target_ulong helper_le_ldsw_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) +tcg_target_ulong helper_ldsw_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr) { - return (int16_t)helper_le_lduw_mmu(env, addr, oi, retaddr); + return (int16_t)helper_lduw_mmu(env, addr, oi, retaddr); } -tcg_target_ulong helper_be_ldsw_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) +tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr) { - return (int16_t)helper_be_lduw_mmu(env, addr, oi, retaddr); -} - -tcg_target_ulong helper_le_ldsl_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) -{ - return (int32_t)helper_le_ldul_mmu(env, addr, oi, retaddr); -} - -tcg_target_ulong helper_be_ldsl_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) -{ - return (int32_t)helper_be_ldul_mmu(env, addr, oi, retaddr); + return (int32_t)helper_ldul_mmu(env, addr, oi, retaddr); } /* @@ -2459,7 +2406,7 @@ uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { uint8_t ret; - validate_memop(oi, MO_UB); + tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_UB); ret = do_ld1_mmu(env, addr, oi, ra, MMU_DATA_LOAD); plugin_load_cb(env, addr, oi); return ret; @@ -2470,7 +2417,7 @@ uint16_t cpu_ldw_be_mmu(CPUArchState *env, abi_ptr addr, { uint16_t ret; - validate_memop(oi, MO_BEUW); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == MO_BEUW); ret = do_ld2_mmu(env, addr, oi, ra, MMU_DATA_LOAD); plugin_load_cb(env, addr, oi); return ret; @@ -2481,7 +2428,7 @@ uint32_t cpu_ldl_be_mmu(CPUArchState *env, abi_ptr addr, { uint32_t ret; - validate_memop(oi, MO_BEUL); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == MO_BEUL); ret = do_ld4_mmu(env, addr, oi, ra, MMU_DATA_LOAD); plugin_load_cb(env, addr, oi); return ret; @@ -2492,7 +2439,7 @@ uint64_t cpu_ldq_be_mmu(CPUArchState *env, abi_ptr addr, { uint64_t ret; - validate_memop(oi, MO_BEUQ); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == MO_BEUQ); ret = do_ld8_mmu(env, addr, oi, ra, MMU_DATA_LOAD); plugin_load_cb(env, addr, oi); return ret; @@ -2503,7 +2450,7 @@ uint16_t cpu_ldw_le_mmu(CPUArchState *env, abi_ptr addr, { uint16_t ret; - validate_memop(oi, MO_LEUW); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == MO_LEUW); ret = do_ld2_mmu(env, addr, oi, ra, MMU_DATA_LOAD); plugin_load_cb(env, addr, oi); return ret; @@ -2514,7 +2461,7 @@ uint32_t cpu_ldl_le_mmu(CPUArchState *env, abi_ptr addr, { uint32_t ret; - validate_memop(oi, MO_LEUL); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == MO_LEUL); ret = do_ld4_mmu(env, addr, oi, ra, MMU_DATA_LOAD); plugin_load_cb(env, addr, oi); return ret; @@ -2525,7 +2472,7 @@ uint64_t cpu_ldq_le_mmu(CPUArchState *env, abi_ptr addr, { uint64_t ret; - validate_memop(oi, MO_LEUQ); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == MO_LEUQ); ret = do_ld8_mmu(env, addr, oi, ra, MMU_DATA_LOAD); plugin_load_cb(env, addr, oi); return ret; @@ -2553,8 +2500,8 @@ Int128 cpu_ld16_be_mmu(CPUArchState *env, abi_ptr addr, mop = (mop & ~(MO_SIZE | MO_AMASK)) | 
MO_64 | MO_UNALN; new_oi = make_memop_idx(mop, mmu_idx); - h = helper_be_ldq_mmu(env, addr, new_oi, ra); - l = helper_be_ldq_mmu(env, addr + 8, new_oi, ra); + h = helper_ldq_mmu(env, addr, new_oi, ra); + l = helper_ldq_mmu(env, addr + 8, new_oi, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); return int128_make128(l, h); @@ -2582,8 +2529,8 @@ Int128 cpu_ld16_le_mmu(CPUArchState *env, abi_ptr addr, mop = (mop & ~(MO_SIZE | MO_AMASK)) | MO_64 | MO_UNALN; new_oi = make_memop_idx(mop, mmu_idx); - l = helper_le_ldq_mmu(env, addr, new_oi, ra); - h = helper_le_ldq_mmu(env, addr + 8, new_oi, ra); + l = helper_ldq_mmu(env, addr, new_oi, ra); + h = helper_ldq_mmu(env, addr + 8, new_oi, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); return int128_make128(l, h); @@ -2727,13 +2674,13 @@ static void do_st_8(CPUArchState *env, MMULookupPageData *p, uint64_t val, } } -void helper_ret_stb_mmu(CPUArchState *env, target_ulong addr, uint32_t val, - MemOpIdx oi, uintptr_t ra) +void helper_stb_mmu(CPUArchState *env, target_ulong addr, uint32_t val, + MemOpIdx oi, uintptr_t ra) { MMULookupLocals l; bool crosspage; - validate_memop(oi, MO_UB); + tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8); crosspage = mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE, &l); tcg_debug_assert(!crosspage); @@ -2762,17 +2709,10 @@ static void do_st2_mmu(CPUArchState *env, target_ulong addr, uint16_t val, do_st_1(env, &l.page[1], b, l.mmu_idx, ra); } -void helper_le_stw_mmu(CPUArchState *env, target_ulong addr, uint32_t val, - MemOpIdx oi, uintptr_t retaddr) +void helper_stw_mmu(CPUArchState *env, target_ulong addr, uint32_t val, + MemOpIdx oi, uintptr_t retaddr) { - validate_memop(oi, MO_LEUW); - do_st2_mmu(env, addr, val, oi, retaddr); -} - -void helper_be_stw_mmu(CPUArchState *env, target_ulong addr, uint32_t val, - MemOpIdx oi, uintptr_t retaddr) -{ - validate_memop(oi, MO_BEUW); + tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16); do_st2_mmu(env, addr, val, oi, retaddr); } @@ -2796,17 +2736,10 @@ static void do_st4_mmu(CPUArchState *env, target_ulong addr, uint32_t val, (void) do_st_leN(env, &l.page[1], val, l.mmu_idx, l.memop, ra); } -void helper_le_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, - MemOpIdx oi, uintptr_t retaddr) +void helper_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, + MemOpIdx oi, uintptr_t retaddr) { - validate_memop(oi, MO_LEUL); - do_st4_mmu(env, addr, val, oi, retaddr); -} - -void helper_be_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, - MemOpIdx oi, uintptr_t retaddr) -{ - validate_memop(oi, MO_BEUL); + tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32); do_st4_mmu(env, addr, val, oi, retaddr); } @@ -2830,17 +2763,10 @@ static void do_st8_mmu(CPUArchState *env, target_ulong addr, uint64_t val, (void) do_st_leN(env, &l.page[1], val, l.mmu_idx, l.memop, ra); } -void helper_le_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, - MemOpIdx oi, uintptr_t retaddr) +void helper_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, + MemOpIdx oi, uintptr_t retaddr) { - validate_memop(oi, MO_LEUQ); - do_st8_mmu(env, addr, val, oi, retaddr); -} - -void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, - MemOpIdx oi, uintptr_t retaddr) -{ - validate_memop(oi, MO_BEUQ); + tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64); do_st8_mmu(env, addr, val, oi, retaddr); } @@ -2856,49 +2782,55 @@ static void plugin_store_cb(CPUArchState *env, abi_ptr addr, MemOpIdx oi) void 
cpu_stb_mmu(CPUArchState *env, target_ulong addr, uint8_t val, MemOpIdx oi, uintptr_t retaddr) { - helper_ret_stb_mmu(env, addr, val, oi, retaddr); + helper_stb_mmu(env, addr, val, oi, retaddr); plugin_store_cb(env, addr, oi); } void cpu_stw_be_mmu(CPUArchState *env, target_ulong addr, uint16_t val, MemOpIdx oi, uintptr_t retaddr) { - helper_be_stw_mmu(env, addr, val, oi, retaddr); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == MO_BEUW); + do_st2_mmu(env, addr, val, oi, retaddr); plugin_store_cb(env, addr, oi); } void cpu_stl_be_mmu(CPUArchState *env, target_ulong addr, uint32_t val, MemOpIdx oi, uintptr_t retaddr) { - helper_be_stl_mmu(env, addr, val, oi, retaddr); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == MO_BEUL); + do_st4_mmu(env, addr, val, oi, retaddr); plugin_store_cb(env, addr, oi); } void cpu_stq_be_mmu(CPUArchState *env, target_ulong addr, uint64_t val, MemOpIdx oi, uintptr_t retaddr) { - helper_be_stq_mmu(env, addr, val, oi, retaddr); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == MO_BEUQ); + do_st8_mmu(env, addr, val, oi, retaddr); plugin_store_cb(env, addr, oi); } void cpu_stw_le_mmu(CPUArchState *env, target_ulong addr, uint16_t val, MemOpIdx oi, uintptr_t retaddr) { - helper_le_stw_mmu(env, addr, val, oi, retaddr); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == MO_LEUW); + do_st2_mmu(env, addr, val, oi, retaddr); plugin_store_cb(env, addr, oi); } void cpu_stl_le_mmu(CPUArchState *env, target_ulong addr, uint32_t val, MemOpIdx oi, uintptr_t retaddr) { - helper_le_stl_mmu(env, addr, val, oi, retaddr); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == MO_LEUL); + do_st4_mmu(env, addr, val, oi, retaddr); plugin_store_cb(env, addr, oi); } void cpu_stq_le_mmu(CPUArchState *env, target_ulong addr, uint64_t val, MemOpIdx oi, uintptr_t retaddr) { - helper_le_stq_mmu(env, addr, val, oi, retaddr); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == MO_LEUQ); + do_st8_mmu(env, addr, val, oi, retaddr); plugin_store_cb(env, addr, oi); } @@ -2923,8 +2855,8 @@ void cpu_st16_be_mmu(CPUArchState *env, abi_ptr addr, Int128 val, mop = (mop & ~(MO_SIZE | MO_AMASK)) | MO_64 | MO_UNALN; new_oi = make_memop_idx(mop, mmu_idx); - helper_be_stq_mmu(env, addr, int128_gethi(val), new_oi, ra); - helper_be_stq_mmu(env, addr + 8, int128_getlo(val), new_oi, ra); + helper_stq_mmu(env, addr, int128_gethi(val), new_oi, ra); + helper_stq_mmu(env, addr + 8, int128_getlo(val), new_oi, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } @@ -2950,8 +2882,8 @@ void cpu_st16_le_mmu(CPUArchState *env, abi_ptr addr, Int128 val, mop = (mop & ~(MO_SIZE | MO_AMASK)) | MO_64 | MO_UNALN; new_oi = make_memop_idx(mop, mmu_idx); - helper_le_stq_mmu(env, addr, int128_getlo(val), new_oi, ra); - helper_le_stq_mmu(env, addr + 8, int128_gethi(val), new_oi, ra); + helper_stq_mmu(env, addr, int128_getlo(val), new_oi, ra); + helper_stq_mmu(env, addr + 8, int128_gethi(val), new_oi, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } diff --git a/tcg/tcg.c b/tcg/tcg.c index 748be8426a..12510c78c6 100644 --- a/tcg/tcg.c +++ b/tcg/tcg.c @@ -197,6 +197,27 @@ static void tcg_out_st_helper_args(TCGContext *s, const TCGLabelQemuLdst *l, const TCGLdstHelperParam *p) __attribute__((unused)); +#ifdef CONFIG_SOFTMMU +static void * const qemu_ld_helpers[MO_SSIZE + 1] = { + [MO_UB] = helper_ldub_mmu, + [MO_SB] = helper_ldsb_mmu, + [MO_UW] = helper_lduw_mmu, + [MO_SW] = helper_ldsw_mmu, + [MO_UL] = helper_ldul_mmu, + [MO_UQ] 
= helper_ldq_mmu, +#if TCG_TARGET_REG_BITS == 64 + [MO_SL] = helper_ldsl_mmu, +#endif +}; + +static void * const qemu_st_helpers[MO_SIZE + 1] = { + [MO_8] = helper_stb_mmu, + [MO_16] = helper_stw_mmu, + [MO_32] = helper_stl_mmu, + [MO_64] = helper_stq_mmu, +}; +#endif + TCGContext tcg_init_ctx; __thread TCGContext *tcg_ctx; diff --git a/tcg/tci.c b/tcg/tci.c index fc67e7e767..5bde2e1f2e 100644 --- a/tcg/tci.c +++ b/tcg/tci.c @@ -293,31 +293,21 @@ static uint64_t tci_qemu_ld(CPUArchState *env, target_ulong taddr, uintptr_t ra = (uintptr_t)tb_ptr; #ifdef CONFIG_SOFTMMU - switch (mop & (MO_BSWAP | MO_SSIZE)) { + switch (mop & MO_SSIZE) { case MO_UB: - return helper_ret_ldub_mmu(env, taddr, oi, ra); + return helper_ldub_mmu(env, taddr, oi, ra); case MO_SB: - return helper_ret_ldsb_mmu(env, taddr, oi, ra); - case MO_LEUW: - return helper_le_lduw_mmu(env, taddr, oi, ra); - case MO_LESW: - return helper_le_ldsw_mmu(env, taddr, oi, ra); - case MO_LEUL: - return helper_le_ldul_mmu(env, taddr, oi, ra); - case MO_LESL: - return helper_le_ldsl_mmu(env, taddr, oi, ra); - case MO_LEUQ: - return helper_le_ldq_mmu(env, taddr, oi, ra); - case MO_BEUW: - return helper_be_lduw_mmu(env, taddr, oi, ra); - case MO_BESW: - return helper_be_ldsw_mmu(env, taddr, oi, ra); - case MO_BEUL: - return helper_be_ldul_mmu(env, taddr, oi, ra); - case MO_BESL: - return helper_be_ldsl_mmu(env, taddr, oi, ra); - case MO_BEUQ: - return helper_be_ldq_mmu(env, taddr, oi, ra); + return helper_ldsb_mmu(env, taddr, oi, ra); + case MO_UW: + return helper_lduw_mmu(env, taddr, oi, ra); + case MO_SW: + return helper_ldsw_mmu(env, taddr, oi, ra); + case MO_UL: + return helper_ldul_mmu(env, taddr, oi, ra); + case MO_SL: + return helper_ldsl_mmu(env, taddr, oi, ra); + case MO_UQ: + return helper_ldq_mmu(env, taddr, oi, ra); default: g_assert_not_reached(); } @@ -382,27 +372,18 @@ static void tci_qemu_st(CPUArchState *env, target_ulong taddr, uint64_t val, uintptr_t ra = (uintptr_t)tb_ptr; #ifdef CONFIG_SOFTMMU - switch (mop & (MO_BSWAP | MO_SIZE)) { + switch (mop & MO_SIZE) { case MO_UB: - helper_ret_stb_mmu(env, taddr, val, oi, ra); + helper_stb_mmu(env, taddr, val, oi, ra); break; - case MO_LEUW: - helper_le_stw_mmu(env, taddr, val, oi, ra); + case MO_UW: + helper_stw_mmu(env, taddr, val, oi, ra); break; - case MO_LEUL: - helper_le_stl_mmu(env, taddr, val, oi, ra); + case MO_UL: + helper_stl_mmu(env, taddr, val, oi, ra); break; - case MO_LEUQ: - helper_le_stq_mmu(env, taddr, val, oi, ra); - break; - case MO_BEUW: - helper_be_stw_mmu(env, taddr, val, oi, ra); - break; - case MO_BEUL: - helper_be_stl_mmu(env, taddr, val, oi, ra); - break; - case MO_BEUQ: - helper_be_stq_mmu(env, taddr, val, oi, ra); + case MO_UQ: + helper_stq_mmu(env, taddr, val, oi, ra); break; default: g_assert_not_reached(); diff --git a/docs/devel/loads-stores.rst b/docs/devel/loads-stores.rst index ad5dfe133e..d2cefc77a2 100644 --- a/docs/devel/loads-stores.rst +++ b/docs/devel/loads-stores.rst @@ -297,31 +297,20 @@ swap: ``translator_ld{sign}{size}_swap(env, ptr, swap)`` Regexes for git grep - ``\`` -``helper_*_{ld,st}*_mmu`` +``helper_{ld,st}*_mmu`` ~~~~~~~~~~~~~~~~~~~~~~~~~ These functions are intended primarily to be called by the code -generated by the TCG backend. They may also be called by target -CPU helper function code. Like the ``cpu_{ld,st}_mmuidx_ra`` functions -they perform accesses by guest virtual address, with a given ``mmuidx``. +generated by the TCG backend. 
Like the ``cpu_{ld,st}_mmu`` functions +they perform accesses by guest virtual address, with a given ``MemOpIdx``. -These functions specify an ``opindex`` parameter which encodes -(among other things) the mmu index to use for the access. This parameter -should be created by calling ``make_memop_idx()``. +They differ from ``cpu_{ld,st}_mmu`` in that they take the endianness +of the operation only from the MemOpIdx, and loads extend the return +value to the size of a host general register (``tcg_target_ulong``). -The ``retaddr`` parameter should be the result of GETPC() called directly -from the top level HELPER(foo) function (or 0 if no guest CPU state -unwinding is required). +load: ``helper_ld{sign}{size}_mmu(env, addr, opindex, retaddr)`` -**TODO** The names of these functions are a bit odd for historical -reasons because they were originally expected to be called only from -within generated code. We should rename them to bring them more in -line with the other memory access functions. The explicit endianness -is the only feature they have beyond ``*_mmuidx_ra``. - -load: ``helper_{endian}_ld{sign}{size}_mmu(env, addr, opindex, retaddr)`` - -store: ``helper_{endian}_st{size}_mmu(env, addr, val, opindex, retaddr)`` +store: ``helper_{size}_mmu(env, addr, val, opindex, retaddr)`` ``sign`` - (empty) : for 32 or 64 bit sizes @@ -334,14 +323,9 @@ store: ``helper_{endian}_st{size}_mmu(env, addr, val, opindex, retaddr)`` - ``l`` : 32 bits - ``q`` : 64 bits -``endian`` - - ``le`` : little endian - - ``be`` : big endian - - ``ret`` : target endianness - Regexes for git grep - - ``\`` - - ``\`` + - ``\`` + - ``\`` ``address_space_*`` ~~~~~~~~~~~~~~~~~~~ diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc index 62dd22d73c..e6636c1f8b 100644 --- a/tcg/aarch64/tcg-target.c.inc +++ b/tcg/aarch64/tcg-target.c.inc @@ -1587,39 +1587,6 @@ typedef struct { } HostAddress; #ifdef CONFIG_SOFTMMU -/* helper signature: helper_ret_ld_mmu(CPUState *env, target_ulong addr, - * MemOpIdx oi, uintptr_t ra) - */ -static void * const qemu_ld_helpers[MO_SIZE + 1] = { - [MO_8] = helper_ret_ldub_mmu, -#if HOST_BIG_ENDIAN - [MO_16] = helper_be_lduw_mmu, - [MO_32] = helper_be_ldul_mmu, - [MO_64] = helper_be_ldq_mmu, -#else - [MO_16] = helper_le_lduw_mmu, - [MO_32] = helper_le_ldul_mmu, - [MO_64] = helper_le_ldq_mmu, -#endif -}; - -/* helper signature: helper_ret_st_mmu(CPUState *env, target_ulong addr, - * uintxx_t val, MemOpIdx oi, - * uintptr_t ra) - */ -static void * const qemu_st_helpers[MO_SIZE + 1] = { - [MO_8] = helper_ret_stb_mmu, -#if HOST_BIG_ENDIAN - [MO_16] = helper_be_stw_mmu, - [MO_32] = helper_be_stl_mmu, - [MO_64] = helper_be_stq_mmu, -#else - [MO_16] = helper_le_stw_mmu, - [MO_32] = helper_le_stl_mmu, - [MO_64] = helper_le_stq_mmu, -#endif -}; - static const TCGLdstHelperParam ldst_helper_param = { .ntmp = 1, .tmp = { TCG_REG_TMP } }; diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc index df514e56fc..8b0d526659 100644 --- a/tcg/arm/tcg-target.c.inc +++ b/tcg/arm/tcg-target.c.inc @@ -1333,43 +1333,6 @@ typedef struct { } HostAddress; #ifdef CONFIG_SOFTMMU -/* helper signature: helper_ret_ld_mmu(CPUState *env, target_ulong addr, - * int mmu_idx, uintptr_t ra) - */ -static void * const qemu_ld_helpers[MO_SSIZE + 1] = { - [MO_UB] = helper_ret_ldub_mmu, - [MO_SB] = helper_ret_ldsb_mmu, -#if HOST_BIG_ENDIAN - [MO_UW] = helper_be_lduw_mmu, - [MO_UL] = helper_be_ldul_mmu, - [MO_UQ] = helper_be_ldq_mmu, - [MO_SW] = helper_be_ldsw_mmu, - [MO_SL] = helper_be_ldul_mmu, -#else - 
[MO_UW] = helper_le_lduw_mmu, - [MO_UL] = helper_le_ldul_mmu, - [MO_UQ] = helper_le_ldq_mmu, - [MO_SW] = helper_le_ldsw_mmu, - [MO_SL] = helper_le_ldul_mmu, -#endif -}; - -/* helper signature: helper_ret_st_mmu(CPUState *env, target_ulong addr, - * uintxx_t val, int mmu_idx, uintptr_t ra) - */ -static void * const qemu_st_helpers[MO_SIZE + 1] = { - [MO_8] = helper_ret_stb_mmu, -#if HOST_BIG_ENDIAN - [MO_16] = helper_be_stw_mmu, - [MO_32] = helper_be_stl_mmu, - [MO_64] = helper_be_stq_mmu, -#else - [MO_16] = helper_le_stw_mmu, - [MO_32] = helper_le_stl_mmu, - [MO_64] = helper_le_stq_mmu, -#endif -}; - static TCGReg ldst_ra_gen(TCGContext *s, const TCGLabelQemuLdst *l, int arg) { /* We arrive at the slow path via "BLNE", so R14 contains l->raddr. */ diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc index 7dbfcbd20f..bb603e7968 100644 --- a/tcg/i386/tcg-target.c.inc +++ b/tcg/i386/tcg-target.c.inc @@ -1776,32 +1776,6 @@ typedef struct { } HostAddress; #if defined(CONFIG_SOFTMMU) -/* helper signature: helper_ret_ld_mmu(CPUState *env, target_ulong addr, - * int mmu_idx, uintptr_t ra) - */ -static void * const qemu_ld_helpers[(MO_SIZE | MO_BSWAP) + 1] = { - [MO_UB] = helper_ret_ldub_mmu, - [MO_LEUW] = helper_le_lduw_mmu, - [MO_LEUL] = helper_le_ldul_mmu, - [MO_LEUQ] = helper_le_ldq_mmu, - [MO_BEUW] = helper_be_lduw_mmu, - [MO_BEUL] = helper_be_ldul_mmu, - [MO_BEUQ] = helper_be_ldq_mmu, -}; - -/* helper signature: helper_ret_st_mmu(CPUState *env, target_ulong addr, - * uintxx_t val, int mmu_idx, uintptr_t ra) - */ -static void * const qemu_st_helpers[(MO_SIZE | MO_BSWAP) + 1] = { - [MO_UB] = helper_ret_stb_mmu, - [MO_LEUW] = helper_le_stw_mmu, - [MO_LEUL] = helper_le_stl_mmu, - [MO_LEUQ] = helper_le_stq_mmu, - [MO_BEUW] = helper_be_stw_mmu, - [MO_BEUL] = helper_be_stl_mmu, - [MO_BEUQ] = helper_be_stq_mmu, -}; - /* * Because i686 has no register parameters and because x86_64 has xchg * to handle addr/data register overlap, we have placed all input arguments @@ -1842,7 +1816,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l) } tcg_out_ld_helper_args(s, l, &ldst_helper_param); - tcg_out_branch(s, 1, qemu_ld_helpers[opc & (MO_BSWAP | MO_SIZE)]); + tcg_out_branch(s, 1, qemu_ld_helpers[opc & MO_SIZE]); tcg_out_ld_helper_ret(s, l, false, &ldst_helper_param); tcg_out_jmp(s, l->raddr); @@ -1864,7 +1838,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l) } tcg_out_st_helper_args(s, l, &ldst_helper_param); - tcg_out_branch(s, 1, qemu_st_helpers[opc & (MO_BSWAP | MO_SIZE)]); + tcg_out_branch(s, 1, qemu_st_helpers[opc & MO_SIZE]); tcg_out_jmp(s, l->raddr); return true; diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc index 83fa45c802..d1bc29826f 100644 --- a/tcg/loongarch64/tcg-target.c.inc +++ b/tcg/loongarch64/tcg-target.c.inc @@ -784,29 +784,6 @@ static bool tcg_out_sti(TCGContext *s, TCGType type, TCGArg val, */ #if defined(CONFIG_SOFTMMU) -/* - * helper signature: helper_ret_ld_mmu(CPUState *env, target_ulong addr, - * MemOpIdx oi, uintptr_t ra) - */ -static void * const qemu_ld_helpers[4] = { - [MO_8] = helper_ret_ldub_mmu, - [MO_16] = helper_le_lduw_mmu, - [MO_32] = helper_le_ldul_mmu, - [MO_64] = helper_le_ldq_mmu, -}; - -/* - * helper signature: helper_ret_st_mmu(CPUState *env, target_ulong addr, - * uintxx_t val, MemOpIdx oi, - * uintptr_t ra) - */ -static void * const qemu_st_helpers[4] = { - [MO_8] = helper_ret_stb_mmu, - [MO_16] = helper_le_stw_mmu, - [MO_32] = helper_le_stl_mmu, - [MO_64] = 
helper_le_stq_mmu, -}; - static bool tcg_out_goto(TCGContext *s, const tcg_insn_unit *target) { tcg_out_opc_b(s, 0); diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc index 5ad9867882..7770ef46bd 100644 --- a/tcg/mips/tcg-target.c.inc +++ b/tcg/mips/tcg-target.c.inc @@ -1076,37 +1076,6 @@ static void tcg_out_call(TCGContext *s, const tcg_insn_unit *arg, } #if defined(CONFIG_SOFTMMU) -static void * const qemu_ld_helpers[MO_SSIZE + 1] = { - [MO_UB] = helper_ret_ldub_mmu, - [MO_SB] = helper_ret_ldsb_mmu, -#if HOST_BIG_ENDIAN - [MO_UW] = helper_be_lduw_mmu, - [MO_SW] = helper_be_ldsw_mmu, - [MO_UL] = helper_be_ldul_mmu, - [MO_SL] = helper_be_ldsl_mmu, - [MO_UQ] = helper_be_ldq_mmu, -#else - [MO_UW] = helper_le_lduw_mmu, - [MO_SW] = helper_le_ldsw_mmu, - [MO_UL] = helper_le_ldul_mmu, - [MO_UQ] = helper_le_ldq_mmu, - [MO_SL] = helper_le_ldsl_mmu, -#endif -}; - -static void * const qemu_st_helpers[MO_SIZE + 1] = { - [MO_UB] = helper_ret_stb_mmu, -#if HOST_BIG_ENDIAN - [MO_UW] = helper_be_stw_mmu, - [MO_UL] = helper_be_stl_mmu, - [MO_UQ] = helper_be_stq_mmu, -#else - [MO_UW] = helper_le_stw_mmu, - [MO_UL] = helper_le_stl_mmu, - [MO_UQ] = helper_le_stq_mmu, -#endif -}; - /* We have four temps, we might as well expose three of them. */ static const TCGLdstHelperParam ldst_helper_param = { .ntmp = 3, .tmp = { TCG_TMP0, TCG_TMP1, TCG_TMP2 } diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc index 0a14c3e997..0963156a78 100644 --- a/tcg/ppc/tcg-target.c.inc +++ b/tcg/ppc/tcg-target.c.inc @@ -1963,32 +1963,6 @@ static const uint32_t qemu_stx_opc[(MO_SIZE + MO_BSWAP) + 1] = { }; #if defined (CONFIG_SOFTMMU) -/* helper signature: helper_ld_mmu(CPUState *env, target_ulong addr, - * int mmu_idx, uintptr_t ra) - */ -static void * const qemu_ld_helpers[(MO_SIZE | MO_BSWAP) + 1] = { - [MO_UB] = helper_ret_ldub_mmu, - [MO_LEUW] = helper_le_lduw_mmu, - [MO_LEUL] = helper_le_ldul_mmu, - [MO_LEUQ] = helper_le_ldq_mmu, - [MO_BEUW] = helper_be_lduw_mmu, - [MO_BEUL] = helper_be_ldul_mmu, - [MO_BEUQ] = helper_be_ldq_mmu, -}; - -/* helper signature: helper_st_mmu(CPUState *env, target_ulong addr, - * uintxx_t val, int mmu_idx, uintptr_t ra) - */ -static void * const qemu_st_helpers[(MO_SIZE | MO_BSWAP) + 1] = { - [MO_UB] = helper_ret_stb_mmu, - [MO_LEUW] = helper_le_stw_mmu, - [MO_LEUL] = helper_le_stl_mmu, - [MO_LEUQ] = helper_le_stq_mmu, - [MO_BEUW] = helper_be_stw_mmu, - [MO_BEUL] = helper_be_stl_mmu, - [MO_BEUQ] = helper_be_stq_mmu, -}; - static TCGReg ldst_ra_gen(TCGContext *s, const TCGLabelQemuLdst *l, int arg) { if (arg < 0) { @@ -2017,7 +1991,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb) } tcg_out_ld_helper_args(s, lb, &ldst_helper_param); - tcg_out_call_int(s, LK, qemu_ld_helpers[opc & (MO_BSWAP | MO_SIZE)]); + tcg_out_call_int(s, LK, qemu_ld_helpers[opc & MO_SIZE]); tcg_out_ld_helper_ret(s, lb, false, &ldst_helper_param); tcg_out_b(s, 0, lb->raddr); @@ -2033,7 +2007,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb) } tcg_out_st_helper_args(s, lb, &ldst_helper_param); - tcg_out_call_int(s, LK, qemu_st_helpers[opc & (MO_BSWAP | MO_SIZE)]); + tcg_out_call_int(s, LK, qemu_st_helpers[opc & MO_SIZE]); tcg_out_b(s, 0, lb->raddr); return true; diff --git a/tcg/riscv/tcg-target.c.inc b/tcg/riscv/tcg-target.c.inc index d12b824d8c..8ed0e2f210 100644 --- a/tcg/riscv/tcg-target.c.inc +++ b/tcg/riscv/tcg-target.c.inc @@ -847,48 +847,6 @@ static void tcg_out_mb(TCGContext *s, TCGArg a0) */ #if defined(CONFIG_SOFTMMU) -/* helper 
signature: helper_ret_ld_mmu(CPUState *env, target_ulong addr, - * MemOpIdx oi, uintptr_t ra) - */ -static void * const qemu_ld_helpers[MO_SSIZE + 1] = { - [MO_UB] = helper_ret_ldub_mmu, - [MO_SB] = helper_ret_ldsb_mmu, -#if HOST_BIG_ENDIAN - [MO_UW] = helper_be_lduw_mmu, - [MO_SW] = helper_be_ldsw_mmu, - [MO_UL] = helper_be_ldul_mmu, -#if TCG_TARGET_REG_BITS == 64 - [MO_SL] = helper_be_ldsl_mmu, -#endif - [MO_UQ] = helper_be_ldq_mmu, -#else - [MO_UW] = helper_le_lduw_mmu, - [MO_SW] = helper_le_ldsw_mmu, - [MO_UL] = helper_le_ldul_mmu, -#if TCG_TARGET_REG_BITS == 64 - [MO_SL] = helper_le_ldsl_mmu, -#endif - [MO_UQ] = helper_le_ldq_mmu, -#endif -}; - -/* helper signature: helper_ret_st_mmu(CPUState *env, target_ulong addr, - * uintxx_t val, MemOpIdx oi, - * uintptr_t ra) - */ -static void * const qemu_st_helpers[MO_SIZE + 1] = { - [MO_8] = helper_ret_stb_mmu, -#if HOST_BIG_ENDIAN - [MO_16] = helper_be_stw_mmu, - [MO_32] = helper_be_stl_mmu, - [MO_64] = helper_be_stq_mmu, -#else - [MO_16] = helper_le_stw_mmu, - [MO_32] = helper_le_stl_mmu, - [MO_64] = helper_le_stq_mmu, -#endif -}; - static void tcg_out_goto(TCGContext *s, const tcg_insn_unit *target) { tcg_out_opc_jump(s, OPC_JAL, TCG_REG_ZERO, 0); diff --git a/tcg/s390x/tcg-target.c.inc b/tcg/s390x/tcg-target.c.inc index aacbaf21d5..968977be98 100644 --- a/tcg/s390x/tcg-target.c.inc +++ b/tcg/s390x/tcg-target.c.inc @@ -438,33 +438,6 @@ static const uint8_t tcg_cond_to_ltr_cond[] = { [TCG_COND_GEU] = S390_CC_ALWAYS, }; -#ifdef CONFIG_SOFTMMU -static void * const qemu_ld_helpers[(MO_SSIZE | MO_BSWAP) + 1] = { - [MO_UB] = helper_ret_ldub_mmu, - [MO_SB] = helper_ret_ldsb_mmu, - [MO_LEUW] = helper_le_lduw_mmu, - [MO_LESW] = helper_le_ldsw_mmu, - [MO_LEUL] = helper_le_ldul_mmu, - [MO_LESL] = helper_le_ldsl_mmu, - [MO_LEUQ] = helper_le_ldq_mmu, - [MO_BEUW] = helper_be_lduw_mmu, - [MO_BESW] = helper_be_ldsw_mmu, - [MO_BEUL] = helper_be_ldul_mmu, - [MO_BESL] = helper_be_ldsl_mmu, - [MO_BEUQ] = helper_be_ldq_mmu, -}; - -static void * const qemu_st_helpers[(MO_SIZE | MO_BSWAP) + 1] = { - [MO_UB] = helper_ret_stb_mmu, - [MO_LEUW] = helper_le_stw_mmu, - [MO_LEUL] = helper_le_stl_mmu, - [MO_LEUQ] = helper_le_stq_mmu, - [MO_BEUW] = helper_be_stw_mmu, - [MO_BEUL] = helper_be_stl_mmu, - [MO_BEUQ] = helper_be_stq_mmu, -}; -#endif - static const tcg_insn_unit *tb_ret_addr; uint64_t s390_facilities[3]; @@ -1721,7 +1694,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb) } tcg_out_ld_helper_args(s, lb, &ldst_helper_param); - tcg_out_call_int(s, qemu_ld_helpers[opc & (MO_BSWAP | MO_SIZE)]); + tcg_out_call_int(s, qemu_ld_helpers[opc & MO_SIZE]); tcg_out_ld_helper_ret(s, lb, false, &ldst_helper_param); tgen_gotoi(s, S390_CC_ALWAYS, lb->raddr); @@ -1738,7 +1711,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb) } tcg_out_st_helper_args(s, lb, &ldst_helper_param); - tcg_out_call_int(s, qemu_st_helpers[opc & (MO_BSWAP | MO_SIZE)]); + tcg_out_call_int(s, qemu_st_helpers[opc & MO_SIZE]); tgen_gotoi(s, S390_CC_ALWAYS, lb->raddr); return true; diff --git a/tcg/sparc64/tcg-target.c.inc b/tcg/sparc64/tcg-target.c.inc index 7e6466d3b6..e997db2645 100644 --- a/tcg/sparc64/tcg-target.c.inc +++ b/tcg/sparc64/tcg-target.c.inc @@ -919,33 +919,11 @@ static void tcg_out_mb(TCGContext *s, TCGArg a0) } #ifdef CONFIG_SOFTMMU -static const tcg_insn_unit *qemu_ld_trampoline[(MO_SSIZE | MO_BSWAP) + 1]; -static const tcg_insn_unit *qemu_st_trampoline[(MO_SIZE | MO_BSWAP) + 1]; +static const tcg_insn_unit 
*qemu_ld_trampoline[MO_SSIZE + 1]; +static const tcg_insn_unit *qemu_st_trampoline[MO_SIZE + 1]; static void build_trampolines(TCGContext *s) { - static void * const qemu_ld_helpers[] = { - [MO_UB] = helper_ret_ldub_mmu, - [MO_SB] = helper_ret_ldsb_mmu, - [MO_LEUW] = helper_le_lduw_mmu, - [MO_LESW] = helper_le_ldsw_mmu, - [MO_LEUL] = helper_le_ldul_mmu, - [MO_LEUQ] = helper_le_ldq_mmu, - [MO_BEUW] = helper_be_lduw_mmu, - [MO_BESW] = helper_be_ldsw_mmu, - [MO_BEUL] = helper_be_ldul_mmu, - [MO_BEUQ] = helper_be_ldq_mmu, - }; - static void * const qemu_st_helpers[] = { - [MO_UB] = helper_ret_stb_mmu, - [MO_LEUW] = helper_le_stw_mmu, - [MO_LEUL] = helper_le_stl_mmu, - [MO_LEUQ] = helper_le_stq_mmu, - [MO_BEUW] = helper_be_stw_mmu, - [MO_BEUL] = helper_be_stl_mmu, - [MO_BEUQ] = helper_be_stq_mmu, - }; - int i; for (i = 0; i < ARRAY_SIZE(qemu_ld_helpers); ++i) { @@ -1210,9 +1188,9 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr, /* We use the helpers to extend SB and SW data, leaving the case of SL needing explicit extending below. */ if ((memop & MO_SSIZE) == MO_SL) { - func = qemu_ld_trampoline[memop & (MO_BSWAP | MO_SIZE)]; + func = qemu_ld_trampoline[MO_UL]; } else { - func = qemu_ld_trampoline[memop & (MO_BSWAP | MO_SSIZE)]; + func = qemu_ld_trampoline[memop & MO_SSIZE]; } tcg_debug_assert(func != NULL); tcg_out_call_nodelay(s, func, false); @@ -1353,7 +1331,7 @@ static void tcg_out_qemu_st(TCGContext *s, TCGReg data, TCGReg addr, tcg_out_movext(s, (memop & MO_SIZE) == MO_64 ? TCG_TYPE_I64 : TCG_TYPE_I32, TCG_REG_O2, data_type, memop & MO_SIZE, data); - func = qemu_st_trampoline[memop & (MO_BSWAP | MO_SIZE)]; + func = qemu_st_trampoline[memop & MO_SIZE]; tcg_debug_assert(func != NULL); tcg_out_call_nodelay(s, func, false); /* delay slot */ From patchwork Wed May 3 07:06:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678663 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp908956wrs; Wed, 3 May 2023 00:24:08 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ675pWycn0xZCx6fUtKTZsf2w/mtCkMqbBkVBE8nPH56gKknt7Cg823ZefPX2DdQFJqcT8n X-Received: by 2002:a05:6214:20aa:b0:616:5460:aafd with SMTP id 10-20020a05621420aa00b006165460aafdmr7751288qvd.3.1683098648079; Wed, 03 May 2023 00:24:08 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683098648; cv=none; d=google.com; s=arc-20160816; b=XaXE2FMxwoPwz7Aq8HZ/27trAYjlrdgIp99NnFMGliyoNPhrTlZg6idYJVZKoPoZ84 8WyuOn2SOR5OwRLVFca1ppp6Aym3bx8zC94GNaQwaeOXB8D/Ok+mfTLoGZF7IUVyCLhm k7R2ojMM9669Z+xPDCjYprTDIDiqxcAK0iPLydI4k6ub/EaqzmObPwT7j7Avs7LUfODE mJEdl1QJczWuNl8dMi+hus9JAQDeEiznLt4nrnZzLBe+/Sau9obWmMpRWT4RRabJqZ2o V9hgS6ly6aHEXWbYptZTqYGhvGgnHarbOgjhbig+RjSQyIEWoe4VCyX3yICDCFe1FyoN FeNw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=4OZwqBHH9f/OPUnIPc8KEWsqwzfaMnv+PCzD6HiSncg=; b=tE6DnvVdQq+5Zsww7k5EzgARJ+2TeBZRfKHG34CAO9hVCnaG/iSlesnkkUoY5WI0oG FVMsk8Sh89SWcIwjVXuxVqe3m+E3dIzmnEA3BTsl1JKcH5ILzHIv9TqYJjZIghfF0X9C Kh59ygq15DWit90u7yV2nEu1HHWD5REZBP4GDIy7dcOngGZ5kRKYnL6G/JCpLiKXVccQ EvjUk1Hnsz6HX4V10qoeLzH1s8AqBxqNnFTdBOkOl5DPmwtCUsYkq50rfOLGz9wiRLgj rQRDzcJ6IkewLCU8VESTuzE5IZZ41dYyJNS7+7v82GhxwJpvF+dT0UPVzEg21UAgOisv pLsA== 
00:07:05 -0700 (PDT) Received: from stoup.Home ([2a02:c7c:74db:8d00:c01d:9d74:b630:9087]) by smtp.gmail.com with ESMTPSA id q3-20020a1cf303000000b003f3157988f8sm921184wmq.26.2023.05.03.00.07.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 03 May 2023 00:07:05 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org, qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 10/57] accel/tcg: Implement helper_{ld, st}*_mmu for user-only Date: Wed, 3 May 2023 08:06:09 +0100 Message-Id: <20230503070656.1746170-11-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::42b; envelope-from=richard.henderson@linaro.org; helo=mail-wr1-x42b.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org TCG backends may need to defer to a helper to implement the atomicity required by a given operation. Mirror the interface used in system mode. Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- include/tcg/tcg-ldst.h | 6 +- accel/tcg/user-exec.c | 393 ++++++++++++++++++++++++++++------------- tcg/tcg.c | 6 +- 3 files changed, 278 insertions(+), 127 deletions(-) diff --git a/include/tcg/tcg-ldst.h b/include/tcg/tcg-ldst.h index 3d897ca942..57fafa14b1 100644 --- a/include/tcg/tcg-ldst.h +++ b/include/tcg/tcg-ldst.h @@ -25,8 +25,6 @@ #ifndef TCG_LDST_H #define TCG_LDST_H -#ifdef CONFIG_SOFTMMU - /* Value zero-extended to tcg register size. */ tcg_target_ulong helper_ldub_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr); @@ -58,10 +56,10 @@ void helper_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, void helper_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, MemOpIdx oi, uintptr_t retaddr); -#else +#ifdef CONFIG_USER_ONLY G_NORETURN void helper_unaligned_ld(CPUArchState *env, target_ulong addr); G_NORETURN void helper_unaligned_st(CPUArchState *env, target_ulong addr); -#endif /* CONFIG_SOFTMMU */ +#endif /* CONFIG_USER_ONLY */ #endif /* TCG_LDST_H */ diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c index b89fa35a83..d9f9766b7f 100644 --- a/accel/tcg/user-exec.c +++ b/accel/tcg/user-exec.c @@ -889,21 +889,6 @@ void page_reset_target_data(target_ulong start, target_ulong last) { } /* The softmmu versions of these helpers are in cputlb.c. */ -/* - * Verify that we have passed the correct MemOp to the correct function. - * - * We could present one function to target code, and dispatch based on - * the MemOp, but so far we have worked hard to avoid an indirect function - * call along the memory path. 
- */ -static void validate_memop(MemOpIdx oi, MemOp expected) -{ -#ifdef CONFIG_DEBUG_TCG - MemOp have = get_memop(oi) & (MO_SIZE | MO_BSWAP); - assert(have == expected); -#endif -} - void helper_unaligned_ld(CPUArchState *env, target_ulong addr) { cpu_loop_exit_sigbus(env_cpu(env), addr, MMU_DATA_LOAD, GETPC()); @@ -914,10 +899,9 @@ void helper_unaligned_st(CPUArchState *env, target_ulong addr) cpu_loop_exit_sigbus(env_cpu(env), addr, MMU_DATA_STORE, GETPC()); } -static void *cpu_mmu_lookup(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t ra, MMUAccessType type) +static void *cpu_mmu_lookup(CPUArchState *env, abi_ptr addr, + MemOp mop, uintptr_t ra, MMUAccessType type) { - MemOp mop = get_memop(oi); int a_bits = get_alignment_bits(mop); void *ret; @@ -933,100 +917,206 @@ static void *cpu_mmu_lookup(CPUArchState *env, target_ulong addr, #include "ldst_atomicity.c.inc" -uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr, - MemOpIdx oi, uintptr_t ra) +static uint8_t do_ld1_mmu(CPUArchState *env, abi_ptr addr, + MemOp mop, uintptr_t ra) { void *haddr; uint8_t ret; - validate_memop(oi, MO_UB); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); + tcg_debug_assert((mop & MO_SIZE) == MO_8); + haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD); ret = ldub_p(haddr); clear_helper_retaddr(); + return ret; +} + +tcg_target_ulong helper_ldub_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t ra) +{ + return do_ld1_mmu(env, addr, get_memop(oi), ra); +} + +tcg_target_ulong helper_ldsb_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t ra) +{ + return (int8_t)do_ld1_mmu(env, addr, get_memop(oi), ra); +} + +uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr, + MemOpIdx oi, uintptr_t ra) +{ + uint8_t ret = do_ld1_mmu(env, addr, get_memop(oi), ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); return ret; } +static uint16_t do_ld2_he_mmu(CPUArchState *env, abi_ptr addr, + MemOp mop, uintptr_t ra) +{ + void *haddr; + uint16_t ret; + + tcg_debug_assert((mop & MO_SIZE) == MO_16); + haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD); + ret = load_atom_2(env, ra, haddr, mop); + clear_helper_retaddr(); + return ret; +} + +tcg_target_ulong helper_lduw_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + uint16_t ret = do_ld2_he_mmu(env, addr, mop, ra); + + if (mop & MO_BSWAP) { + ret = bswap16(ret); + } + return ret; +} + +tcg_target_ulong helper_ldsw_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + int16_t ret = do_ld2_he_mmu(env, addr, mop, ra); + + if (mop & MO_BSWAP) { + ret = bswap16(ret); + } + return ret; +} + uint16_t cpu_ldw_be_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - void *haddr; + MemOp mop = get_memop(oi); uint16_t ret; - validate_memop(oi, MO_BEUW); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - ret = load_atom_2(env, ra, haddr, get_memop(oi)); - clear_helper_retaddr(); + tcg_debug_assert((mop & MO_BSWAP) == MO_BE); + ret = do_ld2_he_mmu(env, addr, mop, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); return cpu_to_be16(ret); } -uint32_t cpu_ldl_be_mmu(CPUArchState *env, abi_ptr addr, - MemOpIdx oi, uintptr_t ra) -{ - void *haddr; - uint32_t ret; - - validate_memop(oi, MO_BEUL); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - ret = load_atom_4(env, ra, haddr, get_memop(oi)); - clear_helper_retaddr(); - 
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); - return cpu_to_be32(ret); -} - -uint64_t cpu_ldq_be_mmu(CPUArchState *env, abi_ptr addr, - MemOpIdx oi, uintptr_t ra) -{ - void *haddr; - uint64_t ret; - - validate_memop(oi, MO_BEUQ); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - ret = load_atom_8(env, ra, haddr, get_memop(oi)); - clear_helper_retaddr(); - qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); - return cpu_to_be64(ret); -} - uint16_t cpu_ldw_le_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - void *haddr; + MemOp mop = get_memop(oi); uint16_t ret; - validate_memop(oi, MO_LEUW); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - ret = load_atom_2(env, ra, haddr, get_memop(oi)); - clear_helper_retaddr(); + tcg_debug_assert((mop & MO_BSWAP) == MO_LE); + ret = do_ld2_he_mmu(env, addr, mop, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); return cpu_to_le16(ret); } +static uint32_t do_ld4_he_mmu(CPUArchState *env, abi_ptr addr, + MemOp mop, uintptr_t ra) +{ + void *haddr; + uint32_t ret; + + tcg_debug_assert((mop & MO_SIZE) == MO_32); + haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD); + ret = load_atom_4(env, ra, haddr, mop); + clear_helper_retaddr(); + return ret; +} + +tcg_target_ulong helper_ldul_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + uint32_t ret = do_ld4_he_mmu(env, addr, mop, ra); + + if (mop & MO_BSWAP) { + ret = bswap32(ret); + } + return ret; +} + +tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + int32_t ret = do_ld4_he_mmu(env, addr, mop, ra); + + if (mop & MO_BSWAP) { + ret = bswap32(ret); + } + return ret; +} + +uint32_t cpu_ldl_be_mmu(CPUArchState *env, abi_ptr addr, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + uint32_t ret; + + tcg_debug_assert((mop & MO_BSWAP) == MO_BE); + ret = do_ld4_he_mmu(env, addr, mop, ra); + qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); + return cpu_to_be32(ret); +} + uint32_t cpu_ldl_le_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - void *haddr; + MemOp mop = get_memop(oi); uint32_t ret; - validate_memop(oi, MO_LEUL); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - ret = load_atom_4(env, ra, haddr, get_memop(oi)); - clear_helper_retaddr(); + tcg_debug_assert((mop & MO_BSWAP) == MO_LE); + ret = do_ld4_he_mmu(env, addr, mop, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); return cpu_to_le32(ret); } +static uint64_t do_ld8_he_mmu(CPUArchState *env, abi_ptr addr, + MemOp mop, uintptr_t ra) +{ + void *haddr; + uint64_t ret; + + tcg_debug_assert((mop & MO_SIZE) == MO_64); + haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD); + ret = load_atom_8(env, ra, haddr, mop); + clear_helper_retaddr(); + return ret; +} + +uint64_t helper_ldq_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + uint64_t ret = do_ld8_he_mmu(env, addr, mop, ra); + + if (mop & MO_BSWAP) { + ret = bswap64(ret); + } + return ret; +} + +uint64_t cpu_ldq_be_mmu(CPUArchState *env, abi_ptr addr, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + uint64_t ret; + + tcg_debug_assert((mop & MO_BSWAP) == MO_BE); + ret = do_ld8_he_mmu(env, addr, mop, ra); + qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); + return cpu_to_be64(ret); +} + 
uint64_t cpu_ldq_le_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - void *haddr; + MemOp mop = get_memop(oi); uint64_t ret; - validate_memop(oi, MO_LEUQ); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - ret = load_atom_8(env, ra, haddr, get_memop(oi)); - clear_helper_retaddr(); + tcg_debug_assert((mop & MO_BSWAP) == MO_LE); + ret = do_ld8_he_mmu(env, addr, mop, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); return cpu_to_le64(ret); } @@ -1037,7 +1127,7 @@ Int128 cpu_ld16_be_mmu(CPUArchState *env, abi_ptr addr, void *haddr; Int128 ret; - validate_memop(oi, MO_128 | MO_BE); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == (MO_128 | MO_BE)); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); memcpy(&ret, haddr, 16); clear_helper_retaddr(); @@ -1055,7 +1145,7 @@ Int128 cpu_ld16_le_mmu(CPUArchState *env, abi_ptr addr, void *haddr; Int128 ret; - validate_memop(oi, MO_128 | MO_LE); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == (MO_128 | MO_LE)); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); memcpy(&ret, haddr, 16); clear_helper_retaddr(); @@ -1067,87 +1157,153 @@ Int128 cpu_ld16_le_mmu(CPUArchState *env, abi_ptr addr, return ret; } -void cpu_stb_mmu(CPUArchState *env, abi_ptr addr, uint8_t val, - MemOpIdx oi, uintptr_t ra) +static void do_st1_mmu(CPUArchState *env, abi_ptr addr, uint8_t val, + MemOp mop, uintptr_t ra) { void *haddr; - validate_memop(oi, MO_UB); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); + tcg_debug_assert((mop & MO_SIZE) == MO_8); + haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE); stb_p(haddr, val); clear_helper_retaddr(); +} + +void helper_stb_mmu(CPUArchState *env, target_ulong addr, uint32_t val, + MemOpIdx oi, uintptr_t ra) +{ + do_st1_mmu(env, addr, val, get_memop(oi), ra); +} + +void cpu_stb_mmu(CPUArchState *env, abi_ptr addr, uint8_t val, + MemOpIdx oi, uintptr_t ra) +{ + do_st1_mmu(env, addr, val, get_memop(oi), ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } +static void do_st2_he_mmu(CPUArchState *env, abi_ptr addr, uint16_t val, + MemOp mop, uintptr_t ra) +{ + void *haddr; + + tcg_debug_assert((mop & MO_SIZE) == MO_16); + haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE); + store_atom_2(env, ra, haddr, mop, val); + clear_helper_retaddr(); +} + +void helper_stw_mmu(CPUArchState *env, target_ulong addr, uint32_t val, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + + if (mop & MO_BSWAP) { + val = bswap16(val); + } + do_st2_he_mmu(env, addr, val, mop, ra); +} + void cpu_stw_be_mmu(CPUArchState *env, abi_ptr addr, uint16_t val, MemOpIdx oi, uintptr_t ra) { - void *haddr; + MemOp mop = get_memop(oi); - validate_memop(oi, MO_BEUW); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); - store_atom_2(env, ra, haddr, get_memop(oi), be16_to_cpu(val)); - clear_helper_retaddr(); - qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); -} - -void cpu_stl_be_mmu(CPUArchState *env, abi_ptr addr, uint32_t val, - MemOpIdx oi, uintptr_t ra) -{ - void *haddr; - - validate_memop(oi, MO_BEUL); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); - store_atom_4(env, ra, haddr, get_memop(oi), be32_to_cpu(val)); - clear_helper_retaddr(); - qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); -} - -void cpu_stq_be_mmu(CPUArchState *env, abi_ptr addr, uint64_t val, - MemOpIdx oi, uintptr_t ra) -{ - void *haddr; - - validate_memop(oi, MO_BEUQ); - haddr = 
cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); - store_atom_8(env, ra, haddr, get_memop(oi), be64_to_cpu(val)); - clear_helper_retaddr(); + tcg_debug_assert((mop & MO_BSWAP) == MO_BE); + do_st2_he_mmu(env, addr, be16_to_cpu(val), mop, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } void cpu_stw_le_mmu(CPUArchState *env, abi_ptr addr, uint16_t val, MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + + tcg_debug_assert((mop & MO_BSWAP) == MO_LE); + do_st2_he_mmu(env, addr, le16_to_cpu(val), mop, ra); + qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); +} + +static void do_st4_he_mmu(CPUArchState *env, abi_ptr addr, uint32_t val, + MemOp mop, uintptr_t ra) { void *haddr; - validate_memop(oi, MO_LEUW); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); - store_atom_2(env, ra, haddr, get_memop(oi), le16_to_cpu(val)); + tcg_debug_assert((mop & MO_SIZE) == MO_32); + haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE); + store_atom_4(env, ra, haddr, mop, val); clear_helper_retaddr(); +} + +void helper_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + + if (mop & MO_BSWAP) { + val = bswap32(val); + } + do_st4_he_mmu(env, addr, val, mop, ra); +} + +void cpu_stl_be_mmu(CPUArchState *env, abi_ptr addr, uint32_t val, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + + tcg_debug_assert((mop & MO_BSWAP) == MO_BE); + do_st4_he_mmu(env, addr, be32_to_cpu(val), mop, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } void cpu_stl_le_mmu(CPUArchState *env, abi_ptr addr, uint32_t val, MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + + tcg_debug_assert((mop & MO_BSWAP) == MO_LE); + do_st4_he_mmu(env, addr, le32_to_cpu(val), mop, ra); + qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); +} + +static void do_st8_he_mmu(CPUArchState *env, abi_ptr addr, uint64_t val, + MemOp mop, uintptr_t ra) { void *haddr; - validate_memop(oi, MO_LEUL); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); - store_atom_4(env, ra, haddr, get_memop(oi), le32_to_cpu(val)); + tcg_debug_assert((mop & MO_SIZE) == MO_64); + haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE); + store_atom_8(env, ra, haddr, mop, val); clear_helper_retaddr(); +} + +void helper_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + + if (mop & MO_BSWAP) { + val = bswap64(val); + } + do_st8_he_mmu(env, addr, val, mop, ra); +} + +void cpu_stq_be_mmu(CPUArchState *env, abi_ptr addr, uint64_t val, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + + tcg_debug_assert((mop & MO_BSWAP) == MO_BE); + do_st8_he_mmu(env, addr, cpu_to_be64(val), mop, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } void cpu_stq_le_mmu(CPUArchState *env, abi_ptr addr, uint64_t val, MemOpIdx oi, uintptr_t ra) { - void *haddr; + MemOp mop = get_memop(oi); - validate_memop(oi, MO_LEUQ); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); - store_atom_8(env, ra, haddr, get_memop(oi), le64_to_cpu(val)); - clear_helper_retaddr(); + tcg_debug_assert((mop & MO_BSWAP) == MO_LE); + do_st8_he_mmu(env, addr, cpu_to_le64(val), mop, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } @@ -1156,7 +1312,7 @@ void cpu_st16_be_mmu(CPUArchState *env, abi_ptr addr, { void *haddr; - validate_memop(oi, MO_128 | MO_BE); + 
tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == (MO_128 | MO_BE)); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); if (!HOST_BIG_ENDIAN) { val = bswap128(val); @@ -1171,7 +1327,7 @@ void cpu_st16_le_mmu(CPUArchState *env, abi_ptr addr, { void *haddr; - validate_memop(oi, MO_128 | MO_LE); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == (MO_128 | MO_LE)); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); if (HOST_BIG_ENDIAN) { val = bswap128(val); @@ -1269,7 +1425,6 @@ uint64_t cpu_ldq_code_mmu(CPUArchState *env, abi_ptr addr, void *haddr; uint64_t ret; - validate_memop(oi, MO_BEUQ); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); ret = ldq_p(haddr); clear_helper_retaddr(); diff --git a/tcg/tcg.c b/tcg/tcg.c index 12510c78c6..d0afabf194 100644 --- a/tcg/tcg.c +++ b/tcg/tcg.c @@ -197,8 +197,7 @@ static void tcg_out_st_helper_args(TCGContext *s, const TCGLabelQemuLdst *l, const TCGLdstHelperParam *p) __attribute__((unused)); -#ifdef CONFIG_SOFTMMU -static void * const qemu_ld_helpers[MO_SSIZE + 1] = { +static void * const qemu_ld_helpers[MO_SSIZE + 1] __attribute__((unused)) = { [MO_UB] = helper_ldub_mmu, [MO_SB] = helper_ldsb_mmu, [MO_UW] = helper_lduw_mmu, @@ -210,13 +209,12 @@ static void * const qemu_ld_helpers[MO_SSIZE + 1] = { #endif }; -static void * const qemu_st_helpers[MO_SIZE + 1] = { +static void * const qemu_st_helpers[MO_SIZE + 1] __attribute__((unused)) = { [MO_8] = helper_stb_mmu, [MO_16] = helper_stw_mmu, [MO_32] = helper_stl_mmu, [MO_64] = helper_stq_mmu, }; -#endif TCGContext tcg_init_ctx; __thread TCGContext *tcg_ctx; From patchwork Wed May 3 07:06:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678689 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp910546wrs; Wed, 3 May 2023 00:29:10 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ4n/ZYOCL+4Me7m+igltZsw7d3aGr1OnsU4ISnlvJxeaQXj4Np4lLBBulkj209O281XRoN0 X-Received: by 2002:a05:6214:f2a:b0:5f1:6be3:13e9 with SMTP id iw10-20020a0562140f2a00b005f16be313e9mr9809282qvb.6.1683098950201; Wed, 03 May 2023 00:29:10 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683098950; cv=none; d=google.com; s=arc-20160816; b=PsWjFNTclQEA9MNJYEvUxKUQ3E2ulfrgpLYSX97z/rTq91k4DNWmDtuw67DEs78avi syJWEsFsRV/9uJ+6gBW8YvowvnUgxOspWSZS2vFfqmNxhM9jnsy7qBq1dvuMB3z+BiFV 0PkOsCtFue70c6d/JbbyX3iZyF/XUQGAZ3tGYDg86Rl5sM7W0Yik8gnCc+bj7lP68zVL SBWUhqWn2Epvpztg1uuVy4SJoEyK6ajVCByFgUJR3Q+wiWxwjtJ15DKt0ssaro7NYZRt 3uN83W/i51fS9HeiKiKb7MriE4eQhfGS/KpEzU7QdELDsIY8UH3StFwG39+uDseXBxie ZCzA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=WeZLbg9PNWPpx3WburKOtddAsovGFs3y9uMBDN6DTFA=; b=qWaddJpNaxnpx09kVZDBEiOsrZ5tPSDWIGYESdCkUlyRK+yYHlf7S63uX2AUrVb3ig TtVnTIYYqGdDJFnHiM0C38bdya+slWFXNoN3DXcda/Y7DoGb9ZvXlyUsv+B2kwV+PgxT M0aLFLu3Hx5vv5mR64ufQGRNEq/AaWf67Cd8VdKqJrrwhLYph31W/20MuqqB5q69l6V7 Akyl+q2WoesnFpOcAVsqp7dIjU3yfUPMtpSnDtqbQZsmugKCdt91VAC9HpQ1IMWMzdwE e1q7Qdd6cAktgK+d0/nXBsLpAnuSQH2tW3jrbNueQoe6HDfZSlsfJMEhpi6bE9SfT36f ZljA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=vtLNHKej; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org 
(version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 03 May 2023 00:07:05 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org, qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 11/57] tcg/tci: Use helper_{ld,st}*_mmu for user-only Date: Wed, 3 May 2023 08:06:10 +0100 Message-Id: <20230503070656.1746170-12-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32e; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x32e.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org We can now fold these two pieces of code. Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- tcg/tci.c | 89 ------------------------------------------------------- 1 file changed, 89 deletions(-) diff --git a/tcg/tci.c b/tcg/tci.c index 5bde2e1f2e..15f2f8c463 100644 --- a/tcg/tci.c +++ b/tcg/tci.c @@ -292,7 +292,6 @@ static uint64_t tci_qemu_ld(CPUArchState *env, target_ulong taddr, MemOp mop = get_memop(oi); uintptr_t ra = (uintptr_t)tb_ptr; -#ifdef CONFIG_SOFTMMU switch (mop & MO_SSIZE) { case MO_UB: return helper_ldub_mmu(env, taddr, oi, ra); @@ -311,58 +310,6 @@ static uint64_t tci_qemu_ld(CPUArchState *env, target_ulong taddr, default: g_assert_not_reached(); } -#else - void *haddr = g2h(env_cpu(env), taddr); - unsigned a_mask = (1u << get_alignment_bits(mop)) - 1; - uint64_t ret; - - set_helper_retaddr(ra); - if (taddr & a_mask) { - helper_unaligned_ld(env, taddr); - } - switch (mop & (MO_BSWAP | MO_SSIZE)) { - case MO_UB: - ret = ldub_p(haddr); - break; - case MO_SB: - ret = ldsb_p(haddr); - break; - case MO_LEUW: - ret = lduw_le_p(haddr); - break; - case MO_LESW: - ret = ldsw_le_p(haddr); - break; - case MO_LEUL: - ret = (uint32_t)ldl_le_p(haddr); - break; - case MO_LESL: - ret = (int32_t)ldl_le_p(haddr); - break; - case MO_LEUQ: - ret = ldq_le_p(haddr); - break; - case MO_BEUW: - ret = lduw_be_p(haddr); - break; - case MO_BESW: - ret = ldsw_be_p(haddr); - break; - case MO_BEUL: - ret = (uint32_t)ldl_be_p(haddr); - break; - case MO_BESL: - ret = (int32_t)ldl_be_p(haddr); - break; - case MO_BEUQ: - ret = ldq_be_p(haddr); - break; - default: - g_assert_not_reached(); - } - clear_helper_retaddr(); - return ret; -#endif } static void tci_qemu_st(CPUArchState *env, target_ulong taddr, uint64_t val, @@ -371,7 +318,6 @@ static void tci_qemu_st(CPUArchState *env, target_ulong taddr, uint64_t val, MemOp mop = get_memop(oi); uintptr_t ra = (uintptr_t)tb_ptr; -#ifdef CONFIG_SOFTMMU switch (mop & MO_SIZE) { case MO_UB: helper_stb_mmu(env, taddr, val, oi, ra); @@ -388,41 +334,6 @@ static void tci_qemu_st(CPUArchState *env, target_ulong taddr, uint64_t val, default: g_assert_not_reached(); } -#else - void *haddr = 
g2h(env_cpu(env), taddr); - unsigned a_mask = (1u << get_alignment_bits(mop)) - 1; - - set_helper_retaddr(ra); - if (taddr & a_mask) { - helper_unaligned_st(env, taddr); - } - switch (mop & (MO_BSWAP | MO_SIZE)) { - case MO_UB: - stb_p(haddr, val); - break; - case MO_LEUW: - stw_le_p(haddr, val); - break; - case MO_LEUL: - stl_le_p(haddr, val); - break; - case MO_LEUQ: - stq_le_p(haddr, val); - break; - case MO_BEUW: - stw_be_p(haddr, val); - break; - case MO_BEUL: - stl_be_p(haddr, val); - break; - case MO_BEUQ: - stq_be_p(haddr, val); - break; - default: - g_assert_not_reached(); - } - clear_helper_retaddr(); -#endif } #if TCG_TARGET_REG_BITS == 64 From patchwork Wed May 3 07:06:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678641 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp906706wrs; Wed, 3 May 2023 00:17:24 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6dZx55wRnjHa72qbv2hSbGQLKjdvBbUpC9G8uE8rW5ZndSmI/5+jcv0at2lh1XWN1u+G0+ X-Received: by 2002:ac8:7f8d:0:b0:3ef:2fbd:90c3 with SMTP id z13-20020ac87f8d000000b003ef2fbd90c3mr34569175qtj.37.1683098243829; Wed, 03 May 2023 00:17:23 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683098243; cv=none; d=google.com; s=arc-20160816; b=KfimrRwuGeiyewUAdH22QRPgaImoRm0WS42F8qr48I7hVGG+ERcNw1VB3PvWwD00l7 6DFuEaVyyxDxrPOrRCoxoVB/SZfjhaRL7gRTiO/zUdCbAllk1RFAB3VddtARsuswraBs 3vdWbVtxzSZj48cw70NsLCu1lCZlal+rdjac0SBjNfPF1ZeMrUqNadnvKVvIkBelfbyk VYVeFoqU+ffDB1/a0MXyWGAjVdQVPLCNX1VCGqnQ15kwKZvzM44O1Mj0TO2hnbA34QUI PkIAfjfPf/2P4rPZKGNyEXioKpZJhg2Thb248EWEMgL21gteP+zXff9E4BoKEw+4NSzN fNwQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=1Ugii9ooB/Wrm9g06NhftgEa6O79j/Tqko/PN7hRX3Y=; b=BOzTq/FeRXIqLpVAddJMQCUQZM71SK4L8TnBPe5lHbwlR32RWy/m0B48cFjBP2JVwL oM0igLVB2CrR4yiKO7Yr4hAMkFst9sFoPzGamFmtoAOxKTeKKb1f50XN1SV9/vbV/KYp mThTTpsSeNP+6/4quwxN4sOMbLoRPDeqg8Vknl5Z9yPEawr38wwLSFpcpcOt0LjzyaN8 iCoIcpUe5HzrMFgTIB3f+eewbt8QPBbHtnBSuuktZGsb5rQr7PTHpKG/1RFxTiUSpu2s DzTZRrO2yspNApY/DIX5vzSFLjgIpBn9qb10eNrI0ET9w3K/EChE1XDA6RlFwVvYapKE EpAA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=yd9w6Crf; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
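
An aside on what "fold these two pieces of code" buys in the hunk above: as the subject line indicates, the helper_{ld,st}*_mmu entry points are now usable from user-only builds as well, so the interpreter no longer needs a CONFIG_SOFTMMU/#else split with its own open-coded host access and alignment handling. The toy program below sketches that folding pattern only; helper_load and do_load are hypothetical stand-ins for illustration, not QEMU functions, and the dummy behaviour is mine.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for a per-size load helper that is now available
 * in every build configuration (in QEMU: helper_ldub_mmu() and friends). */
static uint64_t helper_load(uint64_t addr)
{
    return addr ^ 0xffu;            /* dummy behaviour for the demo */
}

/* Before the fold, the caller had two bodies:
 *     #ifdef CONFIG_SOFTMMU
 *         return helper_load(addr);
 *     #else
 *         ... open-coded host access with its own alignment handling ...
 *     #endif
 * Once both configurations provide the helper, a single body remains. */
static uint64_t do_load(uint64_t addr)
{
    return helper_load(addr);
}

int main(void)
{
    printf("%llu\n", (unsigned long long)do_load(42));
    return 0;
}
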
[209.51.188.17]) by mx.google.com with ESMTPS id cx16-20020a05620a51d000b007514d3d0aa3si4898080qkb.419.2023.05.03.00.17.23 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Wed, 03 May 2023 00:17:23 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=yd9w6Crf; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1pu6aY-0007Bx-9b; Wed, 03 May 2023 03:07:46 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1pu6aH-0005dI-Uv for qemu-devel@nongnu.org; Wed, 03 May 2023 03:07:30 -0400 Received: from mail-wm1-x335.google.com ([2a00:1450:4864:20::335]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1pu6Zw-0005g6-RA for qemu-devel@nongnu.org; Wed, 03 May 2023 03:07:29 -0400 Received: by mail-wm1-x335.google.com with SMTP id 5b1f17b1804b1-3f178da21afso31632215e9.1 for ; Wed, 03 May 2023 00:07:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1683097627; x=1685689627; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=1Ugii9ooB/Wrm9g06NhftgEa6O79j/Tqko/PN7hRX3Y=; b=yd9w6Crf8jjCeDEGzvZ0/U2t5stYajclj8ZlkzQOG7oU8bucUFQOGAIcOF0DuIFo7u BJE4GaNBbAaD/Oz4EjEioCZAsEbj+JEqLbhkSfCiOwTCQYm66uOmGfEaaoigoqor9SG4 EYeGg2o8UFGAehT8FXOHjKXOXstEyM9K+OGvB0FLmFstzhYN4VgV85WvO59hSuZg4u84 t9Qpcq7tDMTsdio6OmCByJt0Ts4Z+g5Q1aL5mPA9iYM+zF/bzuUSm3kiwL3ZXN+S5wha 95Vjt5sWVitIDWcGJVNJKkJ2paaE9v81EBcSqd8K/4FttDh2I8l/1YM17YHl4Wn1gHTG +n0w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1683097627; x=1685689627; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=1Ugii9ooB/Wrm9g06NhftgEa6O79j/Tqko/PN7hRX3Y=; b=T+hedg2C2/JxbylzSjzktk4+jo9GbgrLZJF8y7QIP/zXMVp+oSz8xDniEB0tJV66T3 Iixh338m59ACJ1XRkn3jCA7jZ6Ak8oaw/2DpNZwF9atZSbysiK5801bSAg9oTIb6b4X4 QigxqjVn6UVDSIZBZAfR7hHmnH5JLHxwrfRbWt7/rA8VKdstc9GXaxkHhI7fuiQ39CfI 0xDr2fgn1jN1G3nkKZqQv4KSd7CyAt8PexhTSn/UJtWJVI/Pq+B35qLvUAt7awebuB5b 2J/g7Yap2ooXDl7rVw2j2YAB5ZFWyOrycxAs6BOYKbxKHIn4nGAz1MnMBbuX6AKyePhe Ip2g== X-Gm-Message-State: AC+VfDxktyvrVgxKNfM3386m80VnpYEaPM28h/on8E1yvrrC/rFu7kW5 /txAuVTeo1ecMkOeAo0505PsWjMCqZyPAMYYD2hCPg== X-Received: by 2002:a1c:cc0a:0:b0:3f3:fe82:ee89 with SMTP id h10-20020a1ccc0a000000b003f3fe82ee89mr245579wmb.8.1683097626956; Wed, 03 May 2023 00:07:06 -0700 (PDT) Received: from stoup.Home ([2a02:c7c:74db:8d00:c01d:9d74:b630:9087]) by smtp.gmail.com with ESMTPSA id q3-20020a1cf303000000b003f3157988f8sm921184wmq.26.2023.05.03.00.07.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 03 May 2023 00:07:06 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org, 
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 12/57] tcg: Add 128-bit guest memory primitives Date: Wed, 3 May 2023 08:06:11 +0100 Message-Id: <20230503070656.1746170-13-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::335; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x335.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- accel/tcg/tcg-runtime.h | 3 + include/tcg/tcg-ldst.h | 4 + accel/tcg/cputlb.c | 392 +++++++++++++++++++++++++-------- accel/tcg/user-exec.c | 94 ++++++-- tcg/tcg-op.c | 184 +++++++++++----- accel/tcg/ldst_atomicity.c.inc | 189 ++++++++++++++++ 6 files changed, 688 insertions(+), 178 deletions(-) diff --git a/accel/tcg/tcg-runtime.h b/accel/tcg/tcg-runtime.h index b8e6421c8a..d9adc646c1 100644 --- a/accel/tcg/tcg-runtime.h +++ b/accel/tcg/tcg-runtime.h @@ -39,6 +39,9 @@ DEF_HELPER_FLAGS_1(exit_atomic, TCG_CALL_NO_WG, noreturn, env) DEF_HELPER_FLAGS_3(memset, TCG_CALL_NO_RWG, ptr, ptr, int, ptr) #endif /* IN_HELPER_PROTO */ +DEF_HELPER_FLAGS_3(ld_i128, TCG_CALL_NO_WG, i128, env, tl, i32) +DEF_HELPER_FLAGS_4(st_i128, TCG_CALL_NO_WG, void, env, tl, i128, i32) + DEF_HELPER_FLAGS_5(atomic_cmpxchgb, TCG_CALL_NO_WG, i32, env, tl, i32, i32, i32) DEF_HELPER_FLAGS_5(atomic_cmpxchgw_be, TCG_CALL_NO_WG, diff --git a/include/tcg/tcg-ldst.h b/include/tcg/tcg-ldst.h index 57fafa14b1..64f48e6990 100644 --- a/include/tcg/tcg-ldst.h +++ b/include/tcg/tcg-ldst.h @@ -34,6 +34,8 @@ tcg_target_ulong helper_ldul_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr); uint64_t helper_ldq_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr); +Int128 helper_ld16_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr); /* Value sign-extended to tcg register size. 
*/ tcg_target_ulong helper_ldsb_mmu(CPUArchState *env, target_ulong addr, @@ -55,6 +57,8 @@ void helper_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, MemOpIdx oi, uintptr_t retaddr); void helper_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, MemOpIdx oi, uintptr_t retaddr); +void helper_st16_mmu(CPUArchState *env, target_ulong addr, Int128 val, + MemOpIdx oi, uintptr_t retaddr); #ifdef CONFIG_USER_ONLY diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index 566cf8311b..a77b439df8 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -40,6 +40,7 @@ #include "qemu/plugin-memory.h" #endif #include "tcg/tcg-ldst.h" +#include "exec/helper-proto.h" /* DEBUG defines, enable DEBUG_TLB_LOG to log to the CPU_LOG_MMU target */ /* #define DEBUG_TLB */ @@ -2161,6 +2162,31 @@ static uint64_t do_ld_whole_be8(CPUArchState *env, uintptr_t ra, return (ret_be << (p->size * 8)) | x; } +/** + * do_ld_parts_be16 + * @p: translation parameters + * @ret_be: accumulated data + * + * As do_ld_bytes_beN, but with one atomic load. + * 16 aligned bytes are guaranteed to cover the load. + */ +static Int128 do_ld_whole_be16(CPUArchState *env, uintptr_t ra, + MMULookupPageData *p, uint64_t ret_be) +{ + int o = p->addr & 15; + Int128 x, y = load_atomic16_or_exit(env, ra, p->haddr - o); + int size = p->size; + + if (!HOST_BIG_ENDIAN) { + y = bswap128(y); + } + y = int128_lshift(y, o * 8); + y = int128_urshift(y, (16 - size) * 8); + x = int128_make64(ret_be); + x = int128_lshift(x, size * 8); + return int128_or(x, y); +} + /* * Wrapper for the above. */ @@ -2205,6 +2231,59 @@ static uint64_t do_ld_beN(CPUArchState *env, MMULookupPageData *p, } } +/* + * Wrapper for the above, for 8 < size < 16. + */ +static Int128 do_ld16_beN(CPUArchState *env, MMULookupPageData *p, + uint64_t a, int mmu_idx, MemOp mop, uintptr_t ra) +{ + int size = p->size; + uint64_t b; + MemOp atmax; + + if (unlikely(p->flags & TLB_MMIO)) { + p->size = size - 8; + a = do_ld_mmio_beN(env, p, a, mmu_idx, MMU_DATA_LOAD, ra); + p->addr += p->size; + p->size = 8; + b = do_ld_mmio_beN(env, p, 0, mmu_idx, MMU_DATA_LOAD, ra); + } else { + switch (mop & MO_ATOM_MASK) { + case MO_ATOM_WITHIN16: + /* + * It is a given that we cross a page and therefore there is no + * atomicity for the load as a whole, but there may be a subobject + * as defined by ATMAX which does not cross a 16-byte boundary. 
+ */ + atmax = mop & MO_ATMAX_MASK; + if (atmax != MO_ATMAX_SIZE) { + atmax >>= MO_ATMAX_SHIFT; + if (unlikely(size >= (1 << atmax))) { + return do_ld_whole_be16(env, ra, p, a); + } + } + /* fall through */ + case MO_ATOM_IFALIGN: + case MO_ATOM_NONE: + p->size = size - 8; + a = do_ld_bytes_beN(p, a); + b = ldq_be_p(p->haddr + size - 8); + break; + case MO_ATOM_SUBALIGN: + p->size = size - 8; + a = do_ld_parts_beN(p, a); + p->haddr += size - 8; + p->size = 8; + b = do_ld_parts_beN(p, 0); + break; + default: + g_assert_not_reached(); + } + } + + return int128_make128(b, a); +} + static uint8_t do_ld_1(CPUArchState *env, MMULookupPageData *p, int mmu_idx, MMUAccessType type, uintptr_t ra) { @@ -2393,6 +2472,80 @@ tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, target_ulong addr, return (int32_t)helper_ldul_mmu(env, addr, oi, retaddr); } +static Int128 do_ld16_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t ra) +{ + MMULookupLocals l; + bool crosspage; + uint64_t a, b; + Int128 ret; + int first; + + crosspage = mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD, &l); + if (likely(!crosspage)) { + /* Perform the load host endian. */ + if (unlikely(l.page[0].flags & TLB_MMIO)) { + QEMU_IOTHREAD_LOCK_GUARD(); + a = io_readx(env, l.page[0].full, l.mmu_idx, addr, + ra, MMU_DATA_LOAD, MO_64); + b = io_readx(env, l.page[0].full, l.mmu_idx, addr + 8, + ra, MMU_DATA_LOAD, MO_64); + ret = int128_make128(HOST_BIG_ENDIAN ? b : a, + HOST_BIG_ENDIAN ? a : b); + } else { + ret = load_atom_16(env, ra, l.page[0].haddr, l.memop); + } + if (l.memop & MO_BSWAP) { + ret = bswap128(ret); + } + return ret; + } + + first = l.page[0].size; + if (first == 8) { + MemOp mop8 = (l.memop & ~MO_SIZE) | MO_64; + + a = do_ld_8(env, &l.page[0], l.mmu_idx, MMU_DATA_LOAD, mop8, ra); + b = do_ld_8(env, &l.page[1], l.mmu_idx, MMU_DATA_LOAD, mop8, ra); + if ((mop8 & MO_BSWAP) == MO_LE) { + ret = int128_make128(a, b); + } else { + ret = int128_make128(b, a); + } + return ret; + } + + if (first < 8) { + a = do_ld_beN(env, &l.page[0], 0, l.mmu_idx, + MMU_DATA_LOAD, l.memop, ra); + ret = do_ld16_beN(env, &l.page[1], a, l.mmu_idx, l.memop, ra); + } else { + ret = do_ld16_beN(env, &l.page[0], 0, l.mmu_idx, l.memop, ra); + b = int128_getlo(ret); + ret = int128_lshift(ret, l.page[1].size * 8); + a = int128_gethi(ret); + b = do_ld_beN(env, &l.page[1], b, l.mmu_idx, + MMU_DATA_LOAD, l.memop, ra); + ret = int128_make128(b, a); + } + if ((l.memop & MO_BSWAP) == MO_LE) { + ret = bswap128(ret); + } + return ret; +} + +Int128 helper_ld16_mmu(CPUArchState *env, target_ulong addr, + uint32_t oi, uintptr_t retaddr) +{ + tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128); + return do_ld16_mmu(env, addr, oi, retaddr); +} + +Int128 helper_ld_i128(CPUArchState *env, target_ulong addr, uint32_t oi) +{ + return helper_ld16_mmu(env, addr, oi, GETPC()); +} + /* * Load helpers for cpu_ldst.h. */ @@ -2481,59 +2634,23 @@ uint64_t cpu_ldq_le_mmu(CPUArchState *env, abi_ptr addr, Int128 cpu_ld16_be_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - MemOp mop = get_memop(oi); - int mmu_idx = get_mmuidx(oi); - MemOpIdx new_oi; - unsigned a_bits; - uint64_t h, l; + Int128 ret; - tcg_debug_assert((mop & (MO_BSWAP|MO_SSIZE)) == (MO_BE|MO_128)); - a_bits = get_alignment_bits(mop); - - /* Handle CPU specific unaligned behaviour */ - if (addr & ((1 << a_bits) - 1)) { - cpu_unaligned_access(env_cpu(env), addr, MMU_DATA_LOAD, - mmu_idx, ra); - } - - /* Construct an unaligned 64-bit replacement MemOpIdx. 
*/ - mop = (mop & ~(MO_SIZE | MO_AMASK)) | MO_64 | MO_UNALN; - new_oi = make_memop_idx(mop, mmu_idx); - - h = helper_ldq_mmu(env, addr, new_oi, ra); - l = helper_ldq_mmu(env, addr + 8, new_oi, ra); - - qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); - return int128_make128(l, h); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP|MO_SIZE)) == (MO_BE|MO_128)); + ret = do_ld16_mmu(env, addr, oi, ra); + plugin_load_cb(env, addr, oi); + return ret; } Int128 cpu_ld16_le_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - MemOp mop = get_memop(oi); - int mmu_idx = get_mmuidx(oi); - MemOpIdx new_oi; - unsigned a_bits; - uint64_t h, l; + Int128 ret; - tcg_debug_assert((mop & (MO_BSWAP|MO_SSIZE)) == (MO_LE|MO_128)); - a_bits = get_alignment_bits(mop); - - /* Handle CPU specific unaligned behaviour */ - if (addr & ((1 << a_bits) - 1)) { - cpu_unaligned_access(env_cpu(env), addr, MMU_DATA_LOAD, - mmu_idx, ra); - } - - /* Construct an unaligned 64-bit replacement MemOpIdx. */ - mop = (mop & ~(MO_SIZE | MO_AMASK)) | MO_64 | MO_UNALN; - new_oi = make_memop_idx(mop, mmu_idx); - - l = helper_ldq_mmu(env, addr, new_oi, ra); - h = helper_ldq_mmu(env, addr + 8, new_oi, ra); - - qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); - return int128_make128(l, h); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP|MO_SIZE)) == (MO_LE|MO_128)); + ret = do_ld16_mmu(env, addr, oi, ra); + plugin_load_cb(env, addr, oi); + return ret; } /* @@ -2614,6 +2731,57 @@ static uint64_t do_st_leN(CPUArchState *env, MMULookupPageData *p, } } +/* + * Wrapper for the above, for 8 < size < 16. + */ +static uint64_t do_st16_leN(CPUArchState *env, MMULookupPageData *p, + Int128 val_le, int mmu_idx, + MemOp mop, uintptr_t ra) +{ + int size = p->size; + MemOp atmax; + + if (unlikely(p->flags & TLB_MMIO)) { + p->size = 8; + do_st_mmio_leN(env, p, int128_getlo(val_le), mmu_idx, ra); + p->size = size - 8; + p->addr += 8; + return do_st_mmio_leN(env, p, int128_gethi(val_le), mmu_idx, ra); + } else if (unlikely(p->flags & TLB_DISCARD_WRITE)) { + return int128_gethi(val_le) >> ((size - 8) * 8); + } + + switch (mop & MO_ATOM_MASK) { + case MO_ATOM_WITHIN16: + /* + * It is a given that we cross a page and therefore there is no + * atomicity for the store as a whole, but there may be a subobject + * as defined by ATMAX which does not cross a 16-byte boundary. 
+ */ + atmax = mop & MO_ATMAX_MASK; + if (atmax != MO_ATMAX_SIZE) { + atmax >>= MO_ATMAX_SHIFT; + if (unlikely(size >= (1 << atmax))) { + if (HAVE_al16) { + return store_whole_le16(p->haddr, p->size, val_le); + } else { + cpu_loop_exit_atomic(env_cpu(env), ra); + } + } + } + /* fall through */ + case MO_ATOM_IFALIGN: + case MO_ATOM_NONE: + stq_le_p(p->haddr, int128_getlo(val_le)); + return store_bytes_leN(p->haddr + 8, p->size - 8, int128_gethi(val_le)); + case MO_ATOM_SUBALIGN: + store_parts_leN(p->haddr, 8, int128_getlo(val_le)); + return store_parts_leN(p->haddr + 8, p->size - 8, int128_gethi(val_le)); + default: + g_assert_not_reached(); + } +} + static void do_st_1(CPUArchState *env, MMULookupPageData *p, uint8_t val, int mmu_idx, uintptr_t ra) { @@ -2770,6 +2938,80 @@ void helper_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, do_st8_mmu(env, addr, val, oi, retaddr); } +static void do_st16_mmu(CPUArchState *env, target_ulong addr, Int128 val, + MemOpIdx oi, uintptr_t ra) +{ + MMULookupLocals l; + bool crosspage; + uint64_t a, b; + int first; + + crosspage = mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE, &l); + if (likely(!crosspage)) { + /* Swap to host endian if necessary, then store. */ + if (l.memop & MO_BSWAP) { + val = bswap128(val); + } + if (unlikely(l.page[0].flags & TLB_MMIO)) { + QEMU_IOTHREAD_LOCK_GUARD(); + if (HOST_BIG_ENDIAN) { + b = int128_getlo(val), a = int128_gethi(val); + } else { + a = int128_getlo(val), b = int128_gethi(val); + } + io_writex(env, l.page[0].full, l.mmu_idx, a, addr, ra, MO_64); + io_writex(env, l.page[0].full, l.mmu_idx, b, addr + 8, ra, MO_64); + } else if (unlikely(l.page[0].flags & TLB_DISCARD_WRITE)) { + /* nothing */ + } else { + store_atom_16(env, ra, l.page[0].haddr, l.memop, val); + } + return; + } + + first = l.page[0].size; + if (first == 8) { + MemOp mop8 = (l.memop & ~(MO_SIZE | MO_BSWAP)) | MO_64; + + if (l.memop & MO_BSWAP) { + val = bswap128(val); + } + if (HOST_BIG_ENDIAN) { + b = int128_getlo(val), a = int128_gethi(val); + } else { + a = int128_getlo(val), b = int128_gethi(val); + } + do_st_8(env, &l.page[0], a, l.mmu_idx, mop8, ra); + do_st_8(env, &l.page[1], b, l.mmu_idx, mop8, ra); + return; + } + + if ((l.memop & MO_BSWAP) != MO_LE) { + val = bswap128(val); + } + if (first < 8) { + do_st_leN(env, &l.page[0], int128_getlo(val), l.mmu_idx, l.memop, ra); + val = int128_urshift(val, first * 8); + do_st16_leN(env, &l.page[1], val, l.mmu_idx, l.memop, ra); + } else { + b = do_st16_leN(env, &l.page[0], val, l.mmu_idx, l.memop, ra); + do_st_leN(env, &l.page[1], b, l.mmu_idx, l.memop, ra); + } +} + +void helper_st16_mmu(CPUArchState *env, target_ulong addr, Int128 val, + MemOpIdx oi, uintptr_t retaddr) +{ + tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128); + do_st16_mmu(env, addr, val, oi, retaddr); +} + +void helper_st_i128(CPUArchState *env, target_ulong addr, Int128 val, + MemOpIdx oi) +{ + helper_st16_mmu(env, addr, val, oi, GETPC()); +} + /* * Store Helpers for cpu_ldst.h */ @@ -2834,58 +3076,20 @@ void cpu_stq_le_mmu(CPUArchState *env, target_ulong addr, uint64_t val, plugin_store_cb(env, addr, oi); } -void cpu_st16_be_mmu(CPUArchState *env, abi_ptr addr, Int128 val, - MemOpIdx oi, uintptr_t ra) +void cpu_st16_be_mmu(CPUArchState *env, target_ulong addr, Int128 val, + MemOpIdx oi, uintptr_t retaddr) { - MemOp mop = get_memop(oi); - int mmu_idx = get_mmuidx(oi); - MemOpIdx new_oi; - unsigned a_bits; - - tcg_debug_assert((mop & (MO_BSWAP|MO_SSIZE)) == (MO_BE|MO_128)); - a_bits = get_alignment_bits(mop); - - /* 
Handle CPU specific unaligned behaviour */ - if (addr & ((1 << a_bits) - 1)) { - cpu_unaligned_access(env_cpu(env), addr, MMU_DATA_STORE, - mmu_idx, ra); - } - - /* Construct an unaligned 64-bit replacement MemOpIdx. */ - mop = (mop & ~(MO_SIZE | MO_AMASK)) | MO_64 | MO_UNALN; - new_oi = make_memop_idx(mop, mmu_idx); - - helper_stq_mmu(env, addr, int128_gethi(val), new_oi, ra); - helper_stq_mmu(env, addr + 8, int128_getlo(val), new_oi, ra); - - qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP|MO_SIZE)) == (MO_BE|MO_128)); + do_st16_mmu(env, addr, val, oi, retaddr); + plugin_store_cb(env, addr, oi); } -void cpu_st16_le_mmu(CPUArchState *env, abi_ptr addr, Int128 val, - MemOpIdx oi, uintptr_t ra) +void cpu_st16_le_mmu(CPUArchState *env, target_ulong addr, Int128 val, + MemOpIdx oi, uintptr_t retaddr) { - MemOp mop = get_memop(oi); - int mmu_idx = get_mmuidx(oi); - MemOpIdx new_oi; - unsigned a_bits; - - tcg_debug_assert((mop & (MO_BSWAP|MO_SSIZE)) == (MO_LE|MO_128)); - a_bits = get_alignment_bits(mop); - - /* Handle CPU specific unaligned behaviour */ - if (addr & ((1 << a_bits) - 1)) { - cpu_unaligned_access(env_cpu(env), addr, MMU_DATA_STORE, - mmu_idx, ra); - } - - /* Construct an unaligned 64-bit replacement MemOpIdx. */ - mop = (mop & ~(MO_SIZE | MO_AMASK)) | MO_64 | MO_UNALN; - new_oi = make_memop_idx(mop, mmu_idx); - - helper_stq_mmu(env, addr, int128_getlo(val), new_oi, ra); - helper_stq_mmu(env, addr + 8, int128_gethi(val), new_oi, ra); - - qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP|MO_SIZE)) == (MO_LE|MO_128)); + do_st16_mmu(env, addr, val, oi, retaddr); + plugin_store_cb(env, addr, oi); } #include "ldst_common.c.inc" diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c index d9f9766b7f..8f86254eb4 100644 --- a/accel/tcg/user-exec.c +++ b/accel/tcg/user-exec.c @@ -1121,18 +1121,45 @@ uint64_t cpu_ldq_le_mmu(CPUArchState *env, abi_ptr addr, return cpu_to_le64(ret); } -Int128 cpu_ld16_be_mmu(CPUArchState *env, abi_ptr addr, - MemOpIdx oi, uintptr_t ra) +static Int128 do_ld16_he_mmu(CPUArchState *env, abi_ptr addr, + MemOp mop, uintptr_t ra) { void *haddr; Int128 ret; - tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == (MO_128 | MO_BE)); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - memcpy(&ret, haddr, 16); + tcg_debug_assert((mop & MO_SIZE) == MO_128); + haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD); + ret = load_atom_16(env, ra, haddr, mop); clear_helper_retaddr(); - qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); + return ret; +} +Int128 helper_ld16_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + Int128 ret = do_ld16_he_mmu(env, addr, mop, ra); + + if (mop & MO_BSWAP) { + ret = bswap128(ret); + } + return ret; +} + +Int128 helper_ld_i128(CPUArchState *env, target_ulong addr, MemOpIdx oi) +{ + return helper_ld16_mmu(env, addr, oi, GETPC()); +} + +Int128 cpu_ld16_be_mmu(CPUArchState *env, abi_ptr addr, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + Int128 ret; + + tcg_debug_assert((mop & MO_BSWAP) == MO_BE); + ret = do_ld16_he_mmu(env, addr, mop, ra); + qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); if (!HOST_BIG_ENDIAN) { ret = bswap128(ret); } @@ -1142,15 +1169,12 @@ Int128 cpu_ld16_be_mmu(CPUArchState *env, abi_ptr addr, Int128 cpu_ld16_le_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, 
uintptr_t ra) { - void *haddr; + MemOp mop = get_memop(oi); Int128 ret; - tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == (MO_128 | MO_LE)); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - memcpy(&ret, haddr, 16); - clear_helper_retaddr(); + tcg_debug_assert((mop & MO_BSWAP) == MO_LE); + ret = do_ld16_he_mmu(env, addr, mop, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); - if (HOST_BIG_ENDIAN) { ret = bswap128(ret); } @@ -1307,33 +1331,57 @@ void cpu_stq_le_mmu(CPUArchState *env, abi_ptr addr, uint64_t val, qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } -void cpu_st16_be_mmu(CPUArchState *env, abi_ptr addr, - Int128 val, MemOpIdx oi, uintptr_t ra) +static void do_st16_he_mmu(CPUArchState *env, abi_ptr addr, Int128 val, + MemOp mop, uintptr_t ra) { void *haddr; - tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == (MO_128 | MO_BE)); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); + tcg_debug_assert((mop & MO_SIZE) == MO_128); + haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE); + store_atom_16(env, ra, haddr, mop, val); + clear_helper_retaddr(); +} + +void helper_st16_mmu(CPUArchState *env, target_ulong addr, Int128 val, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + + if (mop & MO_BSWAP) { + val = bswap128(val); + } + do_st16_he_mmu(env, addr, val, mop, ra); +} + +void helper_st_i128(CPUArchState *env, target_ulong addr, + Int128 val, MemOpIdx oi) +{ + helper_st16_mmu(env, addr, val, oi, GETPC()); +} + +void cpu_st16_be_mmu(CPUArchState *env, abi_ptr addr, + Int128 val, MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + + tcg_debug_assert((mop & MO_BSWAP) == MO_BE); if (!HOST_BIG_ENDIAN) { val = bswap128(val); } - memcpy(haddr, &val, 16); - clear_helper_retaddr(); + do_st16_he_mmu(env, addr, val, mop, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } void cpu_st16_le_mmu(CPUArchState *env, abi_ptr addr, Int128 val, MemOpIdx oi, uintptr_t ra) { - void *haddr; + MemOp mop = get_memop(oi); - tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == (MO_128 | MO_LE)); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); + tcg_debug_assert((mop & MO_BSWAP) == MO_LE); if (HOST_BIG_ENDIAN) { val = bswap128(val); } - memcpy(haddr, &val, 16); - clear_helper_retaddr(); + do_st16_he_mmu(env, addr, val, mop, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c index 3136cef81a..9101d334b6 100644 --- a/tcg/tcg-op.c +++ b/tcg/tcg-op.c @@ -3119,6 +3119,48 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, MemOp memop) } } +/* + * Return true if @mop, without knowledge of the pointer alignment, + * does not require 16-byte atomicity, and it would be adventagous + * to avoid a call to a helper function. + */ +static bool use_two_i64_for_i128(MemOp mop) +{ +#ifdef CONFIG_SOFTMMU + /* Two softmmu tlb lookups is larger than one function call. */ + return false; +#else + /* + * For user-only, two 64-bit operations may well be smaller than a call. + * Determine if that would be legal for the requested atomicity. + */ + MemOp atom = mop & MO_ATOM_MASK; + MemOp atmax = mop & MO_ATMAX_MASK; + + /* In a serialized context, no atomicity is required. 
*/ + if (!(tcg_ctx->gen_tb->cflags & CF_PARALLEL)) { + return true; + } + + if (atmax == MO_ATMAX_SIZE) { + atmax = mop & MO_SIZE; + } else { + atmax >>= MO_ATMAX_SHIFT; + } + switch (atom) { + case MO_ATOM_NONE: + return true; + case MO_ATOM_IFALIGN: + case MO_ATOM_SUBALIGN: + return atmax < MO_128; + case MO_ATOM_WITHIN16: + return atmax == MO_8; + default: + g_assert_not_reached(); + } +#endif +} + static void canonicalize_memop_i128_as_i64(MemOp ret[2], MemOp orig) { MemOp mop_1 = orig, mop_2; @@ -3164,93 +3206,113 @@ static void canonicalize_memop_i128_as_i64(MemOp ret[2], MemOp orig) ret[1] = mop_2; } +#if TARGET_LONG_BITS == 64 +#define tcg_temp_ebb_new tcg_temp_ebb_new_i64 +#else +#define tcg_temp_ebb_new tcg_temp_ebb_new_i32 +#endif + void tcg_gen_qemu_ld_i128(TCGv_i128 val, TCGv addr, TCGArg idx, MemOp memop) { - MemOp mop[2]; - TCGv addr_p8; - TCGv_i64 x, y; + MemOpIdx oi = make_memop_idx(memop, idx); - canonicalize_memop_i128_as_i64(mop, memop); + tcg_debug_assert((memop & MO_SIZE) == MO_128); + tcg_debug_assert((memop & MO_SIGN) == 0); tcg_gen_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD); addr = plugin_prep_mem_callbacks(addr); - /* TODO: respect atomicity of the operation. */ /* TODO: allow the tcg backend to see the whole operation. */ - /* - * Since there are no global TCGv_i128, there is no visible state - * changed if the second load faults. Load directly into the two - * subwords. - */ - if ((memop & MO_BSWAP) == MO_LE) { - x = TCGV128_LOW(val); - y = TCGV128_HIGH(val); + if (use_two_i64_for_i128(memop)) { + MemOp mop[2]; + TCGv addr_p8; + TCGv_i64 x, y; + + canonicalize_memop_i128_as_i64(mop, memop); + + /* + * Since there are no global TCGv_i128, there is no visible state + * changed if the second load faults. Load directly into the two + * subwords. + */ + if ((memop & MO_BSWAP) == MO_LE) { + x = TCGV128_LOW(val); + y = TCGV128_HIGH(val); + } else { + x = TCGV128_HIGH(val); + y = TCGV128_LOW(val); + } + + gen_ldst_i64(INDEX_op_qemu_ld_i64, x, addr, mop[0], idx); + + if ((mop[0] ^ memop) & MO_BSWAP) { + tcg_gen_bswap64_i64(x, x); + } + + addr_p8 = tcg_temp_ebb_new(); + tcg_gen_addi_tl(addr_p8, addr, 8); + gen_ldst_i64(INDEX_op_qemu_ld_i64, y, addr_p8, mop[1], idx); + tcg_temp_free(addr_p8); + + if ((mop[0] ^ memop) & MO_BSWAP) { + tcg_gen_bswap64_i64(y, y); + } } else { - x = TCGV128_HIGH(val); - y = TCGV128_LOW(val); + gen_helper_ld_i128(val, cpu_env, addr, tcg_constant_i32(oi)); } - gen_ldst_i64(INDEX_op_qemu_ld_i64, x, addr, mop[0], idx); - - if ((mop[0] ^ memop) & MO_BSWAP) { - tcg_gen_bswap64_i64(x, x); - } - - addr_p8 = tcg_temp_new(); - tcg_gen_addi_tl(addr_p8, addr, 8); - gen_ldst_i64(INDEX_op_qemu_ld_i64, y, addr_p8, mop[1], idx); - tcg_temp_free(addr_p8); - - if ((mop[0] ^ memop) & MO_BSWAP) { - tcg_gen_bswap64_i64(y, y); - } - - plugin_gen_mem_callbacks(addr, make_memop_idx(memop, idx), - QEMU_PLUGIN_MEM_R); + plugin_gen_mem_callbacks(addr, oi, QEMU_PLUGIN_MEM_R); } void tcg_gen_qemu_st_i128(TCGv_i128 val, TCGv addr, TCGArg idx, MemOp memop) { - MemOp mop[2]; - TCGv addr_p8; - TCGv_i64 x, y; + MemOpIdx oi = make_memop_idx(memop, idx); - canonicalize_memop_i128_as_i64(mop, memop); + tcg_debug_assert((memop & MO_SIZE) == MO_128); + tcg_debug_assert((memop & MO_SIGN) == 0); tcg_gen_req_mo(TCG_MO_ST_LD | TCG_MO_ST_ST); addr = plugin_prep_mem_callbacks(addr); - /* TODO: respect atomicity of the operation. */ /* TODO: allow the tcg backend to see the whole operation. 
*/ - if ((memop & MO_BSWAP) == MO_LE) { - x = TCGV128_LOW(val); - y = TCGV128_HIGH(val); + if (use_two_i64_for_i128(memop)) { + MemOp mop[2]; + TCGv addr_p8; + TCGv_i64 x, y; + + canonicalize_memop_i128_as_i64(mop, memop); + + if ((memop & MO_BSWAP) == MO_LE) { + x = TCGV128_LOW(val); + y = TCGV128_HIGH(val); + } else { + x = TCGV128_HIGH(val); + y = TCGV128_LOW(val); + } + + addr_p8 = tcg_temp_ebb_new(); + if ((mop[0] ^ memop) & MO_BSWAP) { + TCGv_i64 t = tcg_temp_ebb_new_i64(); + + tcg_gen_bswap64_i64(t, x); + gen_ldst_i64(INDEX_op_qemu_st_i64, t, addr, mop[0], idx); + tcg_gen_bswap64_i64(t, y); + tcg_gen_addi_tl(addr_p8, addr, 8); + gen_ldst_i64(INDEX_op_qemu_st_i64, t, addr_p8, mop[1], idx); + tcg_temp_free_i64(t); + } else { + gen_ldst_i64(INDEX_op_qemu_st_i64, x, addr, mop[0], idx); + tcg_gen_addi_tl(addr_p8, addr, 8); + gen_ldst_i64(INDEX_op_qemu_st_i64, y, addr_p8, mop[1], idx); + } + tcg_temp_free(addr_p8); } else { - x = TCGV128_HIGH(val); - y = TCGV128_LOW(val); + gen_helper_st_i128(cpu_env, addr, val, tcg_constant_i32(oi)); } - addr_p8 = tcg_temp_new(); - if ((mop[0] ^ memop) & MO_BSWAP) { - TCGv_i64 t = tcg_temp_ebb_new_i64(); - - tcg_gen_bswap64_i64(t, x); - gen_ldst_i64(INDEX_op_qemu_st_i64, t, addr, mop[0], idx); - tcg_gen_bswap64_i64(t, y); - tcg_gen_addi_tl(addr_p8, addr, 8); - gen_ldst_i64(INDEX_op_qemu_st_i64, t, addr_p8, mop[1], idx); - tcg_temp_free_i64(t); - } else { - gen_ldst_i64(INDEX_op_qemu_st_i64, x, addr, mop[0], idx); - tcg_gen_addi_tl(addr_p8, addr, 8); - gen_ldst_i64(INDEX_op_qemu_st_i64, y, addr_p8, mop[1], idx); - } - tcg_temp_free(addr_p8); - - plugin_gen_mem_callbacks(addr, make_memop_idx(memop, idx), - QEMU_PLUGIN_MEM_W); + plugin_gen_mem_callbacks(addr, oi, QEMU_PLUGIN_MEM_W); } static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, MemOp opc) diff --git a/accel/tcg/ldst_atomicity.c.inc b/accel/tcg/ldst_atomicity.c.inc index 07abbdee3f..e61121d6bf 100644 --- a/accel/tcg/ldst_atomicity.c.inc +++ b/accel/tcg/ldst_atomicity.c.inc @@ -423,6 +423,21 @@ static inline uint64_t load_atom_8_by_4(void *pv) } } +/** + * load_atom_8_by_8_or_4: + * @pv: host address + * + * Load 8 bytes from aligned @pv, with at least 4-byte atomicity. + */ +static inline uint64_t load_atom_8_by_8_or_4(void *pv) +{ + if (HAVE_al8_fast) { + return load_atomic8(pv); + } else { + return load_atom_8_by_4(pv); + } +} + /** * load_atom_2: * @p: host address @@ -555,6 +570,64 @@ static uint64_t load_atom_8(CPUArchState *env, uintptr_t ra, } } +/** + * load_atom_16: + * @p: host address + * @memop: the full memory op + * + * Load 16 bytes from @p, honoring the atomicity of @memop. + */ +static Int128 load_atom_16(CPUArchState *env, uintptr_t ra, + void *pv, MemOp memop) +{ + uintptr_t pi = (uintptr_t)pv; + int atmax; + Int128 r; + uint64_t a, b; + + /* + * If the host does not support 8-byte atomics, wait until we have + * examined the atomicity parameters below. 
+ */ + if (HAVE_al16_fast && likely((pi & 15) == 0)) { + return load_atomic16(pv); + } + + atmax = required_atomicity(env, pi, memop); + switch (atmax) { + case MO_8: + memcpy(&r, pv, 16); + return r; + case MO_16: + a = load_atom_8_by_2(pv); + b = load_atom_8_by_2(pv + 8); + break; + case MO_32: + a = load_atom_8_by_4(pv); + b = load_atom_8_by_4(pv + 8); + break; + case MO_64: + if (!HAVE_al8) { + cpu_loop_exit_atomic(env_cpu(env), ra); + } + a = load_atomic8(pv); + b = load_atomic8(pv + 8); + break; + case -MO_64: + if (!HAVE_al8) { + cpu_loop_exit_atomic(env_cpu(env), ra); + } + a = load_atom_extract_al8x2(pv); + b = load_atom_extract_al8x2(pv + 8); + break; + case MO_128: + return load_atomic16_or_exit(env, ra, pv); + default: + g_assert_not_reached(); + } + return int128_make128(HOST_BIG_ENDIAN ? b : a, HOST_BIG_ENDIAN ? a : b); +} + /** * store_atomic2: * @pv: host address @@ -596,6 +669,40 @@ static inline void store_atomic8(void *pv, uint64_t val) qatomic_set__nocheck(p, val); } +/** + * store_atomic16: + * @pv: host address + * @val: value to store + * + * Atomically store 16 aligned bytes to @pv. + */ +static inline void store_atomic16(void *pv, Int128 val) +{ +#if defined(CONFIG_ATOMIC128) + __uint128_t *pu = __builtin_assume_aligned(pv, 16); + Int128Alias new; + + new.s = val; + qatomic_set__nocheck(pu, new.u); +#elif defined(CONFIG_CMPXCHG128) + __uint128_t *pu = __builtin_assume_aligned(pv, 16); + __uint128_t o; + Int128Alias n; + + /* + * Without CONFIG_ATOMIC128, __atomic_compare_exchange_n will always + * defer to libatomic, so we must use __sync_val_compare_and_swap_16 + * and accept the sequential consistency that comes with it. + */ + n.s = val; + do { + o = *pu; + } while (!__sync_bool_compare_and_swap_16(pu, o, n.u)); +#else + qemu_build_not_reached(); +#endif +} + /** * store_atom_4x2 */ @@ -1039,3 +1146,85 @@ static void store_atom_8(CPUArchState *env, uintptr_t ra, } cpu_loop_exit_atomic(env_cpu(env), ra); } + +/** + * store_atom_16: + * @p: host address + * @val: the value to store + * @memop: the full memory op + * + * Store 16 bytes to @p, honoring the atomicity of @memop. + */ +static void store_atom_16(CPUArchState *env, uintptr_t ra, + void *pv, MemOp memop, Int128 val) +{ + uintptr_t pi = (uintptr_t)pv; + uint64_t a, b; + int atmax; + + if (HAVE_al16_fast && likely((pi & 15) == 0)) { + store_atomic16(pv, val); + return; + } + + atmax = required_atomicity(env, pi, memop); + + a = HOST_BIG_ENDIAN ? int128_gethi(val) : int128_getlo(val); + b = HOST_BIG_ENDIAN ? int128_getlo(val) : int128_gethi(val); + switch (atmax) { + case MO_8: + memcpy(pv, &val, 16); + return; + case MO_16: + store_atom_8_by_2(pv, a); + store_atom_8_by_2(pv + 8, b); + return; + case MO_32: + store_atom_8_by_4(pv, a); + store_atom_8_by_4(pv + 8, b); + return; + case MO_64: + if (HAVE_al8) { + store_atomic8(pv, a); + store_atomic8(pv + 8, b); + return; + } + break; + case -MO_64: + if (HAVE_al16) { + uint64_t val_le; + int s2 = pi & 15; + int s1 = 16 - s2; + + if (HOST_BIG_ENDIAN) { + val = bswap128(val); + } + switch (s2) { + case 1 ... 7: + val_le = store_whole_le16(pv, s1, val); + store_bytes_leN(pv + s1, s2, val_le); + break; + case 9 ... 
15: + store_bytes_leN(pv, s1, int128_getlo(val)); + val = int128_urshift(val, s1 * 8); + store_whole_le16(pv + s1, s2, val); + break; + case 0: /* aligned */ + case 8: /* atmax MO_64 */ + default: + g_assert_not_reached(); + } + return; + } + break; + case MO_128: + if (HAVE_al16) { + store_atomic16(pv, val); + return; + } + break; + default: + g_assert_not_reached(); + } + cpu_loop_exit_atomic(env_cpu(env), ra); +} From patchwork Wed May 3 07:06:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678730 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp913698wrs; Wed, 3 May 2023 00:39:33 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ7T4PI8ptTj3YFjRPP3iT011js/wicwDb97lpqwWId5HIGLCwN0b4HzgbVPo5kA5/V5CwYm X-Received: by 2002:a05:6214:518b:b0:605:648b:2ac8 with SMTP id kl11-20020a056214518b00b00605648b2ac8mr1454502qvb.4.1683099573377; Wed, 03 May 2023 00:39:33 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683099573; cv=none; d=google.com; s=arc-20160816; b=IfEBl8H5+1mssXSZ6EINMdUuf40CVucXtFVrQaCO/ZbSSrjCG44w+iVtL0JilbBHUA 13F7vcM5PpHpoHu6YLiP1lZs+R+szD4zJu0AOU1xCK0sniG/mcZ0muDErxInLM42uPis poJDZWsIMd4ZR5J3h+n18ZehsyLOIkIZvIanIcLU3fWgN3m4KnA9O0PaePCfk9s6ZaDM hNP7WATx1vA7wqE0vsBIO5iMverFRkXIGJOTrP8enbNALemB1N6bOhf8CxZWbkELVAUQ otC5L3mAV7bzNWZLp5pCZZz8aflefx+eAgesioxxep1+9r8k2FJuLVuk+Ljst7EWoVZR gD/w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=yOPWh6rf7mSRkh1L4stxmzgun3ak1K2cz0ZVsvVUM5M=; b=hiH3f5iCAyxJ4kg/ZoQMo9tABjKEyOj4HSM3QFW2s6924inOYKr0JKtOGVTSk5kjL1 ZZLDHNrQE6ajqrcbA4tZF9FAFn4U+tlLHlxpS4v/Cpp9XWaF9Cwk2aX3IlYPTDiIRM50 JUvBWqsa1GdpIHX0DVwlLiXDU72e3h+T1tI9VpxqokPQZNJ4Uv4/eZmT8UV66FTY7HU0 VOZjduEwcf2pOenU8XLhHkzAME4p5JkgDTSC4EqKOqyj3Ivuso72AqDDGOF+5WzD97gf U2Ohj4NqHlRn5o/8l/FkVpojQKoBuySeecU7hPbGj6ERWpE/5pPP3ei1n7TkVnwBzSxN J7VA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=aKc9GfE4; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
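
For readers skimming the new ldst_atomicity.c.inc code above: the central idea of load_atom_16()/store_atom_16() is to pick the cheapest load/store sequence that still provides whatever single-copy atomicity the memory operation requires. The fragment below is a compressed, self-contained sketch of that dispatch for the load side. The type name u128, the required_atom_bytes parameter and the little-endian, lock-free-8-byte-atomics assumptions are mine for illustration only; the real code additionally handles 2- and 4-byte splits, genuine 16-byte host atomics, byte swapping and the MMIO/page-crossing paths.

#include <stdint.h>
#include <string.h>
#include <stdatomic.h>
#include <stdio.h>

typedef struct { uint64_t lo, hi; } u128;   /* stand-in for QEMU's Int128 */

static u128 load_16_sketch(void *pv, int required_atom_bytes)
{
    u128 r;
    _Atomic uint64_t *p = pv;

    if (required_atom_bytes <= 1) {
        /* Byte atomicity only: a plain 16-byte copy is enough. */
        memcpy(&r, pv, 16);
    } else if (required_atom_bytes <= 8) {
        /* 8-byte atomicity: two relaxed atomic 64-bit loads, low half first. */
        r.lo = atomic_load_explicit(&p[0], memory_order_relaxed);
        r.hi = atomic_load_explicit(&p[1], memory_order_relaxed);
    } else {
        /* Full 16-byte atomicity needs a host 16-byte atomic load
         * (or cpu_loop_exit_atomic() in QEMU); not modelled here. */
        memcpy(&r, pv, 16);
    }
    return r;
}

int main(void)
{
    _Alignas(16) uint64_t buf[2] = { 0x1122334455667788ull, 0x99aabbccddeeff00ull };
    u128 v = load_16_sketch(buf, 8);
    printf("%016llx%016llx\n", (unsigned long long)v.hi, (unsigned long long)v.lo);
    return 0;
}

The store side mirrors this dispatch, with the extra twist visible above that, when only a 16-byte compare-and-swap is available, store_atomic16() loops on __sync_bool_compare_and_swap_16 and accepts the sequential consistency that comes with it.
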
[209.51.188.17]) by mx.google.com with ESMTPS id fo15-20020ad45f0f000000b005353068a8acsi18002784qvb.387.2023.05.03.00.39.33 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Wed, 03 May 2023 00:39:33 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=aKc9GfE4; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1pu6ae-0007np-1g; Wed, 03 May 2023 03:07:52 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1pu6aL-00061o-M5 for qemu-devel@nongnu.org; Wed, 03 May 2023 03:07:33 -0400 Received: from mail-wm1-x329.google.com ([2a00:1450:4864:20::329]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1pu6Zx-0005gL-E4 for qemu-devel@nongnu.org; Wed, 03 May 2023 03:07:33 -0400 Received: by mail-wm1-x329.google.com with SMTP id 5b1f17b1804b1-3f315735514so10777235e9.1 for ; Wed, 03 May 2023 00:07:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1683097628; x=1685689628; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=yOPWh6rf7mSRkh1L4stxmzgun3ak1K2cz0ZVsvVUM5M=; b=aKc9GfE4b3bulzJV3Zs5nUy8IhIG0xxM9M35Ev8v8gNgalr3PPGpOQr7pOrZnXmss7 ZcBk/JB5NR6VM3voZm69y1aLRvXpXsZUxAUF3eLycVbOihk3l4xXHYkoTu/55TKsX7uZ 4PSKlf1NEOql/nLWLftku8R7WRd8EEphuXMwVFwHTrKZmZs/p5lLEiyJldZFx51EEIw6 npBzT90ZmHR+PyR3h/0hZB1e13oXeUAygjwRaHTRIRdgcyXF9OnB97tDp0DafkNXhEKn I242cYTKYVO2GH3m4XesOWA80z1iv0HngUJE/GXuVSOLIXPlXCzx1neYcmaclNsY7exB ufpA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1683097628; x=1685689628; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=yOPWh6rf7mSRkh1L4stxmzgun3ak1K2cz0ZVsvVUM5M=; b=MyH+Z1moAhNE93RrgPZEswGEUrrTb/xSQ3lwkO1/86AvtmSjYb4bEeTo16uwRYhcI/ /vwogQeA2mm4Vo/q2FF+qRV49cKQyMw623jdr6cUh2ZlyMJY4MF7m3fwRsGEKuPbl4q8 NY+20lnv6fxLbGDXi/A5CrAM50T9I8cIxrdj8n2QtGi6LJPuprRRDZ98/SCVsxsX6pU/ YJBfaC5dNt2uocWwYjkQDX8QzLO/E2KcYw9FDgPHK0aeyJfNnqy79ZWmdOt4aSRnfTRT ywLnKvFtK2NxEZ7GpqdxGtlPVt3z298ab57HdfSDWYnkQRLYvdmwwHTspisosB6Zw1Ng a3bg== X-Gm-Message-State: AC+VfDwsbowZIuRkIOaYucd+p41Fmdlhbg7kVAkPhX/NnsARs2ZexaYS CHfvqihRa/uCYfFzHSeOoSBC8wdeHdNDY2KeXZ7txA== X-Received: by 2002:a05:600c:b45:b0:3f1:70d5:1be8 with SMTP id k5-20020a05600c0b4500b003f170d51be8mr763229wmr.15.1683097627825; Wed, 03 May 2023 00:07:07 -0700 (PDT) Received: from stoup.Home ([2a02:c7c:74db:8d00:c01d:9d74:b630:9087]) by smtp.gmail.com with ESMTPSA id q3-20020a1cf303000000b003f3157988f8sm921184wmq.26.2023.05.03.00.07.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 03 May 2023 00:07:07 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org, 
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 13/57] meson: Detect atomic128 support with optimization Date: Wed, 3 May 2023 08:06:12 +0100 Message-Id: <20230503070656.1746170-14-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::329; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x329.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org There is an edge condition prior to gcc13 for which optimization is required to generate 16-byte atomic sequences. Detect this. Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- accel/tcg/ldst_atomicity.c.inc | 38 ++++++++++++++++++------- meson.build | 52 ++++++++++++++++++++++------------ 2 files changed, 61 insertions(+), 29 deletions(-) diff --git a/accel/tcg/ldst_atomicity.c.inc b/accel/tcg/ldst_atomicity.c.inc index e61121d6bf..c43f101ebe 100644 --- a/accel/tcg/ldst_atomicity.c.inc +++ b/accel/tcg/ldst_atomicity.c.inc @@ -16,6 +16,23 @@ #endif #define HAVE_al8_fast (ATOMIC_REG_SIZE >= 8) +/* + * If __alignof(unsigned __int128) < 16, GCC may refuse to inline atomics + * that are supported by the host, e.g. s390x. We can force the pointer to + * have our known alignment with __builtin_assume_aligned, however prior to + * GCC 13 that was only reliable with optimization enabled. See + * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107389 + */ +#if defined(CONFIG_ATOMIC128_OPT) +# if !defined(__OPTIMIZE__) +# define ATTRIBUTE_ATOMIC128_OPT __attribute__((optimize("O1"))) +# endif +# define CONFIG_ATOMIC128 +#endif +#ifndef ATTRIBUTE_ATOMIC128_OPT +# define ATTRIBUTE_ATOMIC128_OPT +#endif + #if defined(CONFIG_ATOMIC128) # define HAVE_al16_fast true #else @@ -136,7 +153,8 @@ static inline uint64_t load_atomic8(void *pv) * * Atomically load 16 aligned bytes from @pv. */ -static inline Int128 load_atomic16(void *pv) +static inline Int128 ATTRIBUTE_ATOMIC128_OPT +load_atomic16(void *pv) { #ifdef CONFIG_ATOMIC128 __uint128_t *p = __builtin_assume_aligned(pv, 16); @@ -340,7 +358,8 @@ static uint64_t load_atom_extract_al16_or_exit(CPUArchState *env, uintptr_t ra, * cross an 16-byte boundary then the access must be 16-byte atomic, * otherwise the access must be 8-byte atomic. */ -static inline uint64_t load_atom_extract_al16_or_al8(void *pv, int s) +static inline uint64_t ATTRIBUTE_ATOMIC128_OPT +load_atom_extract_al16_or_al8(void *pv, int s) { #if defined(CONFIG_ATOMIC128) uintptr_t pi = (uintptr_t)pv; @@ -676,28 +695,24 @@ static inline void store_atomic8(void *pv, uint64_t val) * * Atomically store 16 aligned bytes to @pv. 
*/ -static inline void store_atomic16(void *pv, Int128 val) +static inline void ATTRIBUTE_ATOMIC128_OPT +store_atomic16(void *pv, Int128Alias val) { #if defined(CONFIG_ATOMIC128) __uint128_t *pu = __builtin_assume_aligned(pv, 16); - Int128Alias new; - - new.s = val; - qatomic_set__nocheck(pu, new.u); + qatomic_set__nocheck(pu, val.u); #elif defined(CONFIG_CMPXCHG128) __uint128_t *pu = __builtin_assume_aligned(pv, 16); __uint128_t o; - Int128Alias n; /* * Without CONFIG_ATOMIC128, __atomic_compare_exchange_n will always * defer to libatomic, so we must use __sync_val_compare_and_swap_16 * and accept the sequential consistency that comes with it. */ - n.s = val; do { o = *pu; - } while (!__sync_bool_compare_and_swap_16(pu, o, n.u)); + } while (!__sync_bool_compare_and_swap_16(pu, o, val.u)); #else qemu_build_not_reached(); #endif @@ -779,7 +794,8 @@ static void store_atom_insert_al8(uint64_t *p, uint64_t val, uint64_t msk) * * Atomically store @val to @p masked by @msk. */ -static void store_atom_insert_al16(Int128 *ps, Int128Alias val, Int128Alias msk) +static void ATTRIBUTE_ATOMIC128_OPT +store_atom_insert_al16(Int128 *ps, Int128Alias val, Int128Alias msk) { #if defined(CONFIG_ATOMIC128) __uint128_t *pu, old, new; diff --git a/meson.build b/meson.build index 77d42898c8..4bbdbcef37 100644 --- a/meson.build +++ b/meson.build @@ -2241,23 +2241,21 @@ config_host_data.set('HAVE_BROKEN_SIZE_MAX', not cc.compiles(''' return printf("%zu", SIZE_MAX); }''', args: ['-Werror'])) -atomic_test = ''' +# See if 64-bit atomic operations are supported. +# Note that without __atomic builtins, we can only +# assume atomic loads/stores max at pointer size. +config_host_data.set('CONFIG_ATOMIC64', cc.links(''' #include int main(void) { - @0@ x = 0, y = 0; + uint64_t x = 0, y = 0; y = __atomic_load_n(&x, __ATOMIC_RELAXED); __atomic_store_n(&x, y, __ATOMIC_RELAXED); __atomic_compare_exchange_n(&x, &y, x, 0, __ATOMIC_RELAXED, __ATOMIC_RELAXED); __atomic_exchange_n(&x, y, __ATOMIC_RELAXED); __atomic_fetch_add(&x, y, __ATOMIC_RELAXED); return 0; - }''' - -# See if 64-bit atomic operations are supported. -# Note that without __atomic builtins, we can only -# assume atomic loads/stores max at pointer size. -config_host_data.set('CONFIG_ATOMIC64', cc.links(atomic_test.format('uint64_t'))) + }''')) has_int128 = cc.links(''' __int128_t a; @@ -2275,21 +2273,39 @@ if has_int128 # "do we have 128-bit atomics which are handled inline and specifically not # via libatomic". The reason we can't use libatomic is documented in the # comment starting "GCC is a house divided" in include/qemu/atomic128.h. - has_atomic128 = cc.links(atomic_test.format('unsigned __int128')) + # We only care about these operations on 16-byte aligned pointers, so + # force 16-byte alignment of the pointer, which may be greater than + # __alignof(unsigned __int128) for the host. 
+ atomic_test_128 = ''' + int main(int ac, char **av) { + unsigned __int128 *p = __builtin_assume_aligned(av[ac - 1], sizeof(16)); + p[1] = __atomic_load_n(&p[0], __ATOMIC_RELAXED); + __atomic_store_n(&p[2], p[3], __ATOMIC_RELAXED); + __atomic_compare_exchange_n(&p[4], &p[5], p[6], 0, __ATOMIC_RELAXED, __ATOMIC_RELAXED); + return 0; + }''' + has_atomic128 = cc.links(atomic_test_128) config_host_data.set('CONFIG_ATOMIC128', has_atomic128) if not has_atomic128 - has_cmpxchg128 = cc.links(''' - int main(void) - { - unsigned __int128 x = 0, y = 0; - __sync_val_compare_and_swap_16(&x, y, x); - return 0; - } - ''') + # Even with __builtin_assume_aligned, the above test may have failed + # without optimization enabled. Try again with optimizations locally + # enabled for the function. See + # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107389 + has_atomic128_opt = cc.links('__attribute__((optimize("O1")))' + atomic_test_128) + config_host_data.set('CONFIG_ATOMIC128_OPT', has_atomic128_opt) - config_host_data.set('CONFIG_CMPXCHG128', has_cmpxchg128) + if not has_atomic128_opt + config_host_data.set('CONFIG_CMPXCHG128', cc.links(''' + int main(void) + { + unsigned __int128 x = 0, y = 0; + __sync_val_compare_and_swap_16(&x, y, x); + return 0; + } + ''')) + endif endif endif From patchwork Wed May 3 07:06:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678610 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp904130wrs; Wed, 3 May 2023 00:10:47 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6yL4+xjeYK1gxcOnOMEaZDmRzvTx5Z/uVDKkDTucS1eNLA46uCMu0LD2adXDnk1Ejj09K6 X-Received: by 2002:ac8:5c88:0:b0:3f3:63ed:62d8 with SMTP id r8-20020ac85c88000000b003f363ed62d8mr186855qta.29.1683097846939; Wed, 03 May 2023 00:10:46 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683097846; cv=none; d=google.com; s=arc-20160816; b=Nbx9YC5X5RsnQgxqdEuEM/wdCI0KdF8huOR/aMFhbW93zUxnHDqPd5jhdVDCg34BbL vgAxekW3Wg/rjv5Y5KJiTndeaK9tNUNBv+WRkAmNcTpXOar32N1QwUmFQyeSMw5VIguz W1XTmknK81zlI2coXw4DpNxl8l08SMLJOQzgSpj/1iWuoNjxhYEh83e+oYFO35dUKTb4 aeBe998HyO9lKdVWiCV8UMlqX++44eTcDk+x+wDB0dh6/VB1GAb4BeDW44II99/2PaUQ gZUdGNGKBR9jgFZpBeaOHl+baEnW3cfg0897lNhyx8z6xDEbhyjYk2px2n72aY5p+/iv vgXA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=AlbQNgtkgysZMHjfft1C2G8eExbC6pUxkdPDPUlgiFs=; b=PObuCLE8iy/NdFQrfw75MBH2EfdNaRfDyWKUIbmvRMsUuKDUuOx6Yp/T70HxmB1zfY nOR2aoS6FABZJ8xiWeO88PNYWJYQglPb7D0/E3PQVtN3j/Di7BVmSGU08etCY6sasKMb Qodoq302/Wlm/SArGFdwPEJXdZSu2sZTyuxSBX0DnqhbBpQct0bAaOpfuEnAyt01avgG 7WmoIGuJvKD6Bmv8yUotRwTmW+mO1XJ53jzkj9hc5eyMUmf97v7b8YAIx0AQmk6xnZYY vLXmR0mT4MmkpQNLnLIemDzfEKrceVJeCd9TdaBGlq+1p+WEW9JdTIxJ3z5WsheoFaqb V3uw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=F0mH2cTO; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
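
The meson probe and ATTRIBUTE_ATOMIC128_OPT dance above can be hard to visualize in isolation, so here is a minimal standalone sketch of the workaround for the GCC < 13 behaviour referenced by bug 107389: force the known 16-byte alignment with __builtin_assume_aligned and, when the translation unit is built without optimization, enable optimization just for the function so the 16-byte atomic can still be inlined rather than routed to libatomic. FORCE_OPT is a made-up macro name standing in for QEMU's ATTRIBUTE_ATOMIC128_OPT, the GCC-only guard is my simplification (QEMU keys this off the meson probe result, CONFIG_ATOMIC128_OPT, instead), and the whole thing assumes a host where 16-byte __atomic loads can be inlined at all.

#include <stdint.h>

#if defined(__GNUC__) && !defined(__clang__) && !defined(__OPTIMIZE__)
# define FORCE_OPT __attribute__((optimize("O1")))
#else
# define FORCE_OPT
#endif

/* Read 16 bytes atomically from a pointer we know is 16-byte aligned. */
static inline __uint128_t FORCE_OPT atomic16_read(void *pv)
{
    __uint128_t *p = __builtin_assume_aligned(pv, 16);
    return __atomic_load_n(p, __ATOMIC_RELAXED);
}

Whether a caller of this links without pulling in libatomic depends on the host, compiler version and flags (e.g. -mcx16 on x86-64), which is exactly what the cc.links() probes above are checking.
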
[209.51.188.17]) by mx.google.com with ESMTPS id v20-20020ac85794000000b003ef5ec1dab9si16207047qta.553.2023.05.03.00.10.46 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Wed, 03 May 2023 00:10:46 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=F0mH2cTO; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1pu6ai-00086D-Ia; Wed, 03 May 2023 03:07:56 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1pu6aL-000605-9w for qemu-devel@nongnu.org; Wed, 03 May 2023 03:07:33 -0400 Received: from mail-wm1-x32b.google.com ([2a00:1450:4864:20::32b]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1pu6Zx-0005Zv-IZ for qemu-devel@nongnu.org; Wed, 03 May 2023 03:07:32 -0400 Received: by mail-wm1-x32b.google.com with SMTP id 5b1f17b1804b1-3f19a80a330so29502345e9.2 for ; Wed, 03 May 2023 00:07:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1683097628; x=1685689628; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=AlbQNgtkgysZMHjfft1C2G8eExbC6pUxkdPDPUlgiFs=; b=F0mH2cTOcVgXUOHG7GibycCjAs9KyTyDhxLWIyIwZ5qmthZzduKtnEccuQj3wzhWsZ 9kq8Tx8HPEeXFQVLDA20AKpGnq/hynE2bKc7KCYR1mnS/Z/sJ6aG/Rz00ydxjSvjvnjz lWvBBH40OUpC8YxypWN4TPS1RCOmu301jX0sOIlmV3OEW1SjgK9IH7oC0NfWFfYpssrH hzXzBKm3+mPluEN7F9JR6vCLi2pxwanrKPcMghrtpk0tzowhB+OaTKc5iNnc6z5ZiGtD ifTC0xeaKbKQdr4NlB3oBP79JFGfeLg+Hsjx6g7SpVsTQg6NC6v4qIhQa/SPf0h8AL8Z gj7g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1683097628; x=1685689628; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=AlbQNgtkgysZMHjfft1C2G8eExbC6pUxkdPDPUlgiFs=; b=e7iLZBlVrzKT+DcHGMIR7m37ah3qRE7LUGIEKWCKP8R8tY+fuQum47iV/T6tzuItI2 clXpokvr1+lGdM5Q6+ThLlsMMfqaItb/yCw3hnwEqJYxqyqaRdGaEEtIulRog3wvWp8R 2Z1RIOq1ygBE/O4w7ebb2nQDqQfsLY2Pu19L6plUPgpic8YdDyTlIK1h9sKr9soS3sNH 0xYut5++WvkMTKQpcGV82HmTjfKYou1G34jU8CHIemwKfbpTFrrXctVt6d2XPMRotYFE tqhD1MZPnDk1GkiWQooOcFYDLuBiUkrIsiAgA+fDaByJZ0Ln43dcGQDFN5+FLLU+emJN Fa9Q== X-Gm-Message-State: AC+VfDwoYTakygA/Z80dApOeXHXiz/l+6SlibpzpyqMy2cwD6yr/CVxW Zvg8PctrTVRSqGTfD7O/wKQRglz9pL3kY1hFfN+0Hg== X-Received: by 2002:a7b:cc05:0:b0:3f0:683d:224d with SMTP id f5-20020a7bcc05000000b003f0683d224dmr13525315wmh.9.1683097628520; Wed, 03 May 2023 00:07:08 -0700 (PDT) Received: from stoup.Home ([2a02:c7c:74db:8d00:c01d:9d74:b630:9087]) by smtp.gmail.com with ESMTPSA id q3-20020a1cf303000000b003f3157988f8sm921184wmq.26.2023.05.03.00.07.08 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 03 May 2023 00:07:08 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org, 
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 14/57] tcg/i386: Add have_atomic16 Date: Wed, 3 May 2023 08:06:13 +0100 Message-Id: <20230503070656.1746170-15-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32b; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x32b.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Notice when Intel or AMD have guaranteed that vmovdqa is atomic. The new variable will also be used in generated code. Signed-off-by: Richard Henderson --- include/qemu/cpuid.h | 18 ++++++++++++++++++ tcg/i386/tcg-target.h | 1 + tcg/i386/tcg-target.c.inc | 27 +++++++++++++++++++++++++++ 3 files changed, 46 insertions(+) diff --git a/include/qemu/cpuid.h b/include/qemu/cpuid.h index 1451e8ef2f..35325f1995 100644 --- a/include/qemu/cpuid.h +++ b/include/qemu/cpuid.h @@ -71,6 +71,24 @@ #define bit_LZCNT (1 << 5) #endif +/* + * Signatures for different CPU implementations as returned from Leaf 0. + */ + +#ifndef signature_INTEL_ecx +/* "Genu" "ineI" "ntel" */ +#define signature_INTEL_ebx 0x756e6547 +#define signature_INTEL_edx 0x49656e69 +#define signature_INTEL_ecx 0x6c65746e +#endif + +#ifndef signature_AMD_ecx +/* "Auth" "enti" "cAMD" */ +#define signature_AMD_ebx 0x68747541 +#define signature_AMD_edx 0x69746e65 +#define signature_AMD_ecx 0x444d4163 +#endif + static inline unsigned xgetbv_low(unsigned c) { unsigned a, d; diff --git a/tcg/i386/tcg-target.h b/tcg/i386/tcg-target.h index d4f2a6f8c2..0421776cb8 100644 --- a/tcg/i386/tcg-target.h +++ b/tcg/i386/tcg-target.h @@ -120,6 +120,7 @@ extern bool have_avx512dq; extern bool have_avx512vbmi2; extern bool have_avx512vl; extern bool have_movbe; +extern bool have_atomic16; /* optional instructions */ #define TCG_TARGET_HAS_div2_i32 1 diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc index bb603e7968..f838683fc3 100644 --- a/tcg/i386/tcg-target.c.inc +++ b/tcg/i386/tcg-target.c.inc @@ -185,6 +185,7 @@ bool have_avx512dq; bool have_avx512vbmi2; bool have_avx512vl; bool have_movbe; +bool have_atomic16; #ifdef CONFIG_CPUID_H static bool have_bmi2; @@ -4024,6 +4025,32 @@ static void tcg_target_init(TCGContext *s) have_avx512dq = (b7 & bit_AVX512DQ) != 0; have_avx512vbmi2 = (c7 & bit_AVX512VBMI2) != 0; } + + /* + * The Intel SDM has added: + * Processors that enumerate support for Intel® AVX + * (by setting the feature flag CPUID.01H:ECX.AVX[bit 28]) + * guarantee that the 16-byte memory operations performed + * by the following instructions will always be carried + * out atomically: + * - MOVAPD, MOVAPS, and MOVDQA. + * - VMOVAPD, VMOVAPS, and VMOVDQA when encoded with VEX.128. + * - VMOVAPD, VMOVAPS, VMOVDQA32, and VMOVDQA64 when encoded + * with EVEX.128 and k0 (masking disabled). 
+ * Note that these instructions require the linear addresses + * of their memory operands to be 16-byte aligned. + * + * AMD has provided an even stronger guarantee that processors + * with AVX provide 16-byte atomicity for all cachable, + * naturally aligned single loads and stores, e.g. MOVDQU. + * + * See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104688 + */ + if (have_avx1) { + __cpuid(0, a, b, c, d); + have_atomic16 = (c == signature_INTEL_ecx || + c == signature_AMD_ecx); + } } } } From patchwork Wed May 3 07:06:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678599 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp903434wrs; Wed, 3 May 2023 00:08:48 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ41DMulIjz2niHKVM+rny+IIkdiWBDzHbszyCrECvg0RbCKPc4Jpu0p/F3OGU2HeeHIQrXz X-Received: by 2002:a05:6214:20e7:b0:5ad:2a05:ddd1 with SMTP id 7-20020a05621420e700b005ad2a05ddd1mr7066199qvk.34.1683097728266; Wed, 03 May 2023 00:08:48 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683097728; cv=none; d=google.com; s=arc-20160816; b=kEhwKB0ELP87fQf+MKb7iBGRMBTwFD33LWs338CJhkADeXrsCcS4LeHCcMpmVEuf1Q RBHyPTG+Z23sdgbRDDTvhZUH70sInMlGlBJSRIDzaIO8N4Tc1KOlAoJW9Ad9tPIrtcfs 8w7w15J6YCxEqKCSK5fCIJNfznZCSDBntCrtI7x18+wPU6japXqGu2kvWhjOsbCDWSK2 iMPBdbb0Ymv+P54lXEDzk1PL3C/TW7/cploprOHPaRYmbHiyWVvA94+B9GmZ5tXpp+au PwmBabTnmgKbc4f5S1N0p0fj0dTc5ualhRGOXtw6FKMRxtPRqiTxaZ7/x9ofquaLIdgx yQZQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=F6DEO7Q+8H6EpwW/2nX1Lodtp98wxNa6YcOQprjLoAY=; b=Pw2Vm1BeJfoVKH13uxbSYKRu1usywM8W9NYSAEDEn/5MZelBUozmGYxCp2D8YaqKXu PT4FnhBMBvre56bAyPMJzPiqWX9UZ3R5RJQLkVcuV+52Hu5bJ/ICzy+IH4P5I1RloS2n 4VOENrAAGLv9tk4L8hyXrPEar0d6sriqZnmMaZsngeqiD+8dceeGYR0CjxMinTEzeOe9 0593W+lG6WzqhNCiqy4TCrhgzO0us/FAxzz82sBPsdnKlnM2bwaJ+7TWrVcf7Nhasicy aQGQOLXQVi8JrtDuZxbjpEDScTmQwWe3UhnYZ/Bo5746WZnjqKKimEo1Kfm8/xEbJ03u ABPQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=hGB7Sflr; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
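As a self-contained illustration of the check above (a sketch, not part of the patch), the same leaf-0 vendor test can be written against GCC's <cpuid.h>. It only looks at the plain AVX feature bit, whereas the patch gates on QEMU's have_avx1, which has already validated OSXSAVE/XGETBV:

#include <cpuid.h>
#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    unsigned a, b, c, d;
    bool have_avx = false, atomic16 = false;

    if (__get_cpuid(1, &a, &b, &c, &d)) {
        have_avx = (c & bit_AVX) != 0;
    }
    if (have_avx && __get_cpuid(0, &a, &b, &c, &d)) {
        /* Leaf 0 ECX is "ntel" for GenuineIntel, "cAMD" for AuthenticAMD. */
        atomic16 = (c == 0x6c65746e || c == 0x444d4163);
    }
    printf("16-byte vmovdqa atomicity guaranteed: %s\n", atomic16 ? "yes" : "no");
    return 0;
}

The two ECX constants are the same signature_INTEL_ecx and signature_AMD_ecx values that the patch adds to include/qemu/cpuid.h.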
From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 15/57] accel/tcg: Use have_atomic16 in ldst_atomicity.c.inc Date: Wed, 3 May 2023 08:06:14 +0100 Message-Id: <20230503070656.1746170-16-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32a; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x32a.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Hosts using Intel and AMD AVX cpus are quite common. Add fast paths through ldst_atomicity using this. Only enable with CONFIG_INT128; some older clang versions do not support __int128_t, and the inline assembly won't work on structures. Signed-off-by: Richard Henderson --- accel/tcg/ldst_atomicity.c.inc | 76 +++++++++++++++++++++++++++------- 1 file changed, 60 insertions(+), 16 deletions(-) diff --git a/accel/tcg/ldst_atomicity.c.inc b/accel/tcg/ldst_atomicity.c.inc index c43f101ebe..07bfa5c3c8 100644 --- a/accel/tcg/ldst_atomicity.c.inc +++ b/accel/tcg/ldst_atomicity.c.inc @@ -35,6 +35,14 @@ #if defined(CONFIG_ATOMIC128) # define HAVE_al16_fast true +#elif defined(CONFIG_TCG_INTERPRETER) +/* + * FIXME: host specific detection for this is in tcg/$host/, + * but we're using tcg/tci/ instead. + */ +# define HAVE_al16_fast false +#elif defined(__x86_64__) && defined(CONFIG_INT128) +# define HAVE_al16_fast likely(have_atomic16) #else # define HAVE_al16_fast false #endif @@ -162,6 +170,12 @@ load_atomic16(void *pv) r.u = qatomic_read__nocheck(p); return r.s; +#elif defined(__x86_64__) && defined(CONFIG_INT128) + Int128Alias r; + + /* Via HAVE_al16_fast, have_atomic16 is true. */ + asm("vmovdqa %1, %0" : "=x" (r.u) : "m" (*(Int128 *)pv)); + return r.s; #else qemu_build_not_reached(); #endif @@ -383,6 +397,24 @@ load_atom_extract_al16_or_al8(void *pv, int s) r = qatomic_read__nocheck(p16); } return r >> shr; +#elif defined(__x86_64__) && defined(CONFIG_INT128) + uintptr_t pi = (uintptr_t)pv; + int shr = (pi & 7) * 8; + uint64_t a, b; + + /* Via HAVE_al16_fast, have_atomic16 is true. 
*/ + pv = (void *)(pi & ~7); + if (pi & 8) { + uint64_t *p8 = __builtin_assume_aligned(pv, 16, 8); + a = qatomic_read__nocheck(p8); + b = qatomic_read__nocheck(p8 + 1); + } else { + asm("vmovdqa %2, %0\n\tvpextrq $1, %0, %1" + : "=x"(a), "=r"(b) : "m" (*(__uint128_t *)pv)); + } + asm("shrd %b2, %1, %0" : "+r"(a) : "r"(b), "c"(shr)); + + return a; #else qemu_build_not_reached(); #endif @@ -699,23 +731,35 @@ static inline void ATTRIBUTE_ATOMIC128_OPT store_atomic16(void *pv, Int128Alias val) { #if defined(CONFIG_ATOMIC128) - __uint128_t *pu = __builtin_assume_aligned(pv, 16); - qatomic_set__nocheck(pu, val.u); -#elif defined(CONFIG_CMPXCHG128) - __uint128_t *pu = __builtin_assume_aligned(pv, 16); - __uint128_t o; - - /* - * Without CONFIG_ATOMIC128, __atomic_compare_exchange_n will always - * defer to libatomic, so we must use __sync_val_compare_and_swap_16 - * and accept the sequential consistency that comes with it. - */ - do { - o = *pu; - } while (!__sync_bool_compare_and_swap_16(pu, o, val.u)); -#else - qemu_build_not_reached(); + { + __uint128_t *pu = __builtin_assume_aligned(pv, 16); + qatomic_set__nocheck(pu, val.u); + return; + } #endif +#if defined(__x86_64__) && defined(CONFIG_INT128) + if (HAVE_al16_fast) { + asm("vmovdqa %1, %0" : "=m"(*(__uint128_t *)pv) : "x" (val.u)); + return; + } +#endif +#if defined(CONFIG_CMPXCHG128) + { + __uint128_t *pu = __builtin_assume_aligned(pv, 16); + __uint128_t o; + + /* + * Without CONFIG_ATOMIC128, __atomic_compare_exchange_n will always + * defer to libatomic, so we must use __sync_val_compare_and_swap_16 + * and accept the sequential consistency that comes with it. + */ + do { + o = *pu; + } while (!__sync_bool_compare_and_swap_16(pu, o, val.u)); + return; + } +#endif + qemu_build_not_reached(); } /** From patchwork Wed May 3 07:06:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678668 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp909257wrs; Wed, 3 May 2023 00:25:04 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ5Eewt2uHBfj4DVJaqttrz7GBjn1VPPA4L45f/UYwcYOopMKthRfn+FZG90Kfjj51hQ2TY1 X-Received: by 2002:a05:622a:4d3:b0:3ef:6513:75ff with SMTP id q19-20020a05622a04d300b003ef651375ffmr31286299qtx.7.1683098704165; Wed, 03 May 2023 00:25:04 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683098704; cv=none; d=google.com; s=arc-20160816; b=NcmiwbwaxvLA/SGbv2U8HM9vd+lYWasALYBUYx2n0hyRKUDSr1/5uzITIkRpIgWXFD fROv8z7sjaXCv3h5tdEIyNSZJFaUw9Lxu2HimqoVuTiLeEs49bvowlQFEwavvR7dsyoE qtOufsDylvg9ID8aWFzCOAZV980Qfm/5G2pDcU5B2cKwUcKVsRmqsry5fBBbh3oDcGVR U+ArZcGJdsAvQzZo7au1YKSIIpBd9YMmCrWlp/Boh4RxkD1EyUt+cfCswFb9ldYV//A0 yZHj1te1bvuLlX6KR+G0MvarOB5oAy/MeROnqOMPYt/IYyl9CNFhTb/yq+bX6pRqimHa E51g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=6BNIcK6JzbkWbShIrl34k5OQr12Qxqrkns93mAN/jdA=; b=VhnuL0UAza8lmvIJeZxxwOmEA94nQr4C31dH3KKWyNYvTG3eH0eO8rT2AkuxIHy6G3 IF68e4FqQOkGF1gZyXigC1x+KeakdU8PL+H4NfKUtVVVAuCFrLNOmzFln7zKVRIAUg5y +7XjVYCBbjush3/juTlTD+QbMf0NQ30uhDaIjg6baMUS0mz6GFjDwG/sCzG277caWaYM Q/EKHL9R6rWP4wdnWBJbFqimopwadn+K8yFqXbgbEE4ELwvaBdkVhmpB/1hPL0UfRMMI 1wocdmuLyf9llmHqQFUrXuXnazZ+lJSz+Zi1sZf6ge2vBsCMsHNrf3xyI6ACBMVEPdki SHyw== ARC-Authentication-Results: i=1; 
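A minimal sketch of the new x86-64 fast path, pulled out of QEMU for clarity: with have_atomic16 true (AVX on an Intel or AMD part, per the previous patch) and a 16-byte-aligned pointer, a single VMOVDQA is a valid 16-byte atomic access. The helper names here are invented for the example; the inline asm is the same as what the patch emits:

static inline __uint128_t example_atomic16_read(const void *pv)
{
    __uint128_t r;

    /* Aligned 128-bit load; faults if pv is not 16-byte aligned. */
    asm("vmovdqa %1, %0" : "=x"(r) : "m"(*(const __uint128_t *)pv));
    return r;
}

static inline void example_atomic16_set(void *pv, __uint128_t val)
{
    /* Aligned 128-bit store from an SSE/AVX register. */
    asm("vmovdqa %1, %0" : "=m"(*(__uint128_t *)pv) : "x"(val));
}

For the misaligned-extract helper the patch instead loads the aligned 16 bytes and shifts the wanted 8 bytes out with vpextrq plus shrd, as shown in the diff above.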
Received:
from stoup.Home ([2a02:c7c:74db:8d00:c01d:9d74:b630:9087]) by smtp.gmail.com with ESMTPSA id q3-20020a1cf303000000b003f3157988f8sm921184wmq.26.2023.05.03.00.07.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 03 May 2023 00:07:09 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org, qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 16/57] accel/tcg: Add aarch64 specific support in ldst_atomicity Date: Wed, 3 May 2023 08:06:15 +0100 Message-Id: <20230503070656.1746170-17-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32a; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x32a.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org We have code in atomic128.h noting that through GCC 8, there was no support for atomic operations on __uint128. This has been fixed in GCC 10. But we can still improve over any basic compare-and-swap loop using the ldxp/stxp instructions. Signed-off-by: Richard Henderson --- accel/tcg/ldst_atomicity.c.inc | 60 ++++++++++++++++++++++++++++++++-- 1 file changed, 57 insertions(+), 3 deletions(-) diff --git a/accel/tcg/ldst_atomicity.c.inc b/accel/tcg/ldst_atomicity.c.inc index 07bfa5c3c8..2426b09aef 100644 --- a/accel/tcg/ldst_atomicity.c.inc +++ b/accel/tcg/ldst_atomicity.c.inc @@ -247,7 +247,22 @@ static Int128 load_atomic16_or_exit(CPUArchState *env, uintptr_t ra, void *pv) * In system mode all guest pages are writable, and for user-only * we have just checked writability. Try cmpxchg. */ -#if defined(CONFIG_CMPXCHG128) +#if defined(__aarch64__) + /* We can do better than cmpxchg for AArch64. */ + { + uint64_t l, h; + uint32_t fail; + + /* The load must be paired with the store to guarantee not tearing. */ + asm("0: ldxp %0, %1, %3\n\t" + "stxp %w2, %0, %1, %3\n\t" + "cbnz %w2, 0b" + : "=&r"(l), "=&r"(h), "=&r"(fail) : "Q"(*p)); + + qemu_build_assert(!HOST_BIG_ENDIAN); + return int128_make128(l, h); + } +#elif defined(CONFIG_CMPXCHG128) /* Swap 0 with 0, with the side-effect of returning the old value. */ { Int128Alias r; @@ -743,7 +758,22 @@ store_atomic16(void *pv, Int128Alias val) return; } #endif -#if defined(CONFIG_CMPXCHG128) +#if defined(__aarch64__) + /* We can do better than cmpxchg for AArch64. 
*/ + { + uint64_t l, h, t; + + qemu_build_assert(!HOST_BIG_ENDIAN); + l = int128_getlo(val.s); + h = int128_gethi(val.s); + + asm("0: ldxp %0, xzr, %1\n\t" + "stxp %w0, %2, %3, %1\n\t" + "cbnz %w0, 0b" + : "=&r"(t), "=Q"(*(__uint128_t *)pv) : "r"(l), "r"(h)); + return; + } +#elif defined(CONFIG_CMPXCHG128) { __uint128_t *pu = __builtin_assume_aligned(pv, 16); __uint128_t o; @@ -841,7 +871,31 @@ static void store_atom_insert_al8(uint64_t *p, uint64_t val, uint64_t msk) static void ATTRIBUTE_ATOMIC128_OPT store_atom_insert_al16(Int128 *ps, Int128Alias val, Int128Alias msk) { -#if defined(CONFIG_ATOMIC128) +#if defined(__aarch64__) + /* + * GCC only implements __sync* primitives for int128 on aarch64. + * We can do better without the barriers, and integrating the + * arithmetic into the load-exclusive/store-conditional pair. + */ + uint64_t tl, th, vl, vh, ml, mh; + uint32_t fail; + + qemu_build_assert(!HOST_BIG_ENDIAN); + vl = int128_getlo(val.s); + vh = int128_gethi(val.s); + ml = int128_getlo(msk.s); + mh = int128_gethi(msk.s); + + asm("0: ldxp %[l], %[h], %[mem]\n\t" + "bic %[l], %[l], %[ml]\n\t" + "bic %[h], %[h], %[mh]\n\t" + "orr %[l], %[l], %[vl]\n\t" + "orr %[h], %[h], %[vh]\n\t" + "stxp %w[f], %[l], %[h], %[mem]\n\t" + "cbnz %w[f], 0b\n" + : [mem] "+Q"(*ps), [f] "=&r"(fail), [l] "=&r"(tl), [h] "=&r"(th) + : [vl] "r"(vl), [vh] "r"(vh), [ml] "r"(ml), [mh] "r"(mh)); +#elif defined(CONFIG_ATOMIC128) __uint128_t *pu, old, new; /* With CONFIG_ATOMIC128, we can avoid the memory barriers. */ From patchwork Wed May 3 07:06:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678676 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp909975wrs; Wed, 3 May 2023 00:27:15 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ4567GpzjqaSNCRPpMslJgik9dl6mqPXyOdUiqXDX3DCVuEMUfcZ+Tm/4Evpfp5gx9Ifk6F X-Received: by 2002:a05:6214:d04:b0:615:53c3:f32a with SMTP id 4-20020a0562140d0400b0061553c3f32amr8103137qvh.42.1683098835429; Wed, 03 May 2023 00:27:15 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683098835; cv=none; d=google.com; s=arc-20160816; b=Gr2wN3nxdrEdJiDzF+WqZnUuDaq770SuLu0H+5g4azDvOswM5Bl6J33DjXcWJRub8H rCwowBAVFJTExt+ZpKbxVS23p5OLtFzdB7oc2Fqg7PKNhSqYN5kdQy0fuW0qLO+yIVFW 5e74qZVeSjiISu/LnpU7c/0YZvW+nMcr95oTVjV8RugCNftNmCJX1H4dUXxaKb6izfjg tlIcCZYWp95m6e3j22ZhaD7RTpBiloGDRLtcGuA5mV3uxhCkpeegJaM+dZoKqLjQyoMe YFJEztOWZ3gB8b5RU6D61Bd2ASwwC3S4hjLBhU7HhkHuq44yqCFG1gvxkxS4V+Bqmt2v rsfw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=fOANmGOo5iyv6fa4+ddDWc+mmS1WoR4SXYoPSJOnShA=; b=jgbDji8TNFHC1HSRpPel6CbKyxteiU0AlvAdbySkfoIPUPQiSIbqYMnqboY4sg41VS HCs6bxRhEy0nlzWyKZip/y8+bpI1CeyHx//fc8EDK2nLu78oOa5D3qpCizduqz0LgPRt hmHh0Rz7DUBdQw8Kdu+WKRyArulzyHrUhTJdbxLW6WYMCbmmJ4FMFIm0Ye9iI7rCPemf KBW6WEz4/Eh9L+u07tzx+yMCQEpY9F0oFdZ+WeIgVqdb3dTSU1nlUNzx32QP+nbhwZVI szvyHp5nEJ0n668KGbcMcPS0phLyVDIsrvwrYsgEhqaQ8302bvYYxwcOmxxS7WwsuyU3 o8JQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=k3d17Kvu; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) 
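The essence of the AArch64 path above, as a stand-alone sketch (little-endian host and a 16-byte-aligned pointer assumed; the function and struct names are invented): an LDXP/STXP pair that stores back the value it just loaded, so a successful store-exclusive proves the 128-bit read was not torn.

#include <stdint.h>

typedef struct { uint64_t lo, hi; } example_u128;

static inline example_u128 example_load16(__uint128_t *p)
{
    example_u128 r;
    uint32_t fail;

    /* Retry until the store-exclusive succeeds, i.e. the read was atomic. */
    asm("0: ldxp %0, %1, %3\n\t"
        "stxp %w2, %0, %1, %3\n\t"
        "cbnz %w2, 0b"
        : "=&r"(r.lo), "=&r"(r.hi), "=&r"(fail) : "Q"(*p));
    return r;
}

The loop retries whenever the exclusive monitor is lost, exactly as in the load_atomic16_or_exit() hunk above.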
smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. [209.51.188.17]) by mx.google.com with ESMTPS id p17-20020a374211000000b0074e1f14bd03si17565760qka.59.2023.05.03.00.27.15 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Wed, 03 May 2023 00:27:15 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=k3d17Kvu; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1pu6b6-0000me-Sp; Wed, 03 May 2023 03:08:20 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1pu6aR-0006cV-BI for qemu-devel@nongnu.org; Wed, 03 May 2023 03:07:39 -0400 Received: from mail-wm1-x332.google.com ([2a00:1450:4864:20::332]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1pu6a2-0005hu-Mu for qemu-devel@nongnu.org; Wed, 03 May 2023 03:07:39 -0400 Received: by mail-wm1-x332.google.com with SMTP id 5b1f17b1804b1-3f178da21b2so49056875e9.1 for ; Wed, 03 May 2023 00:07:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1683097630; x=1685689630; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=fOANmGOo5iyv6fa4+ddDWc+mmS1WoR4SXYoPSJOnShA=; b=k3d17Kvueqex9aG45mnR1zw6wt4u+8PpKuHTBDvqvE5W5rOFlfW3s0+gj55SySzoK8 VLsaXUqX4X+GrAFsRZMgBLCePNTqUt/vOI09X4k8bEiCq5UjxB/+Ae3lQU/3o1GDLmxx 7ERyNpk9OgtKJNgc2dwjRQbgTrfkeZrRfYJ/5yDQSHiebnO+yVivTOd+vqWPkS3MyeQD IBHiAxbQ31lHoyLIoyL5qe556AQeSBF9myL+/UhyQhM6lAmbgxRllkn0OvBWzBQkaJxF GuJe0GA4yXXvIpOpF/TXEovAfDlMWNghZXXMn0YcnpBhb2Iqlae6S59XP3TSrW6fpsPy Badg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1683097630; x=1685689630; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=fOANmGOo5iyv6fa4+ddDWc+mmS1WoR4SXYoPSJOnShA=; b=RPAw1cX6sGBgx1jGTIz7WxO0EjRvO1GAVZ5pIpsfJ8rbund1FeZPmTidd4a5qLOt/C MdYDmSvOlLmYWsAtbliWegK73i/Xe0kirtcjZH2NLPdoEjdjkq+5hfkjFwDbuVYwZb8S MD2f3FrL6xrLjeXQ8jvCJzH9Bp2V1aLGUMZMsDAWxwYPmkkWLIsxW8XIkBD5xwfZT9aI LIoYACxyH4F0xxsuAOLasuNLq2/YWYTo9h11dozledCsawqFzf8SV6EBN9ZZru0MoSpm 67QSfn1fA6xjeH4RVuFkXtKRCB9GG5tv/dqsNGeM4wlYeURPubh780ZhBxRUXK8FjO4Z eHoA== X-Gm-Message-State: AC+VfDwok4am6doE6Qf+K9F07tnU9VQ+A6XKVlMZ8qXiRAaDNPP0tkR7 mM73Iw6Zmx2wivFjVKkYo1UJWVhcAg7SLo6vcTWO8w== X-Received: by 2002:a05:600c:2195:b0:3f3:295c:58fc with SMTP id e21-20020a05600c219500b003f3295c58fcmr11410141wme.39.1683097630679; Wed, 03 May 2023 00:07:10 -0700 (PDT) Received: from stoup.Home ([2a02:c7c:74db:8d00:c01d:9d74:b630:9087]) by smtp.gmail.com with ESMTPSA id q3-20020a1cf303000000b003f3157988f8sm921184wmq.26.2023.05.03.00.07.10 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 
bits=256/256); Wed, 03 May 2023 00:07:10 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org, qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 17/57] tcg/aarch64: Detect have_lse, have_lse2 for linux Date: Wed, 3 May 2023 08:06:16 +0100 Message-Id: <20230503070656.1746170-18-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::332; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x332.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Notice when the host has additional atomic instructions. The new variables will also be used in generated code. Reviewed-by: Philippe Mathieu-Daudé Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- tcg/aarch64/tcg-target.h | 3 +++ tcg/aarch64/tcg-target.c.inc | 12 ++++++++++++ 2 files changed, 15 insertions(+) diff --git a/tcg/aarch64/tcg-target.h b/tcg/aarch64/tcg-target.h index c0b0f614ba..3c0b0d312d 100644 --- a/tcg/aarch64/tcg-target.h +++ b/tcg/aarch64/tcg-target.h @@ -57,6 +57,9 @@ typedef enum { #define TCG_TARGET_CALL_ARG_I128 TCG_CALL_ARG_EVEN #define TCG_TARGET_CALL_RET_I128 TCG_CALL_RET_NORMAL +extern bool have_lse; +extern bool have_lse2; + /* optional instructions */ #define TCG_TARGET_HAS_div_i32 1 #define TCG_TARGET_HAS_rem_i32 1 diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc index e6636c1f8b..fc551a3d10 100644 --- a/tcg/aarch64/tcg-target.c.inc +++ b/tcg/aarch64/tcg-target.c.inc @@ -13,6 +13,9 @@ #include "../tcg-ldst.c.inc" #include "../tcg-pool.c.inc" #include "qemu/bitops.h" +#ifdef __linux__ +#include +#endif /* We're going to re-use TCGType in setting of the SF bit, which controls the size of the operation performed. 
If we know the values match, it @@ -71,6 +74,9 @@ static TCGReg tcg_target_call_oarg_reg(TCGCallReturnKind kind, int slot) return TCG_REG_X0 + slot; } +bool have_lse; +bool have_lse2; + #define TCG_REG_TMP TCG_REG_X30 #define TCG_VEC_TMP TCG_REG_V31 @@ -2899,6 +2905,12 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) static void tcg_target_init(TCGContext *s) { +#ifdef __linux__ + unsigned long hwcap = qemu_getauxval(AT_HWCAP); + have_lse = hwcap & HWCAP_ATOMICS; + have_lse2 = hwcap & HWCAP_USCAT; +#endif + tcg_target_available_regs[TCG_TYPE_I32] = 0xffffffffu; tcg_target_available_regs[TCG_TYPE_I64] = 0xffffffffu; tcg_target_available_regs[TCG_TYPE_V64] = 0xffffffff00000000ull; From patchwork Wed May 3 07:06:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678654 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp908179wrs; Wed, 3 May 2023 00:21:43 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ5Sp5c1Ks1wLeZHFyITgjLbuDHuH770yyUKEAoLKL1rsqUQi6Ujs1KcbzTD35RzTvqaKiA7 X-Received: by 2002:ac8:5c89:0:b0:3ef:6432:ac3f with SMTP id r9-20020ac85c89000000b003ef6432ac3fmr28378695qta.34.1683098503004; Wed, 03 May 2023 00:21:43 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683098502; cv=none; d=google.com; s=arc-20160816; b=0DmFUPXYLEUGKWEf6ZQU+HMWUun2e0k4g3B6sfJm2sXtZbs2OoBBeB+3SI4VNQjazP ym2Tkrtfj8G4+e8mQWCbnJvPhJbx2TVBY5dszeWtGIwYuMfBzdm6pmLixLakGmybUtX6 35/wn7OATOOOHT2KyhCL1oFfBd5KmDIUkn3FDkIIm+ud3Dpnx6djRIuma/ad8a2/OssP oA+DwK05QRHDKyRssNG1pffXYxF/LIVJONyzJRlHYNmrX+mlTF5uZeqCAJgyqKZNo+K2 GZefS7Rs4BbgCSecQF6tEgwiQyLTps21LBAjVjZwdnuhg9wr2RRURKK0HObHprHY4fAa M5ZQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=SlwZz/TkkaqpzfKRH/vz8Q4X3wFSx4Ts4VLXAbAY/Hk=; b=joMuAhO7irWrlzwfCoX8N38YNjnEdBRQ8qyrxML4pLwCu68FjSd3M+8OEYajeSTXYM K8n3kMNW0LVtH+sKiCt9JNYgdJvxJ80SXL4icyaMxQ34tBu5cGKdIiYSKJPfNzoV8KbK wRUQmckJEWa2E+oBzoL1yGsonH1miDqV+QdHPnh0Gh5V1VEZjtIL3qJVbj24RPk/zCGK zNo3cR/LtZkymr4sYL30wpHLNKinHOd7DHFNZVRTcTJ/PB0fpIaE4Xbj6J+8zEsVRER4 xs1jJDQLEXiQ2WxI9xPb5Q9D0NFoU0GNwKOgn2B4dTnigupC39CcAXKtb+cJgKFpd/ph rLeQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=d5w3fu43; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
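The same hwcap probe can be run as a stand-alone program on a Linux/AArch64 host. This sketch uses glibc's getauxval() directly instead of QEMU's qemu_getauxval() wrapper; the patch's own #include line was stripped by the archive, so the headers below are simply the usual Linux ones:

#include <stdio.h>
#include <sys/auxv.h>
#include <asm/hwcap.h>   /* HWCAP_ATOMICS, HWCAP_USCAT */

int main(void)
{
    unsigned long hwcap = getauxval(AT_HWCAP);

    printf("FEAT_LSE  (HWCAP_ATOMICS): %s\n",
           (hwcap & HWCAP_ATOMICS) ? "yes" : "no");
    printf("FEAT_LSE2 (HWCAP_USCAT):   %s\n",
           (hwcap & HWCAP_USCAT) ? "yes" : "no");
    return 0;
}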
From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 18/57] tcg/aarch64: Detect have_lse, have_lse2 for darwin Date: Wed, 3 May 2023 08:06:17 +0100 Message-Id: <20230503070656.1746170-19-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::329; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x329.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org These features are present for Apple M1. Tested-by: Philippe Mathieu-Daudé Reviewed-by: Philippe Mathieu-Daudé Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- tcg/aarch64/tcg-target.c.inc | 28 ++++++++++++++++++++++++++++ 1 file changed, 28 insertions(+) diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc index fc551a3d10..3adc5fd3a3 100644 --- a/tcg/aarch64/tcg-target.c.inc +++ b/tcg/aarch64/tcg-target.c.inc @@ -16,6 +16,9 @@ #ifdef __linux__ #include #endif +#ifdef CONFIG_DARWIN +#include +#endif /* We're going to re-use TCGType in setting of the SF bit, which controls the size of the operation performed. If we know the values match, it @@ -2903,6 +2906,27 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) } } +#ifdef CONFIG_DARWIN +static bool sysctl_for_bool(const char *name) +{ + int val = 0; + size_t len = sizeof(val); + + if (sysctlbyname(name, &val, &len, NULL, 0) == 0) { + return val != 0; + } + + /* + * We might in ask for properties not present in older kernels, + * but we're only asking about static properties, all of which + * should be 'int'. So we shouln't see ENOMEM (val too small), + * or any of the other more exotic errors. 
+ */ + assert(errno == ENOENT); + return false; +} +#endif + static void tcg_target_init(TCGContext *s) { #ifdef __linux__ @@ -2910,6 +2934,10 @@ static void tcg_target_init(TCGContext *s) have_lse = hwcap & HWCAP_ATOMICS; have_lse2 = hwcap & HWCAP_USCAT; #endif +#ifdef CONFIG_DARWIN + have_lse = sysctl_for_bool("hw.optional.arm.FEAT_LSE"); + have_lse2 = sysctl_for_bool("hw.optional.arm.FEAT_LSE2"); +#endif tcg_target_available_regs[TCG_TYPE_I32] = 0xffffffffu; tcg_target_available_regs[TCG_TYPE_I64] = 0xffffffffu; From patchwork Wed May 3 07:06:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678650 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp908043wrs; Wed, 3 May 2023 00:21:22 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6QYTKXD4tOiHYMG0xuTxNTZA0akECodyvBaaHFJbDnBGAhZrIISlqgEKlcvgfpR2jfZRv2 X-Received: by 2002:a05:6214:411b:b0:61b:6fcd:34af with SMTP id kc27-20020a056214411b00b0061b6fcd34afmr3745343qvb.9.1683098482439; Wed, 03 May 2023 00:21:22 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683098482; cv=none; d=google.com; s=arc-20160816; b=zNeO/f9+PnIfMhpIKgEWgskOGfnrunwkEg2RAJNQnND98+gNWkHsuDtnwp1bWBjWWu aX1DKpy6c4ldGVZm6krPsqCwfhY/3E3/RUKlJ7AZ02oUv+NgiFOf2FxbljRaH67lX/Vm W5uswNSCY4YI2X4cajkjxY8rAKfXLcIwRZnEwekAiDNXO44kDnQtxiJafSwScsF83cwX w4UFBEJsEezzsaxJvni2ud4Ez9GDHA83RZ3bp/5TgCSx69yibOhGcGyl4sEDLD7PFMyA GsWCReoNEr1e9h0ki0uBByDhdgWDWrJEjGghIfGxICMvo77iIz0m+FBorub0abG/AVJq ZRfg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=sA2aOIeGNBWRjeeou5Y4ut1mvsbxM8pXHUcMzyX3UOg=; b=bVzbgw1Y/ZuUozKwsy8ClWwCuru//8EMwNohLqH5/zRMNIMRpwCHZy09/xSrmWj6hf qXRuJxV5U9z/ZRSNORFP4QDJWjvUoGoET4pbcgrTdnIPjQXBkAwVar7dKtg6gCEdncZ4 0IEGNy4NUmv6OR2GrjVHPePPSv5KKyM8apvK4RQilOF0g7P/BO4KrzGrKSlmGRb71F7q o+rixxyc72cAVJcBwu3PFfNothOr1jb2XAu/Dz5YBjOUxaa0taQ6Dqkz70F4al7UXo5P NitlD3FB5IHF2vIfrEKBXmqNttx6MXLjseuL7C8pTVDCOmLvy9Udrsy7fpgb4o/yVXTn 4+mg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=FukclqHg; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
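For experimenting outside of QEMU, the Darwin detection reduces to a couple of sysctlbyname() calls. This sketch mirrors the patch's sysctl_for_bool(), but being a demo it simply treats an unknown name as "feature absent" rather than asserting on errno:

#include <stdio.h>
#include <sys/sysctl.h>

static int example_sysctl_bool(const char *name)
{
    int val = 0;
    size_t len = sizeof(val);

    /* Older kernels may not know the name at all; report it as absent. */
    if (sysctlbyname(name, &val, &len, NULL, 0) != 0) {
        return 0;
    }
    return val != 0;
}

int main(void)
{
    printf("FEAT_LSE:  %d\n", example_sysctl_bool("hw.optional.arm.FEAT_LSE"));
    printf("FEAT_LSE2: %d\n", example_sysctl_bool("hw.optional.arm.FEAT_LSE2"));
    return 0;
}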
From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 19/57] accel/tcg: Add have_lse2 support in ldst_atomicity Date: Wed, 3 May 2023 08:06:18 +0100 Message-Id: <20230503070656.1746170-20-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::42c; envelope-from=richard.henderson@linaro.org; helo=mail-wr1-x42c.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Add fast paths for FEAT_LSE2, using the detection in tcg. Signed-off-by: Richard Henderson --- accel/tcg/ldst_atomicity.c.inc | 37 ++++++++++++++++++++++++++++++---- 1 file changed, 33 insertions(+), 4 deletions(-) diff --git a/accel/tcg/ldst_atomicity.c.inc b/accel/tcg/ldst_atomicity.c.inc index 2426b09aef..7ed5d4282d 100644 --- a/accel/tcg/ldst_atomicity.c.inc +++ b/accel/tcg/ldst_atomicity.c.inc @@ -41,6 +41,8 @@ * but we're using tcg/tci/ instead. */ # define HAVE_al16_fast false +#elif defined(__aarch64__) +# define HAVE_al16_fast likely(have_lse2) #elif defined(__x86_64__) && defined(CONFIG_INT128) # define HAVE_al16_fast likely(have_atomic16) #else @@ -48,6 +50,8 @@ #endif #if defined(CONFIG_ATOMIC128) || defined(CONFIG_CMPXCHG128) # define HAVE_al16 true +#elif defined(__aarch64__) +# define HAVE_al16 true #else # define HAVE_al16 false #endif @@ -170,6 +174,14 @@ load_atomic16(void *pv) r.u = qatomic_read__nocheck(p); return r.s; +#elif defined(__aarch64__) + uint64_t l, h; + + /* Via HAVE_al16_fast, FEAT_LSE2 is present: LDP becomes atomic. */ + asm("ldp %0, %1, %2" : "=r"(l), "=r"(h) : "m"(*(__uint128_t *)pv)); + + qemu_build_assert(!HOST_BIG_ENDIAN); + return int128_make128(l, h); #elif defined(__x86_64__) && defined(CONFIG_INT128) Int128Alias r; @@ -412,6 +424,18 @@ load_atom_extract_al16_or_al8(void *pv, int s) r = qatomic_read__nocheck(p16); } return r >> shr; +#elif defined(__aarch64__) + /* + * Via HAVE_al16_fast, FEAT_LSE2 is present. + * LDP becomes single-copy atomic if 16-byte aligned, and + * single-copy atomic on the parts if 8-byte aligned. + */ + uintptr_t pi = (uintptr_t)pv; + int shr = (pi & 7) * 8; + uint64_t l, h; + + asm("ldp %0, %1, %2" : "=r"(l), "=r"(h) : "m"(*(__uint128_t *)(pi & ~7))); + return (l >> shr) | (h << (-shr & 63)); #elif defined(__x86_64__) && defined(CONFIG_INT128) uintptr_t pi = (uintptr_t)pv; int shr = (pi & 7) * 8; @@ -767,10 +791,15 @@ store_atomic16(void *pv, Int128Alias val) l = int128_getlo(val.s); h = int128_gethi(val.s); - asm("0: ldxp %0, xzr, %1\n\t" - "stxp %w0, %2, %3, %1\n\t" - "cbnz %w0, 0b" - : "=&r"(t), "=Q"(*(__uint128_t *)pv) : "r"(l), "r"(h)); + if (HAVE_al16_fast) { + /* Via HAVE_al16_fast, FEAT_LSE2 is present: STP becomes atomic. 
*/ + asm("stp %1, %2, %0" : "=Q"(*(__uint128_t *)pv) : "r"(l), "r"(h)); + } else { + asm("0: ldxp %0, xzr, %1\n\t" + "stxp %w0, %2, %3, %1\n\t" + "cbnz %w0, 0b" + : "=&r"(t), "=Q"(*(__uint128_t *)pv) : "r"(l), "r"(h)); + } return; } #elif defined(CONFIG_CMPXCHG128) From patchwork Wed May 3 07:06:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678606 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp903753wrs; Wed, 3 May 2023 00:09:47 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ7JMD4/ZTeS9JzGsBPPZsg2cgujsbc4xViAt9p53X+kYgPGnCN+x5o8Ki5OWlTd489nEnv7 X-Received: by 2002:a05:6214:4118:b0:5ef:519:b27 with SMTP id kc24-20020a056214411800b005ef05190b27mr7424189qvb.35.1683097787756; Wed, 03 May 2023 00:09:47 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683097787; cv=none; d=google.com; s=arc-20160816; b=DWb3IVVKkZBBz4Ka+bbovln8BIm5jCJXcNILSxS3eeDrMIi/7Ibc7rP9LHhL7XNaG8 MF29BofwMQSppUSQZ/XvSR+kyKMOdvcbaNfC59i9YdwW/d3YCmF175YBTZtiSxOp6VoY m/0mLBlpaP1NpqxZnWs8pGyvRdcOjIdrIpRRkVfiSLTmV0JRwFRI5DcEGmLydqY0jpE6 S49tXIS14WsqBiNQgPsDE25g2DgcXVggAwL+Aw8MTsHh+7t+Ze1Vw6LG1ASY5SIcVmjM mYrFGF4xLGI5y0lEH68kGdXomJ7ypqyyGQV+momZ1+dCU3CKtJSfzkn1UtUsCsJQYzce un7g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=+GIQLXvELUmUJjDCvdIMzpNNEl1FqkUOfANtu8VRqCQ=; b=Xnv3lGVbq/FmGMrxCWErDqzqMAC6tR0KptapGCdLbmJ1bZTw/xsfd2Y41p8BABZg1j iq4XWXK1zD+Qre5JemOAAaOB7aVZC5eBlKSs7CdXDgqnrqJUL3PTWQX+WdnuoOgLdMWY 65JMjcpioB9q9nfs3sSnSfOcuBK3rzS4OG/FY/2iTrkTCT+5mjU9uAJi1ThS6vfUGjJ6 dhXvlHnslCfx2Qe0hrmC9yYL/4qy1ZjT/k7tLbj5So8I2wTSwzLOfAdGh/iqq/csH6Lv 0QScwe4sA9Ke2yjHSwleYHBskHG01GvfQVgJX3MFwup7IaoQHHzF2m2V8+1BC9ZxZvpA 2FFg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=x4D7Vz1I; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 20/57] tcg: Introduce TCG_OPF_TYPE_MASK Date: Wed, 3 May 2023 08:06:19 +0100 Message-Id: <20230503070656.1746170-21-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::334; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x334.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Reorg TCG_OPF_64BIT and TCG_OPF_VECTOR into a two-bit field so that we can add TCG_OPF_128BIT without requiring another bit. Reviewed-by: Philippe Mathieu-Daudé Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- include/tcg/tcg.h | 22 ++++++++++++---------- tcg/optimize.c | 15 ++++++++++++--- tcg/tcg.c | 4 ++-- tcg/aarch64/tcg-target.c.inc | 8 +++++--- tcg/tci/tcg-target.c.inc | 3 ++- 5 files changed, 33 insertions(+), 19 deletions(-) diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h index b19e167e1d..efbd891f87 100644 --- a/include/tcg/tcg.h +++ b/include/tcg/tcg.h @@ -932,24 +932,26 @@ typedef struct TCGArgConstraint { /* Bits for TCGOpDef->flags, 8 bits available, all used. */ enum { + /* Two bits describing the output type. */ + TCG_OPF_TYPE_MASK = 0x03, + TCG_OPF_32BIT = 0x00, + TCG_OPF_64BIT = 0x01, + TCG_OPF_VECTOR = 0x02, + TCG_OPF_128BIT = 0x03, /* Instruction exits the translation block. */ - TCG_OPF_BB_EXIT = 0x01, + TCG_OPF_BB_EXIT = 0x04, /* Instruction defines the end of a basic block. */ - TCG_OPF_BB_END = 0x02, + TCG_OPF_BB_END = 0x08, /* Instruction clobbers call registers and potentially update globals. */ - TCG_OPF_CALL_CLOBBER = 0x04, + TCG_OPF_CALL_CLOBBER = 0x10, /* Instruction has side effects: it cannot be removed if its outputs are not used, and might trigger exceptions. */ - TCG_OPF_SIDE_EFFECTS = 0x08, - /* Instruction operands are 64-bits (otherwise 32-bits). */ - TCG_OPF_64BIT = 0x10, + TCG_OPF_SIDE_EFFECTS = 0x20, /* Instruction is optional and not implemented by the host, or insn is generic and should not be implemened by the host. */ - TCG_OPF_NOT_PRESENT = 0x20, - /* Instruction operands are vectors. */ - TCG_OPF_VECTOR = 0x40, + TCG_OPF_NOT_PRESENT = 0x40, /* Instruction is a conditional branch. */ - TCG_OPF_COND_BRANCH = 0x80 + TCG_OPF_COND_BRANCH = 0x80, }; typedef struct TCGOpDef { diff --git a/tcg/optimize.c b/tcg/optimize.c index 9614fa3638..37d46f2a1f 100644 --- a/tcg/optimize.c +++ b/tcg/optimize.c @@ -2051,12 +2051,21 @@ void tcg_optimize(TCGContext *s) copy_propagate(&ctx, op, def->nb_oargs, def->nb_iargs); /* Pre-compute the type of the operation. 
*/ - if (def->flags & TCG_OPF_VECTOR) { + switch (def->flags & TCG_OPF_TYPE_MASK) { + case TCG_OPF_VECTOR: ctx.type = TCG_TYPE_V64 + TCGOP_VECL(op); - } else if (def->flags & TCG_OPF_64BIT) { + break; + case TCG_OPF_128BIT: + ctx.type = TCG_TYPE_I128; + break; + case TCG_OPF_64BIT: ctx.type = TCG_TYPE_I64; - } else { + break; + case TCG_OPF_32BIT: ctx.type = TCG_TYPE_I32; + break; + default: + qemu_build_not_reached(); } /* Assume all bits affected, no bits known zero, no sign reps. */ diff --git a/tcg/tcg.c b/tcg/tcg.c index d0afabf194..cb5ca9b612 100644 --- a/tcg/tcg.c +++ b/tcg/tcg.c @@ -2294,7 +2294,7 @@ static void tcg_dump_ops(TCGContext *s, FILE *f, bool have_prefs) nb_iargs = def->nb_iargs; nb_cargs = def->nb_cargs; - if (def->flags & TCG_OPF_VECTOR) { + if ((def->flags & TCG_OPF_TYPE_MASK) == TCG_OPF_VECTOR) { col += ne_fprintf(f, "v%d,e%d,", 64 << TCGOP_VECL(op), 8 << TCGOP_VECE(op)); } @@ -4782,7 +4782,7 @@ static void tcg_reg_alloc_op(TCGContext *s, const TCGOp *op) tcg_out_extrl_i64_i32(s, new_args[0], new_args[1]); break; default: - if (def->flags & TCG_OPF_VECTOR) { + if ((def->flags & TCG_OPF_TYPE_MASK) == TCG_OPF_VECTOR) { tcg_out_vec_op(s, op->opc, TCGOP_VECL(op), TCGOP_VECE(op), new_args, const_args); } else { diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc index 3adc5fd3a3..43acb4fbcb 100644 --- a/tcg/aarch64/tcg-target.c.inc +++ b/tcg/aarch64/tcg-target.c.inc @@ -1921,9 +1921,11 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg args[TCG_MAX_OP_ARGS], const int const_args[TCG_MAX_OP_ARGS]) { - /* 99% of the time, we can signal the use of extension registers - by looking to see if the opcode handles 64-bit data. */ - TCGType ext = (tcg_op_defs[opc].flags & TCG_OPF_64BIT) != 0; + /* + * 99% of the time, we can signal the use of extension registers + * by looking to see if the opcode handles 32-bit data or not. + */ + TCGType ext = (tcg_op_defs[opc].flags & TCG_OPF_TYPE_MASK) != TCG_OPF_32BIT; /* Hoist the loads of the most common arguments. */ TCGArg a0 = args[0]; diff --git a/tcg/tci/tcg-target.c.inc b/tcg/tci/tcg-target.c.inc index 4cf03a579c..e31640d109 100644 --- a/tcg/tci/tcg-target.c.inc +++ b/tcg/tci/tcg-target.c.inc @@ -790,7 +790,8 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, CASE_32_64(sextract) /* Optional (TCG_TARGET_HAS_sextract_*). */ { TCGArg pos = args[2], len = args[3]; - TCGArg max = tcg_op_defs[opc].flags & TCG_OPF_64BIT ? 64 : 32; + TCGArg max = ((tcg_op_defs[opc].flags & TCG_OPF_TYPE_MASK) + == TCG_OPF_32BIT ? 
32 : 64); tcg_debug_assert(pos < max); tcg_debug_assert(pos + len <= max); From patchwork Wed May 3 07:06:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678674 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp909833wrs; Wed, 3 May 2023 00:26:49 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ7tNNt0RCBbf5XtLCJQwjWCAjb2ZMr9jI4VpyUBGyWqmwH/ifqxDSg+SE6YarVmzcsT/VJH X-Received: by 2002:a05:6214:27c8:b0:61b:5dbc:9d85 with SMTP id ge8-20020a05621427c800b0061b5dbc9d85mr7437147qvb.20.1683098808878; Wed, 03 May 2023 00:26:48 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683098808; cv=none; d=google.com; s=arc-20160816; b=cgIw9Vla+MeW+wNd0fng4wh1EtTIC7W90Fqik9OIvVq9Rx/wl6sKilUzZ7uQMPPzra XSSg4rOUY3gUMg1TDe2Vj7PPti02YL/bfUP4HjuBHaRAtVxMbRdJ2S9Hhu0VOsxYOOqc C+YqNtJefk3wt9GzZZLbqmltV8huiPg/U8F4plNSql6r+eAYgDDrJKelTBV5mE1+8a/L /xsS38uDitXEXC+1IT12hxmOz7dYGirMS/vsOonYo0lZUJ8WhGANWXL1EkWq1lR3FraF ZnXG0PnuleX6o12E4tor5IhC1pffHC9K+tKVYMw7pc8TBMS3zET4q7GADmHxqOI5Vy4+ NZLA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=lbibCyTnr9kxedetkEeMeR0n/g84TcpVDVIMMTB9mS4=; b=pmx8EMvdMcODOe8ltH/mjMUJgmIbdFBL87BLnDZKmbKby2YvoGQDiTslObuCTiiWY2 2SS5g4iG+bvmYmagPOZeGLJTKmteJzgr4bwfGkkwY/QAuTZ0dSQVfsSTWRca42j9I08r 70LFAUCnGbrff2SwRT4u9T0OuKHnFj8pj312q4nqRREcSBXkRhmyhb7smrU8DdlhaGfU wvtfKMmksKe817e34HC5XZ9f1/d+tkcqMOMuYt+nNYm8yMjnT7mMt+ygc6g2KUERdACt 1Hd1vOAEPqHAB//dflftDCNWpE1sdx/DbB9PND3VsXk+Pu1dto2U8J84zdywUrrGOfND TKlw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=cujk5uo0; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
[209.51.188.17]) by mx.google.com with ESMTPS id ay35-20020a05620a17a300b0074df4bb07fcsi19069994qkb.531.2023.05.03.00.26.48 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Wed, 03 May 2023 00:26:48 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=cujk5uo0; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1pu6b6-0000lZ-Pb; Wed, 03 May 2023 03:08:20 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1pu6aT-0006pF-Ox for qemu-devel@nongnu.org; Wed, 03 May 2023 03:07:41 -0400 Received: from mail-wm1-x334.google.com ([2a00:1450:4864:20::334]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1pu6a2-0005jW-OD for qemu-devel@nongnu.org; Wed, 03 May 2023 03:07:41 -0400 Received: by mail-wm1-x334.google.com with SMTP id 5b1f17b1804b1-3f1950f569eso30009015e9.2 for ; Wed, 03 May 2023 00:07:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1683097633; x=1685689633; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=lbibCyTnr9kxedetkEeMeR0n/g84TcpVDVIMMTB9mS4=; b=cujk5uo0cS3wrev1l1yyS8/Z9ALAiUePLuWendhDGafxK5Pn8UE9fi737stdRe971n ZJ5DxBzfu5pcDmaN7YRdnfOdjtEzBZMgJ/MgfJCf86VE/wab5v30oizEvK7umMWb1y6c wtD93PUD3NI8usE8nZuHa7ygSiEtSQtfLIs3MN9+C5KMEacnoiiv9GZf49LqUML6diG1 E7LImcMVQF85JPkQuGDB5iLOJUVbTYw3h6uPXcueNI7esn7fbf7cPvf9jI49igKIshUF U4vpykI/1T0LMDrhcU/jV6rAcss2TaRPe1SJobtMadXXtWNfYqCOpqIOc0iqE9Tdk3Np 9LfQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1683097633; x=1685689633; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=lbibCyTnr9kxedetkEeMeR0n/g84TcpVDVIMMTB9mS4=; b=DweE4s90O3GPi+8o0SwMAqDO3EWdJ+syM2aLMeNcjbLa4PQ/vRG2EjWzR9L8Y1NzfV gfqCMX4ZbyLLRWPaQFJrEPha3evAOin4rPRcbGnhnvD8QtRwYjZCkgTCqzEI9akBD69d sGyB4jGPPaJz/PA1Brh3iZgzpw9lhqN2Lra87dvNAcRGJX9l3gCQNVitQZSjuEnCrOwo 4/JYz6MZ1Tv8kIqxVwDLHFJJt6d872VzjNskatYab6RXTrISyPpeIXHXo6pTpeFa3LjV 40a/gFkZO0ysvsX4Oet2YjQ2y+MWvTrchWn/aS6r+GXY3dwAzLrCJlpq1QNHk5Q1Ud8J x1Dw== X-Gm-Message-State: AC+VfDzzg8uJ8+F5Ub0vNYIoYYDpPPzyBIrBaZXk7Nv5l47wHNEbB0CW CpJQUj8nIN01ItAaLSnmCpYGZIlHgH8LYziBbYdm8g== X-Received: by 2002:a7b:c7d0:0:b0:3f1:75b3:60df with SMTP id z16-20020a7bc7d0000000b003f175b360dfmr13232821wmk.41.1683097633369; Wed, 03 May 2023 00:07:13 -0700 (PDT) Received: from stoup.Home ([2a02:c7c:74db:8d00:c01d:9d74:b630:9087]) by smtp.gmail.com with ESMTPSA id q3-20020a1cf303000000b003f3157988f8sm921184wmq.26.2023.05.03.00.07.12 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 03 May 2023 00:07:13 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org, 
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 21/57] tcg/i386: Use full load/store helpers in user-only mode Date: Wed, 3 May 2023 08:06:20 +0100 Message-Id: <20230503070656.1746170-22-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::334; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x334.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Instead of using helper_unaligned_{ld,st}, use the full load/store helpers. This will allow the fast path to increase alignment to implement atomicity while not immediately raising an alignment exception. Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- tcg/i386/tcg-target.c.inc | 52 +++------------------------------------ 1 file changed, 4 insertions(+), 48 deletions(-) diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc index f838683fc3..e78d4d4aa7 100644 --- a/tcg/i386/tcg-target.c.inc +++ b/tcg/i386/tcg-target.c.inc @@ -1776,7 +1776,6 @@ typedef struct { int seg; } HostAddress; -#if defined(CONFIG_SOFTMMU) /* * Because i686 has no register parameters and because x86_64 has xchg * to handle addr/data register overlap, we have placed all input arguments @@ -1812,7 +1811,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l) /* resolve label address */ tcg_patch32(label_ptr[0], s->code_ptr - label_ptr[0] - 4); - if (TARGET_LONG_BITS > TCG_TARGET_REG_BITS) { + if (label_ptr[1]) { tcg_patch32(label_ptr[1], s->code_ptr - label_ptr[1] - 4); } @@ -1834,7 +1833,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l) /* resolve label address */ tcg_patch32(label_ptr[0], s->code_ptr - label_ptr[0] - 4); - if (TARGET_LONG_BITS > TCG_TARGET_REG_BITS) { + if (label_ptr[1]) { tcg_patch32(label_ptr[1], s->code_ptr - label_ptr[1] - 4); } @@ -1844,51 +1843,8 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l) tcg_out_jmp(s, l->raddr); return true; } -#else -static bool tcg_out_fail_alignment(TCGContext *s, TCGLabelQemuLdst *l) -{ - /* resolve label address */ - tcg_patch32(l->label_ptr[0], s->code_ptr - l->label_ptr[0] - 4); - - if (TCG_TARGET_REG_BITS == 32) { - int ofs = 0; - - tcg_out_st(s, TCG_TYPE_PTR, TCG_AREG0, TCG_REG_ESP, ofs); - ofs += 4; - - tcg_out_st(s, TCG_TYPE_I32, l->addrlo_reg, TCG_REG_ESP, ofs); - ofs += 4; - if (TARGET_LONG_BITS == 64) { - tcg_out_st(s, TCG_TYPE_I32, l->addrhi_reg, TCG_REG_ESP, ofs); - ofs += 4; - } - - tcg_out_pushi(s, (uintptr_t)l->raddr); - } else { - tcg_out_mov(s, TCG_TYPE_TL, tcg_target_call_iarg_regs[1], - l->addrlo_reg); - tcg_out_mov(s, TCG_TYPE_PTR, tcg_target_call_iarg_regs[0], TCG_AREG0); - - tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_RAX, (uintptr_t)l->raddr); - tcg_out_push(s, TCG_REG_RAX); - } - - /* "Tail call" to the helper, 
with the return address back inline. */ - tcg_out_jmp(s, (const void *)(l->is_ld ? helper_unaligned_ld - : helper_unaligned_st)); - return true; -} - -static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l) -{ - return tcg_out_fail_alignment(s, l); -} - -static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l) -{ - return tcg_out_fail_alignment(s, l); -} +#ifndef CONFIG_SOFTMMU static HostAddress x86_guest_base = { .index = -1 }; @@ -1920,7 +1876,7 @@ static inline int setup_guest_base_seg(void) return 0; } #endif /* setup_guest_base_seg */ -#endif /* SOFTMMU */ +#endif /* !SOFTMMU */ /* * For softmmu, perform the TLB load and compare. From patchwork Wed May 3 07:06:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678655 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp908237wrs; Wed, 3 May 2023 00:21:51 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ7tW0iVhAQy6Dr+7K9sgWBIgBgHRbXrUziZ9mvVGqvjbRWeOBJ+hSlprxO7mp/6FawPndW+ X-Received: by 2002:a05:6214:3009:b0:61b:65f9:c1e6 with SMTP id ke9-20020a056214300900b0061b65f9c1e6mr6976064qvb.17.1683098511310; Wed, 03 May 2023 00:21:51 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683098511; cv=none; d=google.com; s=arc-20160816; b=LNxliEE8f+Lysc3S7sSG4mgjWdZ3Ha7MK5Vyzlz4vJ9qlTGWQrGDp1ZOpcnzGD3NDx C23E3dHqyC8yJlpLH2HVTBW+2g2AMb0qjfV8w68r1ZDBqtCjQFnlVd7RA7RsHoUu237D rAK7W3x1cydpEBafxno0/hxxBa4rdmhHwTDIgBNywVrUQvSDcDJunXJSao4e955rZ1kC zT0VRdJOkBwyh9Ifl7yWb0yGyMCtQxD24Dq1MrwgCqL9L745xVhQpnUETBH1o5fILP/Q T3dzyBLb/kmoy7EAeUu/BPXID9ZQlFyx4x+wUrxHni17g3ByNBPUy+H4ZaV516D4IGnC GzhQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=5j/yMASWGStjwMM59Hf05R5CdVFVYJotd2cRr9oH9m0=; b=N7D/VLlxKjQPEiUK3bQMjW8PZsDchy6T6HJLz2l7sVHlh8svfLpAUP+0x9ZWrbwhjF nuHTzl9/EP/xfVwnbnYn9jBDmi7DzC+XjR816pmZ0UoL69gnNL30hK7ytIiQhGtWGnFV TEIZ1yrEf2nlLrGwT+X5KnjoebyH/5KBKPcKlxpXvG5Pn4PKNJYV5RaPi7enMtr9Z4Uy hpONtJMwkNIQlHKcv4mtKfWtp0K8EZcBG8iLhQ61mxQ8M/TwbCsqK99znMm8BmHkDQd6 YX8ZZfE+akNqwMM5ISvt8YfeNp6Iv/dHr8dQ1AeQ2oJHRjWAOylI8qYZxj/eKkOfkNYE I9iA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=CaQjqtpS; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
[209.51.188.17]) by mx.google.com with ESMTPS id dy53-20020a05620a60f500b00751424e1615si5713919qkb.556.2023.05.03.00.21.51 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Wed, 03 May 2023 00:21:51 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=CaQjqtpS; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1pu6b9-0000zt-GW; Wed, 03 May 2023 03:08:23 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1pu6aW-00070o-2D for qemu-devel@nongnu.org; Wed, 03 May 2023 03:07:44 -0400 Received: from mail-wm1-x32e.google.com ([2a00:1450:4864:20::32e]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1pu6a4-0005jz-Kg for qemu-devel@nongnu.org; Wed, 03 May 2023 03:07:43 -0400 Received: by mail-wm1-x32e.google.com with SMTP id 5b1f17b1804b1-3f1728c2a57so47624925e9.0 for ; Wed, 03 May 2023 00:07:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1683097634; x=1685689634; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=5j/yMASWGStjwMM59Hf05R5CdVFVYJotd2cRr9oH9m0=; b=CaQjqtpSDch0Exsm8wEG8Gdb9dWKcyj8xeVd9MEC/vB96p6Hd23o/LlGHSzGg2acZy zilBP52CFOe2LQ8GsjwqHWUEFjfwQfI6WP/mPrdzq4tubr7u4OyCA9aHwy+f0Q9wA7lN BCyPRZtKgAMUiRIRJ4CbSjHsLASEeW2M5eeR+4CGNvHPXJypFN2RL1Oq19sVS+3+ni/i edGh7iugrEyZ1u4MUr4O7b8MtFUz09VfK3xhVzBnMlp0PYOA4ocGJ++W4k9dgumZFHtK NGaagHS/dgtuOTeWYwouQVgSsku7qxBe7ZPcTOZyJth6wVMe6crRYTLipdWRap7GoAmm b1Mg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1683097634; x=1685689634; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=5j/yMASWGStjwMM59Hf05R5CdVFVYJotd2cRr9oH9m0=; b=U/ktTdWsIOokA9CYjvepy+hNV7IhZvvC+bC2i6ANfkgHrO06MJ2Gv7NN9YU94+onsU CtV14kW+9nfih27dKPlyXMx5zAE81HgkaK9EDsCqKYL9HuYDD3424HAkWFD+mXVE9Xbn TuVI9Q8Qx/JciDMRwjeh4wlFeEe5wB3tqWWbLhlHk0eOh6GAwvyoFQlNfBLP7izwVz73 Vzhg4Y/HhcfqBrisvplrQBvGeSnBFquFO1UVfL3A5L8vDaOVgSDnQU3Hmi4DdDT0DPga SUxXye+XWNzGjUrdWJScauE2gtQQHbym8tphimw4r3rdozLA+XTe7a03eTdvKhU6Cqd2 q3kA== X-Gm-Message-State: AC+VfDzCMGjE6+jiYfcdyhEl3NM/BW4P6+pIx3MzTnzlbNtK6yK5RI/4 Byz49XZNAZl+w1XU6I4pslAi4C8OEk1PZKcnflwn9w== X-Received: by 2002:a1c:7912:0:b0:3f0:41b3:9256 with SMTP id l18-20020a1c7912000000b003f041b39256mr13784978wme.10.1683097634480; Wed, 03 May 2023 00:07:14 -0700 (PDT) Received: from stoup.Home ([2a02:c7c:74db:8d00:c01d:9d74:b630:9087]) by smtp.gmail.com with ESMTPSA id q3-20020a1cf303000000b003f3157988f8sm921184wmq.26.2023.05.03.00.07.13 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 03 May 2023 00:07:14 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org, 
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 22/57] tcg/aarch64: Use full load/store helpers in user-only mode Date: Wed, 3 May 2023 08:06:21 +0100 Message-Id: <20230503070656.1746170-23-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32e; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x32e.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Instead of using helper_unaligned_{ld,st}, use the full load/store helpers. This will allow the fast path to increase alignment to implement atomicity while not immediately raising an alignment exception. Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- tcg/aarch64/tcg-target.c.inc | 35 ----------------------------------- 1 file changed, 35 deletions(-) diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc index 43acb4fbcb..09c9ecad0f 100644 --- a/tcg/aarch64/tcg-target.c.inc +++ b/tcg/aarch64/tcg-target.c.inc @@ -1595,7 +1595,6 @@ typedef struct { TCGType index_ext; } HostAddress; -#ifdef CONFIG_SOFTMMU static const TCGLdstHelperParam ldst_helper_param = { .ntmp = 1, .tmp = { TCG_REG_TMP } }; @@ -1628,40 +1627,6 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb) tcg_out_goto(s, lb->raddr); return true; } -#else -static void tcg_out_adr(TCGContext *s, TCGReg rd, const void *target) -{ - ptrdiff_t offset = tcg_pcrel_diff(s, target); - tcg_debug_assert(offset == sextract64(offset, 0, 21)); - tcg_out_insn(s, 3406, ADR, rd, offset); -} - -static bool tcg_out_fail_alignment(TCGContext *s, TCGLabelQemuLdst *l) -{ - if (!reloc_pc19(l->label_ptr[0], tcg_splitwx_to_rx(s->code_ptr))) { - return false; - } - - tcg_out_mov(s, TCG_TYPE_TL, TCG_REG_X1, l->addrlo_reg); - tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_X0, TCG_AREG0); - - /* "Tail call" to the helper, with the return address back inline. */ - tcg_out_adr(s, TCG_REG_LR, l->raddr); - tcg_out_goto_long(s, (const void *)(l->is_ld ? helper_unaligned_ld - : helper_unaligned_st)); - return true; -} - -static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l) -{ - return tcg_out_fail_alignment(s, l); -} - -static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l) -{ - return tcg_out_fail_alignment(s, l); -} -#endif /* CONFIG_SOFTMMU */ /* * For softmmu, perform the TLB load and compare. 
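The behavioural point shared by the commit messages in this group of patches (21-27) can be modelled with a few lines of ordinary C. This is a sketch only: full_ld32, qemu_ld32_model and the buffer layout are invented for the illustration and are not QEMU APIs. The idea is that the slow path reached on a misaligned guest access no longer tail-calls helper_unaligned_{ld,st} and faults; it performs the complete access itself, which is what frees the fast path to demand a larger alignment for atomicity.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for the full load helper: handles any address. */
static uint32_t full_ld32(const uint8_t *p)
{
    uint32_t v;
    memcpy(&v, p, sizeof(v));   /* byte-wise, alignment-agnostic */
    return v;
}

/* Stand-in for the generated code: aligned fast path, unified slow path. */
static uint32_t qemu_ld32_model(const uint8_t *p)
{
    if (((uintptr_t)p & 3) == 0) {
        /*
         * Fast path: naturally aligned; in a backend this would be a
         * single (atomic) host load.
         */
        uint32_t v;
        memcpy(&v, p, sizeof(v));
        return v;
    }
    /* Slow path: no alignment trap, just do the full load. */
    return full_ld32(p);
}

int main(void)
{
    uint32_t words[2] = { 0x04030201u, 0x08070605u };
    const uint8_t *buf = (const uint8_t *)words;

    printf("aligned:    0x%08" PRIx32 "\n", qemu_ld32_model(buf));
    printf("misaligned: 0x%08" PRIx32 "\n", qemu_ld32_model(buf + 1));
    return 0;
}

In the real backends the slow path is the same tcg_out_qemu_{ld,st}_slow_path used under softmmu, calling into the existing qemu_ld/st helpers, which is why each target can delete its user-only tcg_out_fail_alignment variant as the diffs above and below do.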
From patchwork Wed May 3 07:06:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678626 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp905174wrs; Wed, 3 May 2023 00:13:12 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ4WywXBGudDYQUdgWezmz4QqV0pbTEKI4RqPd92xrqJn8oopXmhT+6Wn2YBw3BjaFWTxALI X-Received: by 2002:a05:6214:4116:b0:56e:a976:7d16 with SMTP id kc22-20020a056214411600b0056ea9767d16mr8746751qvb.51.1683097992728; Wed, 03 May 2023 00:13:12 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683097992; cv=none; d=google.com; s=arc-20160816; b=NBjoC4l/ozZewG++IJfjDNqM/Q14bHJIv9bTkP+Jaii4eS0YjcltciMfxNxR4qZlRw 9Ua7KnmmhlE/iUG3N2+VM2Djnh4eacZHEYLfE6KJDYtSZbAmjHDa7n9UCUOT9HUMylZP UQmTzsiQJaUC45fM2Hck1rk2FIvvXnmveD3DG+MxrdNaOm0foApWbob2wNJ8un1FXR5/ X9YmR1TPuBuulpbThiqnDi1Oa/nZ9lsB2Z1dVDTLY8PFaMc5yyomN4soTf6XXQKMWahT bnHzzqsWzn7O/0WWjrsE748vGKhmVQBvFsOuS4d3X7YxxVYG4vslhBy6p7yuu4m59bqP H7yw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=RwxIOwfcK07f8Ih0O3BZ3uVtj/nizIzXDkTDJdsGEMk=; b=sSkZgL9hD0o2FIrJQ7h1JX6QqTFx7UkAG/0rEj1LixXwf6jkcrgxuHbh8OCcKcC7UJ tgGMnVRcAFXcHoIzwaHFKD/LKlec7ilhWODQjKQNOnkgSF78KkyFWoGXiIv7hZRQZF7/ NPaAw+nSzXCY0Kc3EXODZ+jHBvhov3DAz5wwJEcIplUJ4MGj3SQn06/oMV9Yb2L1WT11 HQ6pPj3mLP1Ezuvw4pkwCRgf9o+3bj2ULXVQGmHXLRzM62wDTEywNhYktY1BW76ur5gs 9ceWHzo6APGmEFHuWejD0vx7IztxQm3KMMxDXt9oryEdSBLuBSk7oTRcv02lMkOYHlh2 +RrQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=Ar9PbvoX; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
[209.51.188.17]) by mx.google.com with ESMTPS id r16-20020ac85c90000000b003f1f38ea856si7741996qta.317.2023.05.03.00.13.12 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Wed, 03 May 2023 00:13:12 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=Ar9PbvoX; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1pu6b9-0000yF-3d; Wed, 03 May 2023 03:08:23 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1pu6aW-00070U-1t for qemu-devel@nongnu.org; Wed, 03 May 2023 03:07:44 -0400 Received: from mail-wm1-x336.google.com ([2a00:1450:4864:20::336]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1pu6a4-0005ka-L0 for qemu-devel@nongnu.org; Wed, 03 May 2023 03:07:43 -0400 Received: by mail-wm1-x336.google.com with SMTP id 5b1f17b1804b1-3f19afc4f60so29573105e9.1 for ; Wed, 03 May 2023 00:07:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1683097635; x=1685689635; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=RwxIOwfcK07f8Ih0O3BZ3uVtj/nizIzXDkTDJdsGEMk=; b=Ar9PbvoXEisP7CprGBEplweyw1ndLyo4wA/3DQ2mgQuwfkD1lZo81FDOV9V6KSKf2/ U+lNP943moxxmCET4sjKRGpFPz9dzbq7AIq1kWYIs4Tyb5/+oFGmL7ISYyUo+ReEogu4 ZVZpUiIfEsF+7YjicK0mLD+egTz1YHpuZnkgHt2oxmTqwwguhZK6lzXPiC8HN50uu5UG b16JUmwmzlApGmhpEpxHNMsgmC+zCkz8/AL4nqee8S+CwFL8mP/mzDDRRFS/dJWeZWJj 2rjB1k1PxqCiXPC4yRhhNbIFCJyAsua2JnyD28/uNfxXEMyzD5kSERz6Ofal8b6yNZMY h5wQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1683097635; x=1685689635; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=RwxIOwfcK07f8Ih0O3BZ3uVtj/nizIzXDkTDJdsGEMk=; b=fFxNfjwHECO9hsWGjjs8PzHsKUUwrkeODlnI04i1FV9k9ASPSwYEbv7iUtjaqc7R40 5rpO6gc13fY2hlsn7eiKOh+E2ahyBoOee/hVAzUoUaANwJaTqyppHrJnUzthLulROMQi fDo5QkXf+9r0uuPjGTojE4lP/0Zcv0eMXqSr6VIaHYXwQ4qz7afUcBth7kAlI4LbgyOA lPCLolftcw69X9enA3vVGH7pPEFDZWHJZsEIQP6zAJb1aPLN9R6RG+8i7ARa89CtCz7/ GGKmMtYNoHTUqmEnvx5VjIJTE/yS4aGkxNuVOb+vr2jxCAc94V+OJmRwzaYdNZ+oJH4X B6Aw== X-Gm-Message-State: AC+VfDzH4BGmbVQb+j7qUXrc8M/+mi4N/x67d0Z0S1daLSANzfFpZj9+ d09UWj6AdJgtgHyn2ace86Pez2pdGLHWNIUxNYsZKQ== X-Received: by 2002:a5d:452b:0:b0:2fe:6b1e:3818 with SMTP id j11-20020a5d452b000000b002fe6b1e3818mr13618694wra.51.1683097635211; Wed, 03 May 2023 00:07:15 -0700 (PDT) Received: from stoup.Home ([2a02:c7c:74db:8d00:c01d:9d74:b630:9087]) by smtp.gmail.com with ESMTPSA id q3-20020a1cf303000000b003f3157988f8sm921184wmq.26.2023.05.03.00.07.14 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 03 May 2023 00:07:14 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org, 
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 23/57] tcg/ppc: Use full load/store helpers in user-only mode Date: Wed, 3 May 2023 08:06:22 +0100 Message-Id: <20230503070656.1746170-24-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::336; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x336.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Instead of using helper_unaligned_{ld,st}, use the full load/store helpers. This will allow the fast path to increase alignment to implement atomicity while not immediately raising an alignment exception. Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- tcg/ppc/tcg-target.c.inc | 44 ---------------------------------------- 1 file changed, 44 deletions(-) diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc index 0963156a78..733f67c7a5 100644 --- a/tcg/ppc/tcg-target.c.inc +++ b/tcg/ppc/tcg-target.c.inc @@ -1962,7 +1962,6 @@ static const uint32_t qemu_stx_opc[(MO_SIZE + MO_BSWAP) + 1] = { [MO_BSWAP | MO_UQ] = STDBRX, }; -#if defined (CONFIG_SOFTMMU) static TCGReg ldst_ra_gen(TCGContext *s, const TCGLabelQemuLdst *l, int arg) { if (arg < 0) { @@ -2012,49 +2011,6 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb) tcg_out_b(s, 0, lb->raddr); return true; } -#else -static bool tcg_out_fail_alignment(TCGContext *s, TCGLabelQemuLdst *l) -{ - if (!reloc_pc14(l->label_ptr[0], tcg_splitwx_to_rx(s->code_ptr))) { - return false; - } - - if (TCG_TARGET_REG_BITS < TARGET_LONG_BITS) { - TCGReg arg = TCG_REG_R4; - - arg |= (TCG_TARGET_CALL_ARG_I64 == TCG_CALL_ARG_EVEN); - if (l->addrlo_reg != arg) { - tcg_out_mov(s, TCG_TYPE_I32, arg, l->addrhi_reg); - tcg_out_mov(s, TCG_TYPE_I32, arg + 1, l->addrlo_reg); - } else if (l->addrhi_reg != arg + 1) { - tcg_out_mov(s, TCG_TYPE_I32, arg + 1, l->addrlo_reg); - tcg_out_mov(s, TCG_TYPE_I32, arg, l->addrhi_reg); - } else { - tcg_out_mov(s, TCG_TYPE_I32, TCG_REG_R0, arg); - tcg_out_mov(s, TCG_TYPE_I32, arg, arg + 1); - tcg_out_mov(s, TCG_TYPE_I32, arg + 1, TCG_REG_R0); - } - } else { - tcg_out_mov(s, TCG_TYPE_TL, TCG_REG_R4, l->addrlo_reg); - } - tcg_out_mov(s, TCG_TYPE_TL, TCG_REG_R3, TCG_AREG0); - - /* "Tail call" to the helper, with the return address back inline. */ - tcg_out_call_int(s, 0, (const void *)(l->is_ld ? 
helper_unaligned_ld - : helper_unaligned_st)); - return true; -} - -static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l) -{ - return tcg_out_fail_alignment(s, l); -} - -static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l) -{ - return tcg_out_fail_alignment(s, l); -} -#endif /* SOFTMMU */ typedef struct { TCGReg base; From patchwork Wed May 3 07:06:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678614 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp904315wrs; Wed, 3 May 2023 00:11:14 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6MMIFyOhJ5uAIBD6DErOvn0gi874jvGYLqRsMRXWv0y4q5jSTYbmxkBpna2k2BJlwu9NzX X-Received: by 2002:a05:6214:1d26:b0:5ef:d5b0:c33f with SMTP id f6-20020a0562141d2600b005efd5b0c33fmr1806873qvd.2.1683097874402; Wed, 03 May 2023 00:11:14 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683097874; cv=none; d=google.com; s=arc-20160816; b=S2h/AD6VwBVWvtR5v2cNB0NGHEufIQ4/cot+68krJL6M4QxzwoSiQ7RYMKyW7QH93g CdC2aXwQBrduQBEtrVblnsw9qIXttBnXNDwid8ftqTkKp3ZY1+yJKKCDrm9SIOWgc2m0 3+hgR2aRumpNQkQjzCaBuEwqqr/PMDmWr0MQAxbZTT7wTeKUCV1LtNompqfzdVi/Jvgf jHK8pchJ/nERnWEk3s7vQ0ik7B6wfrp1g/keqfYkCeh1+4bOBgxocvB35eEpgfBm9s1f Hnc/F6j0cZKOHAGoT+uaFXq/13cdkRj2HU5s763mmLXClnmJPRMiUgw3Hu/zV2yUqUZt yBjg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=FBYlnrp5r0dnt/Qn2+Cyqv4p//zkrum+IysmcShvf5s=; b=rFfefR1QV5ve8UM39ve/c5XHDy0NK6KnYdts0oLoBAOikoERY97VOZsG92ACd4IjRo Hh5jD9TDB0u5pFMk0c0wMdEM1wNuRgxRSswCPtBBYp0fGbmV/iywFNO0vXDo4nG1mxrU Xp6GJLxqI0h9Z1W7T1jK/uB0eAQ7kIeZo1xrQ4vBPx/xgpWq4o7gmnucg9yXpbRTf2up 3jAy4byQd3sG+pE0AAIXewAD863l4bh5onz1Rl2NNI26l/W0EHIbz1TLUbLxgqrB9Esl 3VgJVsEyDVFLApk/haEuw+AT5I43yf8qtGJFwwkb3RDSwQbI//7IwTH7Rc4U/DDrbXW9 3L1w== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b="fq/Q/Fim"; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
[209.51.188.17]) by mx.google.com with ESMTPS id v21-20020a05620a0f1500b00747a2434db3si18382585qkl.767.2023.05.03.00.11.14 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Wed, 03 May 2023 00:11:14 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b="fq/Q/Fim"; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1pu6bk-0004jh-Ud; Wed, 03 May 2023 03:09:00 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1pu6al-0008F7-S9 for qemu-devel@nongnu.org; Wed, 03 May 2023 03:08:00 -0400 Received: from mail-wm1-x32a.google.com ([2a00:1450:4864:20::32a]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1pu6aF-0005kw-Ao for qemu-devel@nongnu.org; Wed, 03 May 2023 03:07:59 -0400 Received: by mail-wm1-x32a.google.com with SMTP id 5b1f17b1804b1-3f19a80a330so29503455e9.2 for ; Wed, 03 May 2023 00:07:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1683097636; x=1685689636; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=FBYlnrp5r0dnt/Qn2+Cyqv4p//zkrum+IysmcShvf5s=; b=fq/Q/Fim4Jowm2cI85zyikEMpFfBtR77DowUda/NBWVIKllP5uPYynmWMACj/xapBK HkORVmtzIauZfQZKyXLER4o3RKAN4JxudX8TsmCDFL2rS3yQ21r0vOpx7ZSJXvw2YYgc 55zfYEudQgCg9K/p6KTj3Y+TP2ZwnUyg8IZQdsfq1abFo7Fw2BYJDFwaMscp4sB3nB4i p9jF2DvfwegHbBOnp2y13Y/Whh/46kBJSVThCByTUMofc9RR6tQUt+BLKO2M/r2zPJwq gUlvFe5ijZ09VWbSz6S86yCKKjnxaSqmepNrUXhfwhfbQSC4MiWfEez4yKu/vy1npTOV oMqA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1683097636; x=1685689636; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=FBYlnrp5r0dnt/Qn2+Cyqv4p//zkrum+IysmcShvf5s=; b=BN4D2nHzS0P4qGnBfcjw1bHhW/7sOrxlkUVGJEHEsWN1mPUbViCKHzug+ETvWb7wVM dlMOwMZV7Jx0ae6cPwNUJWSswo5RnPctRbixnJkqXr6vuhGNG9GdNoDeNgwHWGyFOf6V QQjzq3I+QSleGceEce6e1XPtTZEFIZ6g2zYzTmKMY0+u3wllwUmvo4dPLx74jymUvxHz xd1w4NYsaEbldLI97Hr4Xzmg9qhlKrj7eqaDBb0AX2PAa2+3A8diBeHLDwHeeMLCScvL mO/vs1RSWpu7pkgVh5bswUycU/fS9tlpeQJkM9RNHqBlAKSJZCLU4lGZm6Sj3rbkODRg CQFw== X-Gm-Message-State: AC+VfDwnLqHBXyugkSORTyYv7woXV1weMdU2keiqatHMwlW+5ZJpsPy9 UB+ZT1qPndV0TLlOnaaUYoEgO42msvO6LlV3lUrTjg== X-Received: by 2002:a05:600c:b54:b0:3f0:a785:f0e0 with SMTP id k20-20020a05600c0b5400b003f0a785f0e0mr13230204wmr.40.1683097635906; Wed, 03 May 2023 00:07:15 -0700 (PDT) Received: from stoup.Home ([2a02:c7c:74db:8d00:c01d:9d74:b630:9087]) by smtp.gmail.com with ESMTPSA id q3-20020a1cf303000000b003f3157988f8sm921184wmq.26.2023.05.03.00.07.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 03 May 2023 00:07:15 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, 
qemu-arm@nongnu.org, qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 24/57] tcg/loongarch64: Use full load/store helpers in user-only mode Date: Wed, 3 May 2023 08:06:23 +0100 Message-Id: <20230503070656.1746170-25-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32a; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x32a.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Instead of using helper_unaligned_{ld,st}, use the full load/store helpers. This will allow the fast path to increase alignment to implement atomicity while not immediately raising an alignment exception. Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- tcg/loongarch64/tcg-target.c.inc | 30 ------------------------------ 1 file changed, 30 deletions(-) diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc index d1bc29826f..e651ec5c71 100644 --- a/tcg/loongarch64/tcg-target.c.inc +++ b/tcg/loongarch64/tcg-target.c.inc @@ -783,7 +783,6 @@ static bool tcg_out_sti(TCGContext *s, TCGType type, TCGArg val, * Load/store helpers for SoftMMU, and qemu_ld/st implementations */ -#if defined(CONFIG_SOFTMMU) static bool tcg_out_goto(TCGContext *s, const tcg_insn_unit *target) { tcg_out_opc_b(s, 0); @@ -822,35 +821,6 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l) tcg_out_call_int(s, qemu_st_helpers[opc & MO_SIZE], false); return tcg_out_goto(s, l->raddr); } -#else -static bool tcg_out_fail_alignment(TCGContext *s, TCGLabelQemuLdst *l) -{ - /* resolve label address */ - if (!reloc_br_sk16(l->label_ptr[0], tcg_splitwx_to_rx(s->code_ptr))) { - return false; - } - - tcg_out_mov(s, TCG_TYPE_TL, TCG_REG_A1, l->addrlo_reg); - tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_A0, TCG_AREG0); - - /* tail call, with the return address back inline. */ - tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_RA, (uintptr_t)l->raddr); - tcg_out_call_int(s, (const void *)(l->is_ld ? 
helper_unaligned_ld - : helper_unaligned_st), true); - return true; -} - -static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l) -{ - return tcg_out_fail_alignment(s, l); -} - -static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l) -{ - return tcg_out_fail_alignment(s, l); -} - -#endif /* CONFIG_SOFTMMU */ typedef struct { TCGReg base; From patchwork Wed May 3 07:06:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678677 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp910116wrs; Wed, 3 May 2023 00:27:44 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ62sqDZpAJVPuhmM0suL/+ZLYwQuw2PtjwSIhYCGFqaAFL0+QR3GNKz7jNYTF8w91Mx0eaQ X-Received: by 2002:a05:6214:20e7:b0:5ad:2a05:ddd1 with SMTP id 7-20020a05621420e700b005ad2a05ddd1mr7114321qvk.34.1683098864469; Wed, 03 May 2023 00:27:44 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683098864; cv=none; d=google.com; s=arc-20160816; b=L7r/FfF9/YX7joS7qnm4FlyuV+eSHk+fQRrisIsS9YcBBivXCA1pkZ0InfUcZxNWJQ TNyfuvw6tkkWA0Ik/7gnJLtDcj+VcwvuVPSMWSUxBHWUKaxpZ0oKEPHloGsZrDCimn7B Ezii08a53FARcDpVtPBXYPOSVzgK0k7hFa7Ci51GNQ54b0rNopKIu5hC5Omy2BaFKI18 GXmLDOmzJ3grbabGuA//a9663BdEwUmElNjCv9ji7Ne+hfX/yr/2PYVexaZe2/UlGZ1w R/MAZ7/hVkQoiDTyS1lvHHGXuBfOjNISEIJxgHd8w1sqmf961JVF6bdqkiZL5d+e+tQN 1TpA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=FTTMHsLpwQYpWnG78YNnZ0NpPI0wmyIkbfAohw2iqQs=; b=dRCqif/m2UR/4fLOjifDEvt6u+MwHxItMTzm1Pd65RYYYbh2l6Y6AhD1DUcVTchvPT +VfmFYYQLXP2TC2uKtQYqykiRcoPs0unlgTOZrjYpN9vmNdGkL2X5oArYO24kNdZvjnS rGzNgdis5gDkZB+pDxg53ZwyTwqJYStjNLXQp91KzcXTk9LGtBaPUylQ7/qWA+TCIqKd ZkDJ5ReT6yxClLrZJR6nJpH1G+h2sD5BFFD7LzhQ+XMgLJOeeNK24FoJhxf3nzCsXNPk tlWk3VwRD93D+iEX0arXron9qt2rZPOgj16eS25C5IE9DmpFE9xSl1DEcxxOc+Y2G+XK FK6Q== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b="EE/BdF3L"; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
[209.51.188.17]) by mx.google.com with ESMTPS id im4-20020a056214246400b00618bdab5ca6si3533851qvb.55.2023.05.03.00.27.44 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Wed, 03 May 2023 00:27:44 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b="EE/BdF3L"; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1pu6bV-0002qs-FZ; Wed, 03 May 2023 03:08:45 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1pu6ab-0007V7-0o for qemu-devel@nongnu.org; Wed, 03 May 2023 03:07:49 -0400 Received: from mail-wr1-x42e.google.com ([2a00:1450:4864:20::42e]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1pu6a6-0005lo-Nv for qemu-devel@nongnu.org; Wed, 03 May 2023 03:07:48 -0400 Received: by mail-wr1-x42e.google.com with SMTP id ffacd0b85a97d-2f95231618aso2891790f8f.1 for ; Wed, 03 May 2023 00:07:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1683097636; x=1685689636; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=FTTMHsLpwQYpWnG78YNnZ0NpPI0wmyIkbfAohw2iqQs=; b=EE/BdF3LUZn0+RFbj9eD8sRo31AJoIRhtSAp1qW8WgHvbz6cWjryCZWl8nq9rpnDRk RrDVLl6XCN+F24/viQwtm1H6beKrAY8RlN/ZiX93tE96zG8xPRuPnKx0J6XH6a99f3nR O5m5UbYfpKCocJPxjPxyRpte0wTyFHTj25W11r+hlVHP10tannfDNwWZyxp5EFVQ0GSK c48pYlXwQJbyxhPewnacqF1ZDwwk/3C/0hx6lDSi2wPp+EbXauhUXVj91KnLdMQXsY6T 9lZHYgVYmY+s3fqK9GKBZOWIfW3KyU63sj1DZLPbftXKwuLIkJbjEsig0ZQB+XwKlZlg cGPw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1683097636; x=1685689636; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=FTTMHsLpwQYpWnG78YNnZ0NpPI0wmyIkbfAohw2iqQs=; b=IJKZTI3YsGMNX1hK7GcQJrODB+uaCxk0vbULO2iicvt0sDOrM1NdA9awPOH18FIwts 8sE7Plljul4XbPGHrPUmPkifrpBOB8ZiRiGIRzX0ze9a4TcISaAPoOUBQGBn5jEAlttg JPNFtSomMloNPvfTJWkjoB1dSJZ0cQixLnSIG2idBbqLtD5B1G3GTjpqYdi6PVSJC2W3 8MBLo5dAeIhWndePkcEuMQvG5pYoHWycsXVqlriJso+mDlb8L9WTPKPy/T3jP9Ceo6x9 jPG+u5bPnjihqdxzAYi+MWE21WBh3pFGzEZWDPpDucEFuUwGsgxKjiqQT+iSSrnT9tRm AUHQ== X-Gm-Message-State: AC+VfDwQJ5dLGXeQcc94PT0t4IN57gwFByAcAcLiEZkZsxLxs89bBFZO gToLZnbvlCPew1Uwe9qvsCbuEz246peXiCRAF6FKAQ== X-Received: by 2002:adf:e881:0:b0:306:30e0:ba44 with SMTP id d1-20020adfe881000000b0030630e0ba44mr5007513wrm.6.1683097636502; Wed, 03 May 2023 00:07:16 -0700 (PDT) Received: from stoup.Home ([2a02:c7c:74db:8d00:c01d:9d74:b630:9087]) by smtp.gmail.com with ESMTPSA id q3-20020a1cf303000000b003f3157988f8sm921184wmq.26.2023.05.03.00.07.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 03 May 2023 00:07:16 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org, 
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 25/57] tcg/riscv: Use full load/store helpers in user-only mode Date: Wed, 3 May 2023 08:06:24 +0100 Message-Id: <20230503070656.1746170-26-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::42e; envelope-from=richard.henderson@linaro.org; helo=mail-wr1-x42e.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Instead of using helper_unaligned_{ld,st}, use the full load/store helpers. This will allow the fast path to increase alignment to implement atomicity while not immediately raising an alignment exception. Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- tcg/riscv/tcg-target.c.inc | 29 ----------------------------- 1 file changed, 29 deletions(-) diff --git a/tcg/riscv/tcg-target.c.inc b/tcg/riscv/tcg-target.c.inc index 8ed0e2f210..19cd4507fb 100644 --- a/tcg/riscv/tcg-target.c.inc +++ b/tcg/riscv/tcg-target.c.inc @@ -846,7 +846,6 @@ static void tcg_out_mb(TCGContext *s, TCGArg a0) * Load/store and TLB */ -#if defined(CONFIG_SOFTMMU) static void tcg_out_goto(TCGContext *s, const tcg_insn_unit *target) { tcg_out_opc_jump(s, OPC_JAL, TCG_REG_ZERO, 0); @@ -893,34 +892,6 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l) tcg_out_goto(s, l->raddr); return true; } -#else -static bool tcg_out_fail_alignment(TCGContext *s, TCGLabelQemuLdst *l) -{ - /* resolve label address */ - if (!reloc_sbimm12(l->label_ptr[0], tcg_splitwx_to_rx(s->code_ptr))) { - return false; - } - - tcg_out_mov(s, TCG_TYPE_TL, TCG_REG_A1, l->addrlo_reg); - tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_A0, TCG_AREG0); - - /* tail call, with the return address back inline. */ - tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_RA, (uintptr_t)l->raddr); - tcg_out_call_int(s, (const void *)(l->is_ld ? helper_unaligned_ld - : helper_unaligned_st), true); - return true; -} - -static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l) -{ - return tcg_out_fail_alignment(s, l); -} - -static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l) -{ - return tcg_out_fail_alignment(s, l); -} -#endif /* CONFIG_SOFTMMU */ /* * For softmmu, perform the TLB load and compare. 
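Stepping back to patch 20/57 earlier in this series, the two-bit type field it introduces is easy to exercise in isolation. The enum values below mirror that diff; op_type_name() and main() are purely illustrative and not QEMU code.

#include <stdio.h>

enum {
    /* Two bits describing the output type, as in the patch. */
    TCG_OPF_TYPE_MASK = 0x03,
    TCG_OPF_32BIT     = 0x00,
    TCG_OPF_64BIT     = 0x01,
    TCG_OPF_VECTOR    = 0x02,
    TCG_OPF_128BIT    = 0x03,
    /* The remaining flags are renumbered to stay within 8 bits. */
    TCG_OPF_BB_EXIT      = 0x04,
    TCG_OPF_BB_END       = 0x08,
    TCG_OPF_CALL_CLOBBER = 0x10,
    TCG_OPF_SIDE_EFFECTS = 0x20,
    TCG_OPF_NOT_PRESENT  = 0x40,
    TCG_OPF_COND_BRANCH  = 0x80,
};

static const char *op_type_name(unsigned flags)
{
    switch (flags & TCG_OPF_TYPE_MASK) {
    case TCG_OPF_32BIT:  return "i32";
    case TCG_OPF_64BIT:  return "i64";
    case TCG_OPF_VECTOR: return "vec";
    case TCG_OPF_128BIT: return "i128";
    }
    return "?";   /* not reachable: the mask has only four values */
}

int main(void)
{
    /* A hypothetical 64-bit op with side effects still fits in one byte. */
    unsigned flags = TCG_OPF_64BIT | TCG_OPF_SIDE_EFFECTS;

    printf("type=%s side-effects=%d\n",
           op_type_name(flags), !!(flags & TCG_OPF_SIDE_EFFECTS));
    return 0;
}

Because the type values share two bits rather than having a bit each, single-bit tests such as (flags & TCG_OPF_VECTOR) or (flags & TCG_OPF_64BIT) no longer work; that patch therefore rewrites them as comparisons against the masked value in optimize.c, tcg.c and the aarch64 and tci backends.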
From patchwork Wed May 3 07:06:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678646 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp907597wrs; Wed, 3 May 2023 00:19:55 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6EJFbXqOpzfBOlJW1D12YTH/d71h+8cb0XfNg8N78w1ppCnlckP44bsAcfKjjWqWPY/Zc/ X-Received: by 2002:ac8:7dc8:0:b0:3f2:54be:eaae with SMTP id c8-20020ac87dc8000000b003f254beeaaemr3643575qte.24.1683098395372; Wed, 03 May 2023 00:19:55 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683098395; cv=none; d=google.com; s=arc-20160816; b=Uc6XWiCec9HNLh7ECCCLLukZzh4GLsIBTeY5fFuSlOBxrzF/JW6pisfh0Q2JbKBafQ haCH3D6F9Rd2gH/XGtTrtuyZTUUZx56DEIzRqVTcO86fqDdSc9686o2iJJtpi+KVUbGK d31j1K52zLChdFHHfMj6VIFwbkxd2qy8GPlOqAylJaZIIM3ksNBIZXhmiY/cPLXuF0ug CHVo5ghWLSD/UguxwF0mTI4jU+BGy1av82qCPotfxkFFrPmDEaPtJ2qUGJzjbCtM8MyB EvaZoNSDMDPQ7oXOV4wALz2s2+yqPfU9SixNZu9Gm4hCG+ldUSvq0I3KChPqBwQH6agk V5yA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=knj9vLRVgMuHbxUsbTpxPOOdPonmqcw2C7rsb1Cv/YA=; b=Kgt6y2SYpzbRDNOy22+a8Z+Ixhmi7RJSEX07MagGA2AW+ccEdeJHKOHOcpkB9I2CPT QtWKu1oFo9o9CokR0Ma+zMM8aUmH9Kzm/tJ0tpMnCvlxaPE+GmAWTsTuuRkRFM+IZYx7 RqSuneReIuZbn2AkhjBCfaEyihJNijJ9pvxashy3nZ9p5kPQ+S35q6HBRjvBt2LfvH3z Z48AmzYL+cOdfCvE7naLOfBI+Qia5lLeZA4DxoFSDUkg5zsOVGKifhTjbpOb4S1A69eH XNNf1iAfUDCN6W2PKh2ZdQgnCOXV4/ZoRfCcumY/YBnEaZuakMOgyW79xic+y2PxVsaa pEgg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=yCaNAwLg; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
[209.51.188.17]) by mx.google.com with ESMTPS id o16-20020ac85a50000000b003ef6163f84bsi16235025qta.207.2023.05.03.00.19.55 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Wed, 03 May 2023 00:19:55 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=yCaNAwLg; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1pu6bm-0004my-JA; Wed, 03 May 2023 03:09:02 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1pu6ap-0008Mw-Hs for qemu-devel@nongnu.org; Wed, 03 May 2023 03:08:04 -0400 Received: from mail-wm1-x32a.google.com ([2a00:1450:4864:20::32a]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1pu6aF-0005c4-VS for qemu-devel@nongnu.org; Wed, 03 May 2023 03:08:03 -0400 Received: by mail-wm1-x32a.google.com with SMTP id 5b1f17b1804b1-3f1950f5628so46175625e9.3 for ; Wed, 03 May 2023 00:07:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1683097637; x=1685689637; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=knj9vLRVgMuHbxUsbTpxPOOdPonmqcw2C7rsb1Cv/YA=; b=yCaNAwLgqG2lVfPtK/AQezWWlBLk51lXTQHiUHce1ujQTleWX1IeXN2+4No0mrd3+Q TDIACK0+XhYFD5VPXa9ockPg0F/GiD0wv+MfQdlCMJqCSrzxIT1KH7rgI1Mj5Ng8DIy2 zjdvzHc8Qxw69z+/DxayZVXDkiryAfVLdgcaZPoJ8oV4aaPPNU6thz7kUrhuQVwg924e IERL0CTU9vwfMHlDYSupLqN/sAR5tt74TGpeE0ok3UAoQYODjksDghj/+vHF/3q537by x0sDqfo9Nt7ozRf4OkzVq6yOBGJ0Yzmm4/lhdWez+G0nuOZ9nLBqYUh4PBNrAKaMpYTc 9jMA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1683097637; x=1685689637; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=knj9vLRVgMuHbxUsbTpxPOOdPonmqcw2C7rsb1Cv/YA=; b=UCeVTzklM5MowKsQPcWa08A7f5KqNeFfjGSibuf2DWFOaFo1CumbCWGB9gYz3707dH 7pusxne4OxJw9SDKb+NL1R9CKoTGuLKgz4Haw78lWm3nvLK5CdwLIxVXvfbKzECFMEqH x122R6lVkjBybDzKuVF/S+CT1cMwO9FhdwYI44AIiu/VZue9G78P36xVfc2zOuQmFbdm RCqIk8Trc0LuHr+2CPV3BmtMIEvarRtNxRrdMtUeUcExjdhZgxtLLAS2HZylCcxtnaZk v02yoBG4nwoWFRKwO7vvjx9ASiyt1lPsR+tf+xaKuxdLWPZ/G73y8QO8D6o4bokh9/Pr U+mA== X-Gm-Message-State: AC+VfDwATSgNGqY+jlCmBdVZMQukciEI+ljNMC3/BIa198z+58NM5Vx2 vyZRphazyJ9CLUnx7fc2765HvOkB+UBfEzLBp7z+Kw== X-Received: by 2002:a05:600c:ad9:b0:3f1:7bac:d411 with SMTP id c25-20020a05600c0ad900b003f17bacd411mr14672180wmr.39.1683097637180; Wed, 03 May 2023 00:07:17 -0700 (PDT) Received: from stoup.Home ([2a02:c7c:74db:8d00:c01d:9d74:b630:9087]) by smtp.gmail.com with ESMTPSA id q3-20020a1cf303000000b003f3157988f8sm921184wmq.26.2023.05.03.00.07.16 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 03 May 2023 00:07:16 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, 
qemu-arm@nongnu.org, qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 26/57] tcg/arm: Adjust constraints on qemu_ld/st Date: Wed, 3 May 2023 08:06:25 +0100 Message-Id: <20230503070656.1746170-27-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32a; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x32a.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Always reserve r3 for tlb softmmu lookup. Fix a bug in user-only ALL_QLDST_REGS, in that r14 is clobbered by the BLNE that leads to the misaligned trap. Signed-off-by: Richard Henderson --- tcg/arm/tcg-target-con-set.h | 16 ++++++++-------- tcg/arm/tcg-target-con-str.h | 5 ++--- tcg/arm/tcg-target.c.inc | 23 ++++++++--------------- 3 files changed, 18 insertions(+), 26 deletions(-) diff --git a/tcg/arm/tcg-target-con-set.h b/tcg/arm/tcg-target-con-set.h index b8849b2478..229ae258ac 100644 --- a/tcg/arm/tcg-target-con-set.h +++ b/tcg/arm/tcg-target-con-set.h @@ -12,19 +12,19 @@ C_O0_I1(r) C_O0_I2(r, r) C_O0_I2(r, rIN) -C_O0_I2(s, s) +C_O0_I2(q, q) C_O0_I2(w, r) -C_O0_I3(s, s, s) -C_O0_I3(S, p, s) +C_O0_I3(q, q, q) +C_O0_I3(Q, p, q) C_O0_I4(r, r, rI, rI) -C_O0_I4(S, p, s, s) -C_O1_I1(r, l) +C_O0_I4(Q, p, q, q) +C_O1_I1(r, q) C_O1_I1(r, r) C_O1_I1(w, r) C_O1_I1(w, w) C_O1_I1(w, wr) C_O1_I2(r, 0, rZ) -C_O1_I2(r, l, l) +C_O1_I2(r, q, q) C_O1_I2(r, r, r) C_O1_I2(r, r, rI) C_O1_I2(r, r, rIK) @@ -39,8 +39,8 @@ C_O1_I2(w, w, wZ) C_O1_I3(w, w, w, w) C_O1_I4(r, r, r, rI, rI) C_O1_I4(r, r, rIN, rIK, 0) -C_O2_I1(e, p, l) -C_O2_I2(e, p, l, l) +C_O2_I1(e, p, q) +C_O2_I2(e, p, q, q) C_O2_I2(r, r, r, r) C_O2_I4(r, r, r, r, rIN, rIK) C_O2_I4(r, r, rI, rI, rIN, rIK) diff --git a/tcg/arm/tcg-target-con-str.h b/tcg/arm/tcg-target-con-str.h index 24b4b59feb..f83f1d3919 100644 --- a/tcg/arm/tcg-target-con-str.h +++ b/tcg/arm/tcg-target-con-str.h @@ -10,9 +10,8 @@ */ REGS('e', ALL_GENERAL_REGS & 0x5555) /* even regs */ REGS('r', ALL_GENERAL_REGS) -REGS('l', ALL_QLOAD_REGS) -REGS('s', ALL_QSTORE_REGS) -REGS('S', ALL_QSTORE_REGS & 0x5555) /* even qstore */ +REGS('q', ALL_QLDST_REGS) +REGS('Q', ALL_QLDST_REGS & 0x5555) /* even qldst */ REGS('w', ALL_VECTOR_REGS) /* diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc index 8b0d526659..a02804dd69 100644 --- a/tcg/arm/tcg-target.c.inc +++ b/tcg/arm/tcg-target.c.inc @@ -353,23 +353,16 @@ static bool patch_reloc(tcg_insn_unit *code_ptr, int type, #define ALL_VECTOR_REGS 0xffff0000u /* - * r0-r2 will be overwritten when reading the tlb entry (softmmu only) - * and r0-r1 doing the byte swapping, so don't use these. - * r3 is removed for softmmu to avoid clashes with helper arguments. 
+ * r0-r3 will be overwritten when reading the tlb entry (softmmu only); + * r14 will be overwritten by the BLNE branching to the slow path. */ #ifdef CONFIG_SOFTMMU -#define ALL_QLOAD_REGS \ +#define ALL_QLDST_REGS \ (ALL_GENERAL_REGS & ~((1 << TCG_REG_R0) | (1 << TCG_REG_R1) | \ (1 << TCG_REG_R2) | (1 << TCG_REG_R3) | \ (1 << TCG_REG_R14))) -#define ALL_QSTORE_REGS \ - (ALL_GENERAL_REGS & ~((1 << TCG_REG_R0) | (1 << TCG_REG_R1) | \ - (1 << TCG_REG_R2) | (1 << TCG_REG_R14) | \ - ((TARGET_LONG_BITS == 64) << TCG_REG_R3))) #else -#define ALL_QLOAD_REGS ALL_GENERAL_REGS -#define ALL_QSTORE_REGS \ - (ALL_GENERAL_REGS & ~((1 << TCG_REG_R0) | (1 << TCG_REG_R1))) +#define ALL_QLDST_REGS (ALL_GENERAL_REGS & ~(1 << TCG_REG_R14)) #endif /* @@ -2203,13 +2196,13 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) return C_O1_I4(r, r, r, rI, rI); case INDEX_op_qemu_ld_i32: - return TARGET_LONG_BITS == 32 ? C_O1_I1(r, l) : C_O1_I2(r, l, l); + return TARGET_LONG_BITS == 32 ? C_O1_I1(r, q) : C_O1_I2(r, q, q); case INDEX_op_qemu_ld_i64: - return TARGET_LONG_BITS == 32 ? C_O2_I1(e, p, l) : C_O2_I2(e, p, l, l); + return TARGET_LONG_BITS == 32 ? C_O2_I1(e, p, q) : C_O2_I2(e, p, q, q); case INDEX_op_qemu_st_i32: - return TARGET_LONG_BITS == 32 ? C_O0_I2(s, s) : C_O0_I3(s, s, s); + return TARGET_LONG_BITS == 32 ? C_O0_I2(q, q) : C_O0_I3(q, q, q); case INDEX_op_qemu_st_i64: - return TARGET_LONG_BITS == 32 ? C_O0_I3(S, p, s) : C_O0_I4(S, p, s, s); + return TARGET_LONG_BITS == 32 ? C_O0_I3(Q, p, q) : C_O0_I4(Q, p, q, q); case INDEX_op_st_vec: return C_O0_I2(w, r); From patchwork Wed May 3 07:06:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678603 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp903528wrs; Wed, 3 May 2023 00:09:03 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ74cKxcunq3FEttcbVSep+J3/lNYShPkR3lnINKmJg2yBvxV8GGc3panPa84o3nlffu0lqM X-Received: by 2002:a05:622a:1387:b0:3e6:4fab:478f with SMTP id o7-20020a05622a138700b003e64fab478fmr30032845qtk.43.1683097743664; Wed, 03 May 2023 00:09:03 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683097743; cv=none; d=google.com; s=arc-20160816; b=C9zZkTWqnpgvyh6XMIEsMA7MUgjdSBwHw8Mca2rtPOoEzuZCAZGBo0zJYVv0shnYBP xNdN3pWv4YUCn5x9vXEBjYIKg/4G111YFm06X5xV8X72vCigasgszy43S0+VnSncZ+CS /oxRYqXYRm2ib0KWYGH/KPwvD9YsXIstDlwV/Xv11QmDk0TiKugKAmB2D8iGbIRjJMdJ SpvdjP3myA4hsqhpfY6AT3KFfKhaL0mq47E+cIlpl7oUV6yjl8VUMOcNiPw0s1qnvUoi YM+gY0hjOeiO0+H3o8Bzk54sFBEiTJdk9dmwogKE0LMR+SQo3TpsdZvIG5AsRSQaHw0m DRMA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=Xpq7kVqaR8GJII9yFdioUA+mHFs0Vvahpm47qCN+m6U=; b=S3u2TtQmBiwNGkVqCWPeULlxCFyh4qqvTRiQwTB9QVVXK9TYZgkAA6e1Z0KJWKFtEH f2wP3fvpbsnhdqBbvZgrgdrYTX2oLKh1hz0O9X5seoE/jdi8l2rxYUluF26MJl5nDbnB O+5U/VrfLI8rEL9Y2JR/5kqL8j6qs3cuVDTKMzPFa4FkObpO9Xptg7sZmNcEbvzIcP43 a0KDIfdghZ4pkza5OaqMwwPKj+SdP6xmn7wpxZB28sbd+q1Xow8eS8RT/CT58BKrbOkR bfmPcTaGhpqbgw5AkpTpBu3xyLqWuoJrJVYvKn9phf3Vs8mUSeLYBpS5oDmUPvl5I/oy Q4zg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=knoefAvh; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted 
From patchwork Wed May 3 07:06:26 2023
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
 qemu-riscv@nongnu.org, qemu-s390x@nongnu.org
Subject: [PATCH v4 27/57] tcg/arm: Use full load/store helpers in user-only mode
Date: Wed, 3 May 2023 08:06:26 +0100
Message-Id: <20230503070656.1746170-28-richard.henderson@linaro.org>
In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org>
References: <20230503070656.1746170-1-richard.henderson@linaro.org>

Instead of using helper_unaligned_{ld,st}, use the full load/store
helpers.  This will allow the fast path to increase alignment to
implement atomicity while not immediately raising an alignment
exception.

Signed-off-by: Richard Henderson
Reviewed-by: Peter Maydell
---
 tcg/arm/tcg-target.c.inc | 45 ----------------------------------------
 1 file changed, 45 deletions(-)

diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc
index a02804dd69..eb0542f32e 100644
--- a/tcg/arm/tcg-target.c.inc
+++ b/tcg/arm/tcg-target.c.inc
@@ -1325,7 +1325,6 @@ typedef struct {
     bool index_scratch;
 } HostAddress;
 
-#ifdef CONFIG_SOFTMMU
 static TCGReg ldst_ra_gen(TCGContext *s, const TCGLabelQemuLdst *l, int arg)
 {
     /* We arrive at the slow path via "BLNE", so R14 contains l->raddr. */
@@ -1368,50 +1367,6 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     tcg_out_goto(s, COND_AL, qemu_st_helpers[opc & MO_SIZE]);
     return true;
 }
-#else
-static bool tcg_out_fail_alignment(TCGContext *s, TCGLabelQemuLdst *l)
-{
-    if (!reloc_pc24(l->label_ptr[0], tcg_splitwx_to_rx(s->code_ptr))) {
-        return false;
-    }
-
-    if (TARGET_LONG_BITS == 64) {
-        /* 64-bit target address is aligned into R2:R3. */
-        TCGMovExtend ext[2] = {
-            { .dst = TCG_REG_R2, .dst_type = TCG_TYPE_I32,
-              .src = l->addrlo_reg,
-              .src_type = TCG_TYPE_I32, .src_ext = MO_UL },
-            { .dst = TCG_REG_R3, .dst_type = TCG_TYPE_I32,
-              .src = l->addrhi_reg,
-              .src_type = TCG_TYPE_I32, .src_ext = MO_UL },
-        };
-        tcg_out_movext2(s, &ext[0], &ext[1], TCG_REG_TMP);
-    } else {
-        tcg_out_mov(s, TCG_TYPE_I32, TCG_REG_R1, l->addrlo_reg);
-    }
-    tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_R0, TCG_AREG0);
-
-    /*
-     * Tail call to the helper, with the return address back inline,
-     * just for the clarity of the debugging traceback -- the helper
-     * cannot return.  We have used BLNE to arrive here, so LR is
-     * already set.
-     */
-    tcg_out_goto(s, COND_AL, (const void *)
-                 (l->is_ld ? helper_unaligned_ld : helper_unaligned_st));
-    return true;
-}
-
-static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
-{
-    return tcg_out_fail_alignment(s, l);
-}
-
-static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
-{
-    return tcg_out_fail_alignment(s, l);
-}
-#endif /* SOFTMMU */
 
 static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
                                            TCGReg addrlo, TCGReg addrhi,
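The rationale in the commit message can be pictured with a small toy model (plain C, not QEMU code, all names invented): a fast path that insists on stronger alignment so the access can be a single atomic host load, and a slow-path helper that simply performs the access instead of raising an alignment trap immediately.

/* Toy model only; it sketches the fast/slow split, not QEMU's codegen. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Stand-in for the full load helper: handles any alignment itself. */
static uint64_t full_load_helper(const uint8_t *p)
{
    uint64_t v;
    memcpy(&v, p, sizeof(v));
    return v;
}

static uint64_t qemu_ld8(const uint8_t *p)
{
    if (((uintptr_t)p & 7) == 0) {
        /* fast path: alignment is strong enough for one atomic host load */
        uint64_t v;
        memcpy(&v, p, sizeof(v));   /* stands in for a single aligned load */
        return v;
    }
    /* slow path: no immediate alignment trap, the helper does the work */
    return full_load_helper(p);
}

int main(void)
{
    static _Alignas(8) uint8_t buf[16] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
    printf("aligned   %#llx\n", (unsigned long long)qemu_ld8(buf));
    printf("unaligned %#llx\n", (unsigned long long)qemu_ld8(buf + 1));
    return 0;
}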
From patchwork Wed May 3 07:06:27 2023
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
 qemu-riscv@nongnu.org, qemu-s390x@nongnu.org
Subject: [PATCH v4 28/57] tcg/mips: Use full load/store helpers in user-only mode
Date: Wed, 3 May 2023 08:06:27 +0100
Message-Id: <20230503070656.1746170-29-richard.henderson@linaro.org>
In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org>
References: <20230503070656.1746170-1-richard.henderson@linaro.org>

Instead of using helper_unaligned_{ld,st}, use the full load/store
helpers.  This will allow the fast path to increase alignment to
implement atomicity while not immediately raising an alignment
exception.

Signed-off-by: Richard Henderson
Reviewed-by: Peter Maydell
---
 tcg/mips/tcg-target.c.inc | 57 ++-------------------------------------
 1 file changed, 2 insertions(+), 55 deletions(-)

diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc
index 7770ef46bd..fa0f334e8d 100644
--- a/tcg/mips/tcg-target.c.inc
+++ b/tcg/mips/tcg-target.c.inc
@@ -1075,7 +1075,6 @@ static void tcg_out_call(TCGContext *s, const tcg_insn_unit *arg,
     tcg_out_nop(s);
 }
 
-#if defined(CONFIG_SOFTMMU)
 /* We have four temps, we might as well expose three of them. */
 static const TCGLdstHelperParam ldst_helper_param = {
     .ntmp = 3, .tmp = { TCG_TMP0, TCG_TMP1, TCG_TMP2 }
@@ -1088,8 +1087,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 
     /* resolve label address */
     if (!reloc_pc16(l->label_ptr[0], tgt_rx)
-        || (TCG_TARGET_REG_BITS < TARGET_LONG_BITS
-            && !reloc_pc16(l->label_ptr[1], tgt_rx))) {
+        || (l->label_ptr[1] && !reloc_pc16(l->label_ptr[1], tgt_rx))) {
         return false;
     }
 
@@ -1118,8 +1116,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 
     /* resolve label address */
     if (!reloc_pc16(l->label_ptr[0], tgt_rx)
-        || (TCG_TARGET_REG_BITS < TARGET_LONG_BITS
-            && !reloc_pc16(l->label_ptr[1], tgt_rx))) {
+        || (l->label_ptr[1] && !reloc_pc16(l->label_ptr[1], tgt_rx))) {
         return false;
     }
 
@@ -1139,56 +1136,6 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     return true;
 }
 
-#else
-static bool tcg_out_fail_alignment(TCGContext *s, TCGLabelQemuLdst *l)
-{
-    void *target;
-
-    if (!reloc_pc16(l->label_ptr[0], tcg_splitwx_to_rx(s->code_ptr))) {
-        return false;
-    }
-
-    if (TCG_TARGET_REG_BITS < TARGET_LONG_BITS) {
-        /* A0 is env, A1 is skipped, A2:A3 is the uint64_t address. */
-        TCGReg a2 = MIPS_BE ? l->addrhi_reg : l->addrlo_reg;
-        TCGReg a3 = MIPS_BE ? l->addrlo_reg : l->addrhi_reg;
-
-        if (a3 != TCG_REG_A2) {
-            tcg_out_mov(s, TCG_TYPE_I32, TCG_REG_A2, a2);
-            tcg_out_mov(s, TCG_TYPE_I32, TCG_REG_A3, a3);
-        } else if (a2 != TCG_REG_A3) {
-            tcg_out_mov(s, TCG_TYPE_I32, TCG_REG_A3, a3);
-            tcg_out_mov(s, TCG_TYPE_I32, TCG_REG_A2, a2);
-        } else {
-            tcg_out_mov(s, TCG_TYPE_I32, TCG_TMP0, TCG_REG_A2);
-            tcg_out_mov(s, TCG_TYPE_I32, TCG_REG_A2, TCG_REG_A3);
-            tcg_out_mov(s, TCG_TYPE_I32, TCG_REG_A3, TCG_TMP0);
-        }
-    } else {
-        tcg_out_mov(s, TCG_TYPE_TL, TCG_REG_A1, l->addrlo_reg);
-    }
-    tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_A0, TCG_AREG0);
-
-    /*
-     * Tail call to the helper, with the return address back inline.
-     * We have arrived here via BNEL, so $31 is already set.
-     */
-    target = (l->is_ld ? helper_unaligned_ld : helper_unaligned_st);
-    tcg_out_call_int(s, target, true);
-    return true;
-}
-
-static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
-{
-    return tcg_out_fail_alignment(s, l);
-}
-
-static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
-{
-    return tcg_out_fail_alignment(s, l);
-}
-#endif /* SOFTMMU */
-
 typedef struct {
     TCGReg base;
     MemOp align;
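The block removed above contains a classic two-register shuffle: getting an arbitrary (lo, hi) pair into the fixed argument registers A2:A3 without clobbering a value that is still needed. A standalone sketch of that ordering logic (illustrative only; the register numbers are invented):

/* Sketch of the safe parallel-move pattern; not QEMU code. */
#include <stdio.h>

enum { A2 = 6, A3 = 7, TMP = 12 };       /* arbitrary encodings for the demo */

static int regs[32];

static void mov(int dst, int src) { regs[dst] = regs[src]; }

static void move_pair(int a2, int a3)    /* want: A2 <- a2, A3 <- a3 */
{
    if (a3 != A2) {                      /* writing A2 first cannot lose a3 */
        mov(A2, a2);
        mov(A3, a3);
    } else if (a2 != A3) {               /* writing A3 first cannot lose a2 */
        mov(A3, a3);
        mov(A2, a2);
    } else {                             /* (a2, a3) == (A3, A2): true swap */
        mov(TMP, A2);
        mov(A2, A3);
        mov(A3, TMP);
    }
}

int main(void)
{
    regs[A2] = 111;
    regs[A3] = 222;
    move_pair(A3, A2);                   /* the swap case */
    printf("A2=%d A3=%d\n", regs[A2], regs[A3]);   /* A2=222 A3=111 */
    return 0;
}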
From patchwork Wed May 3 07:06:28 2023
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
 qemu-riscv@nongnu.org, qemu-s390x@nongnu.org
Subject: [PATCH v4 29/57] tcg/s390x: Use full load/store helpers in user-only mode
Date: Wed, 3 May 2023 08:06:28 +0100
Message-Id: <20230503070656.1746170-30-richard.henderson@linaro.org>
In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org>
References: <20230503070656.1746170-1-richard.henderson@linaro.org>

Instead of using helper_unaligned_{ld,st}, use the full load/store
helpers.  This will allow the fast path to increase alignment to
implement atomicity while not immediately raising an alignment
exception.

Signed-off-by: Richard Henderson
Reviewed-by: Peter Maydell
---
 tcg/s390x/tcg-target.c.inc | 29 -----------------------------
 1 file changed, 29 deletions(-)

diff --git a/tcg/s390x/tcg-target.c.inc b/tcg/s390x/tcg-target.c.inc
index 968977be98..de8aed5f77 100644
--- a/tcg/s390x/tcg-target.c.inc
+++ b/tcg/s390x/tcg-target.c.inc
@@ -1679,7 +1679,6 @@ static void tcg_out_qemu_st_direct(TCGContext *s, MemOp opc, TCGReg data,
     }
 }
 
-#if defined(CONFIG_SOFTMMU)
 static const TCGLdstHelperParam ldst_helper_param = {
     .ntmp = 1, .tmp = { TCG_TMP0 }
 };
@@ -1716,34 +1715,6 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     tgen_gotoi(s, S390_CC_ALWAYS, lb->raddr);
     return true;
 }
-#else
-static bool tcg_out_fail_alignment(TCGContext *s, TCGLabelQemuLdst *l)
-{
-    if (!patch_reloc(l->label_ptr[0], R_390_PC16DBL,
-                     (intptr_t)tcg_splitwx_to_rx(s->code_ptr), 2)) {
-        return false;
-    }
-
-    tcg_out_mov(s, TCG_TYPE_TL, TCG_REG_R3, l->addrlo_reg);
-    tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_R2, TCG_AREG0);
-
-    /* "Tail call" to the helper, with the return address back inline. */
-    tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_R14, (uintptr_t)l->raddr);
-    tgen_gotoi(s, S390_CC_ALWAYS, (const void *)(l->is_ld ? helper_unaligned_ld
-                                                 : helper_unaligned_st));
-    return true;
-}
-
-static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
-{
-    return tcg_out_fail_alignment(s, l);
-}
-
-static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
-{
-    return tcg_out_fail_alignment(s, l);
-}
-#endif /* CONFIG_SOFTMMU */
 
 /*
  * For softmmu, perform the TLB load and compare.
From patchwork Wed May 3 07:06:29 2023
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
 qemu-riscv@nongnu.org, qemu-s390x@nongnu.org
Subject: [PATCH v4 30/57] tcg/sparc64: Allocate %g2 as a third temporary
Date: Wed, 3 May 2023 08:06:29 +0100
Message-Id: <20230503070656.1746170-31-richard.henderson@linaro.org>
In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org>
References: <20230503070656.1746170-1-richard.henderson@linaro.org>

Signed-off-by: Richard Henderson
---
 tcg/sparc64/tcg-target.c.inc | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/tcg/sparc64/tcg-target.c.inc b/tcg/sparc64/tcg-target.c.inc
index e997db2645..64464ab363 100644
--- a/tcg/sparc64/tcg-target.c.inc
+++ b/tcg/sparc64/tcg-target.c.inc
@@ -83,9 +83,10 @@ static const char * const tcg_target_reg_names[TCG_TARGET_NB_REGS] = {
 #define ALL_GENERAL_REGS  MAKE_64BIT_MASK(0, 32)
 #define ALL_QLDST_REGS    (ALL_GENERAL_REGS & ~SOFTMMU_RESERVE_REGS)
 
-/* Define some temporary registers.  T2 is used for constant generation.  */
+/* Define some temporary registers.  T3 is used for constant generation.  */
 #define TCG_REG_T1  TCG_REG_G1
-#define TCG_REG_T2  TCG_REG_O7
+#define TCG_REG_T2  TCG_REG_G2
+#define TCG_REG_T3  TCG_REG_O7
 
 #ifndef CONFIG_SOFTMMU
 # define TCG_GUEST_BASE_REG TCG_REG_I5
@@ -110,7 +111,6 @@ static const int tcg_target_reg_alloc_order[] = {
     TCG_REG_I4,
     TCG_REG_I5,
 
-    TCG_REG_G2,
     TCG_REG_G3,
     TCG_REG_G4,
     TCG_REG_G5,
@@ -492,8 +492,8 @@ static void tcg_out_movi_int(TCGContext *s, TCGType type, TCGReg ret,
 static void tcg_out_movi(TCGContext *s, TCGType type,
                          TCGReg ret, tcg_target_long arg)
 {
-    tcg_debug_assert(ret != TCG_REG_T2);
-    tcg_out_movi_int(s, type, ret, arg, false, TCG_REG_T2);
+    tcg_debug_assert(ret != TCG_REG_T3);
+    tcg_out_movi_int(s, type, ret, arg, false, TCG_REG_T3);
 }
 
 static void tcg_out_ext8s(TCGContext *s, TCGType type, TCGReg rd, TCGReg rs)
@@ -885,10 +885,8 @@ static void tcg_out_jmpl_const(TCGContext *s, const tcg_insn_unit *dest,
 {
     uintptr_t desti = (uintptr_t)dest;
 
-    /* Be careful not to clobber %o7 for a tail call.  */
     tcg_out_movi_int(s, TCG_TYPE_PTR, TCG_REG_T1,
-                     desti & ~0xfff, in_prologue,
-                     tail_call ? TCG_REG_G2 : TCG_REG_O7);
+                     desti & ~0xfff, in_prologue, TCG_REG_T2);
     tcg_out_arithi(s, tail_call ? TCG_REG_G0 : TCG_REG_O7, TCG_REG_T1,
                    desti & 0xfff, JMPL);
 }
 
@@ -1856,6 +1854,7 @@ static void tcg_target_init(TCGContext *s)
     tcg_regset_set_reg(s->reserved_regs, TCG_REG_O6); /* stack pointer */
     tcg_regset_set_reg(s->reserved_regs, TCG_REG_T1); /* for internal use */
     tcg_regset_set_reg(s->reserved_regs, TCG_REG_T2); /* for internal use */
+    tcg_regset_set_reg(s->reserved_regs, TCG_REG_T3); /* for internal use */
 }
 
 #define ELF_HOST_MACHINE  EM_SPARCV9
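A rough sketch (not QEMU's register allocator) of what reserving a register means in the hunk above: reserved registers are recorded in a bitmask, and allocation simply skips them. The encodings and allocation order below are invented for the demo.

/* Illustrative only: a reserved-register set and a trivial allocator. */
#include <stdint.h>
#include <stdio.h>

typedef uint32_t regset;

static void reserve(regset *rs, int reg) { *rs |= UINT32_C(1) << reg; }

static int alloc_reg(regset reserved, const int *order, int n)
{
    for (int i = 0; i < n; i++) {
        if (!(reserved & (UINT32_C(1) << order[i]))) {
            return order[i];            /* first non-reserved candidate */
        }
    }
    return -1;                          /* nothing free */
}

int main(void)
{
    enum { G1 = 1, G2 = 2, G3 = 3, O7 = 15 };  /* assumed encodings */
    regset reserved = 0;
    reserve(&reserved, G1);             /* TCG_REG_T1 */
    reserve(&reserved, G2);             /* TCG_REG_T2, new in this patch */
    reserve(&reserved, O7);             /* TCG_REG_T3 */

    const int order[] = { G1, G2, G3, O7 };
    printf("first free reg = %d\n", alloc_reg(reserved, order, 4)); /* 3 */
    return 0;
}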
From patchwork Wed May 3 07:06:30 2023
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org,
 qemu-arm@nongnu.org, qemu-riscv@nongnu.org, qemu-s390x@nongnu.org
Subject: [PATCH v4 31/57] tcg/sparc64: Rename tcg_out_movi_imm13 to tcg_out_movi_s13
Date: Wed, 3 May 2023 08:06:30 +0100
Message-Id: <20230503070656.1746170-32-richard.henderson@linaro.org>
In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org>
References: <20230503070656.1746170-1-richard.henderson@linaro.org>

Emphasize that the constant is signed.

Signed-off-by: Richard Henderson
---
 tcg/sparc64/tcg-target.c.inc | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/tcg/sparc64/tcg-target.c.inc b/tcg/sparc64/tcg-target.c.inc
index 64464ab363..2e6127d506 100644
--- a/tcg/sparc64/tcg-target.c.inc
+++ b/tcg/sparc64/tcg-target.c.inc
@@ -399,7 +399,7 @@ static void tcg_out_sethi(TCGContext *s, TCGReg ret, uint32_t arg)
     tcg_out32(s, SETHI | INSN_RD(ret) | ((arg & 0xfffffc00) >> 10));
 }
 
-static void tcg_out_movi_imm13(TCGContext *s, TCGReg ret, int32_t arg)
+static void tcg_out_movi_s13(TCGContext *s, TCGReg ret, int32_t arg)
 {
     tcg_out_arithi(s, ret, TCG_REG_G0, arg, ARITH_OR);
 }
@@ -408,7 +408,7 @@ static void tcg_out_movi_imm32(TCGContext *s, TCGReg ret, int32_t arg)
 {
     if (check_fit_i32(arg, 13)) {
         /* A 13-bit constant sign-extended to 64-bits.  */
-        tcg_out_movi_imm13(s, ret, arg);
+        tcg_out_movi_s13(s, ret, arg);
     } else {
         /* A 32-bit constant zero-extended to 64 bits.  */
         tcg_out_sethi(s, ret, arg);
@@ -425,15 +425,15 @@ static void tcg_out_movi_int(TCGContext *s, TCGType type, TCGReg ret,
     tcg_target_long hi, lo = (int32_t)arg;
     tcg_target_long test, lsb;
 
-    /* A 32-bit constant, or 32-bit zero-extended to 64-bits.  */
-    if (type == TCG_TYPE_I32 || arg == (uint32_t)arg) {
-        tcg_out_movi_imm32(s, ret, arg);
+    /* A 13-bit constant sign-extended to 64-bits.  */
+    if (check_fit_tl(arg, 13)) {
+        tcg_out_movi_s13(s, ret, arg);
         return;
     }
 
-    /* A 13-bit constant sign-extended to 64-bits.  */
-    if (check_fit_tl(arg, 13)) {
-        tcg_out_movi_imm13(s, ret, arg);
+    /* A 32-bit constant, or 32-bit zero-extended to 64-bits.  */
+    if (type == TCG_TYPE_I32 || arg == (uint32_t)arg) {
+        tcg_out_movi_imm32(s, ret, arg);
         return;
     }
 
@@ -767,7 +767,7 @@ static void tcg_out_setcond_i32(TCGContext *s, TCGCond cond, TCGReg ret,
 
     default:
         tcg_out_cmp(s, c1, c2, c2const);
-        tcg_out_movi_imm13(s, ret, 0);
+        tcg_out_movi_s13(s, ret, 0);
         tcg_out_movcc(s, cond, MOVCC_ICC, ret, 1, 1);
         return;
     }
@@ -803,11 +803,11 @@ static void tcg_out_setcond_i64(TCGContext *s, TCGCond cond, TCGReg ret,
     /* For 64-bit signed comparisons vs zero, we can avoid the compare
        if the input does not overlap the output.  */
     if (c2 == 0 && !is_unsigned_cond(cond) && c1 != ret) {
-        tcg_out_movi_imm13(s, ret, 0);
+        tcg_out_movi_s13(s, ret, 0);
         tcg_out_movr(s, cond, ret, c1, 1, 1);
     } else {
         tcg_out_cmp(s, c1, c2, c2const);
-        tcg_out_movi_imm13(s, ret, 0);
+        tcg_out_movi_s13(s, ret, 0);
         tcg_out_movcc(s, cond, MOVCC_XCC, ret, 1, 1);
     }
 }
@@ -844,7 +844,7 @@ static void tcg_out_addsub2_i64(TCGContext *s, TCGReg rl, TCGReg rh,
     if (use_vis3_instructions && !is_sub) {
         /* Note that ADDXC doesn't accept immediates.  */
         if (bhconst && bh != 0) {
-            tcg_out_movi_imm13(s, TCG_REG_T2, bh);
+            tcg_out_movi_s13(s, TCG_REG_T2, bh);
             bh = TCG_REG_T2;
         }
         tcg_out_arith(s, rh, ah, bh, ARITH_ADDXC);
@@ -866,7 +866,7 @@ static void tcg_out_addsub2_i64(TCGContext *s, TCGReg rl, TCGReg rh,
          * so the adjustment fits 12 bits.
          */
         if (bhconst) {
-            tcg_out_movi_imm13(s, TCG_REG_T2, bh + (is_sub ? -1 : 1));
+            tcg_out_movi_s13(s, TCG_REG_T2, bh + (is_sub ? -1 : 1));
         } else {
             tcg_out_arithi(s, TCG_REG_T2, bh, 1,
                            is_sub ? ARITH_SUB : ARITH_ADD);
@@ -1036,7 +1036,7 @@ static void tcg_target_qemu_prologue(TCGContext *s)
     tcg_code_gen_epilogue = tcg_splitwx_to_rx(s->code_ptr);
     tcg_out_arithi(s, TCG_REG_G0, TCG_REG_I7, 8, RETURN);
     /* delay slot */
-    tcg_out_movi_imm13(s, TCG_REG_O0, 0);
+    tcg_out_movi_s13(s, TCG_REG_O0, 0);
 
     build_trampolines(s);
 }
@@ -1430,7 +1430,7 @@ static void tcg_out_exit_tb(TCGContext *s, uintptr_t a0)
 {
     if (check_fit_ptr(a0, 13)) {
         tcg_out_arithi(s, TCG_REG_G0, TCG_REG_I7, 8, RETURN);
-        tcg_out_movi_imm13(s, TCG_REG_O0, a0);
+        tcg_out_movi_s13(s, TCG_REG_O0, a0);
         return;
     } else {
         intptr_t tb_diff = tcg_tbrel_diff(s, (void *)a0);
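For reference, a tiny standalone check (not QEMU code) of what "fits in a signed 13-bit immediate" means for tcg_out_movi_s13: the value must lie in [-4096, 4095], i.e. survive sign-extension from bit 12.

/* Standalone range check mirroring the 13-bit signed immediate idea. */
#include <stdint.h>
#include <stdio.h>

static int fits_s13(int64_t v)
{
    return v >= -4096 && v <= 4095;     /* -2^12 .. 2^12 - 1 */
}

int main(void)
{
    const int64_t samples[] = { 0, 4095, 4096, -4096, -4097 };
    for (int i = 0; i < 5; i++) {
        printf("%6lld fits s13 = %d\n",
               (long long)samples[i], fits_s13(samples[i]));
    }
    return 0;                           /* expect: 1 1 0 1 0 */
}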
From patchwork Wed May 3 07:06:31 2023
(version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 03 May 2023 00:07:21 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org, qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 32/57] tcg/sparc64: Rename tcg_out_movi_imm32 to tcg_out_movi_u32 Date: Wed, 3 May 2023 08:06:31 +0100 Message-Id: <20230503070656.1746170-33-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32c; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x32c.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Emphasize that the constant is unsigned. Signed-off-by: Richard Henderson --- tcg/sparc64/tcg-target.c.inc | 24 ++++++++++-------------- 1 file changed, 10 insertions(+), 14 deletions(-) diff --git a/tcg/sparc64/tcg-target.c.inc b/tcg/sparc64/tcg-target.c.inc index 2e6127d506..e244209890 100644 --- a/tcg/sparc64/tcg-target.c.inc +++ b/tcg/sparc64/tcg-target.c.inc @@ -399,22 +399,18 @@ static void tcg_out_sethi(TCGContext *s, TCGReg ret, uint32_t arg) tcg_out32(s, SETHI | INSN_RD(ret) | ((arg & 0xfffffc00) >> 10)); } +/* A 13-bit constant sign-extended to 64 bits. */ static void tcg_out_movi_s13(TCGContext *s, TCGReg ret, int32_t arg) { tcg_out_arithi(s, ret, TCG_REG_G0, arg, ARITH_OR); } -static void tcg_out_movi_imm32(TCGContext *s, TCGReg ret, int32_t arg) +/* A 32-bit constant zero-extended to 64 bits. */ +static void tcg_out_movi_u32(TCGContext *s, TCGReg ret, uint32_t arg) { - if (check_fit_i32(arg, 13)) { - /* A 13-bit constant sign-extended to 64-bits. */ - tcg_out_movi_s13(s, ret, arg); - } else { - /* A 32-bit constant zero-extended to 64 bits. */ - tcg_out_sethi(s, ret, arg); - if (arg & 0x3ff) { - tcg_out_arithi(s, ret, ret, arg & 0x3ff, ARITH_OR); - } + tcg_out_sethi(s, ret, arg); + if (arg & 0x3ff) { + tcg_out_arithi(s, ret, ret, arg & 0x3ff, ARITH_OR); } } @@ -433,7 +429,7 @@ static void tcg_out_movi_int(TCGContext *s, TCGType type, TCGReg ret, /* A 32-bit constant, or 32-bit zero-extended to 64-bits. */ if (type == TCG_TYPE_I32 || arg == (uint32_t)arg) { - tcg_out_movi_imm32(s, ret, arg); + tcg_out_movi_u32(s, ret, arg); return; } @@ -477,13 +473,13 @@ static void tcg_out_movi_int(TCGContext *s, TCGType type, TCGReg ret, /* A 64-bit constant decomposed into 2 32-bit pieces. 
     /* A 64-bit constant decomposed into 2 32-bit pieces.  */
     if (check_fit_i32(lo, 13)) {
         hi = (arg - lo) >> 32;
-        tcg_out_movi_imm32(s, ret, hi);
+        tcg_out_movi_u32(s, ret, hi);
         tcg_out_arithi(s, ret, ret, 32, SHIFT_SLLX);
         tcg_out_arithi(s, ret, ret, lo, ARITH_ADD);
     } else {
         hi = arg >> 32;
-        tcg_out_movi_imm32(s, ret, hi);
-        tcg_out_movi_imm32(s, scratch, lo);
+        tcg_out_movi_u32(s, ret, hi);
+        tcg_out_movi_u32(s, scratch, lo);
         tcg_out_arithi(s, ret, ret, 32, SHIFT_SLLX);
         tcg_out_arith(s, ret, ret, scratch, ARITH_OR);
     }
 }
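A standalone arithmetic check (not QEMU code) of the sethi+or idiom behind tcg_out_movi_u32 above: sethi supplies bits 31..10 (arg & 0xfffffc00) with the rest zeroed, and the immediate OR fills in the low 10 bits, leaving a 64-bit result that is the 32-bit constant zero-extended.

/* Verify that the two-instruction sequence reconstructs any 32-bit value. */
#include <stdint.h>
#include <assert.h>
#include <stdio.h>

static uint64_t movi_u32(uint32_t arg)
{
    uint64_t ret = arg & 0xfffffc00u;   /* sethi: upper 22 bits, rest zero */
    if (arg & 0x3ff) {
        ret |= arg & 0x3ff;             /* or with the low 10 bits */
    }
    return ret;                         /* zero-extended to 64 bits */
}

int main(void)
{
    const uint32_t tests[] = { 0, 0x3ff, 0xdeadbeef, 0xfffffc00, 0xffffffff };
    for (int i = 0; i < 5; i++) {
        assert(movi_u32(tests[i]) == (uint64_t)tests[i]);
    }
    printf("sethi+or reconstructs every 32-bit constant, zero-extended\n");
    return 0;
}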
From patchwork Wed May 3 07:06:32 2023 X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678665 From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 33/57] tcg/sparc64: Split out tcg_out_movi_s32 Date: Wed, 3 May 2023 08:06:32 +0100 Message-Id: <20230503070656.1746170-34-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::329; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x329.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- tcg/sparc64/tcg-target.c.inc | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/tcg/sparc64/tcg-target.c.inc b/tcg/sparc64/tcg-target.c.inc index e244209890..4375a06377 100644 --- a/tcg/sparc64/tcg-target.c.inc +++ b/tcg/sparc64/tcg-target.c.inc @@ -405,6 +405,13 @@ static void tcg_out_movi_s13(TCGContext *s, TCGReg ret, int32_t arg) tcg_out_arithi(s, ret, TCG_REG_G0, arg, ARITH_OR); } +/* A 32-bit constant sign-extended to 64 bits. */ +static void tcg_out_movi_s32(TCGContext *s, TCGReg ret, int32_t arg) +{ + tcg_out_sethi(s, ret, ~arg); + tcg_out_arithi(s, ret, ret, (arg & 0x3ff) | -0x400, ARITH_XOR); +} + /* A 32-bit constant zero-extended to 64 bits. */ static void tcg_out_movi_u32(TCGContext *s, TCGReg ret, uint32_t arg) { @@ -444,8 +451,7 @@ static void tcg_out_movi_int(TCGContext *s, TCGType type, TCGReg ret, /* A 32-bit constant sign-extended to 64-bits. 
*/ if (arg == lo) { - tcg_out_sethi(s, ret, ~arg); - tcg_out_arithi(s, ret, ret, (arg & 0x3ff) | -0x400, ARITH_XOR); + tcg_out_movi_s32(s, ret, arg); return; }
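[A standalone sketch, not part of the patch, of why the sethi/xor pair in tcg_out_movi_s32 produces a sign-extended value. SETHI of ~arg leaves the low 10 bits and the upper 32 bits zero; the 13-bit immediate is sign-extended by the hardware, so the XOR flips bits 63..10 and copies in the low 10 bits of arg. The result equals arg sign-extended exactly when bit 31 of arg is set, and that is the only case the callers use: non-negative values already took the zero-extended path, and the softmmu patch later in the series passes a negative page mask.]

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        int32_t arg = -0x12345678;                   /* example: any constant with bit 31 set */
        uint64_t ret = (uint32_t)~arg & 0xfffffc00;  /* SETHI ~arg: bits 31..10, rest zero */
        int64_t imm = (arg & 0x3ff) | -0x400;        /* 13-bit immediate, sign-extended */
        ret ^= (uint64_t)imm;                        /* ARITH_XOR */
        assert(ret == (uint64_t)(int64_t)arg);       /* arg sign-extended to 64 bits */
        return 0;
    }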
From patchwork Wed May 3 07:06:33 2023 X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678642 From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 34/57] tcg/sparc64: Use standard slow path for softmmu Date: Wed, 3 May 2023 08:06:33 +0100 Message-Id: <20230503070656.1746170-35-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32a; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x32a.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Drop the target-specific trampolines for the standard slow path. This lets us use tcg_out_helper_{ld,st}_args, and handles the new atomicity bits within MemOp. At the same time, use the full load/store helpers for user-only mode. Drop inline unaligned access support for user-only mode, as it does not handle atomicity. Use TCG_REG_T[1-3] in the tlb lookup, instead of TCG_REG_O[0-2]. This allows the constraints to be simplified. Signed-off-by: Richard Henderson --- tcg/sparc64/tcg-target-con-set.h | 2 - tcg/sparc64/tcg-target-con-str.h | 1 - tcg/sparc64/tcg-target.h | 1 + tcg/sparc64/tcg-target.c.inc | 610 +++++++++---------------------- 4 files changed, 182 insertions(+), 432 deletions(-) diff --git a/tcg/sparc64/tcg-target-con-set.h b/tcg/sparc64/tcg-target-con-set.h index 31e6fea1fc..434bf25072 100644 --- a/tcg/sparc64/tcg-target-con-set.h +++ b/tcg/sparc64/tcg-target-con-set.h @@ -12,8 +12,6 @@ C_O0_I1(r) C_O0_I2(rZ, r) C_O0_I2(rZ, rJ) -C_O0_I2(sZ, s) -C_O1_I1(r, s) C_O1_I1(r, r) C_O1_I2(r, r, r) C_O1_I2(r, rZ, rJ) diff --git a/tcg/sparc64/tcg-target-con-str.h b/tcg/sparc64/tcg-target-con-str.h index 8f5c7aef97..0577ec4942 100644 --- a/tcg/sparc64/tcg-target-con-str.h +++ b/tcg/sparc64/tcg-target-con-str.h @@ -9,7 +9,6 @@ * REGS(letter, register_mask) */ REGS('r', ALL_GENERAL_REGS) -REGS('s', ALL_QLDST_REGS) /* * Define constraint letters for constants: diff --git a/tcg/sparc64/tcg-target.h b/tcg/sparc64/tcg-target.h index ffe22b1d21..7434cc99d4 100644 --- a/tcg/sparc64/tcg-target.h +++ b/tcg/sparc64/tcg-target.h @@ -155,6 +155,7 @@ extern bool use_vis3_instructions; #define TCG_TARGET_DEFAULT_MO (0) #define TCG_TARGET_HAS_MEMORY_BSWAP 1 +#define TCG_TARGET_NEED_LDST_LABELS #define TCG_TARGET_NEED_POOL_LABELS #endif diff --git a/tcg/sparc64/tcg-target.c.inc b/tcg/sparc64/tcg-target.c.inc index 4375a06377..0237188d65 100644 --- a/tcg/sparc64/tcg-target.c.inc +++ b/tcg/sparc64/tcg-target.c.inc @@ -27,6 +27,7 @@ #error "unsupported code generation mode" #endif +#include "../tcg-ldst.c.inc" #include "../tcg-pool.c.inc" #ifdef CONFIG_DEBUG_TCG @@ -70,18 +71,7 @@ static const char * const tcg_target_reg_names[TCG_TARGET_NB_REGS] = { #define TCG_CT_CONST_S13 0x200 #define TCG_CT_CONST_ZERO 0x400 -/* - * For softmmu, we need to avoid conflicts with the first 3 - * argument registers to perform the tlb lookup, and to call - * the helper function. 
- */ -#ifdef CONFIG_SOFTMMU -#define SOFTMMU_RESERVE_REGS MAKE_64BIT_MASK(TCG_REG_O0, 3) -#else -#define SOFTMMU_RESERVE_REGS 0 -#endif -#define ALL_GENERAL_REGS MAKE_64BIT_MASK(0, 32) -#define ALL_QLDST_REGS (ALL_GENERAL_REGS & ~SOFTMMU_RESERVE_REGS) +#define ALL_GENERAL_REGS MAKE_64BIT_MASK(0, 32) /* Define some temporary registers. T3 is used for constant generation. */ #define TCG_REG_T1 TCG_REG_G1 @@ -918,82 +908,6 @@ static void tcg_out_mb(TCGContext *s, TCGArg a0) tcg_out32(s, MEMBAR | (a0 & TCG_MO_ALL)); } -#ifdef CONFIG_SOFTMMU -static const tcg_insn_unit *qemu_ld_trampoline[MO_SSIZE + 1]; -static const tcg_insn_unit *qemu_st_trampoline[MO_SIZE + 1]; - -static void build_trampolines(TCGContext *s) -{ - int i; - - for (i = 0; i < ARRAY_SIZE(qemu_ld_helpers); ++i) { - if (qemu_ld_helpers[i] == NULL) { - continue; - } - - /* May as well align the trampoline. */ - while ((uintptr_t)s->code_ptr & 15) { - tcg_out_nop(s); - } - qemu_ld_trampoline[i] = tcg_splitwx_to_rx(s->code_ptr); - - /* Set the retaddr operand. */ - tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_O3, TCG_REG_O7); - /* Tail call. */ - tcg_out_jmpl_const(s, qemu_ld_helpers[i], true, true); - /* delay slot -- set the env argument */ - tcg_out_mov_delay(s, TCG_REG_O0, TCG_AREG0); - } - - for (i = 0; i < ARRAY_SIZE(qemu_st_helpers); ++i) { - if (qemu_st_helpers[i] == NULL) { - continue; - } - - /* May as well align the trampoline. */ - while ((uintptr_t)s->code_ptr & 15) { - tcg_out_nop(s); - } - qemu_st_trampoline[i] = tcg_splitwx_to_rx(s->code_ptr); - - /* Set the retaddr operand. */ - tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_O4, TCG_REG_O7); - - /* Tail call. */ - tcg_out_jmpl_const(s, qemu_st_helpers[i], true, true); - /* delay slot -- set the env argument */ - tcg_out_mov_delay(s, TCG_REG_O0, TCG_AREG0); - } -} -#else -static const tcg_insn_unit *qemu_unalign_ld_trampoline; -static const tcg_insn_unit *qemu_unalign_st_trampoline; - -static void build_trampolines(TCGContext *s) -{ - for (int ld = 0; ld < 2; ++ld) { - void *helper; - - while ((uintptr_t)s->code_ptr & 15) { - tcg_out_nop(s); - } - - if (ld) { - helper = helper_unaligned_ld; - qemu_unalign_ld_trampoline = tcg_splitwx_to_rx(s->code_ptr); - } else { - helper = helper_unaligned_st; - qemu_unalign_st_trampoline = tcg_splitwx_to_rx(s->code_ptr); - } - - /* Tail call. */ - tcg_out_jmpl_const(s, helper, true, true); - /* delay slot -- set the env argument */ - tcg_out_mov_delay(s, TCG_REG_O0, TCG_AREG0); - } -} -#endif - /* Generate global QEMU prologue and epilogue code */ static void tcg_target_qemu_prologue(TCGContext *s) { @@ -1039,8 +953,6 @@ static void tcg_target_qemu_prologue(TCGContext *s) tcg_out_arithi(s, TCG_REG_G0, TCG_REG_I7, 8, RETURN); /* delay slot */ tcg_out_movi_s13(s, TCG_REG_O0, 0); - - build_trampolines(s); } static void tcg_out_nop_fill(tcg_insn_unit *p, int count) @@ -1051,381 +963,224 @@ static void tcg_out_nop_fill(tcg_insn_unit *p, int count) } } -#if defined(CONFIG_SOFTMMU) +static const TCGLdstHelperParam ldst_helper_param = { + .ntmp = 1, .tmp = { TCG_REG_T1 } +}; -/* We expect to use a 13-bit negative offset from ENV. */ -QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) > 0); -QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -(1 << 12)); - -/* Perform the TLB load and compare. - - Inputs: - ADDRLO and ADDRHI contain the possible two parts of the address. - - MEM_INDEX and S_BITS are the memory context and log2 size of the load. - - WHICH is the offset into the CPUTLBEntry structure of the slot to read. - This should be offsetof addr_read or addr_write. 
- - The result of the TLB comparison is in %[ix]cc. The sanitized address - is in the returned register, maybe %o0. The TLB addend is in %o1. */ - -static TCGReg tcg_out_tlb_load(TCGContext *s, TCGReg addr, int mem_index, - MemOp opc, int which) +static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb) { + MemOp opc = get_memop(lb->oi); + MemOp sgn; + + if (!patch_reloc(lb->label_ptr[0], R_SPARC_WDISP19, + (intptr_t)tcg_splitwx_to_rx(s->code_ptr), 0)) { + return false; + } + + /* Use inline tcg_out_ext32s; otherwise let the helper sign-extend. */ + sgn = (opc & MO_SIZE) < MO_32 ? MO_SIGN : 0; + + tcg_out_ld_helper_args(s, lb, &ldst_helper_param); + tcg_out_call(s, qemu_ld_helpers[opc & (MO_SIZE | sgn)], NULL); + tcg_out_ld_helper_ret(s, lb, sgn, &ldst_helper_param); + + tcg_out_bpcc0(s, COND_A, BPCC_A | BPCC_PT, 0); + return patch_reloc(s->code_ptr - 1, R_SPARC_WDISP19, + (intptr_t)lb->raddr, 0); +} + +static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb) +{ + MemOp opc = get_memop(lb->oi); + + if (!patch_reloc(lb->label_ptr[0], R_SPARC_WDISP19, + (intptr_t)tcg_splitwx_to_rx(s->code_ptr), 0)) { + return false; + } + + tcg_out_st_helper_args(s, lb, &ldst_helper_param); + tcg_out_call(s, qemu_st_helpers[opc & MO_SIZE], NULL); + + tcg_out_bpcc0(s, COND_A, BPCC_A | BPCC_PT, 0); + return patch_reloc(s->code_ptr - 1, R_SPARC_WDISP19, + (intptr_t)lb->raddr, 0); +} + +typedef struct { + TCGReg base; + TCGReg index; +} HostAddress; + +/* + * For softmmu, perform the TLB load and compare. + * For useronly, perform any required alignment tests. + * In both cases, return a TCGLabelQemuLdst structure if the slow path + * is required and fill in @h with the host address for the fast path. + */ +static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, + TCGReg addr_reg, MemOpIdx oi, + bool is_ld) +{ + TCGLabelQemuLdst *ldst = NULL; + MemOp opc = get_memop(oi); + unsigned a_bits = get_alignment_bits(opc); + unsigned s_bits = opc & MO_SIZE; + unsigned a_mask; + + /* We don't support unaligned accesses. */ + a_bits = MAX(a_bits, s_bits); + a_mask = (1u << a_bits) - 1; + +#ifdef CONFIG_SOFTMMU + int mem_index = get_mmuidx(oi); int fast_off = TLB_MASK_TABLE_OFS(mem_index); int mask_off = fast_off + offsetof(CPUTLBDescFast, mask); int table_off = fast_off + offsetof(CPUTLBDescFast, table); - const TCGReg r0 = TCG_REG_O0; - const TCGReg r1 = TCG_REG_O1; - const TCGReg r2 = TCG_REG_O2; - unsigned s_bits = opc & MO_SIZE; - unsigned a_bits = get_alignment_bits(opc); - tcg_target_long compare_mask; + int cmp_off = is_ld ? offsetof(CPUTLBEntry, addr_read) + : offsetof(CPUTLBEntry, addr_write); + int add_off = offsetof(CPUTLBEntry, addend); + int compare_mask; + int cc; /* Load tlb_mask[mmu_idx] and tlb_table[mmu_idx]. */ - tcg_out_ld(s, TCG_TYPE_PTR, r0, TCG_AREG0, mask_off); - tcg_out_ld(s, TCG_TYPE_PTR, r1, TCG_AREG0, table_off); + QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) > 0); + QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -(1 << 12)); + tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_T2, TCG_AREG0, mask_off); + tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_T3, TCG_AREG0, table_off); /* Extract the page index, shifted into place for tlb index. 
*/ - tcg_out_arithi(s, r2, addr, TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS, - SHIFT_SRL); - tcg_out_arith(s, r2, r2, r0, ARITH_AND); + tcg_out_arithi(s, TCG_REG_T1, addr_reg, + TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS, SHIFT_SRL); + tcg_out_arith(s, TCG_REG_T1, TCG_REG_T1, TCG_REG_T2, ARITH_AND); /* Add the tlb_table pointer, creating the CPUTLBEntry address into R2. */ - tcg_out_arith(s, r2, r2, r1, ARITH_ADD); + tcg_out_arith(s, TCG_REG_T1, TCG_REG_T1, TCG_REG_T3, ARITH_ADD); - /* Load the tlb comparator and the addend. */ - tcg_out_ld(s, TCG_TYPE_TL, r0, r2, which); - tcg_out_ld(s, TCG_TYPE_PTR, r1, r2, offsetof(CPUTLBEntry, addend)); + /* Load the tlb comparator and the addend. */ + tcg_out_ld(s, TCG_TYPE_TL, TCG_REG_T2, TCG_REG_T1, cmp_off); + tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_T1, TCG_REG_T1, add_off); + h->base = TCG_REG_T1; - /* Mask out the page offset, except for the required alignment. - We don't support unaligned accesses. */ - if (a_bits < s_bits) { - a_bits = s_bits; - } - compare_mask = (tcg_target_ulong)TARGET_PAGE_MASK | ((1 << a_bits) - 1); + /* Mask out the page offset, except for the required alignment. */ + compare_mask = TARGET_PAGE_MASK | a_mask; if (check_fit_tl(compare_mask, 13)) { - tcg_out_arithi(s, r2, addr, compare_mask, ARITH_AND); + tcg_out_arithi(s, TCG_REG_T3, addr_reg, compare_mask, ARITH_AND); } else { - tcg_out_movi(s, TCG_TYPE_TL, r2, compare_mask); - tcg_out_arith(s, r2, addr, r2, ARITH_AND); + tcg_out_movi_s32(s, TCG_REG_T3, compare_mask); + tcg_out_arith(s, TCG_REG_T3, addr_reg, TCG_REG_T3, ARITH_AND); } - tcg_out_cmp(s, r0, r2, 0); + tcg_out_cmp(s, TCG_REG_T2, TCG_REG_T3, 0); - /* If the guest address must be zero-extended, do so now. */ + ldst = new_ldst_label(s); + ldst->is_ld = is_ld; + ldst->oi = oi; + ldst->addrlo_reg = addr_reg; + ldst->label_ptr[0] = s->code_ptr; + + /* bne,pn %[xi]cc, label0 */ + cc = TARGET_LONG_BITS == 64 ? BPCC_XCC : BPCC_ICC; + tcg_out_bpcc0(s, COND_NE, BPCC_PN | cc, 0); +#else + if (a_bits != s_bits) { + /* + * Test for at least natural alignment, and defer + * everything else to the helper functions. + */ + tcg_debug_assert(check_fit_tl(a_mask, 13)); + tcg_out_arithi(s, TCG_REG_G0, addr_reg, a_mask, ARITH_ANDCC); + + ldst = new_ldst_label(s); + ldst->is_ld = is_ld; + ldst->oi = oi; + ldst->addrlo_reg = addr_reg; + ldst->label_ptr[0] = s->code_ptr; + + /* bne,pn %icc, label0 */ + tcg_out_bpcc0(s, COND_NE, BPCC_PN | BPCC_ICC, 0); + } + h->base = guest_base ? TCG_GUEST_BASE_REG : TCG_REG_G0; +#endif + + /* If the guest address must be zero-extended, do in the delay slot. 
*/ if (TARGET_LONG_BITS == 32) { - tcg_out_ext32u(s, r0, addr); - return r0; + tcg_out_ext32u(s, TCG_REG_T2, addr_reg); + h->index = TCG_REG_T2; + } else { + if (ldst) { + tcg_out_nop(s); + } + h->index = addr_reg; } - return addr; + return ldst; } -#endif /* CONFIG_SOFTMMU */ - -static const int qemu_ld_opc[(MO_SSIZE | MO_BSWAP) + 1] = { - [MO_UB] = LDUB, - [MO_SB] = LDSB, - [MO_UB | MO_LE] = LDUB, - [MO_SB | MO_LE] = LDSB, - - [MO_BEUW] = LDUH, - [MO_BESW] = LDSH, - [MO_BEUL] = LDUW, - [MO_BESL] = LDSW, - [MO_BEUQ] = LDX, - [MO_BESQ] = LDX, - - [MO_LEUW] = LDUH_LE, - [MO_LESW] = LDSH_LE, - [MO_LEUL] = LDUW_LE, - [MO_LESL] = LDSW_LE, - [MO_LEUQ] = LDX_LE, - [MO_LESQ] = LDX_LE, -}; - -static const int qemu_st_opc[(MO_SIZE | MO_BSWAP) + 1] = { - [MO_UB] = STB, - - [MO_BEUW] = STH, - [MO_BEUL] = STW, - [MO_BEUQ] = STX, - - [MO_LEUW] = STH_LE, - [MO_LEUL] = STW_LE, - [MO_LEUQ] = STX_LE, -}; static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr, MemOpIdx oi, TCGType data_type) { - MemOp memop = get_memop(oi); - tcg_insn_unit *label_ptr; + static const int ld_opc[(MO_SSIZE | MO_BSWAP) + 1] = { + [MO_UB] = LDUB, + [MO_SB] = LDSB, + [MO_UB | MO_LE] = LDUB, + [MO_SB | MO_LE] = LDSB, -#ifdef CONFIG_SOFTMMU - unsigned memi = get_mmuidx(oi); - TCGReg addrz; - const tcg_insn_unit *func; + [MO_BEUW] = LDUH, + [MO_BESW] = LDSH, + [MO_BEUL] = LDUW, + [MO_BESL] = LDSW, + [MO_BEUQ] = LDX, + [MO_BESQ] = LDX, - addrz = tcg_out_tlb_load(s, addr, memi, memop, - offsetof(CPUTLBEntry, addr_read)); + [MO_LEUW] = LDUH_LE, + [MO_LESW] = LDSH_LE, + [MO_LEUL] = LDUW_LE, + [MO_LESL] = LDSW_LE, + [MO_LEUQ] = LDX_LE, + [MO_LESQ] = LDX_LE, + }; - /* The fast path is exactly one insn. Thus we can perform the - entire TLB Hit in the (annulled) delay slot of the branch - over the TLB Miss case. */ + TCGLabelQemuLdst *ldst; + HostAddress h; - /* beq,a,pt %[xi]cc, label0 */ - label_ptr = s->code_ptr; - tcg_out_bpcc0(s, COND_E, BPCC_A | BPCC_PT - | (TARGET_LONG_BITS == 64 ? BPCC_XCC : BPCC_ICC), 0); - /* delay slot */ - tcg_out_ldst_rr(s, data, addrz, TCG_REG_O1, - qemu_ld_opc[memop & (MO_BSWAP | MO_SSIZE)]); + ldst = prepare_host_addr(s, &h, addr, oi, true); - /* TLB Miss. */ + tcg_out_ldst_rr(s, data, h.base, h.index, + ld_opc[get_memop(oi) & (MO_BSWAP | MO_SSIZE)]); - tcg_out_mov(s, TCG_TYPE_REG, TCG_REG_O1, addrz); - - /* We use the helpers to extend SB and SW data, leaving the case - of SL needing explicit extending below. */ - if ((memop & MO_SSIZE) == MO_SL) { - func = qemu_ld_trampoline[MO_UL]; - } else { - func = qemu_ld_trampoline[memop & MO_SSIZE]; + if (ldst) { + ldst->type = data_type; + ldst->datalo_reg = data; + ldst->raddr = tcg_splitwx_to_rx(s->code_ptr); } - tcg_debug_assert(func != NULL); - tcg_out_call_nodelay(s, func, false); - /* delay slot */ - tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_O2, oi); - - /* We let the helper sign-extend SB and SW, but leave SL for here. */ - if ((memop & MO_SSIZE) == MO_SL) { - tcg_out_ext32s(s, data, TCG_REG_O0); - } else { - tcg_out_mov(s, TCG_TYPE_REG, data, TCG_REG_O0); - } - - *label_ptr |= INSN_OFF19(tcg_ptr_byte_diff(s->code_ptr, label_ptr)); -#else - TCGReg index = (guest_base ? TCG_GUEST_BASE_REG : TCG_REG_G0); - unsigned a_bits = get_alignment_bits(memop); - unsigned s_bits = memop & MO_SIZE; - unsigned t_bits; - - if (TARGET_LONG_BITS == 32) { - tcg_out_ext32u(s, TCG_REG_T1, addr); - addr = TCG_REG_T1; - } - - /* - * Normal case: alignment equal to access size. 
- */ - if (a_bits == s_bits) { - tcg_out_ldst_rr(s, data, addr, index, - qemu_ld_opc[memop & (MO_BSWAP | MO_SSIZE)]); - return; - } - - /* - * Test for at least natural alignment, and assume most accesses - * will be aligned -- perform a straight load in the delay slot. - * This is required to preserve atomicity for aligned accesses. - */ - t_bits = MAX(a_bits, s_bits); - tcg_debug_assert(t_bits < 13); - tcg_out_arithi(s, TCG_REG_G0, addr, (1u << t_bits) - 1, ARITH_ANDCC); - - /* beq,a,pt %icc, label */ - label_ptr = s->code_ptr; - tcg_out_bpcc0(s, COND_E, BPCC_A | BPCC_PT | BPCC_ICC, 0); - /* delay slot */ - tcg_out_ldst_rr(s, data, addr, index, - qemu_ld_opc[memop & (MO_BSWAP | MO_SSIZE)]); - - if (a_bits >= s_bits) { - /* - * Overalignment: A successful alignment test will perform the memory - * operation in the delay slot, and failure need only invoke the - * handler for SIGBUS. - */ - tcg_out_call_nodelay(s, qemu_unalign_ld_trampoline, false); - /* delay slot -- move to low part of argument reg */ - tcg_out_mov_delay(s, TCG_REG_O1, addr); - } else { - /* Underalignment: load by pieces of minimum alignment. */ - int ld_opc, a_size, s_size, i; - - /* - * Force full address into T1 early; avoids problems with - * overlap between @addr and @data. - */ - tcg_out_arith(s, TCG_REG_T1, addr, index, ARITH_ADD); - - a_size = 1 << a_bits; - s_size = 1 << s_bits; - if ((memop & MO_BSWAP) == MO_BE) { - ld_opc = qemu_ld_opc[a_bits | MO_BE | (memop & MO_SIGN)]; - tcg_out_ldst(s, data, TCG_REG_T1, 0, ld_opc); - ld_opc = qemu_ld_opc[a_bits | MO_BE]; - for (i = a_size; i < s_size; i += a_size) { - tcg_out_ldst(s, TCG_REG_T2, TCG_REG_T1, i, ld_opc); - tcg_out_arithi(s, data, data, a_size, SHIFT_SLLX); - tcg_out_arith(s, data, data, TCG_REG_T2, ARITH_OR); - } - } else if (a_bits == 0) { - ld_opc = LDUB; - tcg_out_ldst(s, data, TCG_REG_T1, 0, ld_opc); - for (i = a_size; i < s_size; i += a_size) { - if ((memop & MO_SIGN) && i == s_size - a_size) { - ld_opc = LDSB; - } - tcg_out_ldst(s, TCG_REG_T2, TCG_REG_T1, i, ld_opc); - tcg_out_arithi(s, TCG_REG_T2, TCG_REG_T2, i * 8, SHIFT_SLLX); - tcg_out_arith(s, data, data, TCG_REG_T2, ARITH_OR); - } - } else { - ld_opc = qemu_ld_opc[a_bits | MO_LE]; - tcg_out_ldst_rr(s, data, TCG_REG_T1, TCG_REG_G0, ld_opc); - for (i = a_size; i < s_size; i += a_size) { - tcg_out_arithi(s, TCG_REG_T1, TCG_REG_T1, a_size, ARITH_ADD); - if ((memop & MO_SIGN) && i == s_size - a_size) { - ld_opc = qemu_ld_opc[a_bits | MO_LE | MO_SIGN]; - } - tcg_out_ldst_rr(s, TCG_REG_T2, TCG_REG_T1, TCG_REG_G0, ld_opc); - tcg_out_arithi(s, TCG_REG_T2, TCG_REG_T2, i * 8, SHIFT_SLLX); - tcg_out_arith(s, data, data, TCG_REG_T2, ARITH_OR); - } - } - } - - *label_ptr |= INSN_OFF19(tcg_ptr_byte_diff(s->code_ptr, label_ptr)); -#endif /* CONFIG_SOFTMMU */ } static void tcg_out_qemu_st(TCGContext *s, TCGReg data, TCGReg addr, MemOpIdx oi, TCGType data_type) { - MemOp memop = get_memop(oi); - tcg_insn_unit *label_ptr; + static const int st_opc[(MO_SIZE | MO_BSWAP) + 1] = { + [MO_UB] = STB, -#ifdef CONFIG_SOFTMMU - unsigned memi = get_mmuidx(oi); - TCGReg addrz; - const tcg_insn_unit *func; + [MO_BEUW] = STH, + [MO_BEUL] = STW, + [MO_BEUQ] = STX, - addrz = tcg_out_tlb_load(s, addr, memi, memop, - offsetof(CPUTLBEntry, addr_write)); + [MO_LEUW] = STH_LE, + [MO_LEUL] = STW_LE, + [MO_LEUQ] = STX_LE, + }; - /* The fast path is exactly one insn. Thus we can perform the entire - TLB Hit in the (annulled) delay slot of the branch over TLB Miss. 
*/ - /* beq,a,pt %[xi]cc, label0 */ - label_ptr = s->code_ptr; - tcg_out_bpcc0(s, COND_E, BPCC_A | BPCC_PT - | (TARGET_LONG_BITS == 64 ? BPCC_XCC : BPCC_ICC), 0); - /* delay slot */ - tcg_out_ldst_rr(s, data, addrz, TCG_REG_O1, - qemu_st_opc[memop & (MO_BSWAP | MO_SIZE)]); + TCGLabelQemuLdst *ldst; + HostAddress h; - /* TLB Miss. */ + ldst = prepare_host_addr(s, &h, addr, oi, false); - tcg_out_mov(s, TCG_TYPE_REG, TCG_REG_O1, addrz); - tcg_out_movext(s, (memop & MO_SIZE) == MO_64 ? TCG_TYPE_I64 : TCG_TYPE_I32, - TCG_REG_O2, data_type, memop & MO_SIZE, data); + tcg_out_ldst_rr(s, data, h.base, h.index, + st_opc[get_memop(oi) & (MO_BSWAP | MO_SIZE)]); - func = qemu_st_trampoline[memop & MO_SIZE]; - tcg_debug_assert(func != NULL); - tcg_out_call_nodelay(s, func, false); - /* delay slot */ - tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_O3, oi); - - *label_ptr |= INSN_OFF19(tcg_ptr_byte_diff(s->code_ptr, label_ptr)); -#else - TCGReg index = (guest_base ? TCG_GUEST_BASE_REG : TCG_REG_G0); - unsigned a_bits = get_alignment_bits(memop); - unsigned s_bits = memop & MO_SIZE; - unsigned t_bits; - - if (TARGET_LONG_BITS == 32) { - tcg_out_ext32u(s, TCG_REG_T1, addr); - addr = TCG_REG_T1; + if (ldst) { + ldst->type = data_type; + ldst->datalo_reg = data; + ldst->raddr = tcg_splitwx_to_rx(s->code_ptr); } - - /* - * Normal case: alignment equal to access size. - */ - if (a_bits == s_bits) { - tcg_out_ldst_rr(s, data, addr, index, - qemu_st_opc[memop & (MO_BSWAP | MO_SIZE)]); - return; - } - - /* - * Test for at least natural alignment, and assume most accesses - * will be aligned -- perform a straight store in the delay slot. - * This is required to preserve atomicity for aligned accesses. - */ - t_bits = MAX(a_bits, s_bits); - tcg_debug_assert(t_bits < 13); - tcg_out_arithi(s, TCG_REG_G0, addr, (1u << t_bits) - 1, ARITH_ANDCC); - - /* beq,a,pt %icc, label */ - label_ptr = s->code_ptr; - tcg_out_bpcc0(s, COND_E, BPCC_A | BPCC_PT | BPCC_ICC, 0); - /* delay slot */ - tcg_out_ldst_rr(s, data, addr, index, - qemu_st_opc[memop & (MO_BSWAP | MO_SIZE)]); - - if (a_bits >= s_bits) { - /* - * Overalignment: A successful alignment test will perform the memory - * operation in the delay slot, and failure need only invoke the - * handler for SIGBUS. - */ - tcg_out_call_nodelay(s, qemu_unalign_st_trampoline, false); - /* delay slot -- move to low part of argument reg */ - tcg_out_mov_delay(s, TCG_REG_O1, addr); - } else { - /* Underalignment: store by pieces of minimum alignment. */ - int st_opc, a_size, s_size, i; - - /* - * Force full address into T1 early; avoids problems with - * overlap between @addr and @data. - */ - tcg_out_arith(s, TCG_REG_T1, addr, index, ARITH_ADD); - - a_size = 1 << a_bits; - s_size = 1 << s_bits; - if ((memop & MO_BSWAP) == MO_BE) { - st_opc = qemu_st_opc[a_bits | MO_BE]; - for (i = 0; i < s_size; i += a_size) { - TCGReg d = data; - int shift = (s_size - a_size - i) * 8; - if (shift) { - d = TCG_REG_T2; - tcg_out_arithi(s, d, data, shift, SHIFT_SRLX); - } - tcg_out_ldst(s, d, TCG_REG_T1, i, st_opc); - } - } else if (a_bits == 0) { - tcg_out_ldst(s, data, TCG_REG_T1, 0, STB); - for (i = 1; i < s_size; i++) { - tcg_out_arithi(s, TCG_REG_T2, data, i * 8, SHIFT_SRLX); - tcg_out_ldst(s, TCG_REG_T2, TCG_REG_T1, i, STB); - } - } else { - /* Note that ST*A with immediate asi must use indexed address. 
*/ - st_opc = qemu_st_opc[a_bits + MO_LE]; - tcg_out_ldst_rr(s, data, TCG_REG_T1, TCG_REG_G0, st_opc); - for (i = a_size; i < s_size; i += a_size) { - tcg_out_arithi(s, TCG_REG_T2, data, i * 8, SHIFT_SRLX); - tcg_out_arithi(s, TCG_REG_T1, TCG_REG_T1, a_size, ARITH_ADD); - tcg_out_ldst_rr(s, TCG_REG_T2, TCG_REG_T1, TCG_REG_G0, st_opc); - } - } - } - - *label_ptr |= INSN_OFF19(tcg_ptr_byte_diff(s->code_ptr, label_ptr)); -#endif /* CONFIG_SOFTMMU */ } static void tcg_out_exit_tb(TCGContext *s, uintptr_t a0) @@ -1744,6 +1499,8 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) case INDEX_op_extu_i32_i64: case INDEX_op_extrl_i64_i32: case INDEX_op_extrh_i64_i32: + case INDEX_op_qemu_ld_i32: + case INDEX_op_qemu_ld_i64: return C_O1_I1(r, r); case INDEX_op_st8_i32: @@ -1753,6 +1510,8 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) case INDEX_op_st_i32: case INDEX_op_st32_i64: case INDEX_op_st_i64: + case INDEX_op_qemu_st_i32: + case INDEX_op_qemu_st_i64: return C_O0_I2(rZ, r); case INDEX_op_add_i32: @@ -1802,13 +1561,6 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) case INDEX_op_muluh_i64: return C_O1_I2(r, r, r); - case INDEX_op_qemu_ld_i32: - case INDEX_op_qemu_ld_i64: - return C_O1_I1(r, s); - case INDEX_op_qemu_st_i32: - case INDEX_op_qemu_st_i64: - return C_O0_I2(sZ, s); - default: g_assert_not_reached(); } From patchwork Wed May 3 07:06:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678656 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp908653wrs; Wed, 3 May 2023 00:23:10 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ453lrWAJxy3u/zT3xi1lwhnd8VN/cMkkcCr3JQ4EU1Hno/WhFo47jJoh5osmpvrLmHI9xG X-Received: by 2002:a05:622a:449:b0:3f0:a4a2:9f0e with SMTP id o9-20020a05622a044900b003f0a4a29f0emr27079327qtx.52.1683098590006; Wed, 03 May 2023 00:23:10 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683098589; cv=none; d=google.com; s=arc-20160816; b=tbpK4TO3n8+grXG6Kvs26r6ZHZRmE7G8s8XvnOys81VO8YXghsBDjNU9ZPZGLQscpA rWSQyN6W94wgjtaBiWZTFN4PeQ2QQ2JybwxodxqR6pQzlNcJIqOWWEyQvqZi8V1YYZoq eiihaMNK9ROzaRX02GMVQCeZX47tSnd9mmtb7VWLRiMp/docQYrbuhVGtoj7o2xrF46P 9b4aPfzsWHVeTA8mIPS/ifhENdH1hykNP5fZP5B9hwj+ax/joKI1OtS4ZU68IXLuD/7U YsunqkmVaF7lpL+OEW3a/6ZCnMUUxENj2mxF3uwxf4/wgtpw6B6gL8N65ewd/iU9rzmO DgaQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=8dOhNAZCA6lBCroa7XUF6EpxbAxZlxMP8g62cl8mX24=; b=NAUpcvTKiPAdZaqPUh72gmDQFEQciGGTmGbU67cuGCuYGSixVuqajNayhFEUhTmO41 LbHa7xebFWNF+pCTorgO6HIj0ykDA46/YhDa7LSbnn1vuR33ViBb1tJsqP0hZFO2Klfc 5FRUDuFAbsv7bwaVO+lcGMHrCwwEm3PbLGOgF6oy9U1XKzPTagdlabhID6Ik1S8KQDEO Z9WG6Ri1xTrGyhlnOn9ebsiq19fBsd45LNK3+nQwwoAY0pkEbJnQrqojKpfPKrxPHTdi QAshxlj4jXsv9JsGu17fIydSgXi032WdlzB2vZlwm2kq80O4tmEp6qHn1ysQKIWuNDMB 7Z/A== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=j8mxuMsR; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 35/57] accel/tcg: Remove helper_unaligned_{ld,st} Date: Wed, 3 May 2023 08:06:34 +0100 Message-Id: <20230503070656.1746170-36-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::330; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x330.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org These functions are now unused. Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- include/tcg/tcg-ldst.h | 6 ------ accel/tcg/user-exec.c | 10 ---------- 2 files changed, 16 deletions(-) diff --git a/include/tcg/tcg-ldst.h b/include/tcg/tcg-ldst.h index 64f48e6990..7dd57013e9 100644 --- a/include/tcg/tcg-ldst.h +++ b/include/tcg/tcg-ldst.h @@ -60,10 +60,4 @@ void helper_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, void helper_st16_mmu(CPUArchState *env, target_ulong addr, Int128 val, MemOpIdx oi, uintptr_t retaddr); -#ifdef CONFIG_USER_ONLY - -G_NORETURN void helper_unaligned_ld(CPUArchState *env, target_ulong addr); -G_NORETURN void helper_unaligned_st(CPUArchState *env, target_ulong addr); - -#endif /* CONFIG_USER_ONLY */ #endif /* TCG_LDST_H */ diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c index 8f86254eb4..7b824dcde8 100644 --- a/accel/tcg/user-exec.c +++ b/accel/tcg/user-exec.c @@ -889,16 +889,6 @@ void page_reset_target_data(target_ulong start, target_ulong last) { } /* The softmmu versions of these helpers are in cputlb.c. 
*/ -void helper_unaligned_ld(CPUArchState *env, target_ulong addr) -{ - cpu_loop_exit_sigbus(env_cpu(env), addr, MMU_DATA_LOAD, GETPC()); -} - -void helper_unaligned_st(CPUArchState *env, target_ulong addr) -{ - cpu_loop_exit_sigbus(env_cpu(env), addr, MMU_DATA_STORE, GETPC()); -} - static void *cpu_mmu_lookup(CPUArchState *env, abi_ptr addr, MemOp mop, uintptr_t ra, MMUAccessType type) { From patchwork Wed May 3 07:06:35 2023 X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678769
From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 36/57] tcg/loongarch64: Assert the host supports unaligned accesses Date: Wed, 3 May 2023 08:06:35 +0100 Message-Id: <20230503070656.1746170-37-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32d; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x32d.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org This should be true of all server class loongarch64. Signed-off-by: Richard Henderson --- tcg/loongarch64/tcg-target.c.inc | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc index e651ec5c71..ccc13ffdb4 100644 --- a/tcg/loongarch64/tcg-target.c.inc +++ b/tcg/loongarch64/tcg-target.c.inc @@ -30,6 +30,7 @@ */ #include "../tcg-ldst.c.inc" +#include #ifdef CONFIG_DEBUG_TCG static const char * const tcg_target_reg_names[TCG_TARGET_NB_REGS] = { @@ -1674,6 +1675,11 @@ static void tcg_target_qemu_prologue(TCGContext *s) static void tcg_target_init(TCGContext *s) { + unsigned long hwcap = qemu_getauxval(AT_HWCAP); + + /* All server class loongarch have UAL; only embedded do not. 
*/ + assert(hwcap & HWCAP_LOONGARCH_UAL); + tcg_target_available_regs[TCG_TYPE_I32] = ALL_GENERAL_REGS; tcg_target_available_regs[TCG_TYPE_I64] = ALL_GENERAL_REGS;
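[For reference, a standalone sketch, not QEMU code: the same AT_HWCAP probe done with plain getauxval(). It assumes a loongarch64 Linux host whose kernel headers provide HWCAP_LOONGARCH_UAL.]

    #include <stdio.h>
    #include <sys/auxv.h>    /* getauxval, AT_HWCAP */
    #include <asm/hwcap.h>   /* HWCAP_LOONGARCH_UAL, loongarch64 only */

    int main(void)
    {
        unsigned long hwcap = getauxval(AT_HWCAP);
        printf("unaligned access %s\n",
               hwcap & HWCAP_LOONGARCH_UAL ? "supported" : "not supported");
        return 0;
    }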
From patchwork Wed May 3 07:06:36 2023 X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678670 From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 37/57] tcg/loongarch64: Support softmmu unaligned accesses Date: Wed, 3 May 2023 08:06:36 +0100 Message-Id: <20230503070656.1746170-38-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32a; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x32a.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Test the final byte of an unaligned access. Use BSTRINS.D to clear the range of bits, rather than AND. Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- tcg/loongarch64/tcg-target.c.inc | 19 ++++++++++++------- 1 file changed, 12 insertions(+), 7 deletions(-) diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc index ccc13ffdb4..20cb21b264 100644 --- a/tcg/loongarch64/tcg-target.c.inc +++ b/tcg/loongarch64/tcg-target.c.inc @@ -848,7 +848,6 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, int fast_ofs = TLB_MASK_TABLE_OFS(mem_index); int mask_ofs = fast_ofs + offsetof(CPUTLBDescFast, mask); int table_ofs = fast_ofs + offsetof(CPUTLBDescFast, table); - tcg_target_long compare_mask; ldst = new_ldst_label(s); ldst->is_ld = is_ld; @@ -872,14 +871,20 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP2, TCG_REG_TMP2, offsetof(CPUTLBEntry, addend)); - /* We don't support unaligned accesses. */ + /* + * For aligned accesses, we check the first byte and include the alignment + * bits within the address. For unaligned access, we check that we don't + * cross pages using the address of the last byte of the access. + */ if (a_bits < s_bits) { - a_bits = s_bits; + unsigned a_mask = (1u << a_bits) - 1; + unsigned s_mask = (1u << s_bits) - 1; + tcg_out_addi(s, TCG_TYPE_TL, TCG_REG_TMP1, addr_reg, s_mask - a_mask); + } else { + tcg_out_mov(s, TCG_TYPE_TL, TCG_REG_TMP1, addr_reg); } - /* Clear the non-page, non-alignment bits from the address. */ - compare_mask = (tcg_target_long)TARGET_PAGE_MASK | ((1 << a_bits) - 1); - tcg_out_movi(s, TCG_TYPE_TL, TCG_REG_TMP1, compare_mask); - tcg_out_opc_and(s, TCG_REG_TMP1, TCG_REG_TMP1, addr_reg); + tcg_out_opc_bstrins_d(s, TCG_REG_TMP1, TCG_REG_ZERO, + a_bits, TARGET_PAGE_BITS - 1); /* Compare masked address with the TLB entry. 
*/
     ldst->label_ptr[0] = s->code_ptr;
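To make the new fast-path check concrete, here is the same computation written as plain C (an illustrative sketch, not code from the patch; tlb_compare_value and page_bits are made-up names). The returned value is what gets compared against the TLB comparator: an aligned access keeps its low alignment bits, so a misaligned address fails the compare and takes the slow path, while an access that is allowed to be unaligned is tested through the address of its last byte, so only a page-crossing access misses.

#include <stdint.h>

static uint64_t tlb_compare_value(uint64_t addr, unsigned a_bits,
                                  unsigned s_bits, unsigned page_bits)
{
    uint64_t a_mask = (UINT64_C(1) << a_bits) - 1;
    uint64_t s_mask = (UINT64_C(1) << s_bits) - 1;

    if (a_bits < s_bits) {
        /* Unaligned accesses allowed: test the final byte of the access,
           so that only an access crossing a page boundary misses. */
        addr += s_mask - a_mask;
    }
    /* Clear the in-page offset bits [a_bits, page_bits - 1] while keeping
       the low alignment bits; BSTRINS.D does this in one instruction. */
    return addr & ~(~a_mask & ((UINT64_C(1) << page_bits) - 1));
}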
From patchwork Wed May 3 07:06:37 2023
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 678719
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 38/57] tcg/riscv: Support softmmu unaligned accesses Date: Wed, 3 May 2023 08:06:37 +0100 Message-Id: <20230503070656.1746170-39-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32e; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x32e.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org The system is required to emulate unaligned accesses, even if the hardware does not support it. The resulting trap may or may not be more efficient than the qemu slow path. There are linux kernel patches in flight to allow userspace to query hardware support; we can re-evaluate whether to enable this by default after that. In the meantime, softmmu now matches useronly, where we already assumed that unaligned accesses are supported. Signed-off-by: Richard Henderson Reviewed-by: LIU Zhiwei --- tcg/riscv/tcg-target.c.inc | 48 ++++++++++++++++++++++---------------- 1 file changed, 28 insertions(+), 20 deletions(-) diff --git a/tcg/riscv/tcg-target.c.inc b/tcg/riscv/tcg-target.c.inc index 19cd4507fb..415e6c6e15 100644 --- a/tcg/riscv/tcg-target.c.inc +++ b/tcg/riscv/tcg-target.c.inc @@ -910,12 +910,13 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, TCGReg *pbase, #ifdef CONFIG_SOFTMMU unsigned s_bits = opc & MO_SIZE; + unsigned s_mask = (1u << s_bits) - 1; int mem_index = get_mmuidx(oi); int fast_ofs = TLB_MASK_TABLE_OFS(mem_index); int mask_ofs = fast_ofs + offsetof(CPUTLBDescFast, mask); int table_ofs = fast_ofs + offsetof(CPUTLBDescFast, table); - TCGReg mask_base = TCG_AREG0, table_base = TCG_AREG0; - tcg_target_long compare_mask; + int compare_mask; + TCGReg addr_adj; ldst = new_ldst_label(s); ldst->is_ld = is_ld; @@ -924,14 +925,33 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, TCGReg *pbase, QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) > 0); QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -(1 << 11)); - tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP0, mask_base, mask_ofs); - tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP1, table_base, table_ofs); + tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP0, TCG_AREG0, mask_ofs); + tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP1, TCG_AREG0, table_ofs); tcg_out_opc_imm(s, OPC_SRLI, TCG_REG_TMP2, addr_reg, TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS); tcg_out_opc_reg(s, OPC_AND, TCG_REG_TMP2, TCG_REG_TMP2, TCG_REG_TMP0); tcg_out_opc_reg(s, OPC_ADD, TCG_REG_TMP2, TCG_REG_TMP2, TCG_REG_TMP1); + /* + * For aligned accesses, we check the first byte and include the alignment + * bits within the address. For unaligned access, we check that we don't + * cross pages using the address of the last byte of the access. + */ + addr_adj = addr_reg; + if (a_bits < s_bits) { + addr_adj = TCG_REG_TMP0; + tcg_out_opc_imm(s, TARGET_LONG_BITS == 32 ? 
OPC_ADDIW : OPC_ADDI, + addr_adj, addr_reg, s_mask - a_mask); + } + compare_mask = TARGET_PAGE_MASK | a_mask; + if (compare_mask == sextreg(compare_mask, 0, 12)) { + tcg_out_opc_imm(s, OPC_ANDI, TCG_REG_TMP1, addr_adj, compare_mask); + } else { + tcg_out_movi(s, TCG_TYPE_TL, TCG_REG_TMP1, compare_mask); + tcg_out_opc_reg(s, OPC_AND, TCG_REG_TMP1, TCG_REG_TMP1, addr_adj); + } + /* Load the tlb comparator and the addend. */ tcg_out_ld(s, TCG_TYPE_TL, TCG_REG_TMP0, TCG_REG_TMP2, is_ld ? offsetof(CPUTLBEntry, addr_read) @@ -939,29 +959,17 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, TCGReg *pbase, tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP2, TCG_REG_TMP2, offsetof(CPUTLBEntry, addend)); - /* We don't support unaligned accesses. */ - if (a_bits < s_bits) { - a_bits = s_bits; - } - /* Clear the non-page, non-alignment bits from the address. */ - compare_mask = (tcg_target_long)TARGET_PAGE_MASK | a_mask; - if (compare_mask == sextreg(compare_mask, 0, 12)) { - tcg_out_opc_imm(s, OPC_ANDI, TCG_REG_TMP1, addr_reg, compare_mask); - } else { - tcg_out_movi(s, TCG_TYPE_TL, TCG_REG_TMP1, compare_mask); - tcg_out_opc_reg(s, OPC_AND, TCG_REG_TMP1, TCG_REG_TMP1, addr_reg); - } - /* Compare masked address with the TLB entry. */ ldst->label_ptr[0] = s->code_ptr; tcg_out_opc_branch(s, OPC_BNE, TCG_REG_TMP0, TCG_REG_TMP1, 0); /* TLB Hit - translate address using addend. */ + addr_adj = addr_reg; if (TARGET_LONG_BITS == 32) { - tcg_out_ext32u(s, TCG_REG_TMP0, addr_reg); - addr_reg = TCG_REG_TMP0; + addr_adj = TCG_REG_TMP0; + tcg_out_ext32u(s, addr_adj, addr_reg); } - tcg_out_opc_reg(s, OPC_ADD, TCG_REG_TMP0, TCG_REG_TMP2, addr_reg); + tcg_out_opc_reg(s, OPC_ADD, TCG_REG_TMP0, TCG_REG_TMP2, addr_adj); *pbase = TCG_REG_TMP0; #else if (a_mask) { From patchwork Wed May 3 07:06:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678648 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp907642wrs; Wed, 3 May 2023 00:20:02 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6t0ES097oUCEaWXbduIiVBeJp9WpQUDmErj9wSeHLQadLE/3oEgBHdWnC37WUA3/5pLaYF X-Received: by 2002:a05:6214:62b:b0:5ef:4763:2f61 with SMTP id a11-20020a056214062b00b005ef47632f61mr9284867qvx.20.1683098401787; Wed, 03 May 2023 00:20:01 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683098401; cv=none; d=google.com; s=arc-20160816; b=uomaoXrZYBlKUaCfU+n96RTO1ega5LVjvM2/MrpGKEGKA7UqnLWa9oGo+IHFz/C2l/ IryugOZvG1HTQvHIt9FNMYK5W62vSiBlPRGtd+2q9WKWDR2snyk+hiWgaicHwZbsi5a5 QpdMtKFsO4iooEFcdgJ5aYRPRiNWqVvvST4qqNhpWtj9UcypiOZ9e+TKl8cEd+nZIUnO brFpFuK3+ow12nKJXoxQkHAEseNwNmTqGwRpYM1egM9XFd6+ol1hKjIGcs8Vr3BVoPfm wWtK22HdMRmZ+khqDJUXUH0PCRUFpUct17J3LRCt8igc4XctCovgTVz+ixnp0DP8izTR yDJQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=MmKM8rjtmKZde67WpB0XICl6ZGDTJWgOM1OCctjceu0=; b=rw4RDJnmSMYqMZGEFs8C7nYygEdMEpL6bRjuahB8HvrqMrSzMDwT09NahNduApRTGL 11caQ6A9lR6xwLA8bgm886fBEYuzVojKGrNx8buYPrbjFO9OehcU2CKHBDhzYTs5OUpi 4rMdpmsDLaU3RWa8f4F7d6QIu59dX715sCnV+eAKj3u98OmRJJCoZfqF2zp+Dv1aI4AY PFyT4dYrh9uy2dzTl2pedoUTvDKf/OV93tzH+FsmmlZs5IfnTh8zmNIQNzqvezIh0sc2 e4Y9ghoBXn4IE9xml0fnLTmAfaPLusDyNqTBLI5HZDGZ+QVA/laiyiGgVlu4KR4loUuG piDA== ARC-Authentication-Results: i=1; mx.google.com; 
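One RISC-V-specific detail in the hunk above: the compare mask is applied with ANDI only when it fits the 12-bit sign-extended I-type immediate, and is otherwise materialised with tcg_out_movi() plus AND. A small sketch of that encodability test (illustrative C; sext_bits() is a local stand-in for QEMU's sextreg()):

#include <stdbool.h>
#include <stdint.h>

/* Sign-extend the field value[start .. start+len-1], like sextreg(). */
static int64_t sext_bits(uint64_t value, int start, int len)
{
    return (int64_t)(value << (64 - len - start)) >> (64 - len);
}

static bool fits_andi_immediate(int64_t compare_mask)
{
    /* Usable directly only if sign-extending the low 12 bits reproduces
       the whole mask; otherwise load it into a temporary register. */
    return compare_mask == sext_bits(compare_mask, 0, 12);
}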
dkim=pass header.i=@linaro.org header.s=google header.b=EL6STVPJ; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. [209.51.188.17]) by mx.google.com with ESMTPS id fn9-20020ad45d69000000b0061b6a445f0esi1134500qvb.106.2023.05.03.00.20.01 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Wed, 03 May 2023 00:20:01 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=EL6STVPJ; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1pu6bj-0004dN-FO; Wed, 03 May 2023 03:08:59 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1pu6as-0008TZ-LC for qemu-devel@nongnu.org; Wed, 03 May 2023 03:08:10 -0400 Received: from mail-wm1-x335.google.com ([2a00:1450:4864:20::335]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1pu6aH-0005re-Br for qemu-devel@nongnu.org; Wed, 03 May 2023 03:08:06 -0400 Received: by mail-wm1-x335.google.com with SMTP id 5b1f17b1804b1-3f192c23fffso29497485e9.3 for ; Wed, 03 May 2023 00:07:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1683097647; x=1685689647; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=MmKM8rjtmKZde67WpB0XICl6ZGDTJWgOM1OCctjceu0=; b=EL6STVPJiUSqzELjRYBDD6+uAVvIt23+Z7I11Gs8XARTyUru2JB8MBXM96Q+4HM2Od VxDXkrXYxnWUhlZcyW8VHpYcnVmf+y3U0vsM0vNl0c4zczAy3WZ8J+3/G0niZSSF/3XY 1S9bSkbA0xgL9ukvDP8gs9ha5nIsn+46NUFElTk9FAVI2HuV/+jcwVjYarpYnVc4H/Z+ NKdqtO6bpIkhlLiYwL4s/YTsLBqAQ03MnL4sMZyPCFFAbwINacqB2CuamiX7lqDuFagx UJJpnejI/1ztJr8O5LeWUzk7eI7edBKODtJVbgZTiUUCnB5Hx5RMj+TVMmtdXl3wj4so N8vg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1683097647; x=1685689647; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=MmKM8rjtmKZde67WpB0XICl6ZGDTJWgOM1OCctjceu0=; b=bc7J2DWYleP5CEJAkM1AyokJLjfvfeBFi+eERTGp5fQCzR5B1MuzssefioliBWb4FL wsS+lK/tY/6yUXbC5s2/43h9cZCMxE+is3tMSVlTz78gMisX8wkLWvPBHrYMdUMRcXBq SJ5xwN/FdMWTpoFwF/DM/d/68VxdyvN/pXUt/n5ynvhL+rkgmOn+BArSkxtkQo0eP9Vj B1SghkrI91YIhXcE/KkOjrjBxO9GJjCby72v8kEQRD/sMVkIyqRJnwdAWQnsFUrOTvbt j/GtITIk7Sm4sOpi5GkIKoKymGb8jrFCJJ9yCQo047IlzZvzAX9w8WwiVM1PvBe0pk0O G4dA== X-Gm-Message-State: AC+VfDw/0tqLldnMCxUb4uWm/b2VngrB4gPCebCrN5zxTClpbTnOohMX 7v/8w7bMa8IcyqK0AQ8kQnJMH/utgZhNXrO13qAJWQ== X-Received: by 2002:a7b:c4c1:0:b0:3f0:5519:9049 with SMTP id g1-20020a7bc4c1000000b003f055199049mr15033804wmk.8.1683097646818; Wed, 03 May 2023 00:07:26 -0700 (PDT) Received: from stoup.Home 
([2a02:c7c:74db:8d00:c01d:9d74:b630:9087]) by smtp.gmail.com with ESMTPSA id q3-20020a1cf303000000b003f3157988f8sm921184wmq.26.2023.05.03.00.07.26 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 03 May 2023 00:07:26 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org, qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 39/57] tcg: Introduce tcg_target_has_memory_bswap Date: Wed, 3 May 2023 08:06:38 +0100 Message-Id: <20230503070656.1746170-40-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::335; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x335.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Replace the unparameterized TCG_TARGET_HAS_MEMORY_BSWAP macro with a function with a memop argument. Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- tcg/aarch64/tcg-target.h | 1 - tcg/arm/tcg-target.h | 1 - tcg/i386/tcg-target.h | 3 --- tcg/loongarch64/tcg-target.h | 2 -- tcg/mips/tcg-target.h | 2 -- tcg/ppc/tcg-target.h | 1 - tcg/riscv/tcg-target.h | 2 -- tcg/s390x/tcg-target.h | 2 -- tcg/sparc64/tcg-target.h | 1 - tcg/tcg-internal.h | 2 ++ tcg/tci/tcg-target.h | 2 -- tcg/tcg-op.c | 20 +++++++++++--------- tcg/aarch64/tcg-target.c.inc | 5 +++++ tcg/arm/tcg-target.c.inc | 5 +++++ tcg/i386/tcg-target.c.inc | 5 +++++ tcg/loongarch64/tcg-target.c.inc | 5 +++++ tcg/mips/tcg-target.c.inc | 5 +++++ tcg/ppc/tcg-target.c.inc | 5 +++++ tcg/riscv/tcg-target.c.inc | 5 +++++ tcg/s390x/tcg-target.c.inc | 5 +++++ tcg/sparc64/tcg-target.c.inc | 5 +++++ tcg/tci/tcg-target.c.inc | 5 +++++ 22 files changed, 63 insertions(+), 26 deletions(-) diff --git a/tcg/aarch64/tcg-target.h b/tcg/aarch64/tcg-target.h index 3c0b0d312d..378e01d9d8 100644 --- a/tcg/aarch64/tcg-target.h +++ b/tcg/aarch64/tcg-target.h @@ -154,7 +154,6 @@ extern bool have_lse2; #define TCG_TARGET_HAS_cmpsel_vec 0 #define TCG_TARGET_DEFAULT_MO (0) -#define TCG_TARGET_HAS_MEMORY_BSWAP 0 #define TCG_TARGET_NEED_LDST_LABELS #define TCG_TARGET_NEED_POOL_LABELS diff --git a/tcg/arm/tcg-target.h b/tcg/arm/tcg-target.h index def2a189e6..4c2d3332d5 100644 --- a/tcg/arm/tcg-target.h +++ b/tcg/arm/tcg-target.h @@ -150,7 +150,6 @@ extern bool use_neon_instructions; #define TCG_TARGET_HAS_cmpsel_vec 0 #define TCG_TARGET_DEFAULT_MO (0) -#define TCG_TARGET_HAS_MEMORY_BSWAP 0 #define TCG_TARGET_NEED_LDST_LABELS #define TCG_TARGET_NEED_POOL_LABELS diff --git a/tcg/i386/tcg-target.h b/tcg/i386/tcg-target.h index 0421776cb8..8fe6958abd 100644 --- a/tcg/i386/tcg-target.h +++ b/tcg/i386/tcg-target.h @@ -240,9 +240,6 @@ extern bool have_atomic16; #include "tcg/tcg-mo.h" #define TCG_TARGET_DEFAULT_MO (TCG_MO_ALL & ~TCG_MO_ST_LD) - -#define 
TCG_TARGET_HAS_MEMORY_BSWAP have_movbe - #define TCG_TARGET_NEED_LDST_LABELS #define TCG_TARGET_NEED_POOL_LABELS diff --git a/tcg/loongarch64/tcg-target.h b/tcg/loongarch64/tcg-target.h index 17b8193aa5..75c3d80ed2 100644 --- a/tcg/loongarch64/tcg-target.h +++ b/tcg/loongarch64/tcg-target.h @@ -173,6 +173,4 @@ typedef enum { #define TCG_TARGET_NEED_LDST_LABELS -#define TCG_TARGET_HAS_MEMORY_BSWAP 0 - #endif /* LOONGARCH_TCG_TARGET_H */ diff --git a/tcg/mips/tcg-target.h b/tcg/mips/tcg-target.h index 42bd7fff01..47088af9cb 100644 --- a/tcg/mips/tcg-target.h +++ b/tcg/mips/tcg-target.h @@ -205,8 +205,6 @@ extern bool use_mips32r2_instructions; #endif #define TCG_TARGET_DEFAULT_MO 0 -#define TCG_TARGET_HAS_MEMORY_BSWAP 0 - #define TCG_TARGET_NEED_LDST_LABELS #endif diff --git a/tcg/ppc/tcg-target.h b/tcg/ppc/tcg-target.h index af81c5a57f..d55f0266bb 100644 --- a/tcg/ppc/tcg-target.h +++ b/tcg/ppc/tcg-target.h @@ -179,7 +179,6 @@ extern bool have_vsx; #define TCG_TARGET_HAS_cmpsel_vec 0 #define TCG_TARGET_DEFAULT_MO (0) -#define TCG_TARGET_HAS_MEMORY_BSWAP 1 #define TCG_TARGET_NEED_LDST_LABELS #define TCG_TARGET_NEED_POOL_LABELS diff --git a/tcg/riscv/tcg-target.h b/tcg/riscv/tcg-target.h index dddf2486c1..dece3b3c27 100644 --- a/tcg/riscv/tcg-target.h +++ b/tcg/riscv/tcg-target.h @@ -168,6 +168,4 @@ typedef enum { #define TCG_TARGET_NEED_LDST_LABELS #define TCG_TARGET_NEED_POOL_LABELS -#define TCG_TARGET_HAS_MEMORY_BSWAP 0 - #endif diff --git a/tcg/s390x/tcg-target.h b/tcg/s390x/tcg-target.h index a05b473117..fe05680124 100644 --- a/tcg/s390x/tcg-target.h +++ b/tcg/s390x/tcg-target.h @@ -172,8 +172,6 @@ extern uint64_t s390_facilities[3]; #define TCG_TARGET_CALL_ARG_I128 TCG_CALL_ARG_BY_REF #define TCG_TARGET_CALL_RET_I128 TCG_CALL_RET_BY_REF -#define TCG_TARGET_HAS_MEMORY_BSWAP 1 - #define TCG_TARGET_DEFAULT_MO (TCG_MO_ALL & ~TCG_MO_ST_LD) #define TCG_TARGET_NEED_LDST_LABELS #define TCG_TARGET_NEED_POOL_LABELS diff --git a/tcg/sparc64/tcg-target.h b/tcg/sparc64/tcg-target.h index 7434cc99d4..f6cd86975a 100644 --- a/tcg/sparc64/tcg-target.h +++ b/tcg/sparc64/tcg-target.h @@ -154,7 +154,6 @@ extern bool use_vis3_instructions; #define TCG_AREG0 TCG_REG_I0 #define TCG_TARGET_DEFAULT_MO (0) -#define TCG_TARGET_HAS_MEMORY_BSWAP 1 #define TCG_TARGET_NEED_LDST_LABELS #define TCG_TARGET_NEED_POOL_LABELS diff --git a/tcg/tcg-internal.h b/tcg/tcg-internal.h index 0f1ba01a9a..67b698bd5c 100644 --- a/tcg/tcg-internal.h +++ b/tcg/tcg-internal.h @@ -126,4 +126,6 @@ static inline TCGv_i64 TCGV128_HIGH(TCGv_i128 t) return temp_tcgv_i64(tcgv_i128_temp(t) + o); } +bool tcg_target_has_memory_bswap(MemOp memop); + #endif /* TCG_INTERNAL_H */ diff --git a/tcg/tci/tcg-target.h b/tcg/tci/tcg-target.h index 7140a76a73..364012e4d2 100644 --- a/tcg/tci/tcg-target.h +++ b/tcg/tci/tcg-target.h @@ -176,6 +176,4 @@ typedef enum { We prefer consistency across hosts on this. */ #define TCG_TARGET_DEFAULT_MO (0) -#define TCG_TARGET_HAS_MEMORY_BSWAP 1 - #endif /* TCG_TARGET_H */ diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c index 9101d334b6..85f22458c9 100644 --- a/tcg/tcg-op.c +++ b/tcg/tcg-op.c @@ -2959,7 +2959,7 @@ void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, MemOp memop) oi = make_memop_idx(memop, idx); orig_memop = memop; - if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) { + if ((memop & MO_BSWAP) && !tcg_target_has_memory_bswap(memop)) { memop &= ~MO_BSWAP; /* The bswap primitive benefits from zero-extended input. 
*/ if ((memop & MO_SSIZE) == MO_SW) { @@ -2996,7 +2996,7 @@ void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, MemOp memop) memop = tcg_canonicalize_memop(memop, 0, 1); oi = make_memop_idx(memop, idx); - if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) { + if ((memop & MO_BSWAP) && !tcg_target_has_memory_bswap(memop)) { swap = tcg_temp_ebb_new_i32(); switch (memop & MO_SIZE) { case MO_16: @@ -3045,7 +3045,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, MemOp memop) oi = make_memop_idx(memop, idx); orig_memop = memop; - if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) { + if ((memop & MO_BSWAP) && !tcg_target_has_memory_bswap(memop)) { memop &= ~MO_BSWAP; /* The bswap primitive benefits from zero-extended input. */ if ((memop & MO_SIGN) && (memop & MO_SIZE) < MO_64) { @@ -3091,7 +3091,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, MemOp memop) memop = tcg_canonicalize_memop(memop, 1, 1); oi = make_memop_idx(memop, idx); - if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) { + if ((memop & MO_BSWAP) && !tcg_target_has_memory_bswap(memop)) { swap = tcg_temp_ebb_new_i64(); switch (memop & MO_SIZE) { case MO_16: @@ -3168,11 +3168,6 @@ static void canonicalize_memop_i128_as_i64(MemOp ret[2], MemOp orig) tcg_debug_assert((orig & MO_SIZE) == MO_128); tcg_debug_assert((orig & MO_SIGN) == 0); - /* Use a memory ordering implemented by the host. */ - if (!TCG_TARGET_HAS_MEMORY_BSWAP && (orig & MO_BSWAP)) { - mop_1 &= ~MO_BSWAP; - } - /* Reduce the size to 64-bit. */ mop_1 = (mop_1 & ~MO_SIZE) | MO_64; @@ -3202,6 +3197,13 @@ static void canonicalize_memop_i128_as_i64(MemOp ret[2], MemOp orig) default: g_assert_not_reached(); } + + /* Use a memory ordering implemented by the host. */ + if ((orig & MO_BSWAP) && !tcg_target_has_memory_bswap(mop_1)) { + mop_1 &= ~MO_BSWAP; + mop_2 &= ~MO_BSWAP; + } + ret[0] = mop_1; ret[1] = mop_2; } diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc index 09c9ecad0f..8e5f3d3688 100644 --- a/tcg/aarch64/tcg-target.c.inc +++ b/tcg/aarch64/tcg-target.c.inc @@ -1595,6 +1595,11 @@ typedef struct { TCGType index_ext; } HostAddress; +bool tcg_target_has_memory_bswap(MemOp memop) +{ + return false; +} + static const TCGLdstHelperParam ldst_helper_param = { .ntmp = 1, .tmp = { TCG_REG_TMP } }; diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc index eb0542f32e..e5aed03247 100644 --- a/tcg/arm/tcg-target.c.inc +++ b/tcg/arm/tcg-target.c.inc @@ -1325,6 +1325,11 @@ typedef struct { bool index_scratch; } HostAddress; +bool tcg_target_has_memory_bswap(MemOp memop) +{ + return false; +} + static TCGReg ldst_ra_gen(TCGContext *s, const TCGLabelQemuLdst *l, int arg) { /* We arrive at the slow path via "BLNE", so R14 contains l->raddr. 
*/ diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc index e78d4d4aa7..7c72bf6684 100644 --- a/tcg/i386/tcg-target.c.inc +++ b/tcg/i386/tcg-target.c.inc @@ -1776,6 +1776,11 @@ typedef struct { int seg; } HostAddress; +bool tcg_target_has_memory_bswap(MemOp memop) +{ + return have_movbe; +} + /* * Because i686 has no register parameters and because x86_64 has xchg * to handle addr/data register overlap, we have placed all input arguments diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc index 20cb21b264..62bf823084 100644 --- a/tcg/loongarch64/tcg-target.c.inc +++ b/tcg/loongarch64/tcg-target.c.inc @@ -828,6 +828,11 @@ typedef struct { TCGReg index; } HostAddress; +bool tcg_target_has_memory_bswap(MemOp memop) +{ + return false; +} + /* * For softmmu, perform the TLB load and compare. * For useronly, perform any required alignment tests. diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc index fa0f334e8d..cd0254a0d7 100644 --- a/tcg/mips/tcg-target.c.inc +++ b/tcg/mips/tcg-target.c.inc @@ -1141,6 +1141,11 @@ typedef struct { MemOp align; } HostAddress; +bool tcg_target_has_memory_bswap(MemOp memop) +{ + return false; +} + /* * For softmmu, perform the TLB load and compare. * For useronly, perform any required alignment tests. diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc index 733f67c7a5..f0a4118bbb 100644 --- a/tcg/ppc/tcg-target.c.inc +++ b/tcg/ppc/tcg-target.c.inc @@ -2017,6 +2017,11 @@ typedef struct { TCGReg index; } HostAddress; +bool tcg_target_has_memory_bswap(MemOp memop) +{ + return true; +} + /* * For softmmu, perform the TLB load and compare. * For useronly, perform any required alignment tests. diff --git a/tcg/riscv/tcg-target.c.inc b/tcg/riscv/tcg-target.c.inc index 415e6c6e15..37870c89fc 100644 --- a/tcg/riscv/tcg-target.c.inc +++ b/tcg/riscv/tcg-target.c.inc @@ -853,6 +853,11 @@ static void tcg_out_goto(TCGContext *s, const tcg_insn_unit *target) tcg_debug_assert(ok); } +bool tcg_target_has_memory_bswap(MemOp memop) +{ + return false; +} + /* We have three temps, we might as well expose them. */ static const TCGLdstHelperParam ldst_helper_param = { .ntmp = 3, .tmp = { TCG_REG_TMP0, TCG_REG_TMP1, TCG_REG_TMP2 } diff --git a/tcg/s390x/tcg-target.c.inc b/tcg/s390x/tcg-target.c.inc index de8aed5f77..22f0206b5a 100644 --- a/tcg/s390x/tcg-target.c.inc +++ b/tcg/s390x/tcg-target.c.inc @@ -1574,6 +1574,11 @@ typedef struct { int disp; } HostAddress; +bool tcg_target_has_memory_bswap(MemOp memop) +{ + return true; +} + static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc, TCGReg data, HostAddress h) { diff --git a/tcg/sparc64/tcg-target.c.inc b/tcg/sparc64/tcg-target.c.inc index 0237188d65..bb23038529 100644 --- a/tcg/sparc64/tcg-target.c.inc +++ b/tcg/sparc64/tcg-target.c.inc @@ -1011,6 +1011,11 @@ typedef struct { TCGReg index; } HostAddress; +bool tcg_target_has_memory_bswap(MemOp memop) +{ + return true; +} + /* * For softmmu, perform the TLB load and compare. * For useronly, perform any required alignment tests. 
diff --git a/tcg/tci/tcg-target.c.inc b/tcg/tci/tcg-target.c.inc
index e31640d109..89f693050c 100644
--- a/tcg/tci/tcg-target.c.inc
+++ b/tcg/tci/tcg-target.c.inc
@@ -964,3 +964,8 @@ static void tcg_target_init(TCGContext *s)
 static inline void tcg_target_qemu_prologue(TCGContext *s)
 {
 }
+
+bool tcg_target_has_memory_bswap(MemOp memop)
+{
+    return true;
+}
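The reason the hook takes a MemOp rather than remaining a bare macro is that a backend's answer can now depend on the particular operation. All backends in this patch still return a constant, but a hypothetical backend that only byte-swaps operations up to 64 bits could express that as follows (a sketch, not code from the series):

/* Hypothetical backend hook: advertise host byte-swapping memory ops only
 * up to 64 bits; larger operations would then get the separate bswap
 * fixups emitted by tcg-op.c, as in the hunks above. */
bool tcg_target_has_memory_bswap(MemOp memop)
{
    return (memop & MO_SIZE) <= MO_64;
}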
From patchwork Wed May 3 07:06:39 2023
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 678673
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 40/57] tcg: Add INDEX_op_qemu_{ld,st}_i128 Date: Wed, 3 May 2023 08:06:39 +0100 Message-Id: <20230503070656.1746170-41-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::332; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x332.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Add opcodes for backend support for 128-bit memory operations. Reviewed-by: Philippe Mathieu-Daudé Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- include/tcg/tcg-opc.h | 8 +++++ tcg/aarch64/tcg-target.h | 2 ++ tcg/arm/tcg-target.h | 2 ++ tcg/i386/tcg-target.h | 2 ++ tcg/loongarch64/tcg-target.h | 1 + tcg/mips/tcg-target.h | 2 ++ tcg/ppc/tcg-target.h | 2 ++ tcg/riscv/tcg-target.h | 2 ++ tcg/s390x/tcg-target.h | 2 ++ tcg/sparc64/tcg-target.h | 2 ++ tcg/tci/tcg-target.h | 2 ++ tcg/tcg-op.c | 69 ++++++++++++++++++++++++++++++++---- tcg/tcg.c | 4 +++ docs/devel/tcg-ops.rst | 11 +++--- 14 files changed, 101 insertions(+), 10 deletions(-) diff --git a/include/tcg/tcg-opc.h b/include/tcg/tcg-opc.h index dd444734d9..94cf7c5d6a 100644 --- a/include/tcg/tcg-opc.h +++ b/include/tcg/tcg-opc.h @@ -213,6 +213,14 @@ DEF(qemu_st8_i32, 0, TLADDR_ARGS + 1, 1, TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS | IMPL(TCG_TARGET_HAS_qemu_st8_i32)) +/* Only for 64-bit hosts at the moment. */ +DEF(qemu_ld_i128, 2, 1, 1, + TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS | TCG_OPF_64BIT | + IMPL(TCG_TARGET_HAS_qemu_ldst_i128)) +DEF(qemu_st_i128, 0, 3, 1, + TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS | TCG_OPF_64BIT | + IMPL(TCG_TARGET_HAS_qemu_ldst_i128)) + /* Host vector support. 
*/ #define IMPLVEC TCG_OPF_VECTOR | IMPL(TCG_TARGET_MAYBE_vec) diff --git a/tcg/aarch64/tcg-target.h b/tcg/aarch64/tcg-target.h index 378e01d9d8..74ee2ed255 100644 --- a/tcg/aarch64/tcg-target.h +++ b/tcg/aarch64/tcg-target.h @@ -129,6 +129,8 @@ extern bool have_lse2; #define TCG_TARGET_HAS_muluh_i64 1 #define TCG_TARGET_HAS_mulsh_i64 1 +#define TCG_TARGET_HAS_qemu_ldst_i128 0 + #define TCG_TARGET_HAS_v64 1 #define TCG_TARGET_HAS_v128 1 #define TCG_TARGET_HAS_v256 0 diff --git a/tcg/arm/tcg-target.h b/tcg/arm/tcg-target.h index 4c2d3332d5..65efc538f4 100644 --- a/tcg/arm/tcg-target.h +++ b/tcg/arm/tcg-target.h @@ -125,6 +125,8 @@ extern bool use_neon_instructions; #define TCG_TARGET_HAS_rem_i32 0 #define TCG_TARGET_HAS_qemu_st8_i32 0 +#define TCG_TARGET_HAS_qemu_ldst_i128 0 + #define TCG_TARGET_HAS_v64 use_neon_instructions #define TCG_TARGET_HAS_v128 use_neon_instructions #define TCG_TARGET_HAS_v256 0 diff --git a/tcg/i386/tcg-target.h b/tcg/i386/tcg-target.h index 8fe6958abd..943af6775e 100644 --- a/tcg/i386/tcg-target.h +++ b/tcg/i386/tcg-target.h @@ -194,6 +194,8 @@ extern bool have_atomic16; #define TCG_TARGET_HAS_qemu_st8_i32 1 #endif +#define TCG_TARGET_HAS_qemu_ldst_i128 0 + /* We do not support older SSE systems, only beginning with AVX1. */ #define TCG_TARGET_HAS_v64 have_avx1 #define TCG_TARGET_HAS_v128 have_avx1 diff --git a/tcg/loongarch64/tcg-target.h b/tcg/loongarch64/tcg-target.h index 75c3d80ed2..482901ac15 100644 --- a/tcg/loongarch64/tcg-target.h +++ b/tcg/loongarch64/tcg-target.h @@ -168,6 +168,7 @@ typedef enum { #define TCG_TARGET_HAS_muls2_i64 0 #define TCG_TARGET_HAS_muluh_i64 1 #define TCG_TARGET_HAS_mulsh_i64 1 +#define TCG_TARGET_HAS_qemu_ldst_i128 0 #define TCG_TARGET_DEFAULT_MO (0) diff --git a/tcg/mips/tcg-target.h b/tcg/mips/tcg-target.h index 47088af9cb..7277a117ef 100644 --- a/tcg/mips/tcg-target.h +++ b/tcg/mips/tcg-target.h @@ -204,6 +204,8 @@ extern bool use_mips32r2_instructions; #define TCG_TARGET_HAS_ext16u_i64 0 /* andi rt, rs, 0xffff */ #endif +#define TCG_TARGET_HAS_qemu_ldst_i128 0 + #define TCG_TARGET_DEFAULT_MO 0 #define TCG_TARGET_NEED_LDST_LABELS diff --git a/tcg/ppc/tcg-target.h b/tcg/ppc/tcg-target.h index d55f0266bb..0914380bd7 100644 --- a/tcg/ppc/tcg-target.h +++ b/tcg/ppc/tcg-target.h @@ -149,6 +149,8 @@ extern bool have_vsx; #define TCG_TARGET_HAS_mulsh_i64 1 #endif +#define TCG_TARGET_HAS_qemu_ldst_i128 0 + /* * While technically Altivec could support V64, it has no 64-bit store * instruction and substituting two 32-bit stores makes the generated diff --git a/tcg/riscv/tcg-target.h b/tcg/riscv/tcg-target.h index dece3b3c27..494c986b49 100644 --- a/tcg/riscv/tcg-target.h +++ b/tcg/riscv/tcg-target.h @@ -163,6 +163,8 @@ typedef enum { #define TCG_TARGET_HAS_muluh_i64 1 #define TCG_TARGET_HAS_mulsh_i64 1 +#define TCG_TARGET_HAS_qemu_ldst_i128 0 + #define TCG_TARGET_DEFAULT_MO (0) #define TCG_TARGET_NEED_LDST_LABELS diff --git a/tcg/s390x/tcg-target.h b/tcg/s390x/tcg-target.h index fe05680124..170007bea5 100644 --- a/tcg/s390x/tcg-target.h +++ b/tcg/s390x/tcg-target.h @@ -140,6 +140,8 @@ extern uint64_t s390_facilities[3]; #define TCG_TARGET_HAS_muluh_i64 0 #define TCG_TARGET_HAS_mulsh_i64 0 +#define TCG_TARGET_HAS_qemu_ldst_i128 0 + #define TCG_TARGET_HAS_v64 HAVE_FACILITY(VECTOR) #define TCG_TARGET_HAS_v128 HAVE_FACILITY(VECTOR) #define TCG_TARGET_HAS_v256 0 diff --git a/tcg/sparc64/tcg-target.h b/tcg/sparc64/tcg-target.h index f6cd86975a..31c5537379 100644 --- a/tcg/sparc64/tcg-target.h +++ b/tcg/sparc64/tcg-target.h @@ -151,6 +151,8 @@ 
extern bool use_vis3_instructions; #define TCG_TARGET_HAS_muluh_i64 use_vis3_instructions #define TCG_TARGET_HAS_mulsh_i64 0 +#define TCG_TARGET_HAS_qemu_ldst_i128 0 + #define TCG_AREG0 TCG_REG_I0 #define TCG_TARGET_DEFAULT_MO (0) diff --git a/tcg/tci/tcg-target.h b/tcg/tci/tcg-target.h index 364012e4d2..28dc6d5cfc 100644 --- a/tcg/tci/tcg-target.h +++ b/tcg/tci/tcg-target.h @@ -127,6 +127,8 @@ #define TCG_TARGET_HAS_mulu2_i32 1 #endif /* TCG_TARGET_REG_BITS == 64 */ +#define TCG_TARGET_HAS_qemu_ldst_i128 0 + /* Number of registers available. */ #define TCG_TARGET_NB_REGS 16 diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c index 85f22458c9..06d3181fd0 100644 --- a/tcg/tcg-op.c +++ b/tcg/tcg-op.c @@ -3216,7 +3216,7 @@ static void canonicalize_memop_i128_as_i64(MemOp ret[2], MemOp orig) void tcg_gen_qemu_ld_i128(TCGv_i128 val, TCGv addr, TCGArg idx, MemOp memop) { - MemOpIdx oi = make_memop_idx(memop, idx); + const MemOpIdx oi = make_memop_idx(memop, idx); tcg_debug_assert((memop & MO_SIZE) == MO_128); tcg_debug_assert((memop & MO_SIGN) == 0); @@ -3224,9 +3224,36 @@ void tcg_gen_qemu_ld_i128(TCGv_i128 val, TCGv addr, TCGArg idx, MemOp memop) tcg_gen_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD); addr = plugin_prep_mem_callbacks(addr); - /* TODO: allow the tcg backend to see the whole operation. */ + /* TODO: For now, force 32-bit hosts to use the helper. */ + if (TCG_TARGET_HAS_qemu_ldst_i128 && TCG_TARGET_REG_BITS == 64) { + TCGv_i64 lo, hi; + TCGArg addr_arg; + MemOpIdx adj_oi; + bool need_bswap = false; - if (use_two_i64_for_i128(memop)) { + if ((memop & MO_BSWAP) && !tcg_target_has_memory_bswap(memop)) { + lo = TCGV128_HIGH(val); + hi = TCGV128_LOW(val); + adj_oi = make_memop_idx(memop & ~MO_BSWAP, idx); + need_bswap = true; + } else { + lo = TCGV128_LOW(val); + hi = TCGV128_HIGH(val); + adj_oi = oi; + } + +#if TARGET_LONG_BITS == 32 + addr_arg = tcgv_i32_arg(addr); +#else + addr_arg = tcgv_i64_arg(addr); +#endif + tcg_gen_op4ii_i64(INDEX_op_qemu_ld_i128, lo, hi, addr_arg, adj_oi); + + if (need_bswap) { + tcg_gen_bswap64_i64(lo, lo); + tcg_gen_bswap64_i64(hi, hi); + } + } else if (use_two_i64_for_i128(memop)) { MemOp mop[2]; TCGv addr_p8; TCGv_i64 x, y; @@ -3269,7 +3296,7 @@ void tcg_gen_qemu_ld_i128(TCGv_i128 val, TCGv addr, TCGArg idx, MemOp memop) void tcg_gen_qemu_st_i128(TCGv_i128 val, TCGv addr, TCGArg idx, MemOp memop) { - MemOpIdx oi = make_memop_idx(memop, idx); + const MemOpIdx oi = make_memop_idx(memop, idx); tcg_debug_assert((memop & MO_SIZE) == MO_128); tcg_debug_assert((memop & MO_SIGN) == 0); @@ -3277,9 +3304,39 @@ void tcg_gen_qemu_st_i128(TCGv_i128 val, TCGv addr, TCGArg idx, MemOp memop) tcg_gen_req_mo(TCG_MO_ST_LD | TCG_MO_ST_ST); addr = plugin_prep_mem_callbacks(addr); - /* TODO: allow the tcg backend to see the whole operation. */ + /* TODO: For now, force 32-bit hosts to use the helper. 
*/ - if (use_two_i64_for_i128(memop)) { + if (TCG_TARGET_HAS_qemu_ldst_i128 && TCG_TARGET_REG_BITS == 64) { + TCGv_i64 lo, hi; + TCGArg addr_arg; + MemOpIdx adj_oi; + bool need_bswap = false; + + if ((memop & MO_BSWAP) && !tcg_target_has_memory_bswap(memop)) { + lo = tcg_temp_new_i64(); + hi = tcg_temp_new_i64(); + tcg_gen_bswap64_i64(lo, TCGV128_HIGH(val)); + tcg_gen_bswap64_i64(hi, TCGV128_LOW(val)); + adj_oi = make_memop_idx(memop & ~MO_BSWAP, idx); + need_bswap = true; + } else { + lo = TCGV128_LOW(val); + hi = TCGV128_HIGH(val); + adj_oi = oi; + } + +#if TARGET_LONG_BITS == 32 + addr_arg = tcgv_i32_arg(addr); +#else + addr_arg = tcgv_i64_arg(addr); +#endif + tcg_gen_op4ii_i64(INDEX_op_qemu_st_i128, lo, hi, addr_arg, adj_oi); + + if (need_bswap) { + tcg_temp_free_i64(lo); + tcg_temp_free_i64(hi); + } + } else if (use_two_i64_for_i128(memop)) { MemOp mop[2]; TCGv addr_p8; TCGv_i64 x, y; diff --git a/tcg/tcg.c b/tcg/tcg.c index cb5ca9b612..b0e30a55ca 100644 --- a/tcg/tcg.c +++ b/tcg/tcg.c @@ -1718,6 +1718,10 @@ bool tcg_op_supported(TCGOpcode op) case INDEX_op_qemu_st8_i32: return TCG_TARGET_HAS_qemu_st8_i32; + case INDEX_op_qemu_ld_i128: + case INDEX_op_qemu_st_i128: + return TCG_TARGET_HAS_qemu_ldst_i128; + case INDEX_op_mov_i32: case INDEX_op_setcond_i32: case INDEX_op_brcond_i32: diff --git a/docs/devel/tcg-ops.rst b/docs/devel/tcg-ops.rst index f3f451b77f..6a166c5665 100644 --- a/docs/devel/tcg-ops.rst +++ b/docs/devel/tcg-ops.rst @@ -672,19 +672,20 @@ QEMU specific operations | This operation is optional. If the TCG backend does not implement the goto_ptr opcode, emitting this op is equivalent to emitting exit_tb(0). - * - qemu_ld_i32/i64 *t0*, *t1*, *flags*, *memidx* + * - qemu_ld_i32/i64/i128 *t0*, *t1*, *flags*, *memidx* - qemu_st_i32/i64 *t0*, *t1*, *flags*, *memidx* + qemu_st_i32/i64/i128 *t0*, *t1*, *flags*, *memidx* qemu_st8_i32 *t0*, *t1*, *flags*, *memidx* - | Load data at the guest address *t1* into *t0*, or store data in *t0* at guest - address *t1*. The _i32/_i64 size applies to the size of the input/output + address *t1*. The _i32/_i64/_i128 size applies to the size of the input/output register *t0* only. The address *t1* is always sized according to the guest, and the width of the memory operation is controlled by *flags*. | | Both *t0* and *t1* may be split into little-endian ordered pairs of registers - if dealing with 64-bit quantities on a 32-bit host. + if dealing with 64-bit quantities on a 32-bit host, or 128-bit quantities on + a 64-bit host. | | The *memidx* selects the qemu tlb index to use (e.g. user or kernel access). The flags are the MemOp bits, selecting the sign, width, and endianness @@ -693,6 +694,8 @@ QEMU specific operations | For a 32-bit host, qemu_ld/st_i64 is guaranteed to only be used with a 64-bit memory access specified in *flags*. | + | For qemu_ld/st_i128, these are only supported for a 64-bit host. + | | For i386, qemu_st8_i32 is exactly like qemu_st_i32, except the size of the memory operation is known to be 8-bit. This allows the backend to provide a different set of register constraints. 
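As a usage sketch (assumed frontend code, not part of the patch): a guest 16-byte load can now be issued as one op, and the tcg-op.c changes above route it to the backend when TCG_TARGET_HAS_qemu_ldst_i128 is set, or otherwise to the two-i64 split or the out-of-line helper. With every backend still defining the flag to 0 in this patch, the fallback paths are what actually run.

/* Hypothetical frontend helper: 'addr' is a guest virtual address and
 * 'mmu_idx' a TLB index valid for the current context. */
static void gen_ld16_le(TCGv_i128 val, TCGv addr, int mmu_idx)
{
    tcg_gen_qemu_ld_i128(val, addr, mmu_idx, MO_LE | MO_128 | MO_ALIGN);
}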
From patchwork Wed May 3 07:06:40 2023
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 678711
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 41/57] tcg: Support TCG_TYPE_I128 in tcg_out_{ld, st}_helper_{args, ret} Date: Wed, 3 May 2023 08:06:40 +0100 Message-Id: <20230503070656.1746170-42-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::329; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x329.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- tcg/tcg.c | 174 ++++++++++++++++++++++++++++++++++++++++++++++-------- 1 file changed, 148 insertions(+), 26 deletions(-) diff --git a/tcg/tcg.c b/tcg/tcg.c index b0e30a55ca..3905d3041c 100644 --- a/tcg/tcg.c +++ b/tcg/tcg.c @@ -206,6 +206,7 @@ static void * const qemu_ld_helpers[MO_SSIZE + 1] __attribute__((unused)) = { [MO_UQ] = helper_ldq_mmu, #if TCG_TARGET_REG_BITS == 64 [MO_SL] = helper_ldsl_mmu, + [MO_128] = helper_ld16_mmu, #endif }; @@ -214,6 +215,9 @@ static void * const qemu_st_helpers[MO_SIZE + 1] __attribute__((unused)) = { [MO_16] = helper_stw_mmu, [MO_32] = helper_stl_mmu, [MO_64] = helper_stq_mmu, +#if TCG_TARGET_REG_BITS == 64 + [MO_128] = helper_st16_mmu, +#endif }; TCGContext tcg_init_ctx; @@ -773,6 +777,15 @@ static TCGHelperInfo info_helper_ld64_mmu = { | dh_typemask(ptr, 4) /* uintptr_t ra */ }; +static TCGHelperInfo info_helper_ld128_mmu = { + .flags = TCG_CALL_NO_WG, + .typemask = dh_typemask(i128, 0) /* return Int128 */ + | dh_typemask(env, 1) + | dh_typemask(tl, 2) /* target_ulong addr */ + | dh_typemask(i32, 3) /* unsigned oi */ + | dh_typemask(ptr, 4) /* uintptr_t ra */ +}; + static TCGHelperInfo info_helper_st32_mmu = { .flags = TCG_CALL_NO_WG, .typemask = dh_typemask(void, 0) @@ -793,6 +806,16 @@ static TCGHelperInfo info_helper_st64_mmu = { | dh_typemask(ptr, 5) /* uintptr_t ra */ }; +static TCGHelperInfo info_helper_st128_mmu = { + .flags = TCG_CALL_NO_WG, + .typemask = dh_typemask(void, 0) + | dh_typemask(env, 1) + | dh_typemask(tl, 2) /* target_ulong addr */ + | dh_typemask(i128, 3) /* Int128 data */ + | dh_typemask(i32, 4) /* unsigned oi */ + | dh_typemask(ptr, 5) /* uintptr_t ra */ +}; + #ifdef CONFIG_TCG_INTERPRETER static ffi_type *typecode_to_ffi(int argmask) { @@ -1206,8 +1229,10 @@ static void tcg_context_init(unsigned max_cpus) init_call_layout(&info_helper_ld32_mmu); init_call_layout(&info_helper_ld64_mmu); + init_call_layout(&info_helper_ld128_mmu); init_call_layout(&info_helper_st32_mmu); init_call_layout(&info_helper_st64_mmu); + init_call_layout(&info_helper_st128_mmu); #ifdef CONFIG_TCG_INTERPRETER init_ffi_layouts(); @@ -5361,6 +5386,9 @@ static void tcg_out_ld_helper_args(TCGContext *s, const TCGLabelQemuLdst *ldst, case MO_64: info = &info_helper_ld64_mmu; break; + case MO_128: + info = &info_helper_ld128_mmu; + break; default: 
g_assert_not_reached(); } @@ -5375,8 +5403,33 @@ static void tcg_out_ld_helper_args(TCGContext *s, const TCGLabelQemuLdst *ldst, tcg_out_helper_load_slots(s, nmov, mov, parm); - /* No special attention for 32 and 64-bit return values. */ - tcg_debug_assert(info->out_kind == TCG_CALL_RET_NORMAL); + switch (info->out_kind) { + case TCG_CALL_RET_NORMAL: + case TCG_CALL_RET_BY_VEC: + break; + case TCG_CALL_RET_BY_REF: + /* + * The return reference is in the first argument slot. + * We need memory in which to return: re-use the top of stack. + */ + { + int ofs_slot0 = arg_slot_stk_ofs(0); + + if (arg_slot_reg_p(0)) { + tcg_out_addi_ptr(s, tcg_target_call_iarg_regs[0], + TCG_REG_CALL_STACK, ofs_slot0); + } else { + tcg_debug_assert(parm->ntmp != 0); + tcg_out_addi_ptr(s, parm->tmp[0], + TCG_REG_CALL_STACK, ofs_slot0); + tcg_out_st(s, TCG_TYPE_PTR, parm->tmp[0], + TCG_REG_CALL_STACK, ofs_slot0); + } + } + break; + default: + g_assert_not_reached(); + } tcg_out_helper_load_common_args(s, ldst, parm, info, next_arg); } @@ -5385,11 +5438,18 @@ static void tcg_out_ld_helper_ret(TCGContext *s, const TCGLabelQemuLdst *ldst, bool load_sign, const TCGLdstHelperParam *parm) { + MemOp mop = get_memop(ldst->oi); TCGMovExtend mov[2]; + int ofs_slot0; - if (ldst->type <= TCG_TYPE_REG) { - MemOp mop = get_memop(ldst->oi); + switch (ldst->type) { + case TCG_TYPE_I64: + if (TCG_TARGET_REG_BITS == 32) { + break; + } + /* fall through */ + case TCG_TYPE_I32: mov[0].dst = ldst->datalo_reg; mov[0].src = tcg_target_call_oarg_reg(TCG_CALL_RET_NORMAL, 0); mov[0].dst_type = ldst->type; @@ -5415,25 +5475,49 @@ static void tcg_out_ld_helper_ret(TCGContext *s, const TCGLabelQemuLdst *ldst, mov[0].src_ext = mop & MO_SSIZE; } tcg_out_movext1(s, mov); - } else { - assert(TCG_TARGET_REG_BITS == 32); + return; - mov[0].dst = ldst->datalo_reg; - mov[0].src = - tcg_target_call_oarg_reg(TCG_CALL_RET_NORMAL, HOST_BIG_ENDIAN); - mov[0].dst_type = TCG_TYPE_I32; - mov[0].src_type = TCG_TYPE_I32; - mov[0].src_ext = MO_32; + case TCG_TYPE_I128: + tcg_debug_assert(TCG_TARGET_REG_BITS == 64); + ofs_slot0 = arg_slot_stk_ofs(0); + switch (TCG_TARGET_CALL_RET_I128) { + case TCG_CALL_RET_NORMAL: + break; + case TCG_CALL_RET_BY_VEC: + tcg_out_st(s, TCG_TYPE_V128, + tcg_target_call_oarg_reg(TCG_CALL_RET_BY_VEC, 0), + TCG_REG_CALL_STACK, ofs_slot0); + /* fall through */ + case TCG_CALL_RET_BY_REF: + tcg_out_ld(s, TCG_TYPE_I64, ldst->datalo_reg, + TCG_REG_CALL_STACK, ofs_slot0 + 8 * HOST_BIG_ENDIAN); + tcg_out_ld(s, TCG_TYPE_I64, ldst->datahi_reg, + TCG_REG_CALL_STACK, ofs_slot0 + 8 * !HOST_BIG_ENDIAN); + return; + default: + g_assert_not_reached(); + } + break; - mov[1].dst = ldst->datahi_reg; - mov[1].src = - tcg_target_call_oarg_reg(TCG_CALL_RET_NORMAL, !HOST_BIG_ENDIAN); - mov[1].dst_type = TCG_TYPE_REG; - mov[1].src_type = TCG_TYPE_REG; - mov[1].src_ext = MO_32; - - tcg_out_movext2(s, mov, mov + 1, parm->ntmp ? parm->tmp[0] : -1); + default: + g_assert_not_reached(); } + + mov[0].dst = ldst->datalo_reg; + mov[0].src = + tcg_target_call_oarg_reg(TCG_CALL_RET_NORMAL, HOST_BIG_ENDIAN); + mov[0].dst_type = TCG_TYPE_I32; + mov[0].src_type = TCG_TYPE_I32; + mov[0].src_ext = TCG_TARGET_REG_BITS == 32 ? MO_32 : MO_64; + + mov[1].dst = ldst->datahi_reg; + mov[1].src = + tcg_target_call_oarg_reg(TCG_CALL_RET_NORMAL, !HOST_BIG_ENDIAN); + mov[1].dst_type = TCG_TYPE_REG; + mov[1].src_type = TCG_TYPE_REG; + mov[1].src_ext = TCG_TARGET_REG_BITS == 32 ? MO_32 : MO_64; + + tcg_out_movext2(s, mov, mov + 1, parm->ntmp ? 
parm->tmp[0] : -1); } static void tcg_out_st_helper_args(TCGContext *s, const TCGLabelQemuLdst *ldst, @@ -5457,6 +5541,10 @@ static void tcg_out_st_helper_args(TCGContext *s, const TCGLabelQemuLdst *ldst, info = &info_helper_st64_mmu; data_type = TCG_TYPE_I64; break; + case MO_128: + info = &info_helper_st128_mmu; + data_type = TCG_TYPE_I128; + break; default: g_assert_not_reached(); } @@ -5474,13 +5562,47 @@ static void tcg_out_st_helper_args(TCGContext *s, const TCGLabelQemuLdst *ldst, /* Handle data argument. */ loc = &info->in[next_arg]; - n = tcg_out_helper_add_mov(mov + nmov, loc, data_type, ldst->type, - ldst->datalo_reg, ldst->datahi_reg); - next_arg += n; - nmov += n; - tcg_debug_assert(nmov <= ARRAY_SIZE(mov)); + switch (loc->kind) { + case TCG_CALL_ARG_NORMAL: + case TCG_CALL_ARG_EXTEND_U: + case TCG_CALL_ARG_EXTEND_S: + n = tcg_out_helper_add_mov(mov + nmov, loc, data_type, ldst->type, + ldst->datalo_reg, ldst->datahi_reg); + next_arg += n; + nmov += n; + tcg_out_helper_load_slots(s, nmov, mov, parm); + break; + + case TCG_CALL_ARG_BY_REF: + tcg_debug_assert(TCG_TARGET_REG_BITS == 64); + tcg_debug_assert(data_type == TCG_TYPE_I128); + tcg_out_st(s, TCG_TYPE_I64, + HOST_BIG_ENDIAN ? ldst->datahi_reg : ldst->datalo_reg, + TCG_REG_CALL_STACK, arg_slot_stk_ofs(loc[0].ref_slot)); + tcg_out_st(s, TCG_TYPE_I64, + HOST_BIG_ENDIAN ? ldst->datalo_reg : ldst->datahi_reg, + TCG_REG_CALL_STACK, arg_slot_stk_ofs(loc[1].ref_slot)); + + tcg_out_helper_load_slots(s, nmov, mov, parm); + + if (arg_slot_reg_p(loc->arg_slot)) { + tcg_out_addi_ptr(s, tcg_target_call_iarg_regs[loc->arg_slot], + TCG_REG_CALL_STACK, + arg_slot_stk_ofs(loc->ref_slot)); + } else { + tcg_debug_assert(parm->ntmp != 0); + tcg_out_addi_ptr(s, parm->tmp[0], TCG_REG_CALL_STACK, + arg_slot_stk_ofs(loc->ref_slot)); + tcg_out_st(s, TCG_TYPE_PTR, parm->tmp[0], + TCG_REG_CALL_STACK, arg_slot_stk_ofs(loc->arg_slot)); + } + next_arg += 2; + break; + + default: + g_assert_not_reached(); + } - tcg_out_helper_load_slots(s, nmov, mov, parm); tcg_out_helper_load_common_args(s, ldst, parm, info, next_arg); } From patchwork Wed May 3 07:06:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678699 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp911420wrs; Wed, 3 May 2023 00:31:58 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ5HQF9RwVvQGOyGJcGRvCaqK7igK8PrTNX5y5DQ9c6sj0YoQrF4is/7kprYKbQjJ8/6krPu X-Received: by 2002:ad4:5ae7:0:b0:5b2:fb2:4b1d with SMTP id c7-20020ad45ae7000000b005b20fb24b1dmr10237602qvh.12.1683099117970; Wed, 03 May 2023 00:31:57 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683099117; cv=none; d=google.com; s=arc-20160816; b=lKBO6PslI0rlydHrvJR5r4vq0VA9JQAIclV9rJ5VpBevfyVKznpnqzjlHd2pq+uASg TqJXy0/9vY4Pxq6zhUsdh1+1s0v6VhYN1rIulB57Pnh6UOdMPFx6CK9M9AaGzNA/9UmR ng55srnrI//SCBrBMdRwkw+0Q9I4Q7kDL0hr4uWyQlFbWpJLHV1RRcLUebTgq5Kr8SCT g5JlWGtIQJb4XhnIPKcds58A3BgeZ44zBbf26suY6IEntazqbp03P1hvHJYKWa5EecZ3 XVBUuqvPbXusJl8zvAyUG25ydWegiQuV7njznxNDePJVRG+7KyUYe/K/ORzeLo9XcqN3 ggNw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=ml1dYN+0ocahvRRJROPmQB5pqTLfE2AuSNUfwF2mFh4=; b=edsAW7tF4kBLYqX155RNTz63OiSsVKa7FJn1KVHvz5FdPxFNIV5R1tSsEYQsWjMAL9 
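For reference, the two typemasks added for the 128-bit helpers above describe prototypes along the following lines. This is only a sketch read off the typemask comments (i128 return / i128 data, env, tl addr, i32 oi, ptr ra); the parameter type names are assumptions, not copied from the helper declarations:

    /* Sketch of the prototypes implied by info_helper_ld128_mmu and
     * info_helper_st128_mmu; argument type names are assumed. */
    Int128 helper_ld16_mmu(CPUArchState *env, target_ulong addr,
                           uint32_t oi, uintptr_t retaddr);
    void helper_st16_mmu(CPUArchState *env, target_ulong addr, Int128 data,
                         uint32_t oi, uintptr_t retaddr);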
hNQoRANlz0Yo0apunemrvCqNa016G1o8z69hT9R+yEfL63qNG3mUJAswV1CY4ZF7mkRp jN4g== X-Gm-Message-State: AC+VfDyHODjm8fI6kX2BhZwzyhZQ+OJ2LTSsDwR8DjHcZ/CFRKAw9Pez yULL+G9iBaIyLC3Q81EP6mT7vQbQHVQfzXbmz1ih8g== X-Received: by 2002:a1c:6a18:0:b0:3f1:7372:f98f with SMTP id f24-20020a1c6a18000000b003f17372f98fmr13373219wmc.41.1683097829891; Wed, 03 May 2023 00:10:29 -0700 (PDT) Received: from stoup.Home ([2a02:c7c:74db:8d00:c01d:9d74:b630:9087]) by smtp.gmail.com with ESMTPSA id v9-20020a05600c444900b003f173be2ccfsm54223673wmn.2.2023.05.03.00.10.29 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 03 May 2023 00:10:29 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org, qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 42/57] tcg: Introduce atom_and_align_for_opc Date: Wed, 3 May 2023 08:06:41 +0100 Message-Id: <20230503070656.1746170-43-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32a; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x32a.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Examine MemOp for atomicity and alignment, adjusting alignment as required to implement atomicity on the host. Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- tcg/tcg.c | 69 +++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 69 insertions(+) diff --git a/tcg/tcg.c b/tcg/tcg.c index 3905d3041c..2422da64ac 100644 --- a/tcg/tcg.c +++ b/tcg/tcg.c @@ -220,6 +220,11 @@ static void * const qemu_st_helpers[MO_SIZE + 1] __attribute__((unused)) = { #endif }; +static MemOp atom_and_align_for_opc(TCGContext *s, MemOp *p_atom_a, + MemOp *p_atom_u, MemOp opc, + MemOp host_atom, bool allow_two_ops) + __attribute__((unused)); + TCGContext tcg_init_ctx; __thread TCGContext *tcg_ctx; @@ -5123,6 +5128,70 @@ static void tcg_reg_alloc_call(TCGContext *s, TCGOp *op) } } +/* + * Return the alignment and atomicity to use for the inline fast path + * for the given memory operation. The alignment may be larger than + * that specified in @opc, and the correct alignment will be diagnosed + * by the slow path helper. + */ +static MemOp atom_and_align_for_opc(TCGContext *s, MemOp *p_atom_a, + MemOp *p_atom_u, MemOp opc, + MemOp host_atom, bool allow_two_ops) +{ + MemOp align = get_alignment_bits(opc); + MemOp atom, atmax, atmin, size = opc & MO_SIZE; + + /* When serialized, no further atomicity required. 
*/ + if (s->gen_tb->cflags & CF_PARALLEL) { + atom = opc & MO_ATOM_MASK; + } else { + atom = MO_ATOM_NONE; + } + + atmax = opc & MO_ATMAX_MASK; + if (atmax == MO_ATMAX_SIZE) { + atmax = size; + } else { + atmax = atmax >> MO_ATMAX_SHIFT; + } + + switch (atom) { + case MO_ATOM_NONE: + /* The operation requires no specific atomicity. */ + atmax = atmin = MO_8; + break; + case MO_ATOM_IFALIGN: + /* If unaligned, the subobjects are bytes. */ + atmin = MO_8; + break; + case MO_ATOM_WITHIN16: + /* If unaligned, there are subobjects if atmax < size. */ + atmin = (atmax < size ? atmax : MO_8); + atmax = size; + break; + case MO_ATOM_SUBALIGN: + /* If unaligned but not odd, there are subobjects up to atmax - 1. */ + atmin = (atmax == MO_8 ? MO_8 : atmax - 1); + break; + default: + g_assert_not_reached(); + } + + /* + * If there are subobjects, and the host model does not match, then we + * need to raise the initial alignment check. If the backend is prepared + * to double-check alignment and issue two half size ops, we need not + * raise initial alignment beyond half. + */ + if (atmin > MO_8 && host_atom != atom) { + align = MAX(align, size - allow_two_ops); + } + + *p_atom_a = atmax; + *p_atom_u = atmin; + return align; +} + /* * Similarly for qemu_ld/st slow path helpers. * We must re-implement tcg_gen_callN and tcg_reg_alloc_call simultaneously, From patchwork Wed May 3 07:06:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678635 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp906589wrs; Wed, 3 May 2023 00:17:05 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ7pNmIS/GMyyd1IlEVbn/trcCn4APIY4x0I6jF571vzlSoag43oEaf1W1wS7mnIoCkwvzuu X-Received: by 2002:ac8:5851:0:b0:3f2:1678:ceaa with SMTP id h17-20020ac85851000000b003f21678ceaamr16949775qth.46.1683098225320; Wed, 03 May 2023 00:17:05 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683098225; cv=none; d=google.com; s=arc-20160816; b=Fj+rhy7G10mfin5yynX0/pTImjsSb3t0X3ADI/P7+BUrU+Ebr1r8Mt3+d5+sQaEALf nM7daw1V5T/TJgJgPK0KZ9Wx8N1gHprDEWb6UQGnDuA0WtE5GXOYnslETbHaYb3L1EFt s7fU+oiTXsc1qYW5wnc7MNf2fzS0qLzu80oDHRQWsSaMCD0YiknumV5q8iOmwY4cZput f/pQd1XYY3mhidDszWqozCAeLLqtUtwAg61xEz7G+sQn3UP5EPILCkjMcChIKfp7H/O4 errUQf5q2sgLyRdECuEVIjLe0o79bL9lYDJYYS91hLGkSdTOqzobZNL1q9FMhFzZQ2BE FcRw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=Ie4qD4RsOKAembPT/VL7Wty2S+rmvIpYPQ8gnkDX74w=; b=MCuMO1PfHvP4UJCCksN665CdJ8TKdQVyFZY8y/OeUzndL2hzfnIcG9E6bXWNc58j6R SnmklVpCL3rrRvTZcOEuljerzRmbsL6KE6Nbrk2FlaRdSqtASLVl9TxUrSwIUFuGZop4 bMToFk2k/t7L7msdVeLmozPzWbo7cw/4stpr463S5uWt+hcOwRtPB7g2i7XVaKhJlPGQ vXYY8pGa+CyCWxttKDecmgioCKcm0Xy7SfqkuUAsJnMmtzaRUS/PEXtaaGctz+fyYeTT B5+6i+WPVt/N1LgKVpYadHwus0qaz/qBNDsMbJja3LsabMtA49ChMFNK+0mTmHJ8hgzH 0AlQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=ivJXbfqO; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
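The backend conversions later in this series all consume the new function in the same way; condensed, the caller side looks like the sketch below (names taken from the i386/arm hunks that follow, shown here only to make the contract concrete):

    MemOp atom_a, atom_u;
    /* The returned alignment may be larger than get_alignment_bits(opc);
     * the slow-path helper still diagnoses the guest-visible alignment. */
    MemOp a_bits = atom_and_align_for_opc(s, &atom_a, &atom_u, opc,
                                          MO_ATOM_IFALIGN, false);
    unsigned a_mask = (1u << a_bits) - 1;   /* drives the fast-path check */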
From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 43/57] tcg/i386: Use atom_and_align_for_opc Date: Wed, 3 May 2023 08:06:42 +0100 Message-Id: <20230503070656.1746170-44-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::329; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x329.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org No change to the ultimate load/store routines yet, so some atomicity conditions not yet honored, but plumbs the change to alignment through the relevant functions. Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- tcg/i386/tcg-target.c.inc | 34 ++++++++++++++++++++++------------ 1 file changed, 22 insertions(+), 12 deletions(-) diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc index 7c72bf6684..3e21f067d6 100644 --- a/tcg/i386/tcg-target.c.inc +++ b/tcg/i386/tcg-target.c.inc @@ -1774,6 +1774,8 @@ typedef struct { int index; int ofs; int seg; + MemOp align; + MemOp atom; } HostAddress; bool tcg_target_has_memory_bswap(MemOp memop) @@ -1895,8 +1897,12 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, { TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); - unsigned a_bits = get_alignment_bits(opc); - unsigned a_mask = (1 << a_bits) - 1; + MemOp atom_u; + unsigned a_mask; + + h->align = atom_and_align_for_opc(s, &h->atom, &atom_u, opc, + MO_ATOM_IFALIGN, false); + a_mask = (1 << h->align) - 1; #ifdef CONFIG_SOFTMMU int cmp_ofs = is_ld ? offsetof(CPUTLBEntry, addr_read) @@ -1941,10 +1947,12 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, TLB_MASK_TABLE_OFS(mem_index) + offsetof(CPUTLBDescFast, table)); - /* If the required alignment is at least as large as the access, simply - copy the address and mask. For lesser alignments, check that we don't - cross pages for the complete access. */ - if (a_bits >= s_bits) { + /* + * If the required alignment is at least as large as the access, simply + * copy the address and mask. For lesser alignments, check that we don't + * cross pages for the complete access. 
+ */ + if (a_mask >= s_mask) { tcg_out_mov(s, ttype, TCG_REG_L1, addrlo); } else { tcg_out_modrm_offset(s, OPC_LEA + trexw, TCG_REG_L1, @@ -1976,12 +1984,12 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_L0, TCG_REG_L0, offsetof(CPUTLBEntry, addend)); - *h = (HostAddress) { - .base = addrlo, - .index = TCG_REG_L0, - }; + h->base = addrlo; + h->index = TCG_REG_L0; + h->ofs = 0; + h->seg = 0; #else - if (a_bits) { + if (a_mask) { ldst = new_ldst_label(s); ldst->is_ld = is_ld; @@ -1996,8 +2004,10 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, s->code_ptr += 4; } - *h = x86_guest_base; h->base = addrlo; + h->index = x86_guest_base.index; + h->ofs = x86_guest_base.ofs; + h->seg = x86_guest_base.seg; #endif return ldst; From patchwork Wed May 3 07:06:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678632 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp905823wrs; Wed, 3 May 2023 00:14:52 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6ea7OKnDM04tCd8AYo5CatTU0xvBtm1HjzttC4KzJRNo0/pRWeH8H4gd2eY9kg9w2OXHYZ X-Received: by 2002:a05:622a:1906:b0:3ef:4a1d:4b99 with SMTP id w6-20020a05622a190600b003ef4a1d4b99mr31623550qtc.33.1683098092193; Wed, 03 May 2023 00:14:52 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683098092; cv=none; d=google.com; s=arc-20160816; b=J1tETD/4cIGPwzXyb6SIF8SuRQfT+PRrtJMFe96d7I7BZm/r3oLOxOcVUQDA5fcP8I qRihvb5nLJfBibkWWmZkUuBxudApCOYMdHYYTlBn6zSR/oBZYKc7arLaRejIABe1X5eK zbllpDIrovMkIVlq+DSyO5eel7msJhw8m4M9m+sAxBx0ohZrCyEruc0PFCn+7qlTJYSG wHDCM4KO1hbZfxWKqY3I5EiYh9RUM8ORKSU+ODPwzNz1IrMwdF9c7wJhgx5nNpNxU2sF 7IT2DPqyN9CA+PDDqOwhTrQC98XllujMG5ZMAuRi2g1ia5a+c98l8NnJkUS2STyMOKol 6dEQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=tw0I+ADtSsoTUZP6PXChoGX/kwSbfoaMV4Di5FGXG0I=; b=sfqGeOZsic5+zUFi2DTwjKb9yS/INvfcMxWbYCkqeh1IhB5rcRuWkbp6jJHWADLFUJ RpaPr1d3NutiBch22l2+zc5+Pasn4q0FUtzCxjtEALGuZPtFWhUvbWni8ykyyP6pDXV+ bIX56CX8XpqNSsQiwIhE0HAEYlneU9+rFB8U6DsdpYXDz04KEKNicqZL9wGejou3duAq DcW2UN5bhty+PBz7FX9uEuKmsL97G5+XwocqIi4bIpX5SMNE4KW3YuEjet4CuuKBzmAS 4nwzxTIkcQhTNBix0S0TZXEExjqXUBu/E5O0fgn0BkX4P2a76jLiQPdAsCk7ayxu+93I Et5g== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=fwc9zbfE; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
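One detail of the hunk above that is easy to miss: the test changes from a_bits >= s_bits to a_mask >= s_mask because only the mask is still at hand after the conversion. The two forms are equivalent, since (1 << n) - 1 is monotonic in n; for example, assuming a_bits = 3 and s_bits = 2:

    unsigned a_mask = (1u << 3) - 1;   /* 7 */
    unsigned s_mask = (1u << 2) - 1;   /* 3 */
    /* a_mask >= s_mask holds exactly when a_bits >= s_bits */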
From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 44/57] tcg/aarch64: Use atom_and_align_for_opc Date: Wed, 3 May 2023 08:06:43 +0100 Message-Id: <20230503070656.1746170-45-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::335; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x335.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- tcg/aarch64/tcg-target.c.inc | 38 +++++++++++++++++++----------------- 1 file changed, 20 insertions(+), 18 deletions(-) diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc index 8e5f3d3688..1d6d382edd 100644 --- a/tcg/aarch64/tcg-target.c.inc +++ b/tcg/aarch64/tcg-target.c.inc @@ -1593,6 +1593,8 @@ typedef struct { TCGReg base; TCGReg index; TCGType index_ext; + MemOp align; + MemOp atom; } HostAddress; bool tcg_target_has_memory_bswap(MemOp memop) @@ -1646,8 +1648,14 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, TCGType addr_type = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32; TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); - unsigned a_bits = get_alignment_bits(opc); - unsigned a_mask = (1u << a_bits) - 1; + MemOp atom_u; + unsigned a_mask; + + h->align = atom_and_align_for_opc(s, &h->atom, &atom_u, opc, + have_lse2 ? MO_ATOM_WITHIN16 + : MO_ATOM_IFALIGN, + false); + a_mask = (1 << h->align) - 1; #ifdef CONFIG_SOFTMMU unsigned s_bits = opc & MO_SIZE; @@ -1693,7 +1701,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, * bits within the address. For unaligned access, we check that we don't * cross pages using the address of the last byte of the access. 
*/ - if (a_bits >= s_bits) { + if (a_mask >= s_mask) { x3 = addr_reg; } else { tcg_out_insn(s, 3401, ADDI, TARGET_LONG_BITS == 64, @@ -1713,11 +1721,9 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst->label_ptr[0] = s->code_ptr; tcg_out_insn(s, 3202, B_C, TCG_COND_NE, 0); - *h = (HostAddress){ - .base = TCG_REG_X1, - .index = addr_reg, - .index_ext = addr_type - }; + h->base = TCG_REG_X1, + h->index = addr_reg; + h->index_ext = addr_type; #else if (a_mask) { ldst = new_ldst_label(s); @@ -1735,17 +1741,13 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, } if (USE_GUEST_BASE) { - *h = (HostAddress){ - .base = TCG_REG_GUEST_BASE, - .index = addr_reg, - .index_ext = addr_type - }; + h->base = TCG_REG_GUEST_BASE; + h->index = addr_reg; + h->index_ext = addr_type; } else { - *h = (HostAddress){ - .base = addr_reg, - .index = TCG_REG_XZR, - .index_ext = TCG_TYPE_I64 - }; + h->base = addr_reg; + h->index = TCG_REG_XZR; + h->index_ext = TCG_TYPE_I64; } #endif From patchwork Wed May 3 07:06:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678628 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp905338wrs; Wed, 3 May 2023 00:13:42 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ5Ex4tYT/w8dmY1HcaZFUBfi00d7OjeyUHTtT9OdchOPqLDuoQV5RE1ucGW+iv8jZ7pPv/X X-Received: by 2002:a05:6214:5194:b0:5ef:45b4:d4fd with SMTP id kl20-20020a056214519400b005ef45b4d4fdmr6780567qvb.47.1683098022380; Wed, 03 May 2023 00:13:42 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683098022; cv=none; d=google.com; s=arc-20160816; b=qwOO5QzIkSpaV7XuvR+enZeIWAKmNVyoe3Oza1NPHL5dzYhL9kVVyaKKgBzmSre5tD PWd8fUPkaQgmzQlK7VR2yTz25Jk+i5WcIUot7p4sIoy7Sc826gh7egSJ4o1uELNdmBHa OYiEkOmAcXD7D28r7Kbj3DOPlKmkGcU/KN4z4JtbgYRvzHtZH6j/LVwEAG45At8D7Rko 0LDHnEvp9486rejnH8p03wrq/2U6oyFGWNnJzQH9g1MjcVOCoq2eYLXTdf7jG1VT0avJ m+suCYH6y1AnPz/ka7kjEfWXsbOebcbhUZmJFaVH5Z4chMmC3QAgDWk3b28e16k/io8M gx8Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=xNYQj2Zg5TENytieXxeMxebRkQ90woYbMY4x/xns2OM=; b=oaM5Lz03SVtpz2sCFA5OcKEJ3Lr6ibITFP4H1fAK97yOdRbEHtVVVuIrr2RJP8lOIa PEtPgd8M5u/+AZrpWRyvZCEVabuox3MyDxl0lMxm5zWqR5Bl7lYdJtZu7Xg9qpfF3aln 7lkUwU1wTl/Ghpt49V0NfI3XooJdYQn8RdoSdo5ZRd2Q5y51BED45rWgmDsz3uNPehmw +a6OCEcc/OPTAtdClK/hhJnxeEXmNQP4uvtYQAHWTXeg2D9WztJ1123h+OZCefgFRM8J IM81w5aqUfTz34dYrqJsbfa2g2klhLhpZRpbXBfU8MrahbI7CavK2gRCroAa0o7wel3M sTmg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=UvG4Kq09; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
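The interesting part of the aarch64 hunk is the host-atomicity argument: FEAT_LSE2 (when have_lse2 is set) guarantees single-copy atomicity for accesses that do not cross an aligned 16-byte boundary, which matches MO_ATOM_WITHIN16; without it, only naturally aligned accesses are atomic, hence MO_ATOM_IFALIGN. Schematically:

    /* Sketch of the host-atomicity choice made in prepare_host_addr(). */
    MemOp host_atom = have_lse2
                    ? MO_ATOM_WITHIN16   /* atomic within an aligned 16-byte granule */
                    : MO_ATOM_IFALIGN;   /* atomic only when naturally aligned */
    h->align = atom_and_align_for_opc(s, &h->atom, &atom_u, opc,
                                      host_atom, false);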
From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 45/57] tcg/arm: Use atom_and_align_for_opc Date: Wed, 3 May 2023 08:06:44 +0100 Message-Id: <20230503070656.1746170-46-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::334; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x334.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org No change to the ultimate load/store routines yet, so some atomicity conditions not yet honored, but plumbs the change to alignment through the relevant functions. Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- tcg/arm/tcg-target.c.inc | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc index e5aed03247..edd995e04f 100644 --- a/tcg/arm/tcg-target.c.inc +++ b/tcg/arm/tcg-target.c.inc @@ -1323,6 +1323,8 @@ typedef struct { TCGReg base; int index; bool index_scratch; + MemOp align; + MemOp atom; } HostAddress; bool tcg_target_has_memory_bswap(MemOp memop) @@ -1379,8 +1381,12 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, { TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); - MemOp a_bits = get_alignment_bits(opc); - unsigned a_mask = (1 << a_bits) - 1; + MemOp a_bits, atom_a, atom_u; + unsigned a_mask; + + a_bits = atom_and_align_for_opc(s, &atom_a, &atom_u, opc, + MO_ATOM_IFALIGN, false); + a_mask = (1 << a_bits) - 1; #ifdef CONFIG_SOFTMMU int mem_index = get_mmuidx(oi); @@ -1498,6 +1504,9 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, }; #endif + h->align = a_bits; + h->atom = atom_a; + return ldst; } From patchwork Wed May 3 07:06:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678784 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp916848wrs; Wed, 3 May 2023 00:50:13 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6LK++Y9isc7TFkbTl1ywQ9KyitEZGQaRg0MmFmnbgh6e8UAZm6O1+utHpwBGHDCiyZkTKu X-Received: by 2002:a05:622a:11c4:b0:3ec:d85d:2afe with SMTP id n4-20020a05622a11c400b003ecd85d2afemr30623052qtk.2.1683100213392; Wed, 03 May 2023 00:50:13 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683100213; cv=none; d=google.com; s=arc-20160816; b=oxpmN0z791S/dzwKRZPKV65DuoYbc/rxLr73yF2u12E8CyQh5CPH+yYvxVsQALaTZS 0k1NOXv8O7zl8WIBRPbHrcJo8vCwmgXEdohE6VPV9LUEng4pDV+nsRHmqcJJg4JiOAKC CPIIdyLxNCb1JQ2ckvytETHUQ6Cw6XL6CQH0YBVr1HLjw0IKlZEmccjQzxC0TFGS7poW 1k0KSsaHbVQoYMg3ZFNHnZHrvNPPZ1rxqagb18qw64s3ShnxATgNQOghEVSGEQ82tEh5 KiorXls6WZM0jX3GXIiKuZ+d7WO4gU8mKRWQ9hXFiMzNWaVPyEJmbRMFm1BzlwCgZR6C 0+6Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; 
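To make the "plumbs the change to alignment through" remark concrete: at this point in the series the backends only act on the (possibly raised) alignment, while the atomicity result is merely stored in HostAddress for later patches, roughly:

    /* Sketch, mirroring the arm hunk above: only the adjusted alignment is
     * acted on so far; the atomicity is recorded for later use. */
    a_mask   = (1 << a_bits) - 1;   /* unaligned addresses take the slow path */
    h->align = a_bits;
    h->atom  = atom_a;              /* not yet consumed by tcg_out_qemu_ld/st */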
b=HlrzqRZuLHbS+UrmMaE2+qGEJrm3/acMnQd7jfAgQ2/Hvy1Z56wdd3qlqYgHh/xT19 NlDqm2S1e/zQ/S1S7t1LIU7AkcZRR4/+lnZhINFBN/8adpsWPi2Sz0kdauyYI6+yA2Uo W0g+5gH8+49B5LEFL3plP54g8CN7R7hX616H/HnozR38NhFh22IlpBFmsJhxsDmE2dst FQ84vnibpXmbsRMZwIQRPMBb4tCCog2Cse3JYN5ZErqHn+B2shYzZiHJIhQa8S22TK1s nSr1+VZvh53D+uBM3Oo3qRqIqoy/SlUcZpxP5w/DF6gS7LN9ulGssb6UIaZvgvIFHlNv w1eA== X-Gm-Message-State: AC+VfDwYWHW8oFMAvDeXVyai2j/H+6ZO4v/qfuB32/JVBp/dBZXhMj3L wWmyafoyvA741rrUH7WM0f6cSGZsl4rBInYG6LJ6ag== X-Received: by 2002:a7b:c047:0:b0:3f1:979f:a734 with SMTP id u7-20020a7bc047000000b003f1979fa734mr13807508wmc.11.1683097832611; Wed, 03 May 2023 00:10:32 -0700 (PDT) Received: from stoup.Home ([2a02:c7c:74db:8d00:c01d:9d74:b630:9087]) by smtp.gmail.com with ESMTPSA id v9-20020a05600c444900b003f173be2ccfsm54223673wmn.2.2023.05.03.00.10.32 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 03 May 2023 00:10:32 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org, qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 46/57] tcg/loongarch64: Use atom_and_align_for_opc Date: Wed, 3 May 2023 08:06:45 +0100 Message-Id: <20230503070656.1746170-47-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::332; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x332.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- tcg/loongarch64/tcg-target.c.inc | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc index 62bf823084..43341524f2 100644 --- a/tcg/loongarch64/tcg-target.c.inc +++ b/tcg/loongarch64/tcg-target.c.inc @@ -826,6 +826,8 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l) typedef struct { TCGReg base; TCGReg index; + MemOp align; + MemOp atom; } HostAddress; bool tcg_target_has_memory_bswap(MemOp memop) @@ -845,7 +847,11 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, { TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); - unsigned a_bits = get_alignment_bits(opc); + MemOp a_bits, atom_u; + + a_bits = atom_and_align_for_opc(s, &h->atom, &atom_u, opc, + MO_ATOM_IFALIGN, false); + h->align = a_bits; #ifdef CONFIG_SOFTMMU unsigned s_bits = opc & MO_SIZE; From patchwork Wed May 3 07:06:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678743 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp914333wrs; Wed, 3 May 2023 00:41:44 -0700 (PDT) X-Google-Smtp-Source: 
From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org,
qemu-arm@nongnu.org, qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 47/57] tcg/mips: Use atom_and_align_for_opc Date: Wed, 3 May 2023 08:06:46 +0100 Message-Id: <20230503070656.1746170-48-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::332; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x332.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- tcg/mips/tcg-target.c.inc | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc index cd0254a0d7..43a8ffac17 100644 --- a/tcg/mips/tcg-target.c.inc +++ b/tcg/mips/tcg-target.c.inc @@ -1139,6 +1139,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l) typedef struct { TCGReg base; MemOp align; + MemOp atom; } HostAddress; bool tcg_target_has_memory_bswap(MemOp memop) @@ -1158,11 +1159,16 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, { TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); - unsigned a_bits = get_alignment_bits(opc); + MemOp a_bits, atom_u; unsigned s_bits = opc & MO_SIZE; - unsigned a_mask = (1 << a_bits) - 1; + unsigned a_mask; TCGReg base; + a_bits = atom_and_align_for_opc(s, &h->atom, &atom_u, opc, + MO_ATOM_IFALIGN, false); + h->align = a_bits; + a_mask = (1 << a_bits) - 1; + #ifdef CONFIG_SOFTMMU unsigned s_mask = (1 << s_bits) - 1; int mem_index = get_mmuidx(oi); @@ -1281,7 +1287,6 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, #endif h->base = base; - h->align = a_bits; return ldst; } From patchwork Wed May 3 07:06:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678637 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp906644wrs; Wed, 3 May 2023 00:17:14 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ7KJXn32KkUUsbEfa5uLNAuJAhTj6v5h+TmmPjfWB+rDWcPNilcYvgkN29zOh9soKeCWnBr X-Received: by 2002:ad4:5f4a:0:b0:5ef:4565:a441 with SMTP id p10-20020ad45f4a000000b005ef4565a441mr9356159qvg.13.1683098233831; Wed, 03 May 2023 00:17:13 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683098233; cv=none; d=google.com; s=arc-20160816; b=hnlprGJZbA1CvhDT8KKCAk5z9OXytQ5a4bCKAyDQLgx0xNuiNiPb2Fg77+EW4rlKZP VS3+3z3+57NCMr/FxSfJqHs0gLa8vu0y+j0BW89LXDFyujQ2y6BK6nhbnV1Ih7xbVm8w 2uN77wAbPJBqfB2XNNm5vxehRHyCFgdEkLYC7Qn9rVBNC+OiyggQFoRsIwgNkktputIG P/XHbUfTice32Dka34mICnW/LRbglQRHqMcqc64IJcs7dBPisdCSIgyaIWuZsb7HPX2N 1f+m6TXm1v+b8k6Emfqqi4+TYjTRp2IUzpzoFf3v+bgtgMABaVJA9xo0q00w75ZniY1q /N+g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; 
b=hm45V+KzS0Acd3FwucpF3h2k2l86/5kz1s+HVdZzXsbprzNpDi7hQrT0tn4uo7n6JW ZwGHb4dwgozi0IGYmZnOovk07t9/Gjgzt7B44sXI1YiKEfC2PbT6DbF8v4+VNFUlfR+L 8nOc8S7Y5QgLeTtoNdBEs51S512V8Nz2YmnSxBEwR1sDRdCKDRTf+lItYqaYsLicy4PR 9O/KAfmm8s/eHnj0pMgnEqeVbMHKB44Heo5nxKJqWUuI7BBH8haSjN9uqPNv4RwLUmOe b4nTsXeG//Fl3vkfx/cpDtxqrko9FAszx+DtMv2THF+vO3QAkti94VOQFxbs0z2N4qMM LP7w== X-Gm-Message-State: AC+VfDwSLQjlt9pSwmfd8XfuA74KNgOVrIUHM0PuzoHsxYB1S7fJHmBb 02j3zMaZ13VXvkL+0jS5y8zLnJQi0rdMofBg08faFg== X-Received: by 2002:a7b:c85a:0:b0:3f0:9e2b:22de with SMTP id c26-20020a7bc85a000000b003f09e2b22demr14347444wml.22.1683097834050; Wed, 03 May 2023 00:10:34 -0700 (PDT) Received: from stoup.Home ([2a02:c7c:74db:8d00:c01d:9d74:b630:9087]) by smtp.gmail.com with ESMTPSA id v9-20020a05600c444900b003f173be2ccfsm54223673wmn.2.2023.05.03.00.10.33 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 03 May 2023 00:10:33 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org, qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 48/57] tcg/ppc: Use atom_and_align_for_opc Date: Wed, 3 May 2023 08:06:47 +0100 Message-Id: <20230503070656.1746170-49-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32e; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x32e.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson --- tcg/ppc/tcg-target.c.inc | 17 ++++++++++++++++- 1 file changed, 16 insertions(+), 1 deletion(-) diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc index f0a4118bbb..60375804cd 100644 --- a/tcg/ppc/tcg-target.c.inc +++ b/tcg/ppc/tcg-target.c.inc @@ -2034,7 +2034,22 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, { TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); - unsigned a_bits = get_alignment_bits(opc); + MemOp a_bits, atom_a, atom_u; + + /* + * Book II, Section 1.4, Single-Copy Atomicity, specifies: + * + * Before 3.0, "An access that is not atomic is performed as a set of + * smaller disjoint atomic accesses. In general, the number and alignment + * of these accesses are implementation-dependent." Thus MO_ATOM_IFALIGN. + * + * As of 3.0, "the non-atomic access is performed as described in + * the corresponding list", which matches MO_ATOM_SUBALIGN. + */ + a_bits = atom_and_align_for_opc(s, &atom_a, &atom_u, opc, + have_isa_3_00 ? 
MO_ATOM_SUBALIGN + : MO_ATOM_IFALIGN, + false); #ifdef CONFIG_SOFTMMU int mem_index = get_mmuidx(oi);
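The interesting part of the ppc change is that the atomicity policy handed to atom_and_align_for_opc() depends on the host ISA level, mirroring the Book II wording quoted in the comment above. A hedged sketch of that selection (the enum values echo QEMU's MO_ATOM_* names; the wrapper function is invented for illustration):

    /* Sketch: pick the weakest atomicity promise the host actually gives. */
    typedef enum { ATOM_IFALIGN, ATOM_SUBALIGN } AtomPolicy;

    static AtomPolicy ppc_atom_policy(int have_isa_3_00)
    {
        /*
         * Pre-3.0: an unaligned access splits into implementation-defined
         * pieces, so only aligned accesses are known atomic (IFALIGN).
         * 3.0+: the split is into naturally aligned sub-pieces, each of
         * which remains atomic (SUBALIGN).
         */
        return have_isa_3_00 ? ATOM_SUBALIGN : ATOM_IFALIGN;
    }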
From patchwork Wed May 3 07:06:48 2023
From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 49/57] tcg/riscv: Use atom_and_align_for_opc Date: Wed, 3 May 2023 08:06:48 +0100 Message-Id: <20230503070656.1746170-50-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32a; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x32a.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson --- tcg/riscv/tcg-target.c.inc | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/tcg/riscv/tcg-target.c.inc b/tcg/riscv/tcg-target.c.inc index 37870c89fc..4dd33c73e8 100644 --- a/tcg/riscv/tcg-target.c.inc +++ b/tcg/riscv/tcg-target.c.inc @@ -910,8 +910,12 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, TCGReg *pbase, { TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); - unsigned a_bits = get_alignment_bits(opc); - unsigned a_mask = (1u << a_bits) - 1; + MemOp a_bits, atom_a, atom_u; + unsigned a_mask; + + a_bits = atom_and_align_for_opc(s, &atom_a, &atom_u, opc, + MO_ATOM_IFALIGN, false); + a_mask = (1u << a_bits) - 1; #ifdef CONFIG_SOFTMMU unsigned s_bits = opc & MO_SIZE; From patchwork Wed May 3 07:06:49 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 678647 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp907638wrs; Wed, 3 May 2023 00:20:01 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6unHFnyUdde9OJsuQHX06d1kN396klb4vBMZ9R8xI9gpi5guffzl5YYd7gDWIxO8sUbBnf X-Received: by 2002:a05:622a:107:b0:3bf:a061:6cb1 with SMTP id u7-20020a05622a010700b003bfa0616cb1mr27522633qtw.46.1683098401516; Wed, 03 May 2023 00:20:01 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683098401; cv=none; d=google.com; s=arc-20160816; b=AcPESpVIzlk7MgZ3QxBJIClbjl/CdTEn/gfV1fDttc4+Tx1cpQ2PPUCFpICO321HBL ZqXG1DiMqOeUr3j3iWpsA/b9fUOiB3XNsTxmURQfvYPkicOWiaqUCxxQ1Ez0jugTAvEQ Ij3Ve6PavCm0dW8s3BlCOp5EYLkaNF5NciVf3r/R/ahdhcposjAS0h+6zlCywkww2Orl gzRvSuFSMfa9lUya5gnG3V6UvkVJhvJw3XXd7e+FjnvKTiYcvBrOdY3HPS4UjbtVEJwz Zo34ejEO7x0/GnY2m2SX5tukMcISBzaL1DLsGDo4oTWQXtnLill+8BjcdISDJymApEe9 29dA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=V5RMJ17ugwtLDyPRP13L9tNn+ld/75MDb1q/pfzv0DU=; b=jRApdbsaYwML/OUIL7mkrhnn6mEEzC7G5s45g63Vc4Xyd8HWZikL7acVXjJ3x5xLm3 RMmzn1uTJAEiJmyTX4tC2H+lDG8yzUVUFuN7wLxa/dAHFisJqKJqwe3p+sviEkMRfvLt vwXKPKzoQuolQReHAIRlbuFLCGyxcON+YZ1SW1MIcozt5VV+fNM8yJcYEIuUfsusR4it 
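The riscv conversion is mechanical: the alignment returned by atom_and_align_for_opc() feeds the usual power-of-two mask. As a quick worked example of that idiom (plain C, not backend code):

    #include <assert.h>

    int main(void)
    {
        unsigned a_bits = 2;                    /* require 4-byte alignment */
        unsigned a_mask = (1u << a_bits) - 1;   /* 0x3 */

        assert(a_mask == 0x3);
        assert((0x1000 & a_mask) == 0);         /* aligned: fast path */
        assert((0x1002 & a_mask) != 0);         /* misaligned: slow path */
        return 0;
    }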
X-Received: by 2002:a1c:f019:0:b0:3f1:78a7:6bd2 with SMTP id a25-20020a1cf019000000b003f178a76bd2mr14151889wmb.27.1683097835587; Wed, 03 May 2023 00:10:35 -0700 (PDT) Received: from stoup.Home ([2a02:c7c:74db:8d00:c01d:9d74:b630:9087]) by smtp.gmail.com with ESMTPSA id v9-20020a05600c444900b003f173be2ccfsm54223673wmn.2.2023.05.03.00.10.35 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 03 May 2023 00:10:35 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org, qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 50/57] tcg/s390x: Use atom_and_align_for_opc Date: Wed, 3 May 2023 08:06:49 +0100 Message-Id: <20230503070656.1746170-51-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::331; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x331.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- tcg/s390x/tcg-target.c.inc | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-) diff --git a/tcg/s390x/tcg-target.c.inc b/tcg/s390x/tcg-target.c.inc index 22f0206b5a..ddd9860a6a 100644 --- a/tcg/s390x/tcg-target.c.inc +++ b/tcg/s390x/tcg-target.c.inc @@ -1572,6 +1572,8 @@ typedef struct { TCGReg base; TCGReg index; int disp; + MemOp align; + MemOp atom; } HostAddress; bool tcg_target_has_memory_bswap(MemOp memop) @@ -1733,8 +1735,12 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, { TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); - unsigned a_bits = get_alignment_bits(opc); - unsigned a_mask = (1u << a_bits) - 1; + MemOp atom_u; + unsigned a_mask; + + h->align = atom_and_align_for_opc(s, &h->atom, &atom_u, opc, + MO_ATOM_IFALIGN, false); + a_mask = (1 << h->align) - 1; #ifdef CONFIG_SOFTMMU unsigned s_bits = opc & MO_SIZE; @@ -1764,7 +1770,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, * bits within the address. For unaligned access, we check that we don't * cross pages using the address of the last byte of the access. */ - a_off = (a_bits >= s_bits ? 0 : s_mask - a_mask); + a_off = (a_mask >= s_mask ? 0 : s_mask - a_mask); tlb_mask = (uint64_t)TARGET_PAGE_MASK | a_mask; if (a_off == 0) { tgen_andi_risbg(s, TCG_REG_R0, addr_reg, tlb_mask); @@ -1806,7 +1812,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst->addrlo_reg = addr_reg; /* We are expecting a_bits to max out at 7, much lower than TMLL. 
*/ - tcg_debug_assert(a_bits < 16); + tcg_debug_assert(a_mask <= 0xffff); tcg_out_insn(s, RI, TMLL, addr_reg, a_mask); tcg_out16(s, RI_BRC | (7 << 4)); /* CC in {1,2,3} */
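Two details in the s390x hunk are worth spelling out: TMLL can only test the low 16 address bits, hence the relaxed assertion on a_mask, and for accesses that are allowed to be unaligned the TLB comparison is done on the last byte of the access so a page crossing is still caught. A small sketch of the offset computation under those assumptions (the helper name is invented for illustration):

    #include <stdint.h>

    /* s_mask = access size - 1, a_mask = required alignment - 1 */
    static uint64_t tlb_compare_addr(uint64_t addr, unsigned s_mask,
                                     unsigned a_mask)
    {
        /* A sufficiently aligned access cannot cross a page, so compare the
         * address itself; otherwise compare the address of its last byte. */
        unsigned a_off = (a_mask >= s_mask) ? 0 : s_mask - a_mask;
        return addr + a_off;
    }

    /* Example: an 8-byte access (s_mask = 7) with no alignment requirement
     * (a_mask = 0) compares addr + 7 against the TLB entry. */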
From patchwork Wed May 3 07:06:50 2023
From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 51/57] tcg/sparc64: Use atom_and_align_for_opc Date: Wed, 3 May 2023 08:06:50 +0100 Message-Id: <20230503070656.1746170-52-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::332; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x332.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson --- tcg/sparc64/tcg-target.c.inc | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/tcg/sparc64/tcg-target.c.inc b/tcg/sparc64/tcg-target.c.inc index bb23038529..4f9ec02b1f 100644 --- a/tcg/sparc64/tcg-target.c.inc +++ b/tcg/sparc64/tcg-target.c.inc @@ -1028,11 +1028,13 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, { TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); - unsigned a_bits = get_alignment_bits(opc); - unsigned s_bits = opc & MO_SIZE; + MemOp s_bits = opc & MO_SIZE; + MemOp a_bits, atom_a, atom_u; unsigned a_mask; /* We don't support unaligned accesses. 
*/ + a_bits = atom_and_align_for_opc(s, &atom_a, &atom_u, opc, + MO_ATOM_IFALIGN, false); + a_bits = MAX(a_bits, s_bits); a_mask = (1u << a_bits) - 1;
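Because the sparc64 backend does not emit unaligned host accesses, the alignment obtained from atom_and_align_for_opc() is immediately raised to the access size, so anything less aligned than the access itself is routed to the slow path. A toy illustration of that policy (not backend code):

    static unsigned required_align_bits(unsigned a_bits, unsigned s_bits)
    {
        /* No unaligned accesses: never require less than natural alignment.
         * e.g. an 8-byte load (s_bits = 3) with a requested 1-byte alignment
         * still ends up checking 8-byte alignment. */
        return a_bits > s_bits ? a_bits : s_bits;
    }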
From patchwork Wed May 3 07:06:51 2023
From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 52/57] tcg/i386: Honor 64-bit atomicity in 32-bit mode Date: Wed, 3 May 2023 08:06:51 +0100 Message-Id: <20230503070656.1746170-53-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::329; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x329.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Use the fpu to perform 64-bit loads and stores. Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- tcg/i386/tcg-target.c.inc | 44 +++++++++++++++++++++++++++++++++------ 1 file changed, 38 insertions(+), 6 deletions(-) diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc index 3e21f067d6..5c6c64c48a 100644 --- a/tcg/i386/tcg-target.c.inc +++ b/tcg/i386/tcg-target.c.inc @@ -468,6 +468,10 @@ static bool tcg_target_const_match(int64_t val, TCGType type, int ct) #define OPC_GRP5 (0xff) #define OPC_GRP14 (0x73 | P_EXT | P_DATA16) +#define OPC_ESCDF (0xdf) +#define ESCDF_FILD_m64 5 +#define ESCDF_FISTP_m64 7 + /* Group 1 opcode extensions for 0x80-0x83. These are also used as modifiers for OPC_ARITH. */ #define ARITH_ADD 0 @@ -2091,7 +2095,20 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg datalo, TCGReg datahi, datalo = datahi; datahi = t; } - if (h.base == datalo || h.index == datalo) { + if (h.atom == MO_64) { + /* + * Atomicity requires that we use use a single 8-byte load. + * For simplicity and code size, always use the FPU for this. + * Similar insns using SSE/AVX are merely larger. + * Load from memory in one go, then store back to the stack, + * from whence we can load into the correct integer regs. + */ + tcg_out_modrm_sib_offset(s, OPC_ESCDF + h.seg, ESCDF_FILD_m64, + h.base, h.index, 0, h.ofs); + tcg_out_modrm_offset(s, OPC_ESCDF, ESCDF_FISTP_m64, TCG_REG_ESP, 0); + tcg_out_modrm_offset(s, movop, datalo, TCG_REG_ESP, 0); + tcg_out_modrm_offset(s, movop, datahi, TCG_REG_ESP, 4); + } else if (h.base == datalo || h.index == datalo) { tcg_out_modrm_sib_offset(s, OPC_LEA, datahi, h.base, h.index, 0, h.ofs); tcg_out_modrm_offset(s, movop + h.seg, datalo, datahi, 0); @@ -2161,12 +2178,27 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi, if (TCG_TARGET_REG_BITS == 64) { tcg_out_modrm_sib_offset(s, movop + P_REXW + h.seg, datalo, h.base, h.index, 0, h.ofs); + break; + } + if (use_movbe) { + TCGReg t = datalo; + datalo = datahi; + datahi = t; + } + if (h.atom == MO_64) { + /* + * Atomicity requires that we use use one 8-byte store. + * For simplicity, and code size, always use the FPU for this. + * Similar insns using SSE/AVX are merely larger. + * Assemble the 8-byte quantity in required endianness + * on the stack, load to coproc unit, and store. 
+ tcg_out_modrm_offset(s, movop, datalo, TCG_REG_ESP, 0); + tcg_out_modrm_offset(s, movop, datahi, TCG_REG_ESP, 4); + tcg_out_modrm_offset(s, OPC_ESCDF, ESCDF_FILD_m64, TCG_REG_ESP, 0); + tcg_out_modrm_sib_offset(s, OPC_ESCDF + h.seg, ESCDF_FISTP_m64, + h.base, h.index, 0, h.ofs); } else { - if (use_movbe) { - TCGReg t = datalo; - datalo = datahi; - datahi = t; - } tcg_out_modrm_sib_offset(s, movop + h.seg, datalo, h.base, h.index, 0, h.ofs); tcg_out_modrm_sib_offset(s, movop + h.seg, datahi,
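The trick patch 52 relies on is that a single x87 fild/fistp pair moves 8 bytes in one memory access each, which is what gives 64-bit single-copy atomicity on a 32-bit host where two 32-bit integer moves could tear. A stand-alone illustration of the same idea in GCC-style inline assembly (conceptual only; QEMU emits the equivalent instruction bytes directly, and the function below is not part of the series):

    #include <stdint.h>

    /* Read a 64-bit value in one shot on IA-32 using the x87 unit.
     * fildll performs a single 8-byte load; fistpll stores the value
     * back out as a single 8-byte store.  64-bit integers round-trip
     * exactly through the 80-bit x87 format, so nothing is lost. */
    static uint64_t read_u64_once(const volatile uint64_t *p)
    {
        uint64_t out;
        __asm__ volatile("fildll %1\n\t"
                         "fistpll %0"
                         : "=m"(out)
                         : "m"(*p));
        return out;
    }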
From patchwork Wed May 3 07:06:52 2023
From: Richard Henderson To: qemu-devel@nongnu.org Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 53/57] tcg/i386: Support 128-bit load/store with have_atomic16 Date: Wed, 3 May 2023 08:06:52 +0100 Message-Id: <20230503070656.1746170-54-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::331; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x331.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- tcg/i386/tcg-target.h | 3 +- tcg/i386/tcg-target.c.inc | 184 +++++++++++++++++++++++++++++++++++++- 2 files changed, 182 insertions(+), 5 deletions(-) diff --git a/tcg/i386/tcg-target.h b/tcg/i386/tcg-target.h index 943af6775e..7f69997e30 100644 --- a/tcg/i386/tcg-target.h +++ b/tcg/i386/tcg-target.h @@ -194,7 +194,8 @@ extern bool have_atomic16; #define TCG_TARGET_HAS_qemu_st8_i32 1 #endif -#define TCG_TARGET_HAS_qemu_ldst_i128 0 +#define TCG_TARGET_HAS_qemu_ldst_i128 \ + (TCG_TARGET_REG_BITS == 64 && have_atomic16) /* We do not support older SSE systems, only beginning with AVX1. */ #define TCG_TARGET_HAS_v64 have_avx1 diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc index 5c6c64c48a..a2739977a6 100644 --- a/tcg/i386/tcg-target.c.inc +++ b/tcg/i386/tcg-target.c.inc @@ -91,6 +91,8 @@ static const int tcg_target_reg_alloc_order[] = { #endif }; +#define TCG_TMP_VEC TCG_REG_XMM5 + static const int tcg_target_call_iarg_regs[] = { #if TCG_TARGET_REG_BITS == 64 #if defined(_WIN64) @@ -347,6 +349,8 @@ static bool tcg_target_const_match(int64_t val, TCGType type, int ct) #define OPC_PCMPGTW (0x65 | P_EXT | P_DATA16) #define OPC_PCMPGTD (0x66 | P_EXT | P_DATA16) #define OPC_PCMPGTQ (0x37 | P_EXT38 | P_DATA16) +#define OPC_PEXTRD (0x16 | P_EXT3A | P_DATA16) +#define OPC_PINSRD (0x22 | P_EXT3A | P_DATA16) #define OPC_PMAXSB (0x3c | P_EXT38 | P_DATA16) #define OPC_PMAXSW (0xee | P_EXT | P_DATA16) #define OPC_PMAXSD (0x3d | P_EXT38 | P_DATA16) @@ -1784,7 +1788,22 @@ typedef struct { bool tcg_target_has_memory_bswap(MemOp memop) { - return have_movbe; + MemOp atom_a, atom_u; + + if (!have_movbe) { + return false; + } + if ((memop & MO_SIZE) <= MO_64) { + return true; + } + + /* + * Reject 16-byte memop with 16-byte atomicity, i.e. VMOVDQA, + * but do allow a pair of 64-bit operations, i.e. MOVBEQ. + */ + (void)atom_and_align_for_opc(tcg_ctx, &atom_a, &atom_u, memop, + MO_ATOM_IFALIGN, true); + return atom_a <= MO_64; } /* @@ -1812,6 +1831,30 @@ static const TCGLdstHelperParam ldst_helper_param = { static const TCGLdstHelperParam ldst_helper_param = { }; #endif +static void tcg_out_vec_to_pair(TCGContext *s, TCGType type, + TCGReg l, TCGReg h, TCGReg v) +{ + int rexw = type == TCG_TYPE_I32 ? 
0 : P_REXW; + + /* vpmov{d,q} %v, %l */ + tcg_out_vex_modrm(s, OPC_MOVD_EyVy + rexw, v, 0, l); + /* vpextr{d,q} $1, %v, %h */ + tcg_out_vex_modrm(s, OPC_PEXTRD + rexw, v, 0, h); + tcg_out8(s, 1); +} + +static void tcg_out_pair_to_vec(TCGContext *s, TCGType type, + TCGReg v, TCGReg l, TCGReg h) +{ + int rexw = type == TCG_TYPE_I32 ? 0 : P_REXW; + + /* vmov{d,q} %l, %v */ + tcg_out_vex_modrm(s, OPC_MOVD_VyEy + rexw, v, 0, l); + /* vpinsr{d,q} $1, %h, %v, %v */ + tcg_out_vex_modrm(s, OPC_PINSRD + rexw, v, v, h); + tcg_out8(s, 1); +} + /* * Generate code for the slow path for a load at the end of block */ @@ -1901,11 +1944,12 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, { TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); - MemOp atom_u; + MemOp atom_u, s_bits; unsigned a_mask; + s_bits = opc & MO_SIZE; h->align = atom_and_align_for_opc(s, &h->atom, &atom_u, opc, - MO_ATOM_IFALIGN, false); + MO_ATOM_IFALIGN, s_bits == MO_128); a_mask = (1 << h->align) - 1; #ifdef CONFIG_SOFTMMU @@ -1915,7 +1959,6 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, TCGType tlbtype = TCG_TYPE_I32; int trexw = 0, hrexw = 0, tlbrexw = 0; unsigned mem_index = get_mmuidx(oi); - unsigned s_bits = opc & MO_SIZE; unsigned s_mask = (1 << s_bits) - 1; target_ulong tlb_mask; @@ -2120,6 +2163,69 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg datalo, TCGReg datahi, h.base, h.index, 0, h.ofs + 4); } break; + + case MO_128: + { + TCGLabel *l1 = NULL, *l2 = NULL; + bool use_pair = h.align < MO_128; + + tcg_debug_assert(TCG_TARGET_REG_BITS == 64); + + if (!use_pair) { + tcg_debug_assert(!use_movbe); + /* + * Atomicity requires that we use use VMOVDQA. + * If we've already checked for 16-byte alignment, that's all + * we need. If we arrive here with lesser alignment, then we + * have determined that less than 16-byte alignment can be + * satisfied with two 8-byte loads. + */ + if (h.align < MO_128) { + use_pair = true; + l1 = gen_new_label(); + l2 = gen_new_label(); + + tcg_out_testi(s, h.base, 15); + tcg_out_jxx(s, JCC_JNE, l2, true); + } + + tcg_out_vex_modrm_sib_offset(s, OPC_MOVDQA_VxWx + h.seg, + TCG_TMP_VEC, 0, + h.base, h.index, 0, h.ofs); + tcg_out_vec_to_pair(s, TCG_TYPE_I64, datalo, + datahi, TCG_TMP_VEC); + + if (use_pair) { + tcg_out_jxx(s, JCC_JMP, l1, true); + tcg_out_label(s, l2); + } + } + if (use_pair) { + if (use_movbe) { + TCGReg t = datalo; + datalo = datahi; + datahi = t; + } + if (h.base == datalo || h.index == datalo) { + tcg_out_modrm_sib_offset(s, OPC_LEA, datahi, + h.base, h.index, 0, h.ofs); + tcg_out_modrm_offset(s, movop + P_REXW + h.seg, + datalo, datahi, 0); + tcg_out_modrm_offset(s, movop + P_REXW + h.seg, + datahi, datahi, 8); + } else { + tcg_out_modrm_sib_offset(s, movop + P_REXW + h.seg, datalo, + h.base, h.index, 0, h.ofs); + tcg_out_modrm_sib_offset(s, movop + P_REXW + h.seg, datahi, + h.base, h.index, 0, h.ofs + 8); + } + } + if (l1) { + tcg_out_label(s, l1); + } + } + break; + default: g_assert_not_reached(); } @@ -2205,6 +2311,60 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi, h.base, h.index, 0, h.ofs + 4); } break; + + case MO_128: + { + TCGLabel *l1 = NULL, *l2 = NULL; + bool use_pair = h.align < MO_128; + + tcg_debug_assert(TCG_TARGET_REG_BITS == 64); + + if (!use_pair) { + tcg_debug_assert(!use_movbe); + /* + * Atomicity requires that we use use VMOVDQA. + * If we've already checked for 16-byte alignment, that's all + * we need. 
If we arrive here with lesser alignment, then we + * have determined that less that 16-byte alignment can be + * satisfied with two 8-byte loads. + */ + if (h.align < MO_128) { + use_pair = true; + l1 = gen_new_label(); + l2 = gen_new_label(); + + tcg_out_testi(s, h.base, 15); + tcg_out_jxx(s, JCC_JNE, l2, true); + } + + tcg_out_pair_to_vec(s, TCG_TYPE_I64, TCG_TMP_VEC, + datalo, datahi); + tcg_out_vex_modrm_sib_offset(s, OPC_MOVDQA_WxVx + h.seg, + TCG_TMP_VEC, 0, + h.base, h.index, 0, h.ofs); + + if (use_pair) { + tcg_out_jxx(s, JCC_JMP, l1, true); + tcg_out_label(s, l2); + } + } + if (use_pair) { + if (use_movbe) { + TCGReg t = datalo; + datalo = datahi; + datahi = t; + } + tcg_out_modrm_sib_offset(s, movop + P_REXW + h.seg, datalo, + h.base, h.index, 0, h.ofs); + tcg_out_modrm_sib_offset(s, movop + P_REXW + h.seg, datahi, + h.base, h.index, 0, h.ofs + 8); + } + if (l1) { + tcg_out_label(s, l1); + } + } + break; + default: g_assert_not_reached(); } @@ -2528,6 +2688,10 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, tcg_out_qemu_ld(s, a0, a1, a2, args[3], args[4], TCG_TYPE_I64); } break; + case INDEX_op_qemu_ld_i128: + tcg_debug_assert(TCG_TARGET_REG_BITS == 64); + tcg_out_qemu_ld(s, a0, a1, a2, -1, args[3], TCG_TYPE_I128); + break; case INDEX_op_qemu_st_i32: case INDEX_op_qemu_st8_i32: if (TCG_TARGET_REG_BITS >= TARGET_LONG_BITS) { @@ -2545,6 +2709,10 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, tcg_out_qemu_st(s, a0, a1, a2, args[3], args[4], TCG_TYPE_I64); } break; + case INDEX_op_qemu_st_i128: + tcg_debug_assert(TCG_TARGET_REG_BITS == 64); + tcg_out_qemu_st(s, a0, a1, a2, -1, args[3], TCG_TYPE_I128); + break; OP_32_64(mulu2): tcg_out_modrm(s, OPC_GRP3_Ev + rexw, EXT3_MUL, args[3]); @@ -3239,6 +3407,13 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) : TARGET_LONG_BITS <= TCG_TARGET_REG_BITS ? C_O0_I3(L, L, L) : C_O0_I4(L, L, L, L)); + case INDEX_op_qemu_ld_i128: + tcg_debug_assert(TCG_TARGET_REG_BITS == 64); + return C_O2_I1(r, r, L); + case INDEX_op_qemu_st_i128: + tcg_debug_assert(TCG_TARGET_REG_BITS == 64); + return C_O0_I3(L, L, L); + case INDEX_op_brcond2_i32: return C_O0_I4(r, r, ri, ri); @@ -4095,6 +4270,7 @@ static void tcg_target_init(TCGContext *s) s->reserved_regs = 0; tcg_regset_set_reg(s->reserved_regs, TCG_REG_CALL_STACK); + tcg_regset_set_reg(s->reserved_regs, TCG_TMP_VEC); #ifdef _WIN64 /* These are call saved, and we don't save them, so don't use them. 
*/ tcg_regset_set_reg(s->reserved_regs, TCG_REG_XMM6);
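Patch 53's generated code boils down to a runtime dispatch: if the address turns out to be 16-byte aligned, one VMOVDQA gives an atomic 16-byte access on hosts where have_atomic16 holds; otherwise the required atomicity has already been shown to be at most 8 bytes and a pair of 8-byte accesses suffices. A rough user-level analogue with SSE intrinsics, purely for illustration (the helper, its types, and the plain-pointer interface are not from the patch):

    #include <emmintrin.h>   /* _mm_load_si128 */
    #include <smmintrin.h>   /* _mm_extract_epi64 (SSE4.1) */
    #include <stdint.h>

    typedef struct { uint64_t lo, hi; } U128Pair;

    static U128Pair load_16_bytes(const void *p)
    {
        U128Pair r;
        if (((uintptr_t)p & 15) == 0) {
            /* Aligned: one 16-byte vector load (MOVDQA), then split the
             * halves out of the vector register, much as the patch's
             * tcg_out_vec_to_pair() does with vmovq/vpextrq. */
            __m128i v = _mm_load_si128((const __m128i *)p);
            r.lo = (uint64_t)_mm_cvtsi128_si64(v);
            r.hi = (uint64_t)_mm_extract_epi64(v, 1);
        } else {
            /* Not 16-byte aligned: fall back to two 8-byte loads. */
            const uint64_t *q = (const uint64_t *)p;
            r.lo = q[0];
            r.hi = q[1];
        }
        return r;
    }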

From patchwork Wed May 3 07:06:53 2023
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 678624
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org,
qemu-arm@nongnu.org, qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 54/57] tcg/aarch64: Rename temporaries Date: Wed, 3 May 2023 08:06:53 +0100 Message-Id: <20230503070656.1746170-55-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32a; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x32a.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org We will need to allocate a second general-purpose temporary. Rename the existing temps to add a distinguishing number. Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- tcg/aarch64/tcg-target.c.inc | 50 ++++++++++++++++++------------------ 1 file changed, 25 insertions(+), 25 deletions(-) diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc index 1d6d382edd..76a6bfd202 100644 --- a/tcg/aarch64/tcg-target.c.inc +++ b/tcg/aarch64/tcg-target.c.inc @@ -80,8 +80,8 @@ static TCGReg tcg_target_call_oarg_reg(TCGCallReturnKind kind, int slot) bool have_lse; bool have_lse2; -#define TCG_REG_TMP TCG_REG_X30 -#define TCG_VEC_TMP TCG_REG_V31 +#define TCG_REG_TMP0 TCG_REG_X30 +#define TCG_VEC_TMP0 TCG_REG_V31 #ifndef CONFIG_SOFTMMU /* Note that XZR cannot be encoded in the address base register slot, @@ -998,7 +998,7 @@ static bool tcg_out_dup_vec(TCGContext *s, TCGType type, unsigned vece, static bool tcg_out_dupm_vec(TCGContext *s, TCGType type, unsigned vece, TCGReg r, TCGReg base, intptr_t offset) { - TCGReg temp = TCG_REG_TMP; + TCGReg temp = TCG_REG_TMP0; if (offset < -0xffffff || offset > 0xffffff) { tcg_out_movi(s, TCG_TYPE_PTR, temp, offset); @@ -1150,8 +1150,8 @@ static void tcg_out_ldst(TCGContext *s, AArch64Insn insn, TCGReg rd, } /* Worst-case scenario, move offset to temp register, use reg offset. 
*/ - tcg_out_movi(s, TCG_TYPE_I64, TCG_REG_TMP, offset); - tcg_out_ldst_r(s, insn, rd, rn, TCG_TYPE_I64, TCG_REG_TMP); + tcg_out_movi(s, TCG_TYPE_I64, TCG_REG_TMP0, offset); + tcg_out_ldst_r(s, insn, rd, rn, TCG_TYPE_I64, TCG_REG_TMP0); } static bool tcg_out_mov(TCGContext *s, TCGType type, TCGReg ret, TCGReg arg) @@ -1367,8 +1367,8 @@ static void tcg_out_call_int(TCGContext *s, const tcg_insn_unit *target) if (offset == sextract64(offset, 0, 26)) { tcg_out_insn(s, 3206, BL, offset); } else { - tcg_out_movi(s, TCG_TYPE_I64, TCG_REG_TMP, (intptr_t)target); - tcg_out_insn(s, 3207, BLR, TCG_REG_TMP); + tcg_out_movi(s, TCG_TYPE_I64, TCG_REG_TMP0, (intptr_t)target); + tcg_out_insn(s, 3207, BLR, TCG_REG_TMP0); } } @@ -1505,7 +1505,7 @@ static void tcg_out_addsub2(TCGContext *s, TCGType ext, TCGReg rl, AArch64Insn insn; if (rl == ah || (!const_bh && rl == bh)) { - rl = TCG_REG_TMP; + rl = TCG_REG_TMP0; } if (const_bl) { @@ -1522,7 +1522,7 @@ static void tcg_out_addsub2(TCGContext *s, TCGType ext, TCGReg rl, possibility of adding 0+const in the low part, and the immediate add instructions encode XSP not XZR. Don't try anything more elaborate here than loading another zero. */ - al = TCG_REG_TMP; + al = TCG_REG_TMP0; tcg_out_movi(s, ext, al, 0); } tcg_out_insn_3401(s, insn, ext, rl, al, bl); @@ -1563,7 +1563,7 @@ static void tcg_out_cltz(TCGContext *s, TCGType ext, TCGReg d, { TCGReg a1 = a0; if (is_ctz) { - a1 = TCG_REG_TMP; + a1 = TCG_REG_TMP0; tcg_out_insn(s, 3507, RBIT, ext, a1, a0); } if (const_b && b == (ext ? 64 : 32)) { @@ -1572,7 +1572,7 @@ static void tcg_out_cltz(TCGContext *s, TCGType ext, TCGReg d, AArch64Insn sel = I3506_CSEL; tcg_out_cmp(s, ext, a0, 0, 1); - tcg_out_insn(s, 3507, CLZ, ext, TCG_REG_TMP, a1); + tcg_out_insn(s, 3507, CLZ, ext, TCG_REG_TMP0, a1); if (const_b) { if (b == -1) { @@ -1585,7 +1585,7 @@ static void tcg_out_cltz(TCGContext *s, TCGType ext, TCGReg d, b = d; } } - tcg_out_insn_3506(s, sel, ext, d, TCG_REG_TMP, b, TCG_COND_NE); + tcg_out_insn_3506(s, sel, ext, d, TCG_REG_TMP0, b, TCG_COND_NE); } } @@ -1603,7 +1603,7 @@ bool tcg_target_has_memory_bswap(MemOp memop) } static const TCGLdstHelperParam ldst_helper_param = { - .ntmp = 1, .tmp = { TCG_REG_TMP } + .ntmp = 1, .tmp = { TCG_REG_TMP0 } }; static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb) @@ -1864,7 +1864,7 @@ static void tcg_out_goto_tb(TCGContext *s, int which) set_jmp_insn_offset(s, which); tcg_out32(s, I3206_B); - tcg_out_insn(s, 3207, BR, TCG_REG_TMP); + tcg_out_insn(s, 3207, BR, TCG_REG_TMP0); set_jmp_reset_offset(s, which); } @@ -1883,7 +1883,7 @@ void tb_target_set_jmp_target(const TranslationBlock *tb, int n, ptrdiff_t i_offset = i_addr - jmp_rx; /* Note that we asserted this in range in tcg_out_goto_tb. 
*/ - insn = deposit32(I3305_LDR | TCG_REG_TMP, 5, 19, i_offset >> 2); + insn = deposit32(I3305_LDR | TCG_REG_TMP0, 5, 19, i_offset >> 2); } qatomic_set((uint32_t *)jmp_rw, insn); flush_idcache_range(jmp_rx, jmp_rw, 4); @@ -2079,13 +2079,13 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, case INDEX_op_rem_i64: case INDEX_op_rem_i32: - tcg_out_insn(s, 3508, SDIV, ext, TCG_REG_TMP, a1, a2); - tcg_out_insn(s, 3509, MSUB, ext, a0, TCG_REG_TMP, a2, a1); + tcg_out_insn(s, 3508, SDIV, ext, TCG_REG_TMP0, a1, a2); + tcg_out_insn(s, 3509, MSUB, ext, a0, TCG_REG_TMP0, a2, a1); break; case INDEX_op_remu_i64: case INDEX_op_remu_i32: - tcg_out_insn(s, 3508, UDIV, ext, TCG_REG_TMP, a1, a2); - tcg_out_insn(s, 3509, MSUB, ext, a0, TCG_REG_TMP, a2, a1); + tcg_out_insn(s, 3508, UDIV, ext, TCG_REG_TMP0, a1, a2); + tcg_out_insn(s, 3509, MSUB, ext, a0, TCG_REG_TMP0, a2, a1); break; case INDEX_op_shl_i64: @@ -2129,8 +2129,8 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, if (c2) { tcg_out_rotl(s, ext, a0, a1, a2); } else { - tcg_out_insn(s, 3502, SUB, 0, TCG_REG_TMP, TCG_REG_XZR, a2); - tcg_out_insn(s, 3508, RORV, ext, a0, a1, TCG_REG_TMP); + tcg_out_insn(s, 3502, SUB, 0, TCG_REG_TMP0, TCG_REG_XZR, a2); + tcg_out_insn(s, 3508, RORV, ext, a0, a1, TCG_REG_TMP0); } break; @@ -2532,8 +2532,8 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, break; } } - tcg_out_dupi_vec(s, type, MO_8, TCG_VEC_TMP, 0); - a2 = TCG_VEC_TMP; + tcg_out_dupi_vec(s, type, MO_8, TCG_VEC_TMP0, 0); + a2 = TCG_VEC_TMP0; } if (is_scalar) { insn = cmp_scalar_insn[cond]; @@ -2942,9 +2942,9 @@ static void tcg_target_init(TCGContext *s) s->reserved_regs = 0; tcg_regset_set_reg(s->reserved_regs, TCG_REG_SP); tcg_regset_set_reg(s->reserved_regs, TCG_REG_FP); - tcg_regset_set_reg(s->reserved_regs, TCG_REG_TMP); tcg_regset_set_reg(s->reserved_regs, TCG_REG_X18); /* platform register */ - tcg_regset_set_reg(s->reserved_regs, TCG_VEC_TMP); + tcg_regset_set_reg(s->reserved_regs, TCG_REG_TMP0); + tcg_regset_set_reg(s->reserved_regs, TCG_VEC_TMP0); } /* Saving pairs: (X19, X20) .. (X27, X28), (X29(fp), X30(lr)). 
*/

From patchwork Wed May 3 07:06:54 2023
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 678649
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 55/57] tcg/aarch64: Support 128-bit load/store Date: Wed, 3 May 2023 08:06:54 +0100 Message-Id: <20230503070656.1746170-56-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::334; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x334.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Use LDXP+STXP when LSE2 is not present and 16-byte atomicity is required, and LDP/STP otherwise. This requires allocating a second general-purpose temporary, as Rs cannot overlap Rn in STXP. Signed-off-by: Richard Henderson Reviewed-by: Peter Maydell --- tcg/aarch64/tcg-target-con-set.h | 2 + tcg/aarch64/tcg-target.h | 2 +- tcg/aarch64/tcg-target.c.inc | 181 ++++++++++++++++++++++++++++++- 3 files changed, 181 insertions(+), 4 deletions(-) diff --git a/tcg/aarch64/tcg-target-con-set.h b/tcg/aarch64/tcg-target-con-set.h index d6c6866878..74065c7098 100644 --- a/tcg/aarch64/tcg-target-con-set.h +++ b/tcg/aarch64/tcg-target-con-set.h @@ -14,6 +14,7 @@ C_O0_I2(lZ, l) C_O0_I2(r, rA) C_O0_I2(rZ, r) C_O0_I2(w, r) +C_O0_I3(lZ, lZ, l) C_O1_I1(r, l) C_O1_I1(r, r) C_O1_I1(w, r) @@ -33,4 +34,5 @@ C_O1_I2(w, w, wO) C_O1_I2(w, w, wZ) C_O1_I3(w, w, w, w) C_O1_I4(r, r, rA, rZ, rZ) +C_O2_I1(r, r, l) C_O2_I4(r, r, rZ, rZ, rA, rMZ) diff --git a/tcg/aarch64/tcg-target.h b/tcg/aarch64/tcg-target.h index 74ee2ed255..fa6af9746f 100644 --- a/tcg/aarch64/tcg-target.h +++ b/tcg/aarch64/tcg-target.h @@ -129,7 +129,7 @@ extern bool have_lse2; #define TCG_TARGET_HAS_muluh_i64 1 #define TCG_TARGET_HAS_mulsh_i64 1 -#define TCG_TARGET_HAS_qemu_ldst_i128 0 +#define TCG_TARGET_HAS_qemu_ldst_i128 1 #define TCG_TARGET_HAS_v64 1 #define TCG_TARGET_HAS_v128 1 diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc index 76a6bfd202..f1627cb96d 100644 --- a/tcg/aarch64/tcg-target.c.inc +++ b/tcg/aarch64/tcg-target.c.inc @@ -81,6 +81,7 @@ bool have_lse; bool have_lse2; #define TCG_REG_TMP0 TCG_REG_X30 +#define TCG_REG_TMP1 TCG_REG_X17 #define TCG_VEC_TMP0 TCG_REG_V31 #ifndef CONFIG_SOFTMMU @@ -404,6 +405,10 @@ typedef enum { I3305_LDR_v64 = 0x5c000000, I3305_LDR_v128 = 0x9c000000, + /* Load/store exclusive. */ + I3306_LDXP = 0xc8600000, + I3306_STXP = 0xc8200000, + /* Load/store register. Described here as 3.3.12, but the helper that emits them can transform to 3.3.10 or 3.3.13. */ I3312_STRB = 0x38000000 | LDST_ST << 22 | MO_8 << 30, @@ -468,6 +473,9 @@ typedef enum { I3406_ADR = 0x10000000, I3406_ADRP = 0x90000000, + /* Add/subtract extended register instructions. */ + I3501_ADD = 0x0b200000, + /* Add/subtract shifted register instructions (without a shift). 
*/ I3502_ADD = 0x0b000000, I3502_ADDS = 0x2b000000, @@ -638,6 +646,12 @@ static void tcg_out_insn_3305(TCGContext *s, AArch64Insn insn, tcg_out32(s, insn | (imm19 & 0x7ffff) << 5 | rt); } +static void tcg_out_insn_3306(TCGContext *s, AArch64Insn insn, TCGReg rs, + TCGReg rt, TCGReg rt2, TCGReg rn) +{ + tcg_out32(s, insn | rs << 16 | rt2 << 10 | rn << 5 | rt); +} + static void tcg_out_insn_3201(TCGContext *s, AArch64Insn insn, TCGType ext, TCGReg rt, int imm19) { @@ -720,6 +734,14 @@ static void tcg_out_insn_3406(TCGContext *s, AArch64Insn insn, tcg_out32(s, insn | (disp & 3) << 29 | (disp & 0x1ffffc) << (5 - 2) | rd); } +static inline void tcg_out_insn_3501(TCGContext *s, AArch64Insn insn, + TCGType sf, TCGReg rd, TCGReg rn, + TCGReg rm, int opt, int imm3) +{ + tcg_out32(s, insn | sf << 31 | rm << 16 | opt << 13 | + imm3 << 10 | rn << 5 | rd); +} + /* This function is for both 3.5.2 (Add/Subtract shifted register), for the rare occasion when we actually want to supply a shift amount. */ static inline void tcg_out_insn_3502S(TCGContext *s, AArch64Insn insn, @@ -1648,17 +1670,17 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, TCGType addr_type = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32; TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); - MemOp atom_u; + MemOp atom_u, s_bits; unsigned a_mask; + s_bits = opc & MO_SIZE; h->align = atom_and_align_for_opc(s, &h->atom, &atom_u, opc, have_lse2 ? MO_ATOM_WITHIN16 : MO_ATOM_IFALIGN, - false); + s_bits == MO_128); a_mask = (1 << h->align) - 1; #ifdef CONFIG_SOFTMMU - unsigned s_bits = opc & MO_SIZE; unsigned s_mask = (1u << s_bits) - 1; unsigned mem_index = get_mmuidx(oi); TCGReg x3; @@ -1839,6 +1861,148 @@ static void tcg_out_qemu_st(TCGContext *s, TCGReg data_reg, TCGReg addr_reg, } } +static TCGLabelQemuLdst * +prepare_host_addr_base_only(TCGContext *s, HostAddress *h, TCGReg addr_reg, + MemOpIdx oi, bool is_ld) +{ + TCGLabelQemuLdst *ldst; + + ldst = prepare_host_addr(s, h, addr_reg, oi, true); + + /* Compose the final address, as LDP/STP have no indexing. */ + if (h->index != TCG_REG_XZR) { + tcg_out_insn(s, 3501, ADD, TCG_TYPE_I64, TCG_REG_TMP0, + h->base, h->index, + h->index_ext == TCG_TYPE_I32 ? MO_32 : MO_64, 0); + h->base = TCG_REG_TMP0; + h->index = TCG_REG_XZR; + h->index_ext = TCG_TYPE_I64; + } + + return ldst; +} + +static void tcg_out_qemu_ld128(TCGContext *s, TCGReg datalo, TCGReg datahi, + TCGReg addr_reg, MemOpIdx oi) +{ + TCGLabelQemuLdst *ldst; + HostAddress h; + + ldst = prepare_host_addr_base_only(s, &h, addr_reg, oi, true); + + if (h.atom < MO_128 || have_lse2) { + tcg_out_insn(s, 3314, LDP, datalo, datahi, h.base, 0, 0, 0); + } else { + TCGLabel *l0, *l1 = NULL; + + /* + * 16-byte atomicity without LSE2 requires LDXP+STXP loop: + * 1: ldxp lo,hi,[addr] + * stxp tmp1,lo,hi,[addr] + * cbnz tmp1, 1b + * + * If we have already checked for 16-byte alignment, that's all + * we need. Otherwise we have determined that misaligned atomicity + * may be handled with two 8-byte loads. + */ + if (h.align < MO_128) { + /* + * TODO: align should be MO_64, so we only need test bit 3, + * which means we could use TBNZ instead of AND+CBNE. 
+ */ + l1 = gen_new_label(); + tcg_out_logicali(s, I3404_ANDI, 0, TCG_REG_TMP1, addr_reg, 15); + tcg_out_brcond(s, TCG_TYPE_I32, TCG_COND_NE, + TCG_REG_TMP1, 0, 1, l1); + } + + l0 = gen_new_label(); + tcg_out_label(s, l0); + + tcg_out_insn(s, 3306, LDXP, TCG_REG_XZR, datalo, datahi, h.base); + tcg_out_insn(s, 3306, STXP, TCG_REG_TMP1, datalo, datahi, h.base); + tcg_out_brcond(s, TCG_TYPE_I32, TCG_COND_NE, TCG_REG_TMP1, 0, 1, l0); + + if (l1) { + TCGLabel *l2 = gen_new_label(); + tcg_out_goto_label(s, l2); + + tcg_out_label(s, l1); + tcg_out_insn(s, 3314, LDP, datalo, datahi, h.base, 0, 0, 0); + + tcg_out_label(s, l2); + } + } + + if (ldst) { + ldst->type = TCG_TYPE_I128; + ldst->datalo_reg = datalo; + ldst->datahi_reg = datahi; + ldst->raddr = tcg_splitwx_to_rx(s->code_ptr); + } +} + +static void tcg_out_qemu_st128(TCGContext *s, TCGReg datalo, TCGReg datahi, + TCGReg addr_reg, MemOpIdx oi) +{ + TCGLabelQemuLdst *ldst; + HostAddress h; + + ldst = prepare_host_addr_base_only(s, &h, addr_reg, oi, false); + + if (h.atom < MO_128 || have_lse2) { + tcg_out_insn(s, 3314, STP, datalo, datahi, h.base, 0, 0, 0); + } else { + TCGLabel *l0, *l1 = NULL; + + /* + * 16-byte atomicity without LSE2 requires LDXP+STXP loop: + * 1: ldxp xzr,tmp1,[addr] + * stxp tmp1,lo,hi,[addr] + * cbnz tmp1, 1b + * + * If we have already checked for 16-byte alignment, that's all + * we need. Otherwise we have determined that misaligned atomicity + * may be handled with two 8-byte stores. + */ + if (h.align < MO_128) { + /* + * TODO: align should be MO_64, so we only need test bit 3, + * which means we could use TBNZ instead of AND+CBNE. + */ + l1 = gen_new_label(); + tcg_out_logicali(s, I3404_ANDI, 0, TCG_REG_TMP1, addr_reg, 15); + tcg_out_brcond(s, TCG_TYPE_I32, TCG_COND_NE, + TCG_REG_TMP1, 0, 1, l1); + } + + l0 = gen_new_label(); + tcg_out_label(s, l0); + + tcg_out_insn(s, 3306, LDXP, TCG_REG_XZR, + TCG_REG_XZR, TCG_REG_TMP1, h.base); + tcg_out_insn(s, 3306, STXP, TCG_REG_TMP1, datalo, datahi, h.base); + tcg_out_brcond(s, TCG_TYPE_I32, TCG_COND_NE, TCG_REG_TMP1, 0, 1, l0); + + if (l1) { + TCGLabel *l2 = gen_new_label(); + tcg_out_goto_label(s, l2); + + tcg_out_label(s, l1); + tcg_out_insn(s, 3314, STP, datalo, datahi, h.base, 0, 0, 0); + + tcg_out_label(s, l2); + } + } + + if (ldst) { + ldst->type = TCG_TYPE_I128; + ldst->datalo_reg = datalo; + ldst->datahi_reg = datahi; + ldst->raddr = tcg_splitwx_to_rx(s->code_ptr); + } +} + static const tcg_insn_unit *tb_ret_addr; static void tcg_out_exit_tb(TCGContext *s, uintptr_t a0) @@ -2176,6 +2340,12 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, case INDEX_op_qemu_st_i64: tcg_out_qemu_st(s, REG0(0), a1, a2, ext); break; + case INDEX_op_qemu_ld_i128: + tcg_out_qemu_ld128(s, a0, a1, a2, args[3]); + break; + case INDEX_op_qemu_st_i128: + tcg_out_qemu_st128(s, REG0(0), REG0(1), a2, args[3]); + break; case INDEX_op_bswap64_i64: tcg_out_rev(s, TCG_TYPE_I64, MO_64, a0, a1); @@ -2813,9 +2983,13 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) case INDEX_op_qemu_ld_i32: case INDEX_op_qemu_ld_i64: return C_O1_I1(r, l); + case INDEX_op_qemu_ld_i128: + return C_O2_I1(r, r, l); case INDEX_op_qemu_st_i32: case INDEX_op_qemu_st_i64: return C_O0_I2(lZ, l); + case INDEX_op_qemu_st_i128: + return C_O0_I3(lZ, lZ, l); case INDEX_op_deposit_i32: case INDEX_op_deposit_i64: @@ -2944,6 +3118,7 @@ static void tcg_target_init(TCGContext *s) tcg_regset_set_reg(s->reserved_regs, TCG_REG_FP); tcg_regset_set_reg(s->reserved_regs, TCG_REG_X18); /* platform register */ 
tcg_regset_set_reg(s->reserved_regs, TCG_REG_TMP0); + tcg_regset_set_reg(s->reserved_regs, TCG_REG_TMP1); tcg_regset_set_reg(s->reserved_regs, TCG_VEC_TMP0); }
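For reference, the ldxp/stxp/cbnz loop that this patch emits when LSE2 is absent can be written as stand-alone C with inline assembly. This is only a sketch under the assumption of a GCC/Clang AArch64 toolchain; U128 and atomic16_load are invented names, not QEMU code. The early-clobber constraints also show why a second temporary is needed: the STXP status register must not overlap the data or address registers.

#include <stdint.h>

typedef struct { uint64_t lo, hi; } U128;

/* Illustrative only: a 16-byte atomic load without LSE2, using the
 * same retry loop as the comment in tcg_out_qemu_ld128 above. */
static inline U128 atomic16_load(U128 *p)
{
    U128 r;
    uint32_t fail;

    __asm__ volatile(
        "0: ldxp   %[lo], %[hi], %[mem]\n\t"
        "   stxp   %w[fail], %[lo], %[hi], %[mem]\n\t"
        "   cbnz   %w[fail], 0b"
        : [lo] "=&r" (r.lo), [hi] "=&r" (r.hi),
          [fail] "=&r" (fail), [mem] "+Q" (*p)
        : : "memory");
    return r;
}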

From patchwork Wed May 3 07:06:55 2023
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 678659
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 56/57] tcg/ppc: Support 128-bit load/store Date: Wed, 3 May 2023 08:06:55 +0100 Message-Id: <20230503070656.1746170-57-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32d; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x32d.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Use LQ/STQ with ISA v2.07, and 16-byte atomicity is required. Note that these instructions do not require 16-byte alignment. Signed-off-by: Richard Henderson Reviewed-by: Daniel Henrique Barboza --- tcg/ppc/tcg-target-con-set.h | 2 + tcg/ppc/tcg-target-con-str.h | 1 + tcg/ppc/tcg-target.h | 3 +- tcg/ppc/tcg-target.c.inc | 173 +++++++++++++++++++++++++++++++---- 4 files changed, 158 insertions(+), 21 deletions(-) diff --git a/tcg/ppc/tcg-target-con-set.h b/tcg/ppc/tcg-target-con-set.h index f206b29205..bbd7b21247 100644 --- a/tcg/ppc/tcg-target-con-set.h +++ b/tcg/ppc/tcg-target-con-set.h @@ -14,6 +14,7 @@ C_O0_I2(r, r) C_O0_I2(r, ri) C_O0_I2(v, r) C_O0_I3(r, r, r) +C_O0_I3(o, m, r) C_O0_I4(r, r, ri, ri) C_O0_I4(r, r, r, r) C_O1_I1(r, r) @@ -34,6 +35,7 @@ C_O1_I3(v, v, v, v) C_O1_I4(r, r, ri, rZ, rZ) C_O1_I4(r, r, r, ri, ri) C_O2_I1(r, r, r) +C_O2_I1(o, m, r) C_O2_I2(r, r, r, r) C_O2_I4(r, r, rI, rZM, r, r) C_O2_I4(r, r, r, r, rI, rZM) diff --git a/tcg/ppc/tcg-target-con-str.h b/tcg/ppc/tcg-target-con-str.h index 094613cbcb..20846901de 100644 --- a/tcg/ppc/tcg-target-con-str.h +++ b/tcg/ppc/tcg-target-con-str.h @@ -9,6 +9,7 @@ * REGS(letter, register_mask) */ REGS('r', ALL_GENERAL_REGS) +REGS('o', ALL_GENERAL_REGS & 0xAAAAAAAAu) /* odd registers */ REGS('v', ALL_VECTOR_REGS) /* diff --git a/tcg/ppc/tcg-target.h b/tcg/ppc/tcg-target.h index 0914380bd7..204b70f86a 100644 --- a/tcg/ppc/tcg-target.h +++ b/tcg/ppc/tcg-target.h @@ -149,7 +149,8 @@ extern bool have_vsx; #define TCG_TARGET_HAS_mulsh_i64 1 #endif -#define TCG_TARGET_HAS_qemu_ldst_i128 0 +#define TCG_TARGET_HAS_qemu_ldst_i128 \ + (TCG_TARGET_REG_BITS == 64 && have_isa_2_07) /* * While technically Altivec could support V64, it has no 64-bit store diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc index 60375804cd..682743a466 100644 --- a/tcg/ppc/tcg-target.c.inc +++ b/tcg/ppc/tcg-target.c.inc @@ -295,25 +295,27 @@ static bool tcg_target_const_match(int64_t val, TCGType type, int ct) #define B OPCD( 18) #define BC OPCD( 16) + #define LBZ OPCD( 34) #define LHZ OPCD( 40) #define LHA OPCD( 42) #define LWZ OPCD( 32) #define LWZUX XO31( 55) -#define STB OPCD( 38) -#define STH OPCD( 44) -#define STW OPCD( 36) - -#define STD XO62( 0) -#define STDU XO62( 1) -#define STDX XO31(149) - #define LD XO58( 0) #define LDX XO31( 21) #define LDU XO58( 1) #define LDUX XO31( 53) #define LWA XO58( 2) #define LWAX XO31(341) 
+#define LQ OPCD( 56) + +#define STB OPCD( 38) +#define STH OPCD( 44) +#define STW OPCD( 36) +#define STD XO62( 0) +#define STDU XO62( 1) +#define STDX XO31(149) +#define STQ XO62( 2) #define ADDIC OPCD( 12) #define ADDI OPCD( 14) @@ -2015,11 +2017,25 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb) typedef struct { TCGReg base; TCGReg index; + MemOp align; + MemOp atom; } HostAddress; bool tcg_target_has_memory_bswap(MemOp memop) { - return true; + MemOp atom_a, atom_u; + + if ((memop & MO_SIZE) <= MO_64) { + return true; + } + + /* + * Reject 16-byte memop with 16-byte atomicity, + * but do allow a pair of 64-bit operations. + */ + (void)atom_and_align_for_opc(tcg_ctx, &atom_a, &atom_u, memop, + MO_ATOM_IFALIGN, true); + return atom_a <= MO_64; } /* @@ -2034,7 +2050,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, { TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); - MemOp a_bits, atom_a, atom_u; + MemOp a_bits, atom_u, s_bits; /* * Book II, Section 1.4, Single-Copy Atomicity, specifies: @@ -2046,10 +2062,19 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, * As of 3.0, "the non-atomic access is performed as described in * the corresponding list", which matches MO_ATOM_SUBALIGN. */ - a_bits = atom_and_align_for_opc(s, &atom_a, &atom_u, opc, + s_bits = opc & MO_SIZE; + a_bits = atom_and_align_for_opc(s, &h->atom, &atom_u, opc, have_isa_3_00 ? MO_ATOM_SUBALIGN : MO_ATOM_IFALIGN, - false); + s_bits == MO_128); + + if (TCG_TARGET_REG_BITS == 32) { + /* We don't support unaligned accesses on 32-bits. */ + if (a_bits < s_bits) { + a_bits = s_bits; + } + } + h->align = a_bits; #ifdef CONFIG_SOFTMMU int mem_index = get_mmuidx(oi); @@ -2058,7 +2083,6 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, int fast_off = TLB_MASK_TABLE_OFS(mem_index); int mask_off = fast_off + offsetof(CPUTLBDescFast, mask); int table_off = fast_off + offsetof(CPUTLBDescFast, table); - unsigned s_bits = opc & MO_SIZE; ldst = new_ldst_label(s); ldst->is_ld = is_ld; @@ -2108,13 +2132,6 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, /* Clear the non-page, non-alignment bits from the address in R0. */ if (TCG_TARGET_REG_BITS == 32) { - /* We don't support unaligned accesses on 32-bits. - * Preserve the bottom bits and thus trigger a comparison - * failure on unaligned accesses. - */ - if (a_bits < s_bits) { - a_bits = s_bits; - } tcg_out_rlw(s, RLWINM, TCG_REG_R0, addrlo, 0, (32 - a_bits) & 31, 31 - TARGET_PAGE_BITS); } else { @@ -2299,6 +2316,108 @@ static void tcg_out_qemu_st(TCGContext *s, TCGReg datalo, TCGReg datahi, } } +static TCGLabelQemuLdst * +prepare_host_addr_index_only(TCGContext *s, HostAddress *h, TCGReg addr_reg, + MemOpIdx oi, bool is_ld) +{ + TCGLabelQemuLdst *ldst; + + ldst = prepare_host_addr(s, h, addr_reg, -1, oi, true); + + /* Compose the final address, as LQ/STQ have no indexing. 
*/ + if (h->base != 0) { + tcg_out32(s, ADD | TAB(TCG_REG_TMP1, h->base, h->index)); + h->index = TCG_REG_TMP1; + h->base = 0; + } + + return ldst; +} + +static void tcg_out_qemu_ld128(TCGContext *s, TCGReg datalo, TCGReg datahi, + TCGReg addr_reg, MemOpIdx oi) +{ + TCGLabelQemuLdst *ldst; + HostAddress h; + bool need_bswap; + + ldst = prepare_host_addr_index_only(s, &h, addr_reg, oi, true); + need_bswap = get_memop(oi) & MO_BSWAP; + + if (h.atom == MO_128) { + tcg_debug_assert(!need_bswap); + tcg_debug_assert(datalo & 1); + tcg_debug_assert(datahi == datalo - 1); + tcg_out32(s, LQ | TAI(datahi, h.index, 0)); + } else { + TCGReg d1, d2; + + if (HOST_BIG_ENDIAN ^ need_bswap) { + d1 = datahi, d2 = datalo; + } else { + d1 = datalo, d2 = datahi; + } + + if (need_bswap) { + tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_R0, 8); + tcg_out32(s, LDBRX | TAB(d1, 0, h.index)); + tcg_out32(s, LDBRX | TAB(d2, h.index, TCG_REG_R0)); + } else { + tcg_out32(s, LD | TAI(d1, h.index, 0)); + tcg_out32(s, LD | TAI(d2, h.index, 8)); + } + } + + if (ldst) { + ldst->type = TCG_TYPE_I128; + ldst->datalo_reg = datalo; + ldst->datahi_reg = datahi; + ldst->raddr = tcg_splitwx_to_rx(s->code_ptr); + } +} + +static void tcg_out_qemu_st128(TCGContext *s, TCGReg datalo, TCGReg datahi, + TCGReg addr_reg, MemOpIdx oi) +{ + TCGLabelQemuLdst *ldst; + HostAddress h; + bool need_bswap; + + ldst = prepare_host_addr_index_only(s, &h, addr_reg, oi, false); + need_bswap = get_memop(oi) & MO_BSWAP; + + if (h.atom == MO_128) { + tcg_debug_assert(!need_bswap); + tcg_debug_assert(datalo & 1); + tcg_debug_assert(datahi == datalo - 1); + tcg_out32(s, STQ | TAI(datahi, h.index, 0)); + } else { + TCGReg d1, d2; + + if (HOST_BIG_ENDIAN ^ need_bswap) { + d1 = datahi, d2 = datalo; + } else { + d1 = datalo, d2 = datahi; + } + + if (need_bswap) { + tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_R0, 8); + tcg_out32(s, STDBRX | TAB(d1, 0, h.index)); + tcg_out32(s, STDBRX | TAB(d2, h.index, TCG_REG_R0)); + } else { + tcg_out32(s, STD | TAI(d1, h.index, 0)); + tcg_out32(s, STD | TAI(d2, h.index, 8)); + } + } + + if (ldst) { + ldst->type = TCG_TYPE_I128; + ldst->datalo_reg = datalo; + ldst->datahi_reg = datahi; + ldst->raddr = tcg_splitwx_to_rx(s->code_ptr); + } +} + static void tcg_out_nop_fill(tcg_insn_unit *p, int count) { int i; @@ -2849,6 +2968,11 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, args[4], TCG_TYPE_I64); } break; + case INDEX_op_qemu_ld_i128: + tcg_debug_assert(TCG_TARGET_REG_BITS == 64); + tcg_out_qemu_ld128(s, args[0], args[1], args[2], args[3]); + break; + case INDEX_op_qemu_st_i32: if (TCG_TARGET_REG_BITS >= TARGET_LONG_BITS) { tcg_out_qemu_st(s, args[0], -1, args[1], -1, @@ -2870,6 +2994,10 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, args[4], TCG_TYPE_I64); } break; + case INDEX_op_qemu_st_i128: + tcg_debug_assert(TCG_TARGET_REG_BITS == 64); + tcg_out_qemu_st128(s, args[0], args[1], args[2], args[3]); + break; case INDEX_op_setcond_i32: tcg_out_setcond(s, TCG_TYPE_I32, args[3], args[0], args[1], args[2], @@ -3705,6 +3833,11 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) : TARGET_LONG_BITS == 32 ? 
C_O0_I3(r, r, r) : C_O0_I4(r, r, r, r)); + case INDEX_op_qemu_ld_i128: + return C_O2_I1(o, m, r); + case INDEX_op_qemu_st_i128: + return C_O0_I3(o, m, r); + case INDEX_op_add_vec: case INDEX_op_sub_vec: case INDEX_op_mul_vec:
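The new 'o' constraint (ALL_GENERAL_REGS & 0xAAAAAAAAu) limits an operand to odd-numbered GPRs because LQ and STQ operate on an even/odd register pair, with the even register the one encoded in the instruction; the tcg_debug_assert() calls in tcg_out_qemu_ld128/st128 check that datahi sits in that even register, one below datalo. A stand-alone restatement of the pairing rule, for illustration only (the helper name is made up, not QEMU code):

#include <stdbool.h>

/* Illustrative only: the register pairing that LQ/STQ require and
 * that the constraints plus the asserts above enforce. */
static bool valid_lq_stq_pair(unsigned datahi, unsigned datalo)
{
    /* datalo must be an odd register; the constraint mask
     * 0xAAAAAAAA has exactly the odd-numbered bits set ... */
    if ((datalo & 1) == 0) {
        return false;
    }
    /* ... and datahi must be the even register just below it. */
    return datahi == datalo - 1;
}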

From patchwork Wed May 3 07:06:56 2023
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 678652
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: git@xen0n.name, gaosong@loongson.cn, philmd@linaro.org, qemu-arm@nongnu.org,
qemu-riscv@nongnu.org, qemu-s390x@nongnu.org Subject: [PATCH v4 57/57] tcg/s390x: Support 128-bit load/store Date: Wed, 3 May 2023 08:06:56 +0100 Message-Id: <20230503070656.1746170-58-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230503070656.1746170-1-richard.henderson@linaro.org> References: <20230503070656.1746170-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::336; envelope-from=richard.henderson@linaro.org; helo=mail-wm1-x336.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Use LPQ/STPQ when 16-byte atomicity is required. Note that these instructions require 16-byte alignment. Signed-off-by: Richard Henderson --- tcg/s390x/tcg-target-con-set.h | 2 + tcg/s390x/tcg-target.h | 2 +- tcg/s390x/tcg-target.c.inc | 100 ++++++++++++++++++++++++++++++++- 3 files changed, 102 insertions(+), 2 deletions(-) diff --git a/tcg/s390x/tcg-target-con-set.h b/tcg/s390x/tcg-target-con-set.h index ecc079bb6d..cbad91b2b5 100644 --- a/tcg/s390x/tcg-target-con-set.h +++ b/tcg/s390x/tcg-target-con-set.h @@ -14,6 +14,7 @@ C_O0_I2(r, r) C_O0_I2(r, ri) C_O0_I2(r, rA) C_O0_I2(v, r) +C_O0_I3(o, m, r) C_O1_I1(r, r) C_O1_I1(v, r) C_O1_I1(v, v) @@ -36,6 +37,7 @@ C_O1_I2(v, v, v) C_O1_I3(v, v, v, v) C_O1_I4(r, r, ri, rI, r) C_O1_I4(r, r, rA, rI, r) +C_O2_I1(o, m, r) C_O2_I2(o, m, 0, r) C_O2_I2(o, m, r, r) C_O2_I3(o, m, 0, 1, r) diff --git a/tcg/s390x/tcg-target.h b/tcg/s390x/tcg-target.h index 170007bea5..ec96952172 100644 --- a/tcg/s390x/tcg-target.h +++ b/tcg/s390x/tcg-target.h @@ -140,7 +140,7 @@ extern uint64_t s390_facilities[3]; #define TCG_TARGET_HAS_muluh_i64 0 #define TCG_TARGET_HAS_mulsh_i64 0 -#define TCG_TARGET_HAS_qemu_ldst_i128 0 +#define TCG_TARGET_HAS_qemu_ldst_i128 1 #define TCG_TARGET_HAS_v64 HAVE_FACILITY(VECTOR) #define TCG_TARGET_HAS_v128 HAVE_FACILITY(VECTOR) diff --git a/tcg/s390x/tcg-target.c.inc b/tcg/s390x/tcg-target.c.inc index ddd9860a6a..91fecfc51b 100644 --- a/tcg/s390x/tcg-target.c.inc +++ b/tcg/s390x/tcg-target.c.inc @@ -243,6 +243,7 @@ typedef enum S390Opcode { RXY_LLGF = 0xe316, RXY_LLGH = 0xe391, RXY_LMG = 0xeb04, + RXY_LPQ = 0xe38f, RXY_LRV = 0xe31e, RXY_LRVG = 0xe30f, RXY_LRVH = 0xe31f, @@ -253,6 +254,7 @@ typedef enum S390Opcode { RXY_STG = 0xe324, RXY_STHY = 0xe370, RXY_STMG = 0xeb24, + RXY_STPQ = 0xe38e, RXY_STRV = 0xe33e, RXY_STRVG = 0xe32f, RXY_STRVH = 0xe33f, @@ -1578,7 +1580,19 @@ typedef struct { bool tcg_target_has_memory_bswap(MemOp memop) { - return true; + MemOp atom_a, atom_u; + + if ((memop & MO_SIZE) <= MO_64) { + return true; + } + + /* + * Reject 16-byte memop with 16-byte atomicity, + * but do allow a pair of 64-bit operations. 
+ */ + (void)atom_and_align_for_opc(tcg_ctx, &atom_a, &atom_u, memop, + MO_ATOM_IFALIGN, true); + return atom_a <= MO_64; } static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc, TCGReg data, @@ -1868,6 +1882,80 @@ static void tcg_out_qemu_st(TCGContext* s, TCGReg data_reg, TCGReg addr_reg, } } +static void tcg_out_qemu_ldst_i128(TCGContext *s, TCGReg datalo, TCGReg datahi, + TCGReg addr_reg, MemOpIdx oi, bool is_ld) +{ + TCGLabel *l1 = NULL, *l2 = NULL; + TCGLabelQemuLdst *ldst; + HostAddress h; + bool need_bswap; + bool use_pair; + S390Opcode insn; + + ldst = prepare_host_addr(s, &h, addr_reg, oi, is_ld); + + use_pair = h.atom < MO_128; + need_bswap = get_memop(oi) & MO_BSWAP; + + if (!use_pair) { + /* + * Atomicity requires we use LPQ. If we've already checked for + * 16-byte alignment, that's all we need. If we arrive with + * lesser alignment, we have determined that less than 16-byte + * alignment can be satisfied with two 8-byte loads. + */ + if (h.align < MO_128) { + use_pair = true; + l1 = gen_new_label(); + l2 = gen_new_label(); + + tcg_out_insn(s, RI, TMLL, addr_reg, 15); + tgen_branch(s, 7, l1); /* CC in {1,2,3} */ + } + + tcg_debug_assert(!need_bswap); + tcg_debug_assert(datalo & 1); + tcg_debug_assert(datahi == datalo - 1); + insn = is_ld ? RXY_LPQ : RXY_STPQ; + tcg_out_insn_RXY(s, insn, datahi, h.base, h.index, h.disp); + + if (use_pair) { + tgen_branch(s, S390_CC_ALWAYS, l2); + tcg_out_label(s, l1); + } + } + if (use_pair) { + TCGReg d1, d2; + + if (need_bswap) { + d1 = datalo, d2 = datahi; + insn = is_ld ? RXY_LRVG : RXY_STRVG; + } else { + d1 = datahi, d2 = datalo; + insn = is_ld ? RXY_LG : RXY_STG; + } + + if (h.base == d1 || h.index == d1) { + tcg_out_insn(s, RXY, LAY, TCG_TMP0, h.base, h.index, h.disp); + h.base = TCG_TMP0; + h.index = TCG_REG_NONE; + h.disp = 0; + } + tcg_out_insn_RXY(s, insn, d1, h.base, h.index, h.disp); + tcg_out_insn_RXY(s, insn, d2, h.base, h.index, h.disp + 8); + } + if (l2) { + tcg_out_label(s, l2); + } + + if (ldst) { + ldst->type = TCG_TYPE_I128; + ldst->datalo_reg = datalo; + ldst->datahi_reg = datahi; + ldst->raddr = tcg_splitwx_to_rx(s->code_ptr); + } +} + static void tcg_out_exit_tb(TCGContext *s, uintptr_t a0) { /* Reuse the zeroing that exists for goto_ptr. */ @@ -2225,6 +2313,12 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, case INDEX_op_qemu_st_i64: tcg_out_qemu_st(s, args[0], args[1], args[2], TCG_TYPE_I64); break; + case INDEX_op_qemu_ld_i128: + tcg_out_qemu_ldst_i128(s, args[0], args[1], args[2], args[3], true); + break; + case INDEX_op_qemu_st_i128: + tcg_out_qemu_ldst_i128(s, args[0], args[1], args[2], args[3], false); + break; case INDEX_op_ld16s_i64: tcg_out_mem(s, 0, RXY_LGH, args[0], args[1], TCG_REG_NONE, args[2]); @@ -3102,6 +3196,10 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) case INDEX_op_qemu_st_i64: case INDEX_op_qemu_st_i32: return C_O0_I2(r, r); + case INDEX_op_qemu_ld_i128: + return C_O2_I1(o, m, r); + case INDEX_op_qemu_st_i128: + return C_O0_I3(o, m, r); case INDEX_op_deposit_i32: case INDEX_op_deposit_i64: