From patchwork Wed Jul 10 06:29:11 2024
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 811771
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Cc: rrh.henry@gmail.com, richard.henderson@linaro.org
Subject: [PATCH 01/10] target/i386/tcg: Remove SEG_ADDL
Date: Wed, 10 Jul 2024 08:29:11 +0200
Message-ID: <20240710062920.73063-2-pbonzini@redhat.com>
In-Reply-To: <20240710062920.73063-1-pbonzini@redhat.com>
References: <20240710062920.73063-1-pbonzini@redhat.com>

From: Richard Henderson

This truncation is now handled by MMU_*32_IDX. The introduction of
MMU_*32_IDX in fact applied correct 32-bit wraparound to 16-bit accesses
with a high segment base (e.g. big real mode or vm86 mode), which did not
use SEG_ADDL.

Signed-off-by: Richard Henderson
Link: https://lore.kernel.org/r/20240617161210.4639-3-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini
---
 target/i386/tcg/seg_helper.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/target/i386/tcg/seg_helper.c b/target/i386/tcg/seg_helper.c
index aee3d19f29b..19d6b41a589 100644
--- a/target/i386/tcg/seg_helper.c
+++ b/target/i386/tcg/seg_helper.c
@@ -579,10 +579,6 @@ int exception_has_error_code(int intno)
     } while (0)
 #endif
 
-/* in 64-bit machines, this can overflow. So this segment addition macro
- * can be used to trim the value to 32-bit whenever needed */
-#define SEG_ADDL(ssp, sp, sp_mask) ((uint32_t)((ssp) + (sp & (sp_mask))))
-
 /* XXX: add a is_user flag to have proper security support */
 #define PUSHW_RA(ssp, sp, sp_mask, val, ra) \
     { \
@@ -593,7 +589,7 @@ int exception_has_error_code(int intno)
 #define PUSHL_RA(ssp, sp, sp_mask, val, ra) \
     { \
         sp -= 4; \
-        cpu_stl_kernel_ra(env, SEG_ADDL(ssp, sp, sp_mask), (uint32_t)(val), ra); \
+        cpu_stl_kernel_ra(env, (ssp) + (sp & (sp_mask)), (val), ra); \
     }
 
 #define POPW_RA(ssp, sp, sp_mask, val, ra) \
@@ -604,7 +600,7 @@ int exception_has_error_code(int intno)
 
 #define POPL_RA(ssp, sp, sp_mask, val, ra) \
     { \
-        val = (uint32_t)cpu_ldl_kernel_ra(env, SEG_ADDL(ssp, sp, sp_mask), ra); \
+        val = (uint32_t)cpu_ldl_kernel_ra(env, (ssp) + (sp & (sp_mask)), ra); \
        sp += 4; \
     }
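For context, the effect of the removal can be sketched in plain C (illustrative
only, not QEMU code; the helper name below is made up, and the low-bit test
mirrors is_mmu_index_32() in target/i386/cpu.h): once the MMU index of the
access already marks the context as 32-bit, the linear address is wrapped when
the access is performed, so pre-truncating it with SEG_ADDL at every call site
is redundant.

    #include <stdint.h>

    /* Hypothetical helper, for illustration only -- not QEMU's API. */
    static inline uint64_t stack_linear_addr(uint64_t ss_base, uint64_t sp,
                                             uint64_t sp_mask, int mmu_index)
    {
        uint64_t addr = ss_base + (sp & sp_mask);
        /* 32-bit MMU indexes have the low bit set, as in is_mmu_index_32() */
        return (mmu_index & 1) ? (uint32_t)addr : addr;
    }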
From patchwork Wed Jul 10 06:29:14 2024
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 811773
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Cc: rrh.henry@gmail.com, richard.henderson@linaro.org
Subject: [PATCH 04/10] target/i386/tcg: Reorg push/pop within seg_helper.c
Date: Wed, 10 Jul 2024 08:29:14 +0200
Message-ID: <20240710062920.73063-5-pbonzini@redhat.com>
In-Reply-To: <20240710062920.73063-1-pbonzini@redhat.com>
References: <20240710062920.73063-1-pbonzini@redhat.com>

From: Richard Henderson

Interrupts and call gates should use accesses with the DPL as the privilege
level. While computing the applicable MMU index is easy, the harder part is
plumbing it through the code.

One possibility would be to add a single privilege-level argument to the
PUSH* macros, but this is repetitive and risks confusion between the
privilege levels involved. Another would be to pass both CPL and DPL and
adjust both PUSH* and POP* to use specific privilege levels (instead of
cpu_{ld,st}*_data), which makes the code more symmetric.

However, a more complicated but much nicer approach is to use a structure
that contains the stack parameters, env and the unwind return address, and
to rewrite the macros into functions. The struct provides an easy home for
the MMU index as well.
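To make the large diff below easier to follow, here is the resulting pattern
in condensed form; the struct, the push helper and the converted
helper_lcall_real() caller are all taken from the patch itself, only the
comments are added.

    /* XXX: use mmu_index to have proper DPL support */
    typedef struct StackAccess
    {
        CPUX86State *env;      /* CPU state, previously an implicit 'env' in the macros */
        uintptr_t ra;          /* unwind return address: 0 or GETPC() */
        target_ulong ss_base;  /* base of the stack segment */
        target_ulong sp;       /* stack pointer, updated in place by push/pop */
        target_ulong sp_mask;  /* 0xffff, 0xffffffff or -1 depending on mode */
    } StackAccess;

    /* One of the new helpers; pushw, pushq, popw, popl and popq follow
     * the same shape. */
    static void pushl(StackAccess *sa, uint32_t val)
    {
        sa->sp -= 4;
        cpu_stl_kernel_ra(sa->env, sa->ss_base + (sa->sp & sa->sp_mask),
                          val, sa->ra);
    }

    /* Converted caller (helper_lcall_real, 32-bit case): fill the struct
     * once, push through it, then write the stack pointer back. */
    sa.env = env;
    sa.ra = GETPC();
    sa.sp = env->regs[R_ESP];
    sa.sp_mask = get_sp_mask(env->segs[R_SS].flags);
    sa.ss_base = env->segs[R_SS].base;
    pushl(&sa, env->segs[R_CS].selector);
    pushl(&sa, next_eip);
    SET_ESP(sa.sp, sa.sp_mask);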
Signed-off-by: Richard Henderson Link: https://lore.kernel.org/r/20240617161210.4639-4-richard.henderson@linaro.org Signed-off-by: Paolo Bonzini --- target/i386/tcg/seg_helper.c | 439 +++++++++++++++++++---------------- 1 file changed, 238 insertions(+), 201 deletions(-) diff --git a/target/i386/tcg/seg_helper.c b/target/i386/tcg/seg_helper.c index 0653bc10936..6b3de7a2be4 100644 --- a/target/i386/tcg/seg_helper.c +++ b/target/i386/tcg/seg_helper.c @@ -579,35 +579,47 @@ int exception_has_error_code(int intno) } while (0) #endif -/* XXX: add a is_user flag to have proper security support */ -#define PUSHW_RA(ssp, sp, sp_mask, val, ra) \ - { \ - sp -= 2; \ - cpu_stw_kernel_ra(env, (ssp) + (sp & (sp_mask)), (val), ra); \ - } +/* XXX: use mmu_index to have proper DPL support */ +typedef struct StackAccess +{ + CPUX86State *env; + uintptr_t ra; + target_ulong ss_base; + target_ulong sp; + target_ulong sp_mask; +} StackAccess; -#define PUSHL_RA(ssp, sp, sp_mask, val, ra) \ - { \ - sp -= 4; \ - cpu_stl_kernel_ra(env, (ssp) + (sp & (sp_mask)), (val), ra); \ - } +static void pushw(StackAccess *sa, uint16_t val) +{ + sa->sp -= 2; + cpu_stw_kernel_ra(sa->env, sa->ss_base + (sa->sp & sa->sp_mask), + val, sa->ra); +} -#define POPW_RA(ssp, sp, sp_mask, val, ra) \ - { \ - val = cpu_lduw_data_ra(env, (ssp) + (sp & (sp_mask)), ra); \ - sp += 2; \ - } +static void pushl(StackAccess *sa, uint32_t val) +{ + sa->sp -= 4; + cpu_stl_kernel_ra(sa->env, sa->ss_base + (sa->sp & sa->sp_mask), + val, sa->ra); +} -#define POPL_RA(ssp, sp, sp_mask, val, ra) \ - { \ - val = (uint32_t)cpu_ldl_data_ra(env, (ssp) + (sp & (sp_mask)), ra); \ - sp += 4; \ - } +static uint16_t popw(StackAccess *sa) +{ + uint16_t ret = cpu_lduw_data_ra(sa->env, + sa->ss_base + (sa->sp & sa->sp_mask), + sa->ra); + sa->sp += 2; + return ret; +} -#define PUSHW(ssp, sp, sp_mask, val) PUSHW_RA(ssp, sp, sp_mask, val, 0) -#define PUSHL(ssp, sp, sp_mask, val) PUSHL_RA(ssp, sp, sp_mask, val, 0) -#define POPW(ssp, sp, sp_mask, val) POPW_RA(ssp, sp, sp_mask, val, 0) -#define POPL(ssp, sp, sp_mask, val) POPL_RA(ssp, sp, sp_mask, val, 0) +static uint32_t popl(StackAccess *sa) +{ + uint32_t ret = cpu_ldl_data_ra(sa->env, + sa->ss_base + (sa->sp & sa->sp_mask), + sa->ra); + sa->sp += 4; + return ret; +} /* protected mode interrupt */ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, @@ -615,12 +627,13 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, int is_hw) { SegmentCache *dt; - target_ulong ptr, ssp; + target_ulong ptr; int type, dpl, selector, ss_dpl, cpl; int has_error_code, new_stack, shift; - uint32_t e1, e2, offset, ss = 0, esp, ss_e1 = 0, ss_e2 = 0; - uint32_t old_eip, sp_mask, eflags; + uint32_t e1, e2, offset, ss = 0, ss_e1 = 0, ss_e2 = 0; + uint32_t old_eip, eflags; int vm86 = env->eflags & VM_MASK; + StackAccess sa; bool set_rf; has_error_code = 0; @@ -662,6 +675,9 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, raise_exception_err(env, EXCP0D_GPF, intno * 8 + 2); } + sa.env = env; + sa.ra = 0; + if (type == 5) { /* task gate */ /* must do that check here to return the correct error code */ @@ -672,18 +688,18 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, if (has_error_code) { /* push the error code */ if (env->segs[R_SS].flags & DESC_B_MASK) { - sp_mask = 0xffffffff; + sa.sp_mask = 0xffffffff; } else { - sp_mask = 0xffff; + sa.sp_mask = 0xffff; } - esp = env->regs[R_ESP]; - ssp = env->segs[R_SS].base; + sa.sp = 
env->regs[R_ESP]; + sa.ss_base = env->segs[R_SS].base; if (shift) { - PUSHL(ssp, esp, sp_mask, error_code); + pushl(&sa, error_code); } else { - PUSHW(ssp, esp, sp_mask, error_code); + pushw(&sa, error_code); } - SET_ESP(esp, sp_mask); + SET_ESP(sa.sp, sa.sp_mask); } return; } @@ -717,6 +733,7 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, } if (dpl < cpl) { /* to inner privilege */ + uint32_t esp; get_ss_esp_from_tss(env, &ss, &esp, dpl, 0); if ((ss & 0xfffc) == 0) { raise_exception_err(env, EXCP0A_TSS, ss & 0xfffc); @@ -740,17 +757,18 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, raise_exception_err(env, EXCP0A_TSS, ss & 0xfffc); } new_stack = 1; - sp_mask = get_sp_mask(ss_e2); - ssp = get_seg_base(ss_e1, ss_e2); + sa.sp = esp; + sa.sp_mask = get_sp_mask(ss_e2); + sa.ss_base = get_seg_base(ss_e1, ss_e2); } else { /* to same privilege */ if (vm86) { raise_exception_err(env, EXCP0D_GPF, selector & 0xfffc); } new_stack = 0; - sp_mask = get_sp_mask(env->segs[R_SS].flags); - ssp = env->segs[R_SS].base; - esp = env->regs[R_ESP]; + sa.sp = env->regs[R_ESP]; + sa.sp_mask = get_sp_mask(env->segs[R_SS].flags); + sa.ss_base = env->segs[R_SS].base; } shift = type >> 3; @@ -775,36 +793,36 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, if (shift == 1) { if (new_stack) { if (vm86) { - PUSHL(ssp, esp, sp_mask, env->segs[R_GS].selector); - PUSHL(ssp, esp, sp_mask, env->segs[R_FS].selector); - PUSHL(ssp, esp, sp_mask, env->segs[R_DS].selector); - PUSHL(ssp, esp, sp_mask, env->segs[R_ES].selector); + pushl(&sa, env->segs[R_GS].selector); + pushl(&sa, env->segs[R_FS].selector); + pushl(&sa, env->segs[R_DS].selector); + pushl(&sa, env->segs[R_ES].selector); } - PUSHL(ssp, esp, sp_mask, env->segs[R_SS].selector); - PUSHL(ssp, esp, sp_mask, env->regs[R_ESP]); + pushl(&sa, env->segs[R_SS].selector); + pushl(&sa, env->regs[R_ESP]); } - PUSHL(ssp, esp, sp_mask, eflags); - PUSHL(ssp, esp, sp_mask, env->segs[R_CS].selector); - PUSHL(ssp, esp, sp_mask, old_eip); + pushl(&sa, eflags); + pushl(&sa, env->segs[R_CS].selector); + pushl(&sa, old_eip); if (has_error_code) { - PUSHL(ssp, esp, sp_mask, error_code); + pushl(&sa, error_code); } } else { if (new_stack) { if (vm86) { - PUSHW(ssp, esp, sp_mask, env->segs[R_GS].selector); - PUSHW(ssp, esp, sp_mask, env->segs[R_FS].selector); - PUSHW(ssp, esp, sp_mask, env->segs[R_DS].selector); - PUSHW(ssp, esp, sp_mask, env->segs[R_ES].selector); + pushw(&sa, env->segs[R_GS].selector); + pushw(&sa, env->segs[R_FS].selector); + pushw(&sa, env->segs[R_DS].selector); + pushw(&sa, env->segs[R_ES].selector); } - PUSHW(ssp, esp, sp_mask, env->segs[R_SS].selector); - PUSHW(ssp, esp, sp_mask, env->regs[R_ESP]); + pushw(&sa, env->segs[R_SS].selector); + pushw(&sa, env->regs[R_ESP]); } - PUSHW(ssp, esp, sp_mask, eflags); - PUSHW(ssp, esp, sp_mask, env->segs[R_CS].selector); - PUSHW(ssp, esp, sp_mask, old_eip); + pushw(&sa, eflags); + pushw(&sa, env->segs[R_CS].selector); + pushw(&sa, old_eip); if (has_error_code) { - PUSHW(ssp, esp, sp_mask, error_code); + pushw(&sa, error_code); } } @@ -822,10 +840,10 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, cpu_x86_load_seg_cache(env, R_GS, 0, 0, 0, 0); } ss = (ss & ~3) | dpl; - cpu_x86_load_seg_cache(env, R_SS, ss, - ssp, get_seg_limit(ss_e1, ss_e2), ss_e2); + cpu_x86_load_seg_cache(env, R_SS, ss, sa.ss_base, + get_seg_limit(ss_e1, ss_e2), ss_e2); } - SET_ESP(esp, sp_mask); + SET_ESP(sa.sp, sa.sp_mask); selector = 
(selector & ~3) | dpl; cpu_x86_load_seg_cache(env, R_CS, selector, @@ -837,20 +855,18 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, #ifdef TARGET_X86_64 -#define PUSHQ_RA(sp, val, ra) \ - { \ - sp -= 8; \ - cpu_stq_kernel_ra(env, sp, (val), ra); \ - } +static void pushq(StackAccess *sa, uint64_t val) +{ + sa->sp -= 8; + cpu_stq_kernel_ra(sa->env, sa->sp, val, sa->ra); +} -#define POPQ_RA(sp, val, ra) \ - { \ - val = cpu_ldq_data_ra(env, sp, ra); \ - sp += 8; \ - } - -#define PUSHQ(sp, val) PUSHQ_RA(sp, val, 0) -#define POPQ(sp, val) POPQ_RA(sp, val, 0) +static uint64_t popq(StackAccess *sa) +{ + uint64_t ret = cpu_ldq_data_ra(sa->env, sa->sp, sa->ra); + sa->sp += 8; + return ret; +} static inline target_ulong get_rsp_from_tss(CPUX86State *env, int level) { @@ -893,8 +909,9 @@ static void do_interrupt64(CPUX86State *env, int intno, int is_int, int type, dpl, selector, cpl, ist; int has_error_code, new_stack; uint32_t e1, e2, e3, ss, eflags; - target_ulong old_eip, esp, offset; + target_ulong old_eip, offset; bool set_rf; + StackAccess sa; has_error_code = 0; if (!is_int && !is_hw) { @@ -962,10 +979,15 @@ static void do_interrupt64(CPUX86State *env, int intno, int is_int, if (e2 & DESC_C_MASK) { dpl = cpl; } + + sa.env = env; + sa.ra = 0; + sa.sp_mask = -1; + sa.ss_base = 0; if (dpl < cpl || ist != 0) { /* to inner privilege */ new_stack = 1; - esp = get_rsp_from_tss(env, ist != 0 ? ist + 3 : dpl); + sa.sp = get_rsp_from_tss(env, ist != 0 ? ist + 3 : dpl); ss = 0; } else { /* to same privilege */ @@ -973,9 +995,9 @@ static void do_interrupt64(CPUX86State *env, int intno, int is_int, raise_exception_err(env, EXCP0D_GPF, selector & 0xfffc); } new_stack = 0; - esp = env->regs[R_ESP]; + sa.sp = env->regs[R_ESP]; } - esp &= ~0xfLL; /* align stack */ + sa.sp &= ~0xfLL; /* align stack */ /* See do_interrupt_protected. */ eflags = cpu_compute_eflags(env); @@ -983,13 +1005,13 @@ static void do_interrupt64(CPUX86State *env, int intno, int is_int, eflags |= RF_MASK; } - PUSHQ(esp, env->segs[R_SS].selector); - PUSHQ(esp, env->regs[R_ESP]); - PUSHQ(esp, eflags); - PUSHQ(esp, env->segs[R_CS].selector); - PUSHQ(esp, old_eip); + pushq(&sa, env->segs[R_SS].selector); + pushq(&sa, env->regs[R_ESP]); + pushq(&sa, eflags); + pushq(&sa, env->segs[R_CS].selector); + pushq(&sa, old_eip); if (has_error_code) { - PUSHQ(esp, error_code); + pushq(&sa, error_code); } /* interrupt gate clear IF mask */ @@ -1002,7 +1024,7 @@ static void do_interrupt64(CPUX86State *env, int intno, int is_int, ss = 0 | dpl; cpu_x86_load_seg_cache(env, R_SS, ss, 0, 0, dpl << DESC_DPL_SHIFT); } - env->regs[R_ESP] = esp; + env->regs[R_ESP] = sa.sp; selector = (selector & ~3) | dpl; cpu_x86_load_seg_cache(env, R_CS, selector, @@ -1074,10 +1096,11 @@ static void do_interrupt_real(CPUX86State *env, int intno, int is_int, int error_code, unsigned int next_eip) { SegmentCache *dt; - target_ulong ptr, ssp; + target_ulong ptr; int selector; - uint32_t offset, esp; + uint32_t offset; uint32_t old_cs, old_eip; + StackAccess sa; /* real mode (simpler!) 
*/ dt = &env->idt; @@ -1087,8 +1110,13 @@ static void do_interrupt_real(CPUX86State *env, int intno, int is_int, ptr = dt->base + intno * 4; offset = cpu_lduw_kernel(env, ptr); selector = cpu_lduw_kernel(env, ptr + 2); - esp = env->regs[R_ESP]; - ssp = env->segs[R_SS].base; + + sa.env = env; + sa.ra = 0; + sa.sp = env->regs[R_ESP]; + sa.sp_mask = 0xffff; + sa.ss_base = env->segs[R_SS].base; + if (is_int) { old_eip = next_eip; } else { @@ -1096,12 +1124,12 @@ static void do_interrupt_real(CPUX86State *env, int intno, int is_int, } old_cs = env->segs[R_CS].selector; /* XXX: use SS segment size? */ - PUSHW(ssp, esp, 0xffff, cpu_compute_eflags(env)); - PUSHW(ssp, esp, 0xffff, old_cs); - PUSHW(ssp, esp, 0xffff, old_eip); + pushw(&sa, cpu_compute_eflags(env)); + pushw(&sa, old_cs); + pushw(&sa, old_eip); /* update processor state */ - env->regs[R_ESP] = (env->regs[R_ESP] & ~0xffff) | (esp & 0xffff); + SET_ESP(sa.sp, sa.sp_mask); env->eip = offset; env->segs[R_CS].selector = selector; env->segs[R_CS].base = (selector << 4); @@ -1544,21 +1572,23 @@ void helper_ljmp_protected(CPUX86State *env, int new_cs, target_ulong new_eip, void helper_lcall_real(CPUX86State *env, uint32_t new_cs, uint32_t new_eip, int shift, uint32_t next_eip) { - uint32_t esp, esp_mask; - target_ulong ssp; + StackAccess sa; + + sa.env = env; + sa.ra = GETPC(); + sa.sp = env->regs[R_ESP]; + sa.sp_mask = get_sp_mask(env->segs[R_SS].flags); + sa.ss_base = env->segs[R_SS].base; - esp = env->regs[R_ESP]; - esp_mask = get_sp_mask(env->segs[R_SS].flags); - ssp = env->segs[R_SS].base; if (shift) { - PUSHL_RA(ssp, esp, esp_mask, env->segs[R_CS].selector, GETPC()); - PUSHL_RA(ssp, esp, esp_mask, next_eip, GETPC()); + pushl(&sa, env->segs[R_CS].selector); + pushl(&sa, next_eip); } else { - PUSHW_RA(ssp, esp, esp_mask, env->segs[R_CS].selector, GETPC()); - PUSHW_RA(ssp, esp, esp_mask, next_eip, GETPC()); + pushw(&sa, env->segs[R_CS].selector); + pushw(&sa, next_eip); } - SET_ESP(esp, esp_mask); + SET_ESP(sa.sp, sa.sp_mask); env->eip = new_eip; env->segs[R_CS].selector = new_cs; env->segs[R_CS].base = (new_cs << 4); @@ -1570,9 +1600,10 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, { int new_stack, i; uint32_t e1, e2, cpl, dpl, rpl, selector, param_count; - uint32_t ss = 0, ss_e1 = 0, ss_e2 = 0, type, ss_dpl, sp_mask; + uint32_t ss = 0, ss_e1 = 0, ss_e2 = 0, type, ss_dpl; uint32_t val, limit, old_sp_mask; - target_ulong ssp, old_ssp, offset, sp; + target_ulong old_ssp, offset; + StackAccess sa; LOG_PCALL("lcall %04x:" TARGET_FMT_lx " s=%d\n", new_cs, new_eip, shift); LOG_PCALL_STATE(env_cpu(env)); @@ -1584,6 +1615,10 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, } cpl = env->hflags & HF_CPL_MASK; LOG_PCALL("desc=%08x:%08x\n", e1, e2); + + sa.env = env; + sa.ra = GETPC(); + if (e2 & DESC_S_MASK) { if (!(e2 & DESC_CS_MASK)) { raise_exception_err_ra(env, EXCP0D_GPF, new_cs & 0xfffc, GETPC()); @@ -1611,14 +1646,14 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, #ifdef TARGET_X86_64 /* XXX: check 16/32 bit cases in long mode */ if (shift == 2) { - target_ulong rsp; - /* 64 bit case */ - rsp = env->regs[R_ESP]; - PUSHQ_RA(rsp, env->segs[R_CS].selector, GETPC()); - PUSHQ_RA(rsp, next_eip, GETPC()); + sa.sp = env->regs[R_ESP]; + sa.sp_mask = -1; + sa.ss_base = 0; + pushq(&sa, env->segs[R_CS].selector); + pushq(&sa, next_eip); /* from this point, not restartable */ - env->regs[R_ESP] = rsp; + env->regs[R_ESP] = sa.sp; 
cpu_x86_load_seg_cache(env, R_CS, (new_cs & 0xfffc) | cpl, get_seg_base(e1, e2), get_seg_limit(e1, e2), e2); @@ -1626,15 +1661,15 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, } else #endif { - sp = env->regs[R_ESP]; - sp_mask = get_sp_mask(env->segs[R_SS].flags); - ssp = env->segs[R_SS].base; + sa.sp = env->regs[R_ESP]; + sa.sp_mask = get_sp_mask(env->segs[R_SS].flags); + sa.ss_base = env->segs[R_SS].base; if (shift) { - PUSHL_RA(ssp, sp, sp_mask, env->segs[R_CS].selector, GETPC()); - PUSHL_RA(ssp, sp, sp_mask, next_eip, GETPC()); + pushl(&sa, env->segs[R_CS].selector); + pushl(&sa, next_eip); } else { - PUSHW_RA(ssp, sp, sp_mask, env->segs[R_CS].selector, GETPC()); - PUSHW_RA(ssp, sp, sp_mask, next_eip, GETPC()); + pushw(&sa, env->segs[R_CS].selector); + pushw(&sa, next_eip); } limit = get_seg_limit(e1, e2); @@ -1642,7 +1677,7 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, raise_exception_err_ra(env, EXCP0D_GPF, new_cs & 0xfffc, GETPC()); } /* from this point, not restartable */ - SET_ESP(sp, sp_mask); + SET_ESP(sa.sp, sa.sp_mask); cpu_x86_load_seg_cache(env, R_CS, (new_cs & 0xfffc) | cpl, get_seg_base(e1, e2), limit, e2); env->eip = new_eip; @@ -1737,13 +1772,13 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, /* to inner privilege */ #ifdef TARGET_X86_64 if (shift == 2) { - sp = get_rsp_from_tss(env, dpl); ss = dpl; /* SS = NULL selector with RPL = new CPL */ new_stack = 1; - sp_mask = 0; - ssp = 0; /* SS base is always zero in IA-32e mode */ + sa.sp = get_rsp_from_tss(env, dpl); + sa.sp_mask = -1; + sa.ss_base = 0; /* SS base is always zero in IA-32e mode */ LOG_PCALL("new ss:rsp=%04x:%016llx env->regs[R_ESP]=" - TARGET_FMT_lx "\n", ss, sp, env->regs[R_ESP]); + TARGET_FMT_lx "\n", ss, sa.sp, env->regs[R_ESP]); } else #endif { @@ -1752,7 +1787,6 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, LOG_PCALL("new ss:esp=%04x:%08x param_count=%d env->regs[R_ESP]=" TARGET_FMT_lx "\n", ss, sp32, param_count, env->regs[R_ESP]); - sp = sp32; if ((ss & 0xfffc) == 0) { raise_exception_err_ra(env, EXCP0A_TSS, ss & 0xfffc, GETPC()); } @@ -1775,63 +1809,64 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, raise_exception_err_ra(env, EXCP0A_TSS, ss & 0xfffc, GETPC()); } - sp_mask = get_sp_mask(ss_e2); - ssp = get_seg_base(ss_e1, ss_e2); + sa.sp = sp32; + sa.sp_mask = get_sp_mask(ss_e2); + sa.ss_base = get_seg_base(ss_e1, ss_e2); } /* push_size = ((param_count * 2) + 8) << shift; */ - old_sp_mask = get_sp_mask(env->segs[R_SS].flags); old_ssp = env->segs[R_SS].base; + #ifdef TARGET_X86_64 if (shift == 2) { /* XXX: verify if new stack address is canonical */ - PUSHQ_RA(sp, env->segs[R_SS].selector, GETPC()); - PUSHQ_RA(sp, env->regs[R_ESP], GETPC()); + pushq(&sa, env->segs[R_SS].selector); + pushq(&sa, env->regs[R_ESP]); /* parameters aren't supported for 64-bit call gates */ } else #endif if (shift == 1) { - PUSHL_RA(ssp, sp, sp_mask, env->segs[R_SS].selector, GETPC()); - PUSHL_RA(ssp, sp, sp_mask, env->regs[R_ESP], GETPC()); + pushl(&sa, env->segs[R_SS].selector); + pushl(&sa, env->regs[R_ESP]); for (i = param_count - 1; i >= 0; i--) { val = cpu_ldl_data_ra(env, - old_ssp + ((env->regs[R_ESP] + i * 4) & old_sp_mask), - GETPC()); - PUSHL_RA(ssp, sp, sp_mask, val, GETPC()); + old_ssp + ((env->regs[R_ESP] + i * 4) & old_sp_mask), + GETPC()); + pushl(&sa, val); } } else { - PUSHW_RA(ssp, sp, sp_mask, env->segs[R_SS].selector, GETPC()); 
- PUSHW_RA(ssp, sp, sp_mask, env->regs[R_ESP], GETPC()); + pushw(&sa, env->segs[R_SS].selector); + pushw(&sa, env->regs[R_ESP]); for (i = param_count - 1; i >= 0; i--) { val = cpu_lduw_data_ra(env, - old_ssp + ((env->regs[R_ESP] + i * 2) & old_sp_mask), - GETPC()); - PUSHW_RA(ssp, sp, sp_mask, val, GETPC()); + old_ssp + ((env->regs[R_ESP] + i * 2) & old_sp_mask), + GETPC()); + pushw(&sa, val); } } new_stack = 1; } else { /* to same privilege */ - sp = env->regs[R_ESP]; - sp_mask = get_sp_mask(env->segs[R_SS].flags); - ssp = env->segs[R_SS].base; + sa.sp = env->regs[R_ESP]; + sa.sp_mask = get_sp_mask(env->segs[R_SS].flags); + sa.ss_base = env->segs[R_SS].base; /* push_size = (4 << shift); */ new_stack = 0; } #ifdef TARGET_X86_64 if (shift == 2) { - PUSHQ_RA(sp, env->segs[R_CS].selector, GETPC()); - PUSHQ_RA(sp, next_eip, GETPC()); + pushq(&sa, env->segs[R_CS].selector); + pushq(&sa, next_eip); } else #endif if (shift == 1) { - PUSHL_RA(ssp, sp, sp_mask, env->segs[R_CS].selector, GETPC()); - PUSHL_RA(ssp, sp, sp_mask, next_eip, GETPC()); + pushl(&sa, env->segs[R_CS].selector); + pushl(&sa, next_eip); } else { - PUSHW_RA(ssp, sp, sp_mask, env->segs[R_CS].selector, GETPC()); - PUSHW_RA(ssp, sp, sp_mask, next_eip, GETPC()); + pushw(&sa, env->segs[R_CS].selector); + pushw(&sa, next_eip); } /* from this point, not restartable */ @@ -1845,7 +1880,7 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, { ss = (ss & ~3) | dpl; cpu_x86_load_seg_cache(env, R_SS, ss, - ssp, + sa.ss_base, get_seg_limit(ss_e1, ss_e2), ss_e2); } @@ -1856,7 +1891,7 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, get_seg_base(e1, e2), get_seg_limit(e1, e2), e2); - SET_ESP(sp, sp_mask); + SET_ESP(sa.sp, sa.sp_mask); env->eip = offset; } } @@ -1864,26 +1899,28 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, /* real and vm86 mode iret */ void helper_iret_real(CPUX86State *env, int shift) { - uint32_t sp, new_cs, new_eip, new_eflags, sp_mask; - target_ulong ssp; + uint32_t new_cs, new_eip, new_eflags; int eflags_mask; + StackAccess sa; + + sa.env = env; + sa.ra = GETPC(); + sa.sp_mask = 0xffff; /* XXXX: use SS segment size? */ + sa.sp = env->regs[R_ESP]; + sa.ss_base = env->segs[R_SS].base; - sp_mask = 0xffff; /* XXXX: use SS segment size? 
*/ - sp = env->regs[R_ESP]; - ssp = env->segs[R_SS].base; if (shift == 1) { /* 32 bits */ - POPL_RA(ssp, sp, sp_mask, new_eip, GETPC()); - POPL_RA(ssp, sp, sp_mask, new_cs, GETPC()); - new_cs &= 0xffff; - POPL_RA(ssp, sp, sp_mask, new_eflags, GETPC()); + new_eip = popl(&sa); + new_cs = popl(&sa) & 0xffff; + new_eflags = popl(&sa); } else { /* 16 bits */ - POPW_RA(ssp, sp, sp_mask, new_eip, GETPC()); - POPW_RA(ssp, sp, sp_mask, new_cs, GETPC()); - POPW_RA(ssp, sp, sp_mask, new_eflags, GETPC()); + new_eip = popw(&sa); + new_cs = popw(&sa); + new_eflags = popw(&sa); } - env->regs[R_ESP] = (env->regs[R_ESP] & ~sp_mask) | (sp & sp_mask); + SET_ESP(sa.sp, sa.sp_mask); env->segs[R_CS].selector = new_cs; env->segs[R_CS].base = (new_cs << 4); env->eip = new_eip; @@ -1936,47 +1973,49 @@ static inline void helper_ret_protected(CPUX86State *env, int shift, uint32_t new_es, new_ds, new_fs, new_gs; uint32_t e1, e2, ss_e1, ss_e2; int cpl, dpl, rpl, eflags_mask, iopl; - target_ulong ssp, sp, new_eip, new_esp, sp_mask; + target_ulong new_eip, new_esp; + StackAccess sa; + + sa.env = env; + sa.ra = retaddr; #ifdef TARGET_X86_64 if (shift == 2) { - sp_mask = -1; + sa.sp_mask = -1; } else #endif { - sp_mask = get_sp_mask(env->segs[R_SS].flags); + sa.sp_mask = get_sp_mask(env->segs[R_SS].flags); } - sp = env->regs[R_ESP]; - ssp = env->segs[R_SS].base; + sa.sp = env->regs[R_ESP]; + sa.ss_base = env->segs[R_SS].base; new_eflags = 0; /* avoid warning */ #ifdef TARGET_X86_64 if (shift == 2) { - POPQ_RA(sp, new_eip, retaddr); - POPQ_RA(sp, new_cs, retaddr); - new_cs &= 0xffff; + new_eip = popq(&sa); + new_cs = popq(&sa) & 0xffff; if (is_iret) { - POPQ_RA(sp, new_eflags, retaddr); + new_eflags = popq(&sa); } } else #endif { if (shift == 1) { /* 32 bits */ - POPL_RA(ssp, sp, sp_mask, new_eip, retaddr); - POPL_RA(ssp, sp, sp_mask, new_cs, retaddr); - new_cs &= 0xffff; + new_eip = popl(&sa); + new_cs = popl(&sa) & 0xffff; if (is_iret) { - POPL_RA(ssp, sp, sp_mask, new_eflags, retaddr); + new_eflags = popl(&sa); if (new_eflags & VM_MASK) { goto return_to_vm86; } } } else { /* 16 bits */ - POPW_RA(ssp, sp, sp_mask, new_eip, retaddr); - POPW_RA(ssp, sp, sp_mask, new_cs, retaddr); + new_eip = popw(&sa); + new_cs = popw(&sa); if (is_iret) { - POPW_RA(ssp, sp, sp_mask, new_eflags, retaddr); + new_eflags = popw(&sa); } } } @@ -2012,7 +2051,7 @@ static inline void helper_ret_protected(CPUX86State *env, int shift, raise_exception_err_ra(env, EXCP0B_NOSEG, new_cs & 0xfffc, retaddr); } - sp += addend; + sa.sp += addend; if (rpl == cpl && (!(env->hflags & HF_CS64_MASK) || ((env->hflags & HF_CS64_MASK) && !is_iret))) { /* return to same privilege level */ @@ -2024,21 +2063,19 @@ static inline void helper_ret_protected(CPUX86State *env, int shift, /* return to different privilege level */ #ifdef TARGET_X86_64 if (shift == 2) { - POPQ_RA(sp, new_esp, retaddr); - POPQ_RA(sp, new_ss, retaddr); - new_ss &= 0xffff; + new_esp = popq(&sa); + new_ss = popq(&sa) & 0xffff; } else #endif { if (shift == 1) { /* 32 bits */ - POPL_RA(ssp, sp, sp_mask, new_esp, retaddr); - POPL_RA(ssp, sp, sp_mask, new_ss, retaddr); - new_ss &= 0xffff; + new_esp = popl(&sa); + new_ss = popl(&sa) & 0xffff; } else { /* 16 bits */ - POPW_RA(ssp, sp, sp_mask, new_esp, retaddr); - POPW_RA(ssp, sp, sp_mask, new_ss, retaddr); + new_esp = popw(&sa); + new_ss = popw(&sa); } } LOG_PCALL("new ss:esp=%04x:" TARGET_FMT_lx "\n", @@ -2088,14 +2125,14 @@ static inline void helper_ret_protected(CPUX86State *env, int shift, get_seg_base(e1, e2), get_seg_limit(e1, e2), e2); - sp = 
new_esp; + sa.sp = new_esp; #ifdef TARGET_X86_64 if (env->hflags & HF_CS64_MASK) { - sp_mask = -1; + sa.sp_mask = -1; } else #endif { - sp_mask = get_sp_mask(ss_e2); + sa.sp_mask = get_sp_mask(ss_e2); } /* validate data segments */ @@ -2104,9 +2141,9 @@ static inline void helper_ret_protected(CPUX86State *env, int shift, validate_seg(env, R_FS, rpl); validate_seg(env, R_GS, rpl); - sp += addend; + sa.sp += addend; } - SET_ESP(sp, sp_mask); + SET_ESP(sa.sp, sa.sp_mask); env->eip = new_eip; if (is_iret) { /* NOTE: 'cpl' is the _old_ CPL */ @@ -2126,12 +2163,12 @@ static inline void helper_ret_protected(CPUX86State *env, int shift, return; return_to_vm86: - POPL_RA(ssp, sp, sp_mask, new_esp, retaddr); - POPL_RA(ssp, sp, sp_mask, new_ss, retaddr); - POPL_RA(ssp, sp, sp_mask, new_es, retaddr); - POPL_RA(ssp, sp, sp_mask, new_ds, retaddr); - POPL_RA(ssp, sp, sp_mask, new_fs, retaddr); - POPL_RA(ssp, sp, sp_mask, new_gs, retaddr); + new_esp = popl(&sa); + new_ss = popl(&sa); + new_es = popl(&sa); + new_ds = popl(&sa); + new_fs = popl(&sa); + new_gs = popl(&sa); /* modify processor state */ cpu_load_eflags(env, new_eflags, TF_MASK | AC_MASK | ID_MASK |
From patchwork Wed Jul 10 06:29:15 2024
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 811772
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Cc: rrh.henry@gmail.com, richard.henderson@linaro.org
Subject: [PATCH 05/10] target/i386/tcg: Introduce x86_mmu_index_{kernel_,}pl
Date: Wed, 10 Jul 2024 08:29:15 +0200
Message-ID: <20240710062920.73063-6-pbonzini@redhat.com>
In-Reply-To: <20240710062920.73063-1-pbonzini@redhat.com>
References: <20240710062920.73063-1-pbonzini@redhat.com>

From: Richard Henderson

Disconnect the MMU index computation from the current privilege level
as stored in env->hflags.

Signed-off-by: Richard Henderson
Link: https://lore.kernel.org/r/20240617161210.4639-2-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini
---
 target/i386/cpu.h | 11 ++---------
 target/i386/cpu.c | 27 ++++++++++++++++++++++++---
 2 files changed, 26 insertions(+), 12 deletions(-)

diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index c43ac01c794..1e121acef54 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -2445,15 +2445,8 @@ static inline bool is_mmu_index_32(int mmu_index)
     return mmu_index & 1;
 }
 
-static inline int cpu_mmu_index_kernel(CPUX86State *env)
-{
-    int mmu_index_32 = (env->hflags & HF_LMA_MASK) ? 0 : 1;
-    int mmu_index_base =
-        !(env->hflags & HF_SMAP_MASK) ? MMU_KNOSMAP64_IDX :
-        ((env->hflags & HF_CPL_MASK) < 3 && (env->eflags & AC_MASK)) ? MMU_KNOSMAP64_IDX : MMU_KSMAP64_IDX;
-
-    return mmu_index_base + mmu_index_32;
-}
+int x86_mmu_index_pl(CPUX86State *env, unsigned pl);
+int cpu_mmu_index_kernel(CPUX86State *env);
 
 #define CC_DST (env->cc_dst)
 #define CC_SRC (env->cc_src)
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index c05765eeafc..4688d140c2d 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -8122,18 +8122,39 @@ static bool x86_cpu_has_work(CPUState *cs)
     return x86_cpu_pending_interrupt(cs, cs->interrupt_request) != 0;
 }
 
-static int x86_cpu_mmu_index(CPUState *cs, bool ifetch)
+int x86_mmu_index_pl(CPUX86State *env, unsigned pl)
 {
-    CPUX86State *env = cpu_env(cs);
     int mmu_index_32 = (env->hflags & HF_CS64_MASK) ? 0 : 1;
     int mmu_index_base =
-        (env->hflags & HF_CPL_MASK) == 3 ? MMU_USER64_IDX :
+        pl == 3 ? MMU_USER64_IDX :
         !(env->hflags & HF_SMAP_MASK) ? MMU_KNOSMAP64_IDX :
         (env->eflags & AC_MASK) ? MMU_KNOSMAP64_IDX : MMU_KSMAP64_IDX;
 
     return mmu_index_base + mmu_index_32;
 }
 
+static int x86_cpu_mmu_index(CPUState *cs, bool ifetch)
+{
+    CPUX86State *env = cpu_env(cs);
+    return x86_mmu_index_pl(env, env->hflags & HF_CPL_MASK);
+}
+
+static int x86_mmu_index_kernel_pl(CPUX86State *env, unsigned pl)
+{
+    int mmu_index_32 = (env->hflags & HF_LMA_MASK) ? 0 : 1;
+    int mmu_index_base =
+        !(env->hflags & HF_SMAP_MASK) ? MMU_KNOSMAP64_IDX :
+        (pl < 3 && (env->eflags & AC_MASK)
+         ? MMU_KNOSMAP64_IDX : MMU_KSMAP64_IDX);
+
+    return mmu_index_base + mmu_index_32;
+}
+
+int cpu_mmu_index_kernel(CPUX86State *env)
+{
+    return x86_mmu_index_kernel_pl(env, env->hflags & HF_CPL_MASK);
+}
+
 static void x86_disas_set_info(CPUState *cs, disassemble_info *info)
 {
     X86CPU *cpu = X86_CPU(cs);
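The value of taking the privilege level as an explicit parameter is that
callers are no longer tied to the CPL stored in env->hflags: later patches in
the series can ask for the MMU index of the privilege level they are about to
switch to. A hypothetical caller (illustration only, not part of this patch)
would look like:

    /* Illustration only -- not part of this patch.  Compute the MMU index
     * for a stack access performed at a call gate's DPL rather than at the
     * current CPL implied by env->hflags. */
    static int gate_stack_mmu_index(CPUX86State *env, unsigned dpl)
    {
        return x86_mmu_index_pl(env, dpl);
    }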