From patchwork Sun Jul 14 11:10:32 2024
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 812559
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Cc: Richard Henderson
Subject: [PULL 02/13] target/i386/tcg: Remove SEG_ADDL
Date: Sun, 14 Jul 2024 13:10:32 +0200
Message-ID: <20240714111043.14132-3-pbonzini@redhat.com>
In-Reply-To: <20240714111043.14132-1-pbonzini@redhat.com>
References: <20240714111043.14132-1-pbonzini@redhat.com>

From: Richard Henderson

This truncation is now handled by MMU_*32_IDX.  The introduction of
MMU_*32_IDX in fact applied correct 32-bit wraparound to 16-bit accesses
with a high segment base (e.g. big real mode or vm86 mode), which did
not use SEG_ADDL.

Signed-off-by: Richard Henderson
Link: https://lore.kernel.org/r/20240617161210.4639-3-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini
---
 target/i386/tcg/seg_helper.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/target/i386/tcg/seg_helper.c b/target/i386/tcg/seg_helper.c
index aee3d19f29b..19d6b41a589 100644
--- a/target/i386/tcg/seg_helper.c
+++ b/target/i386/tcg/seg_helper.c
@@ -579,10 +579,6 @@ int exception_has_error_code(int intno)
     } while (0)
 #endif
 
-/* in 64-bit machines, this can overflow. So this segment addition macro
- * can be used to trim the value to 32-bit whenever needed */
-#define SEG_ADDL(ssp, sp, sp_mask) ((uint32_t)((ssp) + (sp & (sp_mask))))
-
 /* XXX: add a is_user flag to have proper security support */
 #define PUSHW_RA(ssp, sp, sp_mask, val, ra) \
     { \
@@ -593,7 +589,7 @@ int exception_has_error_code(int intno)
 #define PUSHL_RA(ssp, sp, sp_mask, val, ra) \
     { \
         sp -= 4; \
-        cpu_stl_kernel_ra(env, SEG_ADDL(ssp, sp, sp_mask), (uint32_t)(val), ra); \
+        cpu_stl_kernel_ra(env, (ssp) + (sp & (sp_mask)), (val), ra); \
     }
 
 #define POPW_RA(ssp, sp, sp_mask, val, ra) \
@@ -604,7 +600,7 @@ int exception_has_error_code(int intno)
 
 #define POPL_RA(ssp, sp, sp_mask, val, ra) \
     { \
-        val = (uint32_t)cpu_ldl_kernel_ra(env, SEG_ADDL(ssp, sp, sp_mask), ra); \
+        val = (uint32_t)cpu_ldl_kernel_ra(env, (ssp) + (sp & (sp_mask)), ra); \
         sp += 4; \
     }
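To illustrate the equivalence, a standalone sketch (plain C, not QEMU code; the segment base, stack pointer and mask values below are made up): the explicit (uint32_t) cast that SEG_ADDL applied inside the macros is the same 32-bit wrap-around that an access made through a MMU_*32_IDX index now applies when the address is actually used.

/* Standalone sketch, not QEMU code: a 32-bit stack (sp_mask 0xffffffff)
 * with a segment base high enough that the sum overflows 32 bits. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t ssp = 0xffff0000;        /* segment base (made-up value) */
    uint64_t sp = 0x00020000;         /* stack pointer (made-up value) */
    uint64_t sp_mask = 0xffffffff;

    /* Old: SEG_ADDL truncated explicitly inside the PUSH/POP macros. */
    uint32_t seg_addl = (uint32_t)(ssp + (sp & sp_mask));

    /* New: the sum stays untruncated here; an access made with a
     * MMU_*32_IDX index wraps it to 32 bits when it is used. */
    uint64_t linear = ssp + (sp & sp_mask);
    uint32_t wrapped = (uint32_t)linear;

    /* Both print 0x00010000. */
    printf("0x%08" PRIx32 " 0x%08" PRIx32 "\n", seg_addl, wrapped);
    return 0;
}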
From patchwork Sun Jul 14 11:10:35 2024
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 812557
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Cc: Richard Henderson
Subject: [PULL 05/13] target/i386/tcg: Reorg push/pop within seg_helper.c
Date: Sun, 14 Jul 2024 13:10:35 +0200
Message-ID: <20240714111043.14132-6-pbonzini@redhat.com>
In-Reply-To: <20240714111043.14132-1-pbonzini@redhat.com>
References: <20240714111043.14132-1-pbonzini@redhat.com>

From: Richard Henderson

Interrupts and call gates should use accesses with the DPL as the
privilege level.  While computing the applicable MMU index is easy,
the harder thing is how to plumb it in the code.

One possibility could be to add a single argument to the PUSH* macros
for the privilege level, but this is repetitive and risks confusion
between the involved privilege levels.

Another possibility is to pass both CPL and DPL, and to adjust both
PUSH* and POP* to use specific privilege levels (instead of using
cpu_{ld,st}*_data).  This makes the code more symmetric.

However, a more complicated but much nicer approach is to use a
structure that holds the stack parameters, env, and the unwind return
address, and to rewrite the macros as functions.  The struct provides
an easy home for the MMU index as well.
Signed-off-by: Richard Henderson Link: https://lore.kernel.org/r/20240617161210.4639-4-richard.henderson@linaro.org Signed-off-by: Paolo Bonzini --- target/i386/tcg/seg_helper.c | 481 +++++++++++++++++++---------------- 1 file changed, 259 insertions(+), 222 deletions(-) diff --git a/target/i386/tcg/seg_helper.c b/target/i386/tcg/seg_helper.c index b985382d704..b6902ca3fba 100644 --- a/target/i386/tcg/seg_helper.c +++ b/target/i386/tcg/seg_helper.c @@ -28,6 +28,68 @@ #include "helper-tcg.h" #include "seg_helper.h" +#ifdef TARGET_X86_64 +#define SET_ESP(val, sp_mask) \ + do { \ + if ((sp_mask) == 0xffff) { \ + env->regs[R_ESP] = (env->regs[R_ESP] & ~0xffff) | \ + ((val) & 0xffff); \ + } else if ((sp_mask) == 0xffffffffLL) { \ + env->regs[R_ESP] = (uint32_t)(val); \ + } else { \ + env->regs[R_ESP] = (val); \ + } \ + } while (0) +#else +#define SET_ESP(val, sp_mask) \ + do { \ + env->regs[R_ESP] = (env->regs[R_ESP] & ~(sp_mask)) | \ + ((val) & (sp_mask)); \ + } while (0) +#endif + +/* XXX: use mmu_index to have proper DPL support */ +typedef struct StackAccess +{ + CPUX86State *env; + uintptr_t ra; + target_ulong ss_base; + target_ulong sp; + target_ulong sp_mask; +} StackAccess; + +static void pushw(StackAccess *sa, uint16_t val) +{ + sa->sp -= 2; + cpu_stw_kernel_ra(sa->env, sa->ss_base + (sa->sp & sa->sp_mask), + val, sa->ra); +} + +static void pushl(StackAccess *sa, uint32_t val) +{ + sa->sp -= 4; + cpu_stl_kernel_ra(sa->env, sa->ss_base + (sa->sp & sa->sp_mask), + val, sa->ra); +} + +static uint16_t popw(StackAccess *sa) +{ + uint16_t ret = cpu_lduw_data_ra(sa->env, + sa->ss_base + (sa->sp & sa->sp_mask), + sa->ra); + sa->sp += 2; + return ret; +} + +static uint32_t popl(StackAccess *sa) +{ + uint32_t ret = cpu_ldl_data_ra(sa->env, + sa->ss_base + (sa->sp & sa->sp_mask), + sa->ra); + sa->sp += 4; + return ret; +} + int get_pg_mode(CPUX86State *env) { int pg_mode = 0; @@ -559,68 +621,19 @@ int exception_has_error_code(int intno) return 0; } -#ifdef TARGET_X86_64 -#define SET_ESP(val, sp_mask) \ - do { \ - if ((sp_mask) == 0xffff) { \ - env->regs[R_ESP] = (env->regs[R_ESP] & ~0xffff) | \ - ((val) & 0xffff); \ - } else if ((sp_mask) == 0xffffffffLL) { \ - env->regs[R_ESP] = (uint32_t)(val); \ - } else { \ - env->regs[R_ESP] = (val); \ - } \ - } while (0) -#else -#define SET_ESP(val, sp_mask) \ - do { \ - env->regs[R_ESP] = (env->regs[R_ESP] & ~(sp_mask)) | \ - ((val) & (sp_mask)); \ - } while (0) -#endif - -/* XXX: add a is_user flag to have proper security support */ -#define PUSHW_RA(ssp, sp, sp_mask, val, ra) \ - { \ - sp -= 2; \ - cpu_stw_kernel_ra(env, (ssp) + (sp & (sp_mask)), (val), ra); \ - } - -#define PUSHL_RA(ssp, sp, sp_mask, val, ra) \ - { \ - sp -= 4; \ - cpu_stl_kernel_ra(env, (ssp) + (sp & (sp_mask)), (val), ra); \ - } - -#define POPW_RA(ssp, sp, sp_mask, val, ra) \ - { \ - val = cpu_lduw_data_ra(env, (ssp) + (sp & (sp_mask)), ra); \ - sp += 2; \ - } - -#define POPL_RA(ssp, sp, sp_mask, val, ra) \ - { \ - val = (uint32_t)cpu_ldl_data_ra(env, (ssp) + (sp & (sp_mask)), ra); \ - sp += 4; \ - } - -#define PUSHW(ssp, sp, sp_mask, val) PUSHW_RA(ssp, sp, sp_mask, val, 0) -#define PUSHL(ssp, sp, sp_mask, val) PUSHL_RA(ssp, sp, sp_mask, val, 0) -#define POPW(ssp, sp, sp_mask, val) POPW_RA(ssp, sp, sp_mask, val, 0) -#define POPL(ssp, sp, sp_mask, val) POPL_RA(ssp, sp, sp_mask, val, 0) - /* protected mode interrupt */ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, int error_code, unsigned int next_eip, int is_hw) { SegmentCache *dt; - target_ulong ptr, ssp; 
+ target_ulong ptr; int type, dpl, selector, ss_dpl, cpl; int has_error_code, new_stack, shift; - uint32_t e1, e2, offset, ss = 0, esp, ss_e1 = 0, ss_e2 = 0; - uint32_t old_eip, sp_mask, eflags; + uint32_t e1, e2, offset, ss = 0, ss_e1 = 0, ss_e2 = 0; + uint32_t old_eip, eflags; int vm86 = env->eflags & VM_MASK; + StackAccess sa; bool set_rf; has_error_code = 0; @@ -662,6 +675,9 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, raise_exception_err(env, EXCP0D_GPF, intno * 8 + 2); } + sa.env = env; + sa.ra = 0; + if (type == 5) { /* task gate */ /* must do that check here to return the correct error code */ @@ -672,18 +688,18 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, if (has_error_code) { /* push the error code */ if (env->segs[R_SS].flags & DESC_B_MASK) { - sp_mask = 0xffffffff; + sa.sp_mask = 0xffffffff; } else { - sp_mask = 0xffff; + sa.sp_mask = 0xffff; } - esp = env->regs[R_ESP]; - ssp = env->segs[R_SS].base; + sa.sp = env->regs[R_ESP]; + sa.ss_base = env->segs[R_SS].base; if (shift) { - PUSHL(ssp, esp, sp_mask, error_code); + pushl(&sa, error_code); } else { - PUSHW(ssp, esp, sp_mask, error_code); + pushw(&sa, error_code); } - SET_ESP(esp, sp_mask); + SET_ESP(sa.sp, sa.sp_mask); } return; } @@ -717,6 +733,7 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, } if (dpl < cpl) { /* to inner privilege */ + uint32_t esp; get_ss_esp_from_tss(env, &ss, &esp, dpl, 0); if ((ss & 0xfffc) == 0) { raise_exception_err(env, EXCP0A_TSS, ss & 0xfffc); @@ -740,17 +757,18 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, raise_exception_err(env, EXCP0A_TSS, ss & 0xfffc); } new_stack = 1; - sp_mask = get_sp_mask(ss_e2); - ssp = get_seg_base(ss_e1, ss_e2); + sa.sp = esp; + sa.sp_mask = get_sp_mask(ss_e2); + sa.ss_base = get_seg_base(ss_e1, ss_e2); } else { /* to same privilege */ if (vm86) { raise_exception_err(env, EXCP0D_GPF, selector & 0xfffc); } new_stack = 0; - sp_mask = get_sp_mask(env->segs[R_SS].flags); - ssp = env->segs[R_SS].base; - esp = env->regs[R_ESP]; + sa.sp = env->regs[R_ESP]; + sa.sp_mask = get_sp_mask(env->segs[R_SS].flags); + sa.ss_base = env->segs[R_SS].base; } shift = type >> 3; @@ -775,36 +793,36 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, if (shift == 1) { if (new_stack) { if (vm86) { - PUSHL(ssp, esp, sp_mask, env->segs[R_GS].selector); - PUSHL(ssp, esp, sp_mask, env->segs[R_FS].selector); - PUSHL(ssp, esp, sp_mask, env->segs[R_DS].selector); - PUSHL(ssp, esp, sp_mask, env->segs[R_ES].selector); + pushl(&sa, env->segs[R_GS].selector); + pushl(&sa, env->segs[R_FS].selector); + pushl(&sa, env->segs[R_DS].selector); + pushl(&sa, env->segs[R_ES].selector); } - PUSHL(ssp, esp, sp_mask, env->segs[R_SS].selector); - PUSHL(ssp, esp, sp_mask, env->regs[R_ESP]); + pushl(&sa, env->segs[R_SS].selector); + pushl(&sa, env->regs[R_ESP]); } - PUSHL(ssp, esp, sp_mask, eflags); - PUSHL(ssp, esp, sp_mask, env->segs[R_CS].selector); - PUSHL(ssp, esp, sp_mask, old_eip); + pushl(&sa, eflags); + pushl(&sa, env->segs[R_CS].selector); + pushl(&sa, old_eip); if (has_error_code) { - PUSHL(ssp, esp, sp_mask, error_code); + pushl(&sa, error_code); } } else { if (new_stack) { if (vm86) { - PUSHW(ssp, esp, sp_mask, env->segs[R_GS].selector); - PUSHW(ssp, esp, sp_mask, env->segs[R_FS].selector); - PUSHW(ssp, esp, sp_mask, env->segs[R_DS].selector); - PUSHW(ssp, esp, sp_mask, env->segs[R_ES].selector); + pushw(&sa, env->segs[R_GS].selector); + pushw(&sa, 
env->segs[R_FS].selector); + pushw(&sa, env->segs[R_DS].selector); + pushw(&sa, env->segs[R_ES].selector); } - PUSHW(ssp, esp, sp_mask, env->segs[R_SS].selector); - PUSHW(ssp, esp, sp_mask, env->regs[R_ESP]); + pushw(&sa, env->segs[R_SS].selector); + pushw(&sa, env->regs[R_ESP]); } - PUSHW(ssp, esp, sp_mask, eflags); - PUSHW(ssp, esp, sp_mask, env->segs[R_CS].selector); - PUSHW(ssp, esp, sp_mask, old_eip); + pushw(&sa, eflags); + pushw(&sa, env->segs[R_CS].selector); + pushw(&sa, old_eip); if (has_error_code) { - PUSHW(ssp, esp, sp_mask, error_code); + pushw(&sa, error_code); } } @@ -822,10 +840,10 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, cpu_x86_load_seg_cache(env, R_GS, 0, 0, 0, 0); } ss = (ss & ~3) | dpl; - cpu_x86_load_seg_cache(env, R_SS, ss, - ssp, get_seg_limit(ss_e1, ss_e2), ss_e2); + cpu_x86_load_seg_cache(env, R_SS, ss, sa.ss_base, + get_seg_limit(ss_e1, ss_e2), ss_e2); } - SET_ESP(esp, sp_mask); + SET_ESP(sa.sp, sa.sp_mask); selector = (selector & ~3) | dpl; cpu_x86_load_seg_cache(env, R_CS, selector, @@ -837,20 +855,18 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, #ifdef TARGET_X86_64 -#define PUSHQ_RA(sp, val, ra) \ - { \ - sp -= 8; \ - cpu_stq_kernel_ra(env, sp, (val), ra); \ - } +static void pushq(StackAccess *sa, uint64_t val) +{ + sa->sp -= 8; + cpu_stq_kernel_ra(sa->env, sa->sp, val, sa->ra); +} -#define POPQ_RA(sp, val, ra) \ - { \ - val = cpu_ldq_data_ra(env, sp, ra); \ - sp += 8; \ - } - -#define PUSHQ(sp, val) PUSHQ_RA(sp, val, 0) -#define POPQ(sp, val) POPQ_RA(sp, val, 0) +static uint64_t popq(StackAccess *sa) +{ + uint64_t ret = cpu_ldq_data_ra(sa->env, sa->sp, sa->ra); + sa->sp += 8; + return ret; +} static inline target_ulong get_rsp_from_tss(CPUX86State *env, int level) { @@ -893,8 +909,9 @@ static void do_interrupt64(CPUX86State *env, int intno, int is_int, int type, dpl, selector, cpl, ist; int has_error_code, new_stack; uint32_t e1, e2, e3, ss, eflags; - target_ulong old_eip, esp, offset; + target_ulong old_eip, offset; bool set_rf; + StackAccess sa; has_error_code = 0; if (!is_int && !is_hw) { @@ -962,10 +979,15 @@ static void do_interrupt64(CPUX86State *env, int intno, int is_int, if (e2 & DESC_C_MASK) { dpl = cpl; } + + sa.env = env; + sa.ra = 0; + sa.sp_mask = -1; + sa.ss_base = 0; if (dpl < cpl || ist != 0) { /* to inner privilege */ new_stack = 1; - esp = get_rsp_from_tss(env, ist != 0 ? ist + 3 : dpl); + sa.sp = get_rsp_from_tss(env, ist != 0 ? ist + 3 : dpl); ss = 0; } else { /* to same privilege */ @@ -973,9 +995,9 @@ static void do_interrupt64(CPUX86State *env, int intno, int is_int, raise_exception_err(env, EXCP0D_GPF, selector & 0xfffc); } new_stack = 0; - esp = env->regs[R_ESP]; + sa.sp = env->regs[R_ESP]; } - esp &= ~0xfLL; /* align stack */ + sa.sp &= ~0xfLL; /* align stack */ /* See do_interrupt_protected. 
*/ eflags = cpu_compute_eflags(env); @@ -983,13 +1005,13 @@ static void do_interrupt64(CPUX86State *env, int intno, int is_int, eflags |= RF_MASK; } - PUSHQ(esp, env->segs[R_SS].selector); - PUSHQ(esp, env->regs[R_ESP]); - PUSHQ(esp, eflags); - PUSHQ(esp, env->segs[R_CS].selector); - PUSHQ(esp, old_eip); + pushq(&sa, env->segs[R_SS].selector); + pushq(&sa, env->regs[R_ESP]); + pushq(&sa, eflags); + pushq(&sa, env->segs[R_CS].selector); + pushq(&sa, old_eip); if (has_error_code) { - PUSHQ(esp, error_code); + pushq(&sa, error_code); } /* interrupt gate clear IF mask */ @@ -1002,7 +1024,7 @@ static void do_interrupt64(CPUX86State *env, int intno, int is_int, ss = 0 | dpl; cpu_x86_load_seg_cache(env, R_SS, ss, 0, 0, dpl << DESC_DPL_SHIFT); } - env->regs[R_ESP] = esp; + env->regs[R_ESP] = sa.sp; selector = (selector & ~3) | dpl; cpu_x86_load_seg_cache(env, R_CS, selector, @@ -1074,10 +1096,11 @@ static void do_interrupt_real(CPUX86State *env, int intno, int is_int, int error_code, unsigned int next_eip) { SegmentCache *dt; - target_ulong ptr, ssp; + target_ulong ptr; int selector; - uint32_t offset, esp; + uint32_t offset; uint32_t old_cs, old_eip; + StackAccess sa; /* real mode (simpler!) */ dt = &env->idt; @@ -1087,8 +1110,13 @@ static void do_interrupt_real(CPUX86State *env, int intno, int is_int, ptr = dt->base + intno * 4; offset = cpu_lduw_kernel(env, ptr); selector = cpu_lduw_kernel(env, ptr + 2); - esp = env->regs[R_ESP]; - ssp = env->segs[R_SS].base; + + sa.env = env; + sa.ra = 0; + sa.sp = env->regs[R_ESP]; + sa.sp_mask = 0xffff; + sa.ss_base = env->segs[R_SS].base; + if (is_int) { old_eip = next_eip; } else { @@ -1096,12 +1124,12 @@ static void do_interrupt_real(CPUX86State *env, int intno, int is_int, } old_cs = env->segs[R_CS].selector; /* XXX: use SS segment size? 
*/ - PUSHW(ssp, esp, 0xffff, cpu_compute_eflags(env)); - PUSHW(ssp, esp, 0xffff, old_cs); - PUSHW(ssp, esp, 0xffff, old_eip); + pushw(&sa, cpu_compute_eflags(env)); + pushw(&sa, old_cs); + pushw(&sa, old_eip); /* update processor state */ - env->regs[R_ESP] = (env->regs[R_ESP] & ~0xffff) | (esp & 0xffff); + SET_ESP(sa.sp, sa.sp_mask); env->eip = offset; env->segs[R_CS].selector = selector; env->segs[R_CS].base = (selector << 4); @@ -1544,21 +1572,23 @@ void helper_ljmp_protected(CPUX86State *env, int new_cs, target_ulong new_eip, void helper_lcall_real(CPUX86State *env, uint32_t new_cs, uint32_t new_eip, int shift, uint32_t next_eip) { - uint32_t esp, esp_mask; - target_ulong ssp; + StackAccess sa; + + sa.env = env; + sa.ra = GETPC(); + sa.sp = env->regs[R_ESP]; + sa.sp_mask = get_sp_mask(env->segs[R_SS].flags); + sa.ss_base = env->segs[R_SS].base; - esp = env->regs[R_ESP]; - esp_mask = get_sp_mask(env->segs[R_SS].flags); - ssp = env->segs[R_SS].base; if (shift) { - PUSHL_RA(ssp, esp, esp_mask, env->segs[R_CS].selector, GETPC()); - PUSHL_RA(ssp, esp, esp_mask, next_eip, GETPC()); + pushl(&sa, env->segs[R_CS].selector); + pushl(&sa, next_eip); } else { - PUSHW_RA(ssp, esp, esp_mask, env->segs[R_CS].selector, GETPC()); - PUSHW_RA(ssp, esp, esp_mask, next_eip, GETPC()); + pushw(&sa, env->segs[R_CS].selector); + pushw(&sa, next_eip); } - SET_ESP(esp, esp_mask); + SET_ESP(sa.sp, sa.sp_mask); env->eip = new_eip; env->segs[R_CS].selector = new_cs; env->segs[R_CS].base = (new_cs << 4); @@ -1570,9 +1600,10 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, { int new_stack, i; uint32_t e1, e2, cpl, dpl, rpl, selector, param_count; - uint32_t ss = 0, ss_e1 = 0, ss_e2 = 0, type, ss_dpl, sp_mask; + uint32_t ss = 0, ss_e1 = 0, ss_e2 = 0, type, ss_dpl; uint32_t val, limit, old_sp_mask; - target_ulong ssp, old_ssp, offset, sp; + target_ulong old_ssp, offset; + StackAccess sa; LOG_PCALL("lcall %04x:" TARGET_FMT_lx " s=%d\n", new_cs, new_eip, shift); LOG_PCALL_STATE(env_cpu(env)); @@ -1584,6 +1615,10 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, } cpl = env->hflags & HF_CPL_MASK; LOG_PCALL("desc=%08x:%08x\n", e1, e2); + + sa.env = env; + sa.ra = GETPC(); + if (e2 & DESC_S_MASK) { if (!(e2 & DESC_CS_MASK)) { raise_exception_err_ra(env, EXCP0D_GPF, new_cs & 0xfffc, GETPC()); @@ -1611,14 +1646,14 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, #ifdef TARGET_X86_64 /* XXX: check 16/32 bit cases in long mode */ if (shift == 2) { - target_ulong rsp; - /* 64 bit case */ - rsp = env->regs[R_ESP]; - PUSHQ_RA(rsp, env->segs[R_CS].selector, GETPC()); - PUSHQ_RA(rsp, next_eip, GETPC()); + sa.sp = env->regs[R_ESP]; + sa.sp_mask = -1; + sa.ss_base = 0; + pushq(&sa, env->segs[R_CS].selector); + pushq(&sa, next_eip); /* from this point, not restartable */ - env->regs[R_ESP] = rsp; + env->regs[R_ESP] = sa.sp; cpu_x86_load_seg_cache(env, R_CS, (new_cs & 0xfffc) | cpl, get_seg_base(e1, e2), get_seg_limit(e1, e2), e2); @@ -1626,15 +1661,15 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, } else #endif { - sp = env->regs[R_ESP]; - sp_mask = get_sp_mask(env->segs[R_SS].flags); - ssp = env->segs[R_SS].base; + sa.sp = env->regs[R_ESP]; + sa.sp_mask = get_sp_mask(env->segs[R_SS].flags); + sa.ss_base = env->segs[R_SS].base; if (shift) { - PUSHL_RA(ssp, sp, sp_mask, env->segs[R_CS].selector, GETPC()); - PUSHL_RA(ssp, sp, sp_mask, next_eip, GETPC()); + pushl(&sa, env->segs[R_CS].selector); + 
pushl(&sa, next_eip); } else { - PUSHW_RA(ssp, sp, sp_mask, env->segs[R_CS].selector, GETPC()); - PUSHW_RA(ssp, sp, sp_mask, next_eip, GETPC()); + pushw(&sa, env->segs[R_CS].selector); + pushw(&sa, next_eip); } limit = get_seg_limit(e1, e2); @@ -1642,7 +1677,7 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, raise_exception_err_ra(env, EXCP0D_GPF, new_cs & 0xfffc, GETPC()); } /* from this point, not restartable */ - SET_ESP(sp, sp_mask); + SET_ESP(sa.sp, sa.sp_mask); cpu_x86_load_seg_cache(env, R_CS, (new_cs & 0xfffc) | cpl, get_seg_base(e1, e2), limit, e2); env->eip = new_eip; @@ -1737,13 +1772,13 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, /* to inner privilege */ #ifdef TARGET_X86_64 if (shift == 2) { - sp = get_rsp_from_tss(env, dpl); ss = dpl; /* SS = NULL selector with RPL = new CPL */ new_stack = 1; - sp_mask = 0; - ssp = 0; /* SS base is always zero in IA-32e mode */ + sa.sp = get_rsp_from_tss(env, dpl); + sa.sp_mask = -1; + sa.ss_base = 0; /* SS base is always zero in IA-32e mode */ LOG_PCALL("new ss:rsp=%04x:%016llx env->regs[R_ESP]=" - TARGET_FMT_lx "\n", ss, sp, env->regs[R_ESP]); + TARGET_FMT_lx "\n", ss, sa.sp, env->regs[R_ESP]); } else #endif { @@ -1752,7 +1787,6 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, LOG_PCALL("new ss:esp=%04x:%08x param_count=%d env->regs[R_ESP]=" TARGET_FMT_lx "\n", ss, sp32, param_count, env->regs[R_ESP]); - sp = sp32; if ((ss & 0xfffc) == 0) { raise_exception_err_ra(env, EXCP0A_TSS, ss & 0xfffc, GETPC()); } @@ -1775,63 +1809,64 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, raise_exception_err_ra(env, EXCP0A_TSS, ss & 0xfffc, GETPC()); } - sp_mask = get_sp_mask(ss_e2); - ssp = get_seg_base(ss_e1, ss_e2); + sa.sp = sp32; + sa.sp_mask = get_sp_mask(ss_e2); + sa.ss_base = get_seg_base(ss_e1, ss_e2); } /* push_size = ((param_count * 2) + 8) << shift; */ - old_sp_mask = get_sp_mask(env->segs[R_SS].flags); old_ssp = env->segs[R_SS].base; + #ifdef TARGET_X86_64 if (shift == 2) { /* XXX: verify if new stack address is canonical */ - PUSHQ_RA(sp, env->segs[R_SS].selector, GETPC()); - PUSHQ_RA(sp, env->regs[R_ESP], GETPC()); + pushq(&sa, env->segs[R_SS].selector); + pushq(&sa, env->regs[R_ESP]); /* parameters aren't supported for 64-bit call gates */ } else #endif if (shift == 1) { - PUSHL_RA(ssp, sp, sp_mask, env->segs[R_SS].selector, GETPC()); - PUSHL_RA(ssp, sp, sp_mask, env->regs[R_ESP], GETPC()); + pushl(&sa, env->segs[R_SS].selector); + pushl(&sa, env->regs[R_ESP]); for (i = param_count - 1; i >= 0; i--) { val = cpu_ldl_data_ra(env, old_ssp + ((env->regs[R_ESP] + i * 4) & old_sp_mask), GETPC()); - PUSHL_RA(ssp, sp, sp_mask, val, GETPC()); + pushl(&sa, val); } } else { - PUSHW_RA(ssp, sp, sp_mask, env->segs[R_SS].selector, GETPC()); - PUSHW_RA(ssp, sp, sp_mask, env->regs[R_ESP], GETPC()); + pushw(&sa, env->segs[R_SS].selector); + pushw(&sa, env->regs[R_ESP]); for (i = param_count - 1; i >= 0; i--) { val = cpu_lduw_data_ra(env, old_ssp + ((env->regs[R_ESP] + i * 2) & old_sp_mask), GETPC()); - PUSHW_RA(ssp, sp, sp_mask, val, GETPC()); + pushw(&sa, val); } } new_stack = 1; } else { /* to same privilege */ - sp = env->regs[R_ESP]; - sp_mask = get_sp_mask(env->segs[R_SS].flags); - ssp = env->segs[R_SS].base; + sa.sp = env->regs[R_ESP]; + sa.sp_mask = get_sp_mask(env->segs[R_SS].flags); + sa.ss_base = env->segs[R_SS].base; /* push_size = (4 << shift); */ new_stack = 0; } #ifdef TARGET_X86_64 if (shift == 2) { - 
PUSHQ_RA(sp, env->segs[R_CS].selector, GETPC()); - PUSHQ_RA(sp, next_eip, GETPC()); + pushq(&sa, env->segs[R_CS].selector); + pushq(&sa, next_eip); } else #endif if (shift == 1) { - PUSHL_RA(ssp, sp, sp_mask, env->segs[R_CS].selector, GETPC()); - PUSHL_RA(ssp, sp, sp_mask, next_eip, GETPC()); + pushl(&sa, env->segs[R_CS].selector); + pushl(&sa, next_eip); } else { - PUSHW_RA(ssp, sp, sp_mask, env->segs[R_CS].selector, GETPC()); - PUSHW_RA(ssp, sp, sp_mask, next_eip, GETPC()); + pushw(&sa, env->segs[R_CS].selector); + pushw(&sa, next_eip); } /* from this point, not restartable */ @@ -1845,7 +1880,7 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, { ss = (ss & ~3) | dpl; cpu_x86_load_seg_cache(env, R_SS, ss, - ssp, + sa.ss_base, get_seg_limit(ss_e1, ss_e2), ss_e2); } @@ -1856,7 +1891,7 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, get_seg_base(e1, e2), get_seg_limit(e1, e2), e2); - SET_ESP(sp, sp_mask); + SET_ESP(sa.sp, sa.sp_mask); env->eip = offset; } } @@ -1864,26 +1899,28 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, /* real and vm86 mode iret */ void helper_iret_real(CPUX86State *env, int shift) { - uint32_t sp, new_cs, new_eip, new_eflags, sp_mask; - target_ulong ssp; + uint32_t new_cs, new_eip, new_eflags; int eflags_mask; + StackAccess sa; + + sa.env = env; + sa.ra = GETPC(); + sa.sp_mask = 0xffff; /* XXXX: use SS segment size? */ + sa.sp = env->regs[R_ESP]; + sa.ss_base = env->segs[R_SS].base; - sp_mask = 0xffff; /* XXXX: use SS segment size? */ - sp = env->regs[R_ESP]; - ssp = env->segs[R_SS].base; if (shift == 1) { /* 32 bits */ - POPL_RA(ssp, sp, sp_mask, new_eip, GETPC()); - POPL_RA(ssp, sp, sp_mask, new_cs, GETPC()); - new_cs &= 0xffff; - POPL_RA(ssp, sp, sp_mask, new_eflags, GETPC()); + new_eip = popl(&sa); + new_cs = popl(&sa) & 0xffff; + new_eflags = popl(&sa); } else { /* 16 bits */ - POPW_RA(ssp, sp, sp_mask, new_eip, GETPC()); - POPW_RA(ssp, sp, sp_mask, new_cs, GETPC()); - POPW_RA(ssp, sp, sp_mask, new_eflags, GETPC()); + new_eip = popw(&sa); + new_cs = popw(&sa); + new_eflags = popw(&sa); } - env->regs[R_ESP] = (env->regs[R_ESP] & ~sp_mask) | (sp & sp_mask); + SET_ESP(sa.sp, sa.sp_mask); env->segs[R_CS].selector = new_cs; env->segs[R_CS].base = (new_cs << 4); env->eip = new_eip; @@ -1936,47 +1973,49 @@ static inline void helper_ret_protected(CPUX86State *env, int shift, uint32_t new_es, new_ds, new_fs, new_gs; uint32_t e1, e2, ss_e1, ss_e2; int cpl, dpl, rpl, eflags_mask, iopl; - target_ulong ssp, sp, new_eip, new_esp, sp_mask; + target_ulong new_eip, new_esp; + StackAccess sa; + + sa.env = env; + sa.ra = retaddr; #ifdef TARGET_X86_64 if (shift == 2) { - sp_mask = -1; + sa.sp_mask = -1; } else #endif { - sp_mask = get_sp_mask(env->segs[R_SS].flags); + sa.sp_mask = get_sp_mask(env->segs[R_SS].flags); } - sp = env->regs[R_ESP]; - ssp = env->segs[R_SS].base; + sa.sp = env->regs[R_ESP]; + sa.ss_base = env->segs[R_SS].base; new_eflags = 0; /* avoid warning */ #ifdef TARGET_X86_64 if (shift == 2) { - POPQ_RA(sp, new_eip, retaddr); - POPQ_RA(sp, new_cs, retaddr); - new_cs &= 0xffff; + new_eip = popq(&sa); + new_cs = popq(&sa) & 0xffff; if (is_iret) { - POPQ_RA(sp, new_eflags, retaddr); + new_eflags = popq(&sa); } } else #endif { if (shift == 1) { /* 32 bits */ - POPL_RA(ssp, sp, sp_mask, new_eip, retaddr); - POPL_RA(ssp, sp, sp_mask, new_cs, retaddr); - new_cs &= 0xffff; + new_eip = popl(&sa); + new_cs = popl(&sa) & 0xffff; if (is_iret) { - POPL_RA(ssp, sp, sp_mask, 
new_eflags, retaddr); + new_eflags = popl(&sa); if (new_eflags & VM_MASK) { goto return_to_vm86; } } } else { /* 16 bits */ - POPW_RA(ssp, sp, sp_mask, new_eip, retaddr); - POPW_RA(ssp, sp, sp_mask, new_cs, retaddr); + new_eip = popw(&sa); + new_cs = popw(&sa); if (is_iret) { - POPW_RA(ssp, sp, sp_mask, new_eflags, retaddr); + new_eflags = popw(&sa); } } } @@ -2012,7 +2051,7 @@ static inline void helper_ret_protected(CPUX86State *env, int shift, raise_exception_err_ra(env, EXCP0B_NOSEG, new_cs & 0xfffc, retaddr); } - sp += addend; + sa.sp += addend; if (rpl == cpl && (!(env->hflags & HF_CS64_MASK) || ((env->hflags & HF_CS64_MASK) && !is_iret))) { /* return to same privilege level */ @@ -2024,21 +2063,19 @@ static inline void helper_ret_protected(CPUX86State *env, int shift, /* return to different privilege level */ #ifdef TARGET_X86_64 if (shift == 2) { - POPQ_RA(sp, new_esp, retaddr); - POPQ_RA(sp, new_ss, retaddr); - new_ss &= 0xffff; + new_esp = popq(&sa); + new_ss = popq(&sa) & 0xffff; } else #endif { if (shift == 1) { /* 32 bits */ - POPL_RA(ssp, sp, sp_mask, new_esp, retaddr); - POPL_RA(ssp, sp, sp_mask, new_ss, retaddr); - new_ss &= 0xffff; + new_esp = popl(&sa); + new_ss = popl(&sa) & 0xffff; } else { /* 16 bits */ - POPW_RA(ssp, sp, sp_mask, new_esp, retaddr); - POPW_RA(ssp, sp, sp_mask, new_ss, retaddr); + new_esp = popw(&sa); + new_ss = popw(&sa); } } LOG_PCALL("new ss:esp=%04x:" TARGET_FMT_lx "\n", @@ -2088,14 +2125,14 @@ static inline void helper_ret_protected(CPUX86State *env, int shift, get_seg_base(e1, e2), get_seg_limit(e1, e2), e2); - sp = new_esp; + sa.sp = new_esp; #ifdef TARGET_X86_64 if (env->hflags & HF_CS64_MASK) { - sp_mask = -1; + sa.sp_mask = -1; } else #endif { - sp_mask = get_sp_mask(ss_e2); + sa.sp_mask = get_sp_mask(ss_e2); } /* validate data segments */ @@ -2104,9 +2141,9 @@ static inline void helper_ret_protected(CPUX86State *env, int shift, validate_seg(env, R_FS, rpl); validate_seg(env, R_GS, rpl); - sp += addend; + sa.sp += addend; } - SET_ESP(sp, sp_mask); + SET_ESP(sa.sp, sa.sp_mask); env->eip = new_eip; if (is_iret) { /* NOTE: 'cpl' is the _old_ CPL */ @@ -2126,12 +2163,12 @@ static inline void helper_ret_protected(CPUX86State *env, int shift, return; return_to_vm86: - POPL_RA(ssp, sp, sp_mask, new_esp, retaddr); - POPL_RA(ssp, sp, sp_mask, new_ss, retaddr); - POPL_RA(ssp, sp, sp_mask, new_es, retaddr); - POPL_RA(ssp, sp, sp_mask, new_ds, retaddr); - POPL_RA(ssp, sp, sp_mask, new_fs, retaddr); - POPL_RA(ssp, sp, sp_mask, new_gs, retaddr); + new_esp = popl(&sa); + new_ss = popl(&sa); + new_es = popl(&sa); + new_ds = popl(&sa); + new_fs = popl(&sa); + new_gs = popl(&sa); /* modify processor state */ cpu_load_eflags(env, new_eflags, TF_MASK | AC_MASK | ID_MASK |
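For reference, a minimal sketch of the struct-plus-helpers pattern described in the commit message, outside of QEMU: the StackAccess fields mirror the ones added by the patch, but a flat byte array stands in for CPUX86State and the cpu_*_kernel_ra/cpu_*_data_ra accessors, so this is illustrative only.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-in for the patch's StackAccess: a plain byte array
 * replaces guest memory, env and the unwind return address. */
typedef struct StackAccess {
    uint8_t *mem;        /* guest memory stand-in */
    uint64_t ss_base;    /* stack segment base */
    uint64_t sp;         /* stack pointer, updated by push/pop */
    uint64_t sp_mask;    /* 0xffff, 0xffffffff or -1 */
} StackAccess;

static void pushl(StackAccess *sa, uint32_t val)
{
    sa->sp -= 4;
    memcpy(sa->mem + sa->ss_base + (sa->sp & sa->sp_mask), &val, sizeof(val));
}

static uint32_t popl(StackAccess *sa)
{
    uint32_t ret;
    memcpy(&ret, sa->mem + sa->ss_base + (sa->sp & sa->sp_mask), sizeof(ret));
    sa->sp += 4;
    return ret;
}

int main(void)
{
    static uint8_t ram[0x20000];
    StackAccess sa = {
        .mem = ram, .ss_base = 0x10000, .sp = 0x8000, .sp_mask = 0xffff,
    };

    /* One call site instead of a PUSHL/POPL macro pair; the same struct
     * gives the MMU index an obvious place to live later on. */
    pushl(&sa, 0xdeadbeef);
    printf("0x%08" PRIx32 "\n", popl(&sa));   /* prints 0xdeadbeef */
    return 0;
}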
From patchwork Sun Jul 14 11:10:36 2024
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 812558
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Cc: Richard Henderson
Subject: [PULL 06/13] target/i386/tcg: Introduce x86_mmu_index_{kernel_,}pl
Date: Sun, 14 Jul 2024 13:10:36 +0200
Message-ID: <20240714111043.14132-7-pbonzini@redhat.com>
In-Reply-To: <20240714111043.14132-1-pbonzini@redhat.com>
References: <20240714111043.14132-1-pbonzini@redhat.com>

From: Richard Henderson

Disconnect mmu index computation from the current pl as stored in
env->hflags.
Signed-off-by: Richard Henderson
Link: https://lore.kernel.org/r/20240617161210.4639-2-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini
---
 target/i386/cpu.h | 11 ++---------
 target/i386/cpu.c | 27 ++++++++++++++++++++++++---
 2 files changed, 26 insertions(+), 12 deletions(-)

diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index c43ac01c794..1e121acef54 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -2445,15 +2445,8 @@ static inline bool is_mmu_index_32(int mmu_index)
     return mmu_index & 1;
 }
 
-static inline int cpu_mmu_index_kernel(CPUX86State *env)
-{
-    int mmu_index_32 = (env->hflags & HF_LMA_MASK) ? 0 : 1;
-    int mmu_index_base =
-        !(env->hflags & HF_SMAP_MASK) ? MMU_KNOSMAP64_IDX :
-        ((env->hflags & HF_CPL_MASK) < 3 && (env->eflags & AC_MASK)) ? MMU_KNOSMAP64_IDX : MMU_KSMAP64_IDX;
-
-    return mmu_index_base + mmu_index_32;
-}
+int x86_mmu_index_pl(CPUX86State *env, unsigned pl);
+int cpu_mmu_index_kernel(CPUX86State *env);
 
 #define CC_DST (env->cc_dst)
 #define CC_SRC (env->cc_src)
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index c05765eeafc..4688d140c2d 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -8122,18 +8122,39 @@ static bool x86_cpu_has_work(CPUState *cs)
     return x86_cpu_pending_interrupt(cs, cs->interrupt_request) != 0;
 }
 
-static int x86_cpu_mmu_index(CPUState *cs, bool ifetch)
+int x86_mmu_index_pl(CPUX86State *env, unsigned pl)
 {
-    CPUX86State *env = cpu_env(cs);
     int mmu_index_32 = (env->hflags & HF_CS64_MASK) ? 0 : 1;
     int mmu_index_base =
-        (env->hflags & HF_CPL_MASK) == 3 ? MMU_USER64_IDX :
+        pl == 3 ? MMU_USER64_IDX :
        !(env->hflags & HF_SMAP_MASK) ? MMU_KNOSMAP64_IDX :
        (env->eflags & AC_MASK) ? MMU_KNOSMAP64_IDX : MMU_KSMAP64_IDX;
 
     return mmu_index_base + mmu_index_32;
 }
 
+static int x86_cpu_mmu_index(CPUState *cs, bool ifetch)
+{
+    CPUX86State *env = cpu_env(cs);
+    return x86_mmu_index_pl(env, env->hflags & HF_CPL_MASK);
+}
+
+static int x86_mmu_index_kernel_pl(CPUX86State *env, unsigned pl)
+{
+    int mmu_index_32 = (env->hflags & HF_LMA_MASK) ? 0 : 1;
+    int mmu_index_base =
+        !(env->hflags & HF_SMAP_MASK) ? MMU_KNOSMAP64_IDX :
+        (pl < 3 && (env->eflags & AC_MASK)
+         ? MMU_KNOSMAP64_IDX : MMU_KSMAP64_IDX);
+
+    return mmu_index_base + mmu_index_32;
+}
+
+int cpu_mmu_index_kernel(CPUX86State *env)
+{
+    return x86_mmu_index_kernel_pl(env, env->hflags & HF_CPL_MASK);
+}
+
 static void x86_disas_set_info(CPUState *cs, disassemble_info *info)
 {
     X86CPU *cpu = X86_CPU(cs);
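As a standalone illustration of the split introduced here (simplified flags and an assumed MMU-index layout; the real definitions live in target/i386/cpu.h and cpu.c), computing the MMU index from an explicit privilege level lets interrupt and call-gate paths request DPL-privileged stack accesses while x86_cpu_mmu_index keeps using the current CPL:

#include <stdbool.h>
#include <stdio.h>

/* Assumed layout: each 64-bit index is immediately followed by its
 * 32-bit variant, so "+ 1" selects the 32-bit flavour. */
enum {
    MMU_KSMAP64_IDX, MMU_KSMAP32_IDX,
    MMU_USER64_IDX,  MMU_USER32_IDX,
    MMU_KNOSMAP64_IDX, MMU_KNOSMAP32_IDX,
};

/* Simplified stand-in for CPUX86State: only the bits the selection needs. */
typedef struct Env {
    bool cs64;      /* 64-bit code segment (HF_CS64_MASK) */
    bool smap;      /* SMAP enabled (HF_SMAP_MASK) */
    bool ac;        /* EFLAGS.AC */
    unsigned cpl;   /* current privilege level */
} Env;

/* Same shape as x86_mmu_index_pl: the caller chooses the privilege
 * level, e.g. the DPL of an interrupt or call gate. */
static int mmu_index_pl(const Env *env, unsigned pl)
{
    int mmu_index_32 = env->cs64 ? 0 : 1;
    int mmu_index_base =
        pl == 3 ? MMU_USER64_IDX :
        !env->smap ? MMU_KNOSMAP64_IDX :
        env->ac ? MMU_KNOSMAP64_IDX : MMU_KSMAP64_IDX;

    return mmu_index_base + mmu_index_32;
}

static int cpu_mmu_index(const Env *env)
{
    return mmu_index_pl(env, env->cpl);   /* current CPL, as before */
}

int main(void)
{
    Env env = { .cs64 = true, .smap = true, .ac = false, .cpl = 3 };

    /* User-mode code taking an interrupt through a DPL-0 gate: the
     * fetch of the handler's stack uses the kernel-style index. */
    printf("current CPL: %d, gate DPL 0: %d\n",
           cpu_mmu_index(&env), mmu_index_pl(&env, 0));
    return 0;
}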