From patchwork Wed Mar 19 14:52:13 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 875060
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Rob Clark, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
    David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 01/34] drm/gpuvm: Don't require obj lock in destructor path
Date: Wed, 19 Mar 2025 07:52:13 -0700
Message-ID: <20250319145425.51935-2-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>

From: Rob Clark

See commit a414fe3a2129 ("drm/msm/gem: Drop obj lock in
msm_gem_free_object()") for justification.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/drm_gpuvm.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index f9eb56f24bef..1e89a98caad4 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -1511,7 +1511,9 @@ drm_gpuvm_bo_destroy(struct kref *kref)
 	drm_gpuvm_bo_list_del(vm_bo, extobj, lock);
 	drm_gpuvm_bo_list_del(vm_bo, evict, lock);
 
-	drm_gem_gpuva_assert_lock_held(obj);
+	if (kref_read(&obj->refcount) > 0)
+		drm_gem_gpuva_assert_lock_held(obj);
+
 	list_del(&vm_bo->list.entry.gem);
 
 	if (ops && ops->vm_bo_free)
@@ -1871,7 +1873,8 @@ drm_gpuva_unlink(struct drm_gpuva *va)
 	if (unlikely(!obj))
 		return;
 
-	drm_gem_gpuva_assert_lock_held(obj);
+	if (kref_read(&obj->refcount) > 0)
+		drm_gem_gpuva_assert_lock_held(obj);
 	list_del_init(&va->gem.entry);
 	va->vm_bo = NULL;
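The guard above leans on destructor-path reasoning: once the GEM
object's refcount has dropped to zero, no other reference holder can
race on the gpuva list, so the lock assert is skipped rather than
forcing the driver to take a lock that nothing else can contend on.
A minimal sketch of that invariant (not part of the patch;
example_release() is a hypothetical final-release callback):

/* Hypothetical final-release callback, for illustration only. */
static void example_release(struct kref *kref)
{
	struct drm_gem_object *obj =
		container_of(kref, struct drm_gem_object, refcount);

	/*
	 * We are inside the final kref_put() path, so kref_read()
	 * returns 0 here.  drm_gpuva_unlink() observes the same and
	 * skips drm_gem_gpuva_assert_lock_held(), letting drivers
	 * like msm tear down VAs without holding obj->resv.
	 */
	WARN_ON(kref_read(&obj->refcount) != 0);
}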
From patchwork Wed Mar 19 14:52:14 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 874769
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Rob Clark, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
    David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 02/34] drm/gpuvm: Remove bogus lock assert
Date: Wed, 19 Mar 2025 07:52:14 -0700
Message-ID: <20250319145425.51935-3-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>

From: Rob Clark

If the driver is using an external mutex to synchronize vm access, it
doesn't need to hold vm->r_obj->resv.  And if the driver is already
holding obj->resv, then having to pointlessly grab vm->r_obj->resv as
well will be seen by lockdep as nested locking.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/drm_gpuvm.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 1e89a98caad4..c9bf18119a86 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -1505,9 +1505,6 @@ drm_gpuvm_bo_destroy(struct kref *kref)
 	struct drm_gem_object *obj = vm_bo->obj;
 	bool lock = !drm_gpuvm_resv_protected(gpuvm);
 
-	if (!lock)
-		drm_gpuvm_resv_assert_held(gpuvm);
-
 	drm_gpuvm_bo_list_del(vm_bo, extobj, lock);
 	drm_gpuvm_bo_list_del(vm_bo, evict, lock);
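To see why the assert was over-strict, consider a driver that
serializes all VM state behind its own mutex.  A hedged sketch (not
from this series; struct driver_vm and its fields are hypothetical):

struct driver_vm {
	struct drm_gpuvm base;
	struct mutex vm_lock;	/* external synchronization for VM state */
};

static void driver_vm_teardown(struct driver_vm *vm,
			       struct drm_gpuvm_bo *vm_bo)
{
	/*
	 * All VM access is serialized by vm_lock, so dropping the last
	 * vm_bo reference (which may call drm_gpuvm_bo_destroy()) is
	 * already safe.  With the assert removed, there is no need to
	 * also take vm->base.r_obj->resv just to keep lockdep quiet.
	 */
	mutex_lock(&vm->vm_lock);
	drm_gpuvm_bo_put(vm_bo);
	mutex_unlock(&vm->vm_lock);
}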
From patchwork Wed Mar 19 14:52:15 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 875059
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Rob Clark, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
    David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 03/34] drm/gpuvm: Allow VAs to hold soft reference to BOs
Date: Wed, 19 Mar 2025 07:52:15 -0700
Message-ID: <20250319145425.51935-4-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>

From: Rob Clark

Eases migration for drivers where VAs don't hold hard references to
their associated BO, avoiding reference loops.  In particular, msm
uses soft references to optimistically keep around mappings until the
BO is destroyed, which obviously won't work if the VA (the mapping)
is holding a reference to the BO.

By making this a per-VM flag, we can use normal hard references for
mappings in a "VM_BIND" managed VM, but soft references in other
cases, such as kernel-internal VMs (for display scanout, etc).
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/drm_gpuvm.c |  8 ++++++--
 include/drm/drm_gpuvm.h     | 12 ++++++++++--
 2 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index c9bf18119a86..681dc58e9160 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -1482,7 +1482,9 @@ drm_gpuvm_bo_create(struct drm_gpuvm *gpuvm,
 	vm_bo->vm = drm_gpuvm_get(gpuvm);
 	vm_bo->obj = obj;
-	drm_gem_object_get(obj);
+
+	if (!(gpuvm->flags & DRM_GPUVM_VA_WEAK_REF))
+		drm_gem_object_get(obj);
 
 	kref_init(&vm_bo->kref);
 	INIT_LIST_HEAD(&vm_bo->list.gpuva);
@@ -1504,6 +1506,7 @@ drm_gpuvm_bo_destroy(struct kref *kref)
 	const struct drm_gpuvm_ops *ops = gpuvm->ops;
 	struct drm_gem_object *obj = vm_bo->obj;
 	bool lock = !drm_gpuvm_resv_protected(gpuvm);
+	bool unref = !(gpuvm->flags & DRM_GPUVM_VA_WEAK_REF);
 
 	drm_gpuvm_bo_list_del(vm_bo, extobj, lock);
 	drm_gpuvm_bo_list_del(vm_bo, evict, lock);
@@ -1519,7 +1522,8 @@ drm_gpuvm_bo_destroy(struct kref *kref)
 	kfree(vm_bo);
 
 	drm_gpuvm_put(gpuvm);
-	drm_gem_object_put(obj);
+	if (unref)
+		drm_gem_object_put(obj);
 }
 
 /**
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 00d4e43b76b6..13ab087a45fa 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -205,10 +205,18 @@ enum drm_gpuvm_flags {
 	 */
 	DRM_GPUVM_RESV_PROTECTED = BIT(0),
 
+	/**
+	 * @DRM_GPUVM_VA_WEAK_REF:
+	 *
+	 * Flag indicating that the &drm_gpuva (or more correctly, the
+	 * &drm_gpuvm_bo) only holds a weak reference to the &drm_gem_object.
+	 */
+	DRM_GPUVM_VA_WEAK_REF = BIT(1),
+
 	/**
 	 * @DRM_GPUVM_USERBITS: user defined bits
 	 */
-	DRM_GPUVM_USERBITS = BIT(1),
+	DRM_GPUVM_USERBITS = BIT(2),
 };
 
 /**
@@ -651,7 +659,7 @@ struct drm_gpuvm_bo {
 
 	/**
 	 * @obj: The &drm_gem_object being mapped in @vm. This is a reference
-	 * counted pointer.
+	 * counted pointer, unless the &DRM_GPUVM_VA_WEAK_REF flag is set.
 	 */
 	struct drm_gem_object *obj;
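As a usage illustration, the reference behavior is chosen per-VM at
init time.  A hedged sketch (not part of the patch; kms_vm, user_vm,
r_obj, and driver_gpuvm_ops are hypothetical, and the drm_gpuvm_init()
argument order follows drm_gpuvm.h at the time of this series):

/*
 * Kernel-internal VM (e.g. display scanout): VAs only weakly
 * reference their BOs, so a mapping kept around optimistically does
 * not pin the BO and cannot create a reference loop.
 */
drm_gpuvm_init(&kms_vm->base, "kms", DRM_GPUVM_VA_WEAK_REF, drm, r_obj,
	       va_start, va_range, 0, 0, &driver_gpuvm_ops);

/*
 * Userspace "VM_BIND" managed VM: no flag, so drm_gpuvm_bo_create()
 * takes the usual hard drm_gem_object_get() reference and the BO
 * stays alive for as long as it is mapped.
 */
drm_gpuvm_init(&user_vm->base, "user", 0, drm, r_obj,
	       va_start, va_range, 0, 0, &driver_gpuvm_ops);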
From patchwork Wed Mar 19 14:52:16 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 874768
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Rob Clark, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
    David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 04/34] drm/gpuvm: Add drm_gpuvm_sm_unmap_va()
Date: Wed, 19 Mar 2025 07:52:16 -0700
Message-ID: <20250319145425.51935-5-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>

From: Rob Clark

Add drm_gpuvm_sm_unmap_va(), which works similarly to
drm_gpuvm_sm_unmap(), except that it operates on a single VA at a
time.  This is needed by drm/msm, where due to lock ordering we
cannot reuse drm_gpuvm_sm_unmap() as-is.  This helper lets us reuse
the logic that distinguishes the remap vs. unmap cases.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/drm_gpuvm.c | 123 ++++++++++++++++++++++++------------
 include/drm/drm_gpuvm.h     |   2 +
 2 files changed, 85 insertions(+), 40 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 681dc58e9160..7267fcaa8f50 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -2244,6 +2244,56 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 				   req_obj, req_offset);
 }
 
+static int
+__drm_gpuvm_sm_unmap_va(struct drm_gpuva *va, const struct drm_gpuvm_ops *ops,
+			void *priv, u64 req_addr, u64 req_range)
+{
+	struct drm_gpuva_op_map prev = {}, next = {};
+	bool prev_split = false, next_split = false;
+	struct drm_gem_object *obj = va->gem.obj;
+	u64 req_end = req_addr + req_range;
+	u64 offset = va->gem.offset;
+	u64 addr = va->va.addr;
+	u64 range = va->va.range;
+	u64 end = addr + range;
+	int ret;
+
+	if (addr < req_addr) {
+		prev.va.addr = addr;
+		prev.va.range = req_addr - addr;
+		prev.gem.obj = obj;
+		prev.gem.offset = offset;
+
+		prev_split = true;
+	}
+
+	if (end > req_end) {
+		next.va.addr = req_end;
+		next.va.range = end - req_end;
+		next.gem.obj = obj;
+		next.gem.offset = offset + (req_end - addr);
+
+		next_split = true;
+	}
+
+	if (prev_split || next_split) {
+		struct drm_gpuva_op_unmap unmap = { .va = va };
+
+		ret = op_remap_cb(ops, priv,
+				  prev_split ? &prev : NULL,
+				  next_split ? &next : NULL,
+				  &unmap);
+		if (ret)
+			return ret;
+	} else {
+		ret = op_unmap_cb(ops, priv, va, false);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
 static int
 __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
 		     const struct drm_gpuvm_ops *ops, void *priv,
@@ -2257,46 +2307,9 @@ __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
 		return -EINVAL;
 
 	drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req_addr, req_end) {
-		struct drm_gpuva_op_map prev = {}, next = {};
-		bool prev_split = false, next_split = false;
-		struct drm_gem_object *obj = va->gem.obj;
-		u64 offset = va->gem.offset;
-		u64 addr = va->va.addr;
-		u64 range = va->va.range;
-		u64 end = addr + range;
-
-		if (addr < req_addr) {
-			prev.va.addr = addr;
-			prev.va.range = req_addr - addr;
-			prev.gem.obj = obj;
-			prev.gem.offset = offset;
-
-			prev_split = true;
-		}
-
-		if (end > req_end) {
-			next.va.addr = req_end;
-			next.va.range = end - req_end;
-			next.gem.obj = obj;
-			next.gem.offset = offset + (req_end - addr);
-
-			next_split = true;
-		}
-
-		if (prev_split || next_split) {
-			struct drm_gpuva_op_unmap unmap = { .va = va };
-
-			ret = op_remap_cb(ops, priv,
-					  prev_split ? &prev : NULL,
-					  next_split ? &next : NULL,
-					  &unmap);
-			if (ret)
-				return ret;
-		} else {
-			ret = op_unmap_cb(ops, priv, va, false);
-			if (ret)
-				return ret;
-		}
+		ret = __drm_gpuvm_sm_unmap_va(va, ops, priv, req_addr, req_range);
+		if (ret)
+			return ret;
 	}
 
 	return 0;
@@ -2394,6 +2407,36 @@ drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm, void *priv,
 }
 EXPORT_SYMBOL_GPL(drm_gpuvm_sm_unmap);
 
+/**
+ * drm_gpuvm_sm_unmap_va() - like drm_gpuvm_sm_unmap() but operating on a single VA
+ * @va: the &drm_gpuva to unmap from
+ * @priv: pointer to a driver private data structure
+ * @req_addr: the start address of the range to unmap
+ * @req_range: the range of the mappings to unmap
+ *
+ * This function iterates the range of a single VA. It utilizes the
+ * &drm_gpuvm_ops to call back into the driver providing the operations to
+ * unmap and, if required, split the existent mapping.
+ *
+ * There can be at most one unmap or remap operation, depending on whether the
+ * requested unmap range fully covers or partially covers the specified VA.
+ *
+ * Returns: 0 on success or a negative error code
+ */
+int
+drm_gpuvm_sm_unmap_va(struct drm_gpuva *va, void *priv,
+		      u64 req_addr, u64 req_range)
+{
+	const struct drm_gpuvm_ops *ops = va->vm->ops;
+
+	if (unlikely(!(ops && ops->sm_step_remap &&
+		       ops->sm_step_unmap)))
+		return -EINVAL;
+
+	return __drm_gpuvm_sm_unmap_va(va, ops, priv, req_addr, req_range);
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_sm_unmap_va);
+
 static struct drm_gpuva_op *
 gpuva_op_alloc(struct drm_gpuvm *gpuvm)
 {
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 13ab087a45fa..e1f488eb714e 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -1213,6 +1213,8 @@ int drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
 int drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm, void *priv,
 		       u64 addr, u64 range);
 
+int drm_gpuvm_sm_unmap_va(struct drm_gpuva *va, void *priv,
+			  u64 req_addr, u64 req_range);
 void drm_gpuva_map(struct drm_gpuvm *gpuvm,
 		   struct drm_gpuva *va,
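A hedged usage sketch of the new helper (not from this series;
driver_unmap_va() and its locking context are hypothetical):

/*
 * Unmap a range from one VA that the driver already holds, without
 * walking the whole VM (which our lock ordering forbids).  The helper
 * invokes ops->sm_step_remap when [addr, addr + range) only partially
 * covers the VA (splitting off a prev and/or next region), and
 * ops->sm_step_unmap when the VA is fully covered.
 */
static int driver_unmap_va(struct drm_gpuva *va, u64 addr, u64 range)
{
	return drm_gpuvm_sm_unmap_va(va, NULL, addr, range);
}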
From patchwork Wed Mar 19 14:52:17 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 875058
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
    Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 05/34] drm/msm: Rename msm_file_private -> msm_context
Date: Wed, 19 Mar 2025 07:52:17 -0700
Message-ID: <20250319145425.51935-6-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>

From: Rob Clark

This is a more descriptive name.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c   |  2 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c |  6 ++--
 drivers/gpu/drm/msm/adreno/adreno_gpu.h |  4 +--
 drivers/gpu/drm/msm/msm_drv.c           | 14 ++++-----
 drivers/gpu/drm/msm/msm_gem.c           |  2 +-
 drivers/gpu/drm/msm/msm_gem_submit.c    |  2 +-
 drivers/gpu/drm/msm/msm_gpu.c           |  4 +--
 drivers/gpu/drm/msm/msm_gpu.h           | 39 ++++++++++++------------
 drivers/gpu/drm/msm/msm_submitqueue.c   | 27 +++++++++--------
 9 files changed, 49 insertions(+), 51 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index eeb8b5e582d5..c5294351b1f0 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -111,7 +111,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
 		struct msm_ringbuffer *ring, struct msm_gem_submit *submit)
 {
 	bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
-	struct msm_file_private *ctx = submit->queue->ctx;
+	struct msm_context *ctx = submit->queue->ctx;
 	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
 	phys_addr_t ttbr;
 	u32 asid;
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 59cfed5acace..e0ab5126e390 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -343,7 +343,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
 	return 0;
 }
 
-int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t *value, uint32_t *len)
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
@@ -431,7 +431,7 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
 	}
 }
 
-int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t value, uint32_t len)
 {
 	struct drm_device *drm = gpu->dev;
@@ -477,7 +477,7 @@ int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
 	case MSM_PARAM_SYSPROF:
 		if (!capable(CAP_SYS_ADMIN))
 			return UERR(EPERM, drm, "invalid permissions");
-		return msm_file_private_set_sysprof(ctx, gpu, value);
+		return msm_context_set_sysprof(ctx, gpu, value);
 	default:
 		return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param);
 	}
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index a1e2d9e87b75..9fce88461f5a 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -601,9 +601,9 @@ static inline int adreno_is_a7xx(struct adreno_gpu *gpu)
 }
 
 u64 adreno_private_address_space_size(struct msm_gpu *gpu);
-int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t *value, uint32_t *len);
-int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t value, uint32_t len);
 const struct firmware *adreno_request_fw(struct adreno_gpu *adreno_gpu,
 		const char *fwname);
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index c3588dc9e537..29ca24548c67 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -333,7 +333,7 @@ static int context_init(struct drm_device *dev, struct drm_file *file)
 {
 	static atomic_t ident = ATOMIC_INIT(0);
 	struct msm_drm_private *priv = dev->dev_private;
-	struct msm_file_private *ctx;
+	struct msm_context *ctx;
 
 	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
 	if (!ctx)
@@ -363,23 +363,23 @@ static int msm_open(struct drm_device *dev, struct drm_file *file)
 	return context_init(dev, file);
 }
 
-static void context_close(struct msm_file_private *ctx)
+static void context_close(struct msm_context *ctx)
 {
 	msm_submitqueue_close(ctx);
-	msm_file_private_put(ctx);
+	msm_context_put(ctx);
 }
 
 static void msm_postclose(struct drm_device *dev, struct drm_file *file)
 {
 	struct msm_drm_private *priv = dev->dev_private;
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;
 
 	/*
 	 * It is not possible to set sysprof param to non-zero if gpu
 	 * is not initialized:
 	 */
 	if (priv->gpu)
-		msm_file_private_set_sysprof(ctx, priv->gpu, 0);
+		msm_context_set_sysprof(ctx, priv->gpu, 0);
 
 	context_close(ctx);
 }
@@ -511,7 +511,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
 		uint64_t *iova)
 {
 	struct msm_drm_private *priv = dev->dev_private;
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;
 
 	if (!priv->gpu)
 		return -EINVAL;
@@ -531,7 +531,7 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
 		uint64_t iova)
 {
 	struct msm_drm_private *priv = dev->dev_private;
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;
 
 	if (!priv->gpu)
 		return -EINVAL;
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index d2f38e1df510..fdeb6cf7eeb5 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -48,7 +48,7 @@ static void update_device_mem(struct msm_drm_private *priv, ssize_t size)
 
 static void update_ctx_mem(struct drm_file *file, ssize_t size)
 {
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;
 	uint64_t ctx_mem = atomic64_add_return(size, &ctx->ctx_mem);
 
 	rcu_read_lock(); /* Locks file->pid! */
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 3e9aa2cc38ef..16ca6cfac967 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -642,7 +642,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 {
 	struct msm_drm_private *priv = dev->dev_private;
 	struct drm_msm_gem_submit *args = data;
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;
 	struct msm_gem_submit *submit = NULL;
 	struct msm_gpu *gpu = priv->gpu;
 	struct msm_gpu_submitqueue *queue;
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index c380d9d9f5af..d786fcfad62f 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -148,7 +148,7 @@ int msm_gpu_pm_suspend(struct msm_gpu *gpu)
 	return 0;
 }
 
-void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_file_private *ctx,
+void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_context *ctx,
 			 struct drm_printer *p)
 {
 	drm_printf(p, "drm-engine-gpu:\t%llu ns\n", ctx->elapsed_ns);
@@ -339,7 +339,7 @@ static void retire_submits(struct msm_gpu *gpu);
 
 static void get_comm_cmdline(struct msm_gem_submit *submit, char **comm, char **cmd)
 {
-	struct msm_file_private *ctx = submit->queue->ctx;
+	struct msm_context *ctx = submit->queue->ctx;
 	struct task_struct *task;
 
 	WARN_ON(!mutex_is_locked(&submit->gpu->lock));
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index e25009150579..957d6fb3469d 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -22,7 +22,7 @@ struct msm_gem_submit;
 struct msm_gpu_perfcntr;
 struct msm_gpu_state;
-struct msm_file_private;
+struct msm_context;
 
 struct msm_gpu_config {
 	const char *ioname;
@@ -44,9 +44,9 @@ struct msm_gpu_config {
  *    + z180_gpu
 */
 struct msm_gpu_funcs {
-	int (*get_param)(struct msm_gpu *gpu, struct msm_file_private *ctx,
+	int (*get_param)(struct msm_gpu *gpu, struct msm_context *ctx,
 			 uint32_t param, uint64_t *value, uint32_t *len);
-	int (*set_param)(struct msm_gpu *gpu, struct msm_file_private *ctx,
+	int (*set_param)(struct msm_gpu *gpu, struct msm_context *ctx,
 			 uint32_t param, uint64_t value, uint32_t len);
 	int (*hw_init)(struct msm_gpu *gpu);
@@ -347,7 +347,7 @@ struct msm_gpu_perfcntr {
 #define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_LOW - DRM_SCHED_PRIORITY_HIGH)
 
 /**
- * struct msm_file_private - per-drm_file context
+ * struct msm_context - per-drm_file context
  *
  * @queuelock: synchronizes access to submitqueues list
  * @submitqueues: list of &msm_gpu_submitqueue created by userspace
@@ -357,7 +357,7 @@ struct msm_gpu_perfcntr {
  * @ref: reference count
  * @seqno: unique per process seqno
 */
-struct msm_file_private {
+struct msm_context {
 	rwlock_t queuelock;
 	struct list_head submitqueues;
 	int queueid;
@@ -512,7 +512,7 @@ struct msm_gpu_submitqueue {
 	u32 ring_nr;
 	int faults;
 	uint32_t last_fence;
-	struct msm_file_private *ctx;
+	struct msm_context *ctx;
 	struct list_head node;
 	struct idr fence_idr;
 	struct spinlock idr_lock;
@@ -608,33 +608,32 @@ static inline void gpu_write64(struct msm_gpu *gpu, u32 reg, u64 val)
 int msm_gpu_pm_suspend(struct msm_gpu *gpu);
 int msm_gpu_pm_resume(struct msm_gpu *gpu);
 
-void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_file_private *ctx,
+void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_context *ctx,
 			 struct drm_printer *p);
 
-int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx);
-struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx,
+int msm_submitqueue_init(struct drm_device *drm, struct msm_context *ctx);
+struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_context *ctx,
 		u32 id);
 int msm_submitqueue_create(struct drm_device *drm,
-		struct msm_file_private *ctx,
+		struct msm_context *ctx,
 		u32 prio, u32 flags, u32 *id);
-int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx,
+int msm_submitqueue_query(struct drm_device *drm, struct msm_context *ctx,
 		struct drm_msm_submitqueue_query *args);
-int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id);
-void msm_submitqueue_close(struct msm_file_private *ctx);
+int msm_submitqueue_remove(struct msm_context *ctx, u32 id);
+void msm_submitqueue_close(struct msm_context *ctx);
 
 void msm_submitqueue_destroy(struct kref *kref);
 
-int msm_file_private_set_sysprof(struct msm_file_private *ctx,
-				 struct msm_gpu *gpu, int sysprof);
-void __msm_file_private_destroy(struct kref *kref);
+int msm_context_set_sysprof(struct msm_context *ctx, struct msm_gpu *gpu, int sysprof);
+void __msm_context_destroy(struct kref *kref);
 
-static inline void msm_file_private_put(struct msm_file_private *ctx)
+static inline void msm_context_put(struct msm_context *ctx)
 {
-	kref_put(&ctx->ref, __msm_file_private_destroy);
+	kref_put(&ctx->ref, __msm_context_destroy);
 }
 
-static inline struct msm_file_private *msm_file_private_get(
-		struct msm_file_private *ctx)
+static inline struct msm_context *msm_context_get(
+		struct msm_context *ctx)
 {
 	kref_get(&ctx->ref);
 	return ctx;
diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c
index 7fed1de63b5d..1acc0fe36353 100644
--- a/drivers/gpu/drm/msm/msm_submitqueue.c
+++ b/drivers/gpu/drm/msm/msm_submitqueue.c
@@ -7,8 +7,7 @@
 
 #include "msm_gpu.h"
 
-int msm_file_private_set_sysprof(struct msm_file_private *ctx,
-				 struct msm_gpu *gpu, int sysprof)
+int msm_context_set_sysprof(struct msm_context *ctx, struct msm_gpu *gpu, int sysprof)
 {
 	/*
 	 * Since pm_runtime and sysprof_active are both refcounts, we
@@ -46,10 +45,10 @@ int msm_file_private_set_sysprof(struct msm_file_private *ctx,
 	return 0;
 }
 
-void __msm_file_private_destroy(struct kref *kref)
+void __msm_context_destroy(struct kref *kref)
 {
-	struct msm_file_private *ctx = container_of(kref,
-		struct msm_file_private, ref);
+	struct msm_context *ctx = container_of(kref,
+		struct msm_context, ref);
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(ctx->entities); i++) {
@@ -73,12 +72,12 @@ void msm_submitqueue_destroy(struct kref *kref)
 
 	idr_destroy(&queue->fence_idr);
 
-	msm_file_private_put(queue->ctx);
+	msm_context_put(queue->ctx);
 
 	kfree(queue);
 }
 
-struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx,
+struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_context *ctx,
 		u32 id)
 {
 	struct msm_gpu_submitqueue *entry;
@@ -101,7 +100,7 @@ struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx,
 	return NULL;
 }
 
-void msm_submitqueue_close(struct msm_file_private *ctx)
+void msm_submitqueue_close(struct msm_context *ctx)
 {
 	struct msm_gpu_submitqueue *entry, *tmp;
 
@@ -119,7 +118,7 @@ void msm_submitqueue_close(struct msm_file_private *ctx)
 }
 
 static struct drm_sched_entity *
-get_sched_entity(struct msm_file_private *ctx, struct msm_ringbuffer *ring,
+get_sched_entity(struct msm_context *ctx, struct msm_ringbuffer *ring,
 		unsigned ring_nr, enum drm_sched_priority sched_prio)
 {
 	static DEFINE_MUTEX(entity_lock);
@@ -155,7 +154,7 @@ get_sched_entity(struct msm_file_private *ctx, struct msm_ringbuffer *ring,
 	return ctx->entities[idx];
 }
 
-int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,
+int msm_submitqueue_create(struct drm_device *drm, struct msm_context *ctx,
 		u32 prio, u32 flags, u32 *id)
 {
 	struct msm_drm_private *priv = drm->dev_private;
@@ -200,7 +199,7 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,
 
 	write_lock(&ctx->queuelock);
 
-	queue->ctx = msm_file_private_get(ctx);
+	queue->ctx = msm_context_get(ctx);
 	queue->id = ctx->queueid++;
 
 	if (id)
@@ -221,7 +220,7 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,
 * Create the default submit-queue (id==0), used for backwards compatibility
 * for userspace that pre-dates the introduction of submitqueues.
 */
-int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx)
+int msm_submitqueue_init(struct drm_device *drm, struct msm_context *ctx)
 {
 	struct msm_drm_private *priv = drm->dev_private;
 	int default_prio, max_priority;
@@ -261,7 +260,7 @@ static int msm_submitqueue_query_faults(struct msm_gpu_submitqueue *queue,
 	return ret ? -EFAULT : 0;
 }
 
-int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx,
+int msm_submitqueue_query(struct drm_device *drm, struct msm_context *ctx,
 		struct drm_msm_submitqueue_query *args)
 {
 	struct msm_gpu_submitqueue *queue;
@@ -282,7 +281,7 @@ int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx,
 	return ret;
 }
 
-int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id)
+int msm_submitqueue_remove(struct msm_context *ctx, u32 id)
 {
 	struct msm_gpu_submitqueue *entry;
From patchwork Wed Mar 19 14:52:18 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 874767
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
    Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 06/34] drm/msm: Improve msm_context comments
Date: Wed, 19 Mar 2025 07:52:18 -0700
Message-ID: <20250319145425.51935-7-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>

From: Rob Clark

Just some tidying up.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gpu.h | 44 +++++++++++++++++++++++------------
 1 file changed, 29 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 957d6fb3469d..c699ce0c557b 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -348,25 +348,39 @@ struct msm_gpu_perfcntr {
 
 /**
  * struct msm_context - per-drm_file context
- *
- * @queuelock: synchronizes access to submitqueues list
- * @submitqueues: list of &msm_gpu_submitqueue created by userspace
- * @queueid: counter incremented each time a submitqueue is created,
- *    used to assign &msm_gpu_submitqueue.id
- * @aspace: the per-process GPU address-space
- * @ref: reference count
- * @seqno: unique per process seqno
 */
 struct msm_context {
+	/** @queuelock: synchronizes access to submitqueues list */
 	rwlock_t queuelock;
+
+	/** @submitqueues: list of &msm_gpu_submitqueue created by userspace */
 	struct list_head submitqueues;
+
+	/**
+	 * @queueid:
+	 *
+	 * Counter incremented each time a submitqueue is created, used to
+	 * assign &msm_gpu_submitqueue.id
+	 */
 	int queueid;
+
+	/** @aspace: the per-process GPU address-space */
 	struct msm_gem_address_space *aspace;
+
+	/** @kref: the reference count */
 	struct kref ref;
+
+	/**
+	 * @seqno:
+	 *
+	 * A unique per-process sequence number.  Used to detect context
+	 * switches, without relying on keeping a, potentially dangling,
+	 * pointer to the previous context.
+	 */
 	int seqno;
 
 	/**
-	 * sysprof:
+	 * @sysprof:
 	 *
 	 * The value of MSM_PARAM_SYSPROF set by userspace.  This is
 	 * intended to be used by system profiling tools like Mesa's
@@ -384,21 +398,21 @@ struct msm_context {
 	int sysprof;
 
 	/**
-	 * comm: Overridden task comm, see MSM_PARAM_COMM
+	 * @comm: Overridden task comm, see MSM_PARAM_COMM
 	 *
 	 * Accessed under msm_gpu::lock
 	 */
 	char *comm;
 
 	/**
-	 * cmdline: Overridden task cmdline, see MSM_PARAM_CMDLINE
+	 * @cmdline: Overridden task cmdline, see MSM_PARAM_CMDLINE
 	 *
 	 * Accessed under msm_gpu::lock
 	 */
 	char *cmdline;
 
 	/**
-	 * elapsed:
+	 * @elapsed:
 	 *
 	 * The total (cumulative) elapsed time GPU was busy with rendering
 	 * from this context in ns.
@@ -406,7 +420,7 @@ struct msm_context {
 	uint64_t elapsed_ns;
 
 	/**
-	 * cycles:
+	 * @cycles:
 	 *
 	 * The total (cumulative) GPU cycles elapsed attributed to this
 	 * context.
@@ -414,7 +428,7 @@ struct msm_context {
 	uint64_t cycles;
 
 	/**
-	 * entities:
+	 * @entities:
 	 *
 	 * Table of per-priority-level sched entities used by submitqueues
 	 * associated with this &drm_file.  Because some userspace apps
@@ -427,7 +441,7 @@ struct msm_context {
 	struct drm_sched_entity *entities[NR_SCHED_PRIORITIES * MSM_GPU_MAX_RINGS];
 
 	/**
-	 * ctx_mem:
+	 * @ctx_mem:
 	 *
 	 * Total amount of memory of GEM buffers with handles attached for
 	 * this context.
From patchwork Wed Mar 19 14:52:19 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 875057
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter, Jessica Zhang, Jani Nikula, Barnabás Czémán, Arnd Bergmann, André Almeida, Christopher Snowhill, Jonathan Marek, Krzysztof Kozlowski, Eugene Lepshy, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 07/34] drm/msm: Rename msm_gem_address_space -> msm_gem_vm
Date: Wed, 19 Mar 2025 07:52:19 -0700
Message-ID: <20250319145425.51935-8-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>

From: Rob Clark

Re-aligning naming to better match drm_gpuvm terminology will make things less confusing at the end of the drm_gpuvm conversion. This is just rename churn, no functional change.
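Concretely, the rename follows one mechanical pattern across all 41 files: msm_gem_address_space becomes msm_gem_vm, and every helper named after it is renamed to match. A representative caller changes shape roughly like this (an abbreviated sketch distilled from the hunks below, not a complete function):

    /* Before: "address space" naming */
    struct msm_gem_address_space *aspace =
            msm_gem_address_space_create(mmu, "gpu", SZ_16M, 0xfff * SZ_64K);
    msm_gem_address_space_put(aspace);

    /* After: drm_gpuvm-aligned "vm" naming */
    struct msm_gem_vm *vm =
            msm_gem_vm_create(mmu, "gpu", SZ_16M, 0xfff * SZ_64K);
    msm_gem_vm_put(vm);

The same substitution runs through the rest of the diff: gpu->aspace becomes gpu->vm, kms->aspace becomes kms->vm, msm_kms_init_aspace() becomes msm_kms_init_vm(), and the adreno create_address_space/create_private_address_space hooks become create_vm/create_private_vm.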
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 18 ++-- drivers/gpu/drm/msm/adreno/a3xx_gpu.c | 4 +- drivers/gpu/drm/msm/adreno/a4xx_gpu.c | 4 +- drivers/gpu/drm/msm/adreno/a5xx_debugfs.c | 4 +- drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 22 ++--- drivers/gpu/drm/msm/adreno/a5xx_power.c | 2 +- drivers/gpu/drm/msm/adreno/a5xx_preempt.c | 10 +- drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 26 +++--- drivers/gpu/drm/msm/adreno/a6xx_gmu.h | 2 +- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 45 +++++---- drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c | 6 +- drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 10 +- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 47 +++++----- drivers/gpu/drm/msm/adreno/adreno_gpu.h | 18 ++-- .../drm/msm/disp/dpu1/dpu_encoder_phys_wb.c | 14 +-- drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c | 18 ++-- drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h | 2 +- drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 18 ++-- drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c | 14 +-- drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h | 4 +- drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c | 6 +- drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 24 ++--- drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c | 12 +-- drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c | 4 +- drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c | 18 ++-- drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c | 12 +-- drivers/gpu/drm/msm/dsi/dsi_host.c | 14 +-- drivers/gpu/drm/msm/msm_drv.c | 8 +- drivers/gpu/drm/msm/msm_drv.h | 10 +- drivers/gpu/drm/msm/msm_fb.c | 10 +- drivers/gpu/drm/msm/msm_fbdev.c | 2 +- drivers/gpu/drm/msm/msm_gem.c | 74 +++++++-------- drivers/gpu/drm/msm/msm_gem.h | 34 +++---- drivers/gpu/drm/msm/msm_gem_submit.c | 6 +- drivers/gpu/drm/msm/msm_gem_vma.c | 93 +++++++++---------- drivers/gpu/drm/msm/msm_gpu.c | 48 +++++----- drivers/gpu/drm/msm/msm_gpu.h | 16 ++-- drivers/gpu/drm/msm/msm_kms.c | 16 ++-- drivers/gpu/drm/msm/msm_kms.h | 2 +- drivers/gpu/drm/msm/msm_ringbuffer.c | 4 +- drivers/gpu/drm/msm/msm_submitqueue.c | 2 +- 41 files changed, 349 insertions(+), 354 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c index 379a3d346c30..5eb063ed0b46 100644 --- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c @@ -113,7 +113,7 @@ static int a2xx_hw_init(struct msm_gpu *gpu) uint32_t *ptr, len; int i, ret; - a2xx_gpummu_params(gpu->aspace->mmu, &pt_base, &tran_error); + a2xx_gpummu_params(gpu->vm->mmu, &pt_base, &tran_error); DBG("%s", gpu->name); @@ -466,19 +466,19 @@ static struct msm_gpu_state *a2xx_gpu_state_get(struct msm_gpu *gpu) return state; } -static struct msm_gem_address_space * -a2xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev) +static struct msm_gem_vm * +a2xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) { struct msm_mmu *mmu = a2xx_gpummu_new(&pdev->dev, gpu); - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; - aspace = msm_gem_address_space_create(mmu, "gpu", SZ_16M, + vm = msm_gem_vm_create(mmu, "gpu", SZ_16M, 0xfff * SZ_64K); - if (IS_ERR(aspace) && !IS_ERR(mmu)) + if (IS_ERR(vm) && !IS_ERR(mmu)) mmu->funcs->destroy(mmu); - return aspace; + return vm; } static u32 a2xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring) @@ -504,7 +504,7 @@ static const struct adreno_gpu_funcs funcs = { #endif .gpu_state_get = a2xx_gpu_state_get, .gpu_state_put = adreno_gpu_state_put, - .create_address_space = a2xx_create_address_space, + .create_vm = a2xx_create_vm, .get_rptr = a2xx_get_rptr, }, }; @@ -551,7 +551,7 @@ struct 
msm_gpu *a2xx_gpu_init(struct drm_device *dev) else adreno_gpu->registers = a220_registers; - if (!gpu->aspace) { + if (!gpu->vm) { dev_err(dev->dev, "No memory protection without MMU\n"); if (!allow_vram_carveout) { ret = -ENXIO; diff --git a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c index b6df115bb567..434e6ededf83 100644 --- a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c @@ -526,7 +526,7 @@ static const struct adreno_gpu_funcs funcs = { .gpu_busy = a3xx_gpu_busy, .gpu_state_get = a3xx_gpu_state_get, .gpu_state_put = adreno_gpu_state_put, - .create_address_space = adreno_create_address_space, + .create_vm = adreno_create_vm, .get_rptr = a3xx_get_rptr, }, }; @@ -581,7 +581,7 @@ struct msm_gpu *a3xx_gpu_init(struct drm_device *dev) goto fail; } - if (!gpu->aspace) { + if (!gpu->vm) { /* TODO we think it is possible to configure the GPU to * restrict access to VRAM carveout. But the required * registers are unknown. For now just bail out and diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c index f1b18a6663f7..2c75debcfd84 100644 --- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c @@ -645,7 +645,7 @@ static const struct adreno_gpu_funcs funcs = { .gpu_busy = a4xx_gpu_busy, .gpu_state_get = a4xx_gpu_state_get, .gpu_state_put = adreno_gpu_state_put, - .create_address_space = adreno_create_address_space, + .create_vm = adreno_create_vm, .get_rptr = a4xx_get_rptr, }, .get_timestamp = a4xx_get_timestamp, @@ -695,7 +695,7 @@ struct msm_gpu *a4xx_gpu_init(struct drm_device *dev) adreno_gpu->uche_trap_base = 0xffff0000ffff0000ull; - if (!gpu->aspace) { + if (!gpu->vm) { /* TODO we think it is possible to configure the GPU to * restrict access to VRAM carveout. But the required * registers are unknown. 
For now just bail out and diff --git a/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c b/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c index 169b8fe688f8..625a4e787d8f 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c @@ -116,13 +116,13 @@ reset_set(void *data, u64 val) adreno_gpu->fw[ADRENO_FW_PFP] = NULL; if (a5xx_gpu->pm4_bo) { - msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->aspace); + msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->vm); drm_gem_object_put(a5xx_gpu->pm4_bo); a5xx_gpu->pm4_bo = NULL; } if (a5xx_gpu->pfp_bo) { - msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->aspace); + msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->vm); drm_gem_object_put(a5xx_gpu->pfp_bo); a5xx_gpu->pfp_bo = NULL; } diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c index 670141531112..cce95ad3cfb8 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c @@ -622,7 +622,7 @@ static int a5xx_ucode_load(struct msm_gpu *gpu) a5xx_gpu->shadow = msm_gem_kernel_new(gpu->dev, sizeof(u32) * gpu->nr_rings, MSM_BO_WC | MSM_BO_MAP_PRIV, - gpu->aspace, &a5xx_gpu->shadow_bo, + gpu->vm, &a5xx_gpu->shadow_bo, &a5xx_gpu->shadow_iova); if (IS_ERR(a5xx_gpu->shadow)) @@ -1042,22 +1042,22 @@ static void a5xx_destroy(struct msm_gpu *gpu) a5xx_preempt_fini(gpu); if (a5xx_gpu->pm4_bo) { - msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->aspace); + msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->vm); drm_gem_object_put(a5xx_gpu->pm4_bo); } if (a5xx_gpu->pfp_bo) { - msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->aspace); + msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->vm); drm_gem_object_put(a5xx_gpu->pfp_bo); } if (a5xx_gpu->gpmu_bo) { - msm_gem_unpin_iova(a5xx_gpu->gpmu_bo, gpu->aspace); + msm_gem_unpin_iova(a5xx_gpu->gpmu_bo, gpu->vm); drm_gem_object_put(a5xx_gpu->gpmu_bo); } if (a5xx_gpu->shadow_bo) { - msm_gem_unpin_iova(a5xx_gpu->shadow_bo, gpu->aspace); + msm_gem_unpin_iova(a5xx_gpu->shadow_bo, gpu->vm); drm_gem_object_put(a5xx_gpu->shadow_bo); } @@ -1457,7 +1457,7 @@ static int a5xx_crashdumper_init(struct msm_gpu *gpu, struct a5xx_crashdumper *dumper) { dumper->ptr = msm_gem_kernel_new(gpu->dev, - SZ_1M, MSM_BO_WC, gpu->aspace, + SZ_1M, MSM_BO_WC, gpu->vm, &dumper->bo, &dumper->iova); if (!IS_ERR(dumper->ptr)) @@ -1557,7 +1557,7 @@ static void a5xx_gpu_state_get_hlsq_regs(struct msm_gpu *gpu, if (a5xx_crashdumper_run(gpu, &dumper)) { kfree(a5xx_state->hlsqregs); - msm_gem_kernel_put(dumper.bo, gpu->aspace); + msm_gem_kernel_put(dumper.bo, gpu->vm); return; } @@ -1565,7 +1565,7 @@ static void a5xx_gpu_state_get_hlsq_regs(struct msm_gpu *gpu, memcpy(a5xx_state->hlsqregs, dumper.ptr + (256 * SZ_1K), count * sizeof(u32)); - msm_gem_kernel_put(dumper.bo, gpu->aspace); + msm_gem_kernel_put(dumper.bo, gpu->vm); } static struct msm_gpu_state *a5xx_gpu_state_get(struct msm_gpu *gpu) @@ -1713,7 +1713,7 @@ static const struct adreno_gpu_funcs funcs = { .gpu_busy = a5xx_gpu_busy, .gpu_state_get = a5xx_gpu_state_get, .gpu_state_put = a5xx_gpu_state_put, - .create_address_space = adreno_create_address_space, + .create_vm = adreno_create_vm, .get_rptr = a5xx_get_rptr, }, .get_timestamp = a5xx_get_timestamp, @@ -1786,8 +1786,8 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev) return ERR_PTR(ret); } - if (gpu->aspace) - msm_mmu_set_fault_handler(gpu->aspace->mmu, gpu, a5xx_fault_handler); + if (gpu->vm) + msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler); /* Set up the preemption specific bits and pieces for each ringbuffer */ 
a5xx_preempt_init(gpu); diff --git a/drivers/gpu/drm/msm/adreno/a5xx_power.c b/drivers/gpu/drm/msm/adreno/a5xx_power.c index 6b91e0bd1514..d6da7351cfbb 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_power.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_power.c @@ -363,7 +363,7 @@ void a5xx_gpmu_ucode_init(struct msm_gpu *gpu) bosize = (cmds_size + (cmds_size / TYPE4_MAX_PAYLOAD) + 1) << 2; ptr = msm_gem_kernel_new(drm, bosize, - MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->aspace, + MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->vm, &a5xx_gpu->gpmu_bo, &a5xx_gpu->gpmu_iova); if (IS_ERR(ptr)) return; diff --git a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c index 0469fea55010..5f9e2eb80a2c 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c @@ -254,7 +254,7 @@ static int preempt_init_ring(struct a5xx_gpu *a5xx_gpu, ptr = msm_gem_kernel_new(gpu->dev, A5XX_PREEMPT_RECORD_SIZE + A5XX_PREEMPT_COUNTER_SIZE, - MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->aspace, &bo, &iova); + MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->vm, &bo, &iova); if (IS_ERR(ptr)) return PTR_ERR(ptr); @@ -262,9 +262,9 @@ static int preempt_init_ring(struct a5xx_gpu *a5xx_gpu, /* The buffer to store counters needs to be unprivileged */ counters = msm_gem_kernel_new(gpu->dev, A5XX_PREEMPT_COUNTER_SIZE, - MSM_BO_WC, gpu->aspace, &counters_bo, &counters_iova); + MSM_BO_WC, gpu->vm, &counters_bo, &counters_iova); if (IS_ERR(counters)) { - msm_gem_kernel_put(bo, gpu->aspace); + msm_gem_kernel_put(bo, gpu->vm); return PTR_ERR(counters); } @@ -295,8 +295,8 @@ void a5xx_preempt_fini(struct msm_gpu *gpu) int i; for (i = 0; i < gpu->nr_rings; i++) { - msm_gem_kernel_put(a5xx_gpu->preempt_bo[i], gpu->aspace); - msm_gem_kernel_put(a5xx_gpu->preempt_counters_bo[i], gpu->aspace); + msm_gem_kernel_put(a5xx_gpu->preempt_bo[i], gpu->vm); + msm_gem_kernel_put(a5xx_gpu->preempt_counters_bo[i], gpu->vm); } } diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c index 3d2c5661dbee..4c459ae25cba 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c @@ -1259,15 +1259,15 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu) static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu) { - msm_gem_kernel_put(gmu->hfi.obj, gmu->aspace); - msm_gem_kernel_put(gmu->debug.obj, gmu->aspace); - msm_gem_kernel_put(gmu->icache.obj, gmu->aspace); - msm_gem_kernel_put(gmu->dcache.obj, gmu->aspace); - msm_gem_kernel_put(gmu->dummy.obj, gmu->aspace); - msm_gem_kernel_put(gmu->log.obj, gmu->aspace); - - gmu->aspace->mmu->funcs->detach(gmu->aspace->mmu); - msm_gem_address_space_put(gmu->aspace); + msm_gem_kernel_put(gmu->hfi.obj, gmu->vm); + msm_gem_kernel_put(gmu->debug.obj, gmu->vm); + msm_gem_kernel_put(gmu->icache.obj, gmu->vm); + msm_gem_kernel_put(gmu->dcache.obj, gmu->vm); + msm_gem_kernel_put(gmu->dummy.obj, gmu->vm); + msm_gem_kernel_put(gmu->log.obj, gmu->vm); + + gmu->vm->mmu->funcs->detach(gmu->vm->mmu); + msm_gem_vm_put(gmu->vm); } static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo, @@ -1296,7 +1296,7 @@ static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo, if (IS_ERR(bo->obj)) return PTR_ERR(bo->obj); - ret = msm_gem_get_and_pin_iova_range(bo->obj, gmu->aspace, &bo->iova, + ret = msm_gem_get_and_pin_iova_range(bo->obj, gmu->vm, &bo->iova, range_start, range_end); if (ret) { drm_gem_object_put(bo->obj); @@ -1321,9 +1321,9 @@ static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu) if 
(IS_ERR(mmu)) return PTR_ERR(mmu); - gmu->aspace = msm_gem_address_space_create(mmu, "gmu", 0x0, 0x80000000); - if (IS_ERR(gmu->aspace)) - return PTR_ERR(gmu->aspace); + gmu->vm = msm_gem_vm_create(mmu, "gmu", 0x0, 0x80000000); + if (IS_ERR(gmu->vm)) + return PTR_ERR(gmu->vm); return 0; } diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h index 39fb8c774a79..cceda7d9c33a 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h @@ -62,7 +62,7 @@ struct a6xx_gmu { /* For serializing communication with the GMU: */ struct mutex lock; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; void __iomem *mmio; void __iomem *rscc; diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index c5294351b1f0..fa63149bf73f 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -120,7 +120,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu, if (ctx->seqno == ring->cur_ctx_seqno) return; - if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid)) + if (msm_iommu_pagetable_params(ctx->vm->mmu, &ttbr, &asid)) return; if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) { @@ -957,7 +957,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu) msm_gem_object_set_name(a6xx_gpu->sqe_bo, "sqefw"); if (!a6xx_ucode_check_version(a6xx_gpu, a6xx_gpu->sqe_bo)) { - msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->aspace); + msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->vm); drm_gem_object_put(a6xx_gpu->sqe_bo); a6xx_gpu->sqe_bo = NULL; @@ -974,7 +974,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu) a6xx_gpu->shadow = msm_gem_kernel_new(gpu->dev, sizeof(u32) * gpu->nr_rings, MSM_BO_WC | MSM_BO_MAP_PRIV, - gpu->aspace, &a6xx_gpu->shadow_bo, + gpu->vm, &a6xx_gpu->shadow_bo, &a6xx_gpu->shadow_iova); if (IS_ERR(a6xx_gpu->shadow)) @@ -985,7 +985,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu) a6xx_gpu->pwrup_reglist_ptr = msm_gem_kernel_new(gpu->dev, PAGE_SIZE, MSM_BO_WC | MSM_BO_MAP_PRIV, - gpu->aspace, &a6xx_gpu->pwrup_reglist_bo, + gpu->vm, &a6xx_gpu->pwrup_reglist_bo, &a6xx_gpu->pwrup_reglist_iova); if (IS_ERR(a6xx_gpu->pwrup_reglist_ptr)) @@ -2198,12 +2198,12 @@ static void a6xx_destroy(struct msm_gpu *gpu) struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); if (a6xx_gpu->sqe_bo) { - msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->aspace); + msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->vm); drm_gem_object_put(a6xx_gpu->sqe_bo); } if (a6xx_gpu->shadow_bo) { - msm_gem_unpin_iova(a6xx_gpu->shadow_bo, gpu->aspace); + msm_gem_unpin_iova(a6xx_gpu->shadow_bo, gpu->vm); drm_gem_object_put(a6xx_gpu->shadow_bo); } @@ -2243,8 +2243,8 @@ static void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp, mutex_unlock(&a6xx_gpu->gmu.lock); } -static struct msm_gem_address_space * -a6xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev) +static struct msm_gem_vm * +a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) { struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); @@ -2258,22 +2258,22 @@ a6xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev) !device_iommu_capable(&pdev->dev, IOMMU_CAP_CACHE_COHERENCY)) quirks |= IO_PGTABLE_QUIRK_ARM_OUTER_WBWA; - return adreno_iommu_create_address_space(gpu, pdev, quirks); + return adreno_iommu_create_vm(gpu, pdev, quirks); } -static struct msm_gem_address_space * -a6xx_create_private_address_space(struct msm_gpu 
*gpu) +static struct msm_gem_vm * +a6xx_create_private_vm(struct msm_gpu *gpu) { struct msm_mmu *mmu; - mmu = msm_iommu_pagetable_create(gpu->aspace->mmu); + mmu = msm_iommu_pagetable_create(gpu->vm->mmu); if (IS_ERR(mmu)) return ERR_CAST(mmu); - return msm_gem_address_space_create(mmu, + return msm_gem_vm_create(mmu, "gpu", 0x100000000ULL, - adreno_private_address_space_size(gpu)); + adreno_private_vm_size(gpu)); } static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring) @@ -2390,8 +2390,8 @@ static const struct adreno_gpu_funcs funcs = { .gpu_state_get = a6xx_gpu_state_get, .gpu_state_put = a6xx_gpu_state_put, #endif - .create_address_space = a6xx_create_address_space, - .create_private_address_space = a6xx_create_private_address_space, + .create_vm = a6xx_create_vm, + .create_private_vm = a6xx_create_private_vm, .get_rptr = a6xx_get_rptr, .progress = a6xx_progress, }, @@ -2419,8 +2419,8 @@ static const struct adreno_gpu_funcs funcs_gmuwrapper = { .gpu_state_get = a6xx_gpu_state_get, .gpu_state_put = a6xx_gpu_state_put, #endif - .create_address_space = a6xx_create_address_space, - .create_private_address_space = a6xx_create_private_address_space, + .create_vm = a6xx_create_vm, + .create_private_vm = a6xx_create_private_vm, .get_rptr = a6xx_get_rptr, .progress = a6xx_progress, }, @@ -2450,8 +2450,8 @@ static const struct adreno_gpu_funcs funcs_a7xx = { .gpu_state_get = a6xx_gpu_state_get, .gpu_state_put = a6xx_gpu_state_put, #endif - .create_address_space = a6xx_create_address_space, - .create_private_address_space = a6xx_create_private_address_space, + .create_vm = a6xx_create_vm, + .create_private_vm = a6xx_create_private_vm, .get_rptr = a6xx_get_rptr, .progress = a6xx_progress, }, @@ -2547,9 +2547,8 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev) adreno_gpu->uche_trap_base = 0x1fffffffff000ull; - if (gpu->aspace) - msm_mmu_set_fault_handler(gpu->aspace->mmu, gpu, - a6xx_fault_handler); + if (gpu->vm) + msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler); a6xx_calc_ubwc_config(adreno_gpu); /* Set up the preemption specific bits and pieces for each ringbuffer */ diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c index 341a72a67401..ff06bb75b76d 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c @@ -132,7 +132,7 @@ static int a6xx_crashdumper_init(struct msm_gpu *gpu, struct a6xx_crashdumper *dumper) { dumper->ptr = msm_gem_kernel_new(gpu->dev, - SZ_1M, MSM_BO_WC, gpu->aspace, + SZ_1M, MSM_BO_WC, gpu->vm, &dumper->bo, &dumper->iova); if (!IS_ERR(dumper->ptr)) @@ -1619,7 +1619,7 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu) a7xx_get_clusters(gpu, a6xx_state, dumper); a7xx_get_dbgahb_clusters(gpu, a6xx_state, dumper); - msm_gem_kernel_put(dumper->bo, gpu->aspace); + msm_gem_kernel_put(dumper->bo, gpu->vm); } a7xx_get_post_crashdumper_registers(gpu, a6xx_state); @@ -1631,7 +1631,7 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu) a6xx_get_clusters(gpu, a6xx_state, dumper); a6xx_get_dbgahb_clusters(gpu, a6xx_state, dumper); - msm_gem_kernel_put(dumper->bo, gpu->aspace); + msm_gem_kernel_put(dumper->bo, gpu->vm); } } diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c index 2fd4e39f618f..41229c60aa06 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c @@ -343,7 +343,7 @@ static int preempt_init_ring(struct 
a6xx_gpu *a6xx_gpu, ptr = msm_gem_kernel_new(gpu->dev, PREEMPT_RECORD_SIZE(adreno_gpu), - MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->aspace, &bo, &iova); + MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->vm, &bo, &iova); if (IS_ERR(ptr)) return PTR_ERR(ptr); @@ -361,7 +361,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu, ptr = msm_gem_kernel_new(gpu->dev, PREEMPT_SMMU_INFO_SIZE, MSM_BO_WC | MSM_BO_MAP_PRIV | MSM_BO_GPU_READONLY, - gpu->aspace, &bo, &iova); + gpu->vm, &bo, &iova); if (IS_ERR(ptr)) return PTR_ERR(ptr); @@ -376,7 +376,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu, struct a7xx_cp_smmu_info *smmu_info_ptr = ptr; - msm_iommu_pagetable_params(gpu->aspace->mmu, &ttbr, &asid); + msm_iommu_pagetable_params(gpu->vm->mmu, &ttbr, &asid); smmu_info_ptr->magic = GEN7_CP_SMMU_INFO_MAGIC; smmu_info_ptr->ttbr0 = ttbr; @@ -404,7 +404,7 @@ void a6xx_preempt_fini(struct msm_gpu *gpu) int i; for (i = 0; i < gpu->nr_rings; i++) - msm_gem_kernel_put(a6xx_gpu->preempt_bo[i], gpu->aspace); + msm_gem_kernel_put(a6xx_gpu->preempt_bo[i], gpu->vm); } void a6xx_preempt_init(struct msm_gpu *gpu) @@ -430,7 +430,7 @@ void a6xx_preempt_init(struct msm_gpu *gpu) a6xx_gpu->preempt_postamble_ptr = msm_gem_kernel_new(gpu->dev, PAGE_SIZE, MSM_BO_WC | MSM_BO_MAP_PRIV | MSM_BO_GPU_READONLY, - gpu->aspace, &a6xx_gpu->preempt_postamble_bo, + gpu->vm, &a6xx_gpu->preempt_postamble_bo, &a6xx_gpu->preempt_postamble_iova); preempt_prepare_postamble(a6xx_gpu); diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index e0ab5126e390..ffbbf3d5ce2f 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -191,21 +191,21 @@ int adreno_zap_shader_load(struct msm_gpu *gpu, u32 pasid) return zap_shader_load_mdt(gpu, adreno_gpu->info->zapfw, pasid); } -struct msm_gem_address_space * -adreno_create_address_space(struct msm_gpu *gpu, - struct platform_device *pdev) +struct msm_gem_vm * +adreno_create_vm(struct msm_gpu *gpu, + struct platform_device *pdev) { - return adreno_iommu_create_address_space(gpu, pdev, 0); + return adreno_iommu_create_vm(gpu, pdev, 0); } -struct msm_gem_address_space * -adreno_iommu_create_address_space(struct msm_gpu *gpu, - struct platform_device *pdev, - unsigned long quirks) +struct msm_gem_vm * +adreno_iommu_create_vm(struct msm_gpu *gpu, + struct platform_device *pdev, + unsigned long quirks) { struct iommu_domain_geometry *geometry; struct msm_mmu *mmu; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; u64 start, size; mmu = msm_iommu_gpu_new(&pdev->dev, gpu, quirks); @@ -224,16 +224,15 @@ adreno_iommu_create_address_space(struct msm_gpu *gpu, start = max_t(u64, SZ_16M, geometry->aperture_start); size = geometry->aperture_end - start + 1; - aspace = msm_gem_address_space_create(mmu, "gpu", - start & GENMASK_ULL(48, 0), size); + vm = msm_gem_vm_create(mmu, "gpu", start & GENMASK_ULL(48, 0), size); - if (IS_ERR(aspace) && !IS_ERR(mmu)) + if (IS_ERR(vm) && !IS_ERR(mmu)) mmu->funcs->destroy(mmu); - return aspace; + return vm; } -u64 adreno_private_address_space_size(struct msm_gpu *gpu) +u64 adreno_private_vm_size(struct msm_gpu *gpu) { struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); @@ -261,7 +260,7 @@ void adreno_check_and_reenable_stall(struct adreno_gpu *adreno_gpu) !READ_ONCE(gpu->crashstate)) { adreno_gpu->stall_enabled = true; - gpu->aspace->mmu->funcs->set_stall(gpu->aspace->mmu, true); + gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, true); } 
spin_unlock_irqrestore(&adreno_gpu->fault_stall_lock, flags); } @@ -289,7 +288,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags, if (adreno_gpu->stall_enabled) { adreno_gpu->stall_enabled = false; - gpu->aspace->mmu->funcs->set_stall(gpu->aspace->mmu, false); + gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, false); } adreno_gpu->stall_reenable_time = ktime_add_ms(ktime_get(), 500); spin_unlock_irqrestore(&adreno_gpu->fault_stall_lock, irq_flags); @@ -299,7 +298,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags, * it now. */ if (!do_devcoredump) { - gpu->aspace->mmu->funcs->resume_translation(gpu->aspace->mmu); + gpu->vm->mmu->funcs->resume_translation(gpu->vm->mmu); } /* @@ -393,8 +392,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, *value = 0; return 0; case MSM_PARAM_FAULTS: - if (ctx->aspace) - *value = gpu->global_faults + ctx->aspace->faults; + if (ctx->vm) + *value = gpu->global_faults + ctx->vm->faults; else *value = gpu->global_faults; return 0; @@ -402,14 +401,14 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, *value = gpu->suspend_count; return 0; case MSM_PARAM_VA_START: - if (ctx->aspace == gpu->aspace) + if (ctx->vm == gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value = ctx->aspace->va_start; + *value = ctx->vm->va_start; return 0; case MSM_PARAM_VA_SIZE: - if (ctx->aspace == gpu->aspace) + if (ctx->vm == gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value = ctx->aspace->va_size; + *value = ctx->vm->va_size; return 0; case MSM_PARAM_HIGHEST_BANK_BIT: *value = adreno_gpu->ubwc_config.highest_bank_bit; @@ -599,7 +598,7 @@ struct drm_gem_object *adreno_fw_create_bo(struct msm_gpu *gpu, void *ptr; ptr = msm_gem_kernel_new(gpu->dev, fw->size - 4, - MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->aspace, &bo, iova); + MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->vm, &bo, iova); if (IS_ERR(ptr)) return ERR_CAST(ptr); diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h index 9fce88461f5a..eaebcb108b5e 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h @@ -600,7 +600,7 @@ static inline int adreno_is_a7xx(struct adreno_gpu *gpu) adreno_is_a740_family(gpu); } -u64 adreno_private_address_space_size(struct msm_gpu *gpu); +u64 adreno_private_vm_size(struct msm_gpu *gpu); int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, uint32_t param, uint64_t *value, uint32_t *len); int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx, @@ -643,14 +643,14 @@ void adreno_show_object(struct drm_printer *p, void **ptr, int len, * Common helper function to initialize the default address space for arm-smmu * attached targets */ -struct msm_gem_address_space * -adreno_create_address_space(struct msm_gpu *gpu, - struct platform_device *pdev); - -struct msm_gem_address_space * -adreno_iommu_create_address_space(struct msm_gpu *gpu, - struct platform_device *pdev, - unsigned long quirks); +struct msm_gem_vm * +adreno_create_vm(struct msm_gpu *gpu, + struct platform_device *pdev); + +struct msm_gem_vm * +adreno_iommu_create_vm(struct msm_gpu *gpu, + struct platform_device *pdev, + unsigned long quirks); int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags, struct adreno_smmu_fault_info *info, const char *block, diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c index 
849fea580a4c..32e208ee946d 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c @@ -566,7 +566,7 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc struct drm_writeback_job *job) { const struct msm_format *format; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct dpu_hw_wb_cfg *wb_cfg; int ret; struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc); @@ -576,13 +576,13 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc wb_enc->wb_job = job; wb_enc->wb_conn = job->connector; - aspace = phys_enc->dpu_kms->base.aspace; + vm = phys_enc->dpu_kms->base.vm; wb_cfg = &wb_enc->wb_cfg; memset(wb_cfg, 0, sizeof(struct dpu_hw_wb_cfg)); - ret = msm_framebuffer_prepare(job->fb, aspace, false); + ret = msm_framebuffer_prepare(job->fb, vm, false); if (ret) { DPU_ERROR("prep fb failed, %d\n", ret); return; @@ -596,7 +596,7 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc return; } - dpu_format_populate_addrs(aspace, job->fb, &wb_cfg->dest); + dpu_format_populate_addrs(vm, job->fb, &wb_cfg->dest); wb_cfg->dest.width = job->fb->width; wb_cfg->dest.height = job->fb->height; @@ -619,14 +619,14 @@ static void dpu_encoder_phys_wb_cleanup_wb_job(struct dpu_encoder_phys *phys_enc struct drm_writeback_job *job) { struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc); - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; if (!job->fb) return; - aspace = phys_enc->dpu_kms->base.aspace; + vm = phys_enc->dpu_kms->base.vm; - msm_framebuffer_cleanup(job->fb, aspace, false); + msm_framebuffer_cleanup(job->fb, vm, false); wb_enc->wb_job = NULL; wb_enc->wb_conn = NULL; } diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c index 59c9427da7dd..d115b79af771 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c @@ -274,7 +274,7 @@ int dpu_format_populate_plane_sizes( return _dpu_format_populate_plane_sizes_linear(fmt, fb, layout); } -static void _dpu_format_populate_addrs_ubwc(struct msm_gem_address_space *aspace, +static void _dpu_format_populate_addrs_ubwc(struct msm_gem_vm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { @@ -282,7 +282,7 @@ static void _dpu_format_populate_addrs_ubwc(struct msm_gem_address_space *aspace uint32_t base_addr = 0; bool meta; - base_addr = msm_framebuffer_iova(fb, aspace, 0); + base_addr = msm_framebuffer_iova(fb, vm, 0); fmt = msm_framebuffer_format(fb); meta = MSM_FORMAT_IS_UBWC(fmt); @@ -355,7 +355,7 @@ static void _dpu_format_populate_addrs_ubwc(struct msm_gem_address_space *aspace } } -static void _dpu_format_populate_addrs_linear(struct msm_gem_address_space *aspace, +static void _dpu_format_populate_addrs_linear(struct msm_gem_vm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { @@ -363,17 +363,17 @@ static void _dpu_format_populate_addrs_linear(struct msm_gem_address_space *aspa /* Populate addresses for simple formats here */ for (i = 0; i < layout->num_planes; ++i) - layout->plane_addr[i] = msm_framebuffer_iova(fb, aspace, i); -} + layout->plane_addr[i] = msm_framebuffer_iova(fb, vm, i); + } /** * dpu_format_populate_addrs - populate buffer addresses based on * mmu, fb, and format found in the fb - * @aspace: address space pointer + * @vm: address space pointer * @fb: framebuffer pointer * @layout: format layout 
structure to populate */ -void dpu_format_populate_addrs(struct msm_gem_address_space *aspace, +void dpu_format_populate_addrs(struct msm_gem_vm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { @@ -384,7 +384,7 @@ void dpu_format_populate_addrs(struct msm_gem_address_space *aspace, /* Populate the addresses given the fb */ if (MSM_FORMAT_IS_UBWC(fmt) || MSM_FORMAT_IS_TILE(fmt)) - _dpu_format_populate_addrs_ubwc(aspace, fb, layout); + _dpu_format_populate_addrs_ubwc(vm, fb, layout); else - _dpu_format_populate_addrs_linear(aspace, fb, layout); + _dpu_format_populate_addrs_linear(vm, fb, layout); } diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h index c6145d43aa3f..989f3e13c497 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h @@ -31,7 +31,7 @@ static inline bool dpu_find_format(u32 format, const u32 *supported_formats, return false; } -void dpu_format_populate_addrs(struct msm_gem_address_space *aspace, +void dpu_format_populate_addrs(struct msm_gem_vm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout); diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c index 3305ad0623ca..bb5db6da636a 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c @@ -1095,26 +1095,26 @@ static void _dpu_kms_mmu_destroy(struct dpu_kms *dpu_kms) { struct msm_mmu *mmu; - if (!dpu_kms->base.aspace) + if (!dpu_kms->base.vm) return; - mmu = dpu_kms->base.aspace->mmu; + mmu = dpu_kms->base.vm->mmu; mmu->funcs->detach(mmu); - msm_gem_address_space_put(dpu_kms->base.aspace); + msm_gem_vm_put(dpu_kms->base.vm); - dpu_kms->base.aspace = NULL; + dpu_kms->base.vm = NULL; } static int _dpu_kms_mmu_init(struct dpu_kms *dpu_kms) { - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; - aspace = msm_kms_init_aspace(dpu_kms->dev); - if (IS_ERR(aspace)) - return PTR_ERR(aspace); + vm = msm_kms_init_vm(dpu_kms->dev); + if (IS_ERR(vm)) + return PTR_ERR(vm); - dpu_kms->base.aspace = aspace; + dpu_kms->base.vm = vm; return 0; } diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c index af3e541f60c3..92a249b2ef5f 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c @@ -71,7 +71,7 @@ static const uint32_t qcom_compressed_supported_formats[] = { /* * struct dpu_plane - local dpu plane structure - * @aspace: address space pointer + * @vm: address space pointer * @csc_ptr: Points to dpu_csc_cfg structure to use for current * @catalog: Points to dpu catalog structure * @revalidate: force revalidation of all the plane properties @@ -654,8 +654,8 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane, DPU_DEBUG_PLANE(pdpu, "FB[%u]\n", fb->base.id); - /* cache aspace */ - pstate->aspace = kms->base.aspace; + /* cache vm */ + pstate->vm = kms->base.vm; /* * TODO: Need to sort out the msm_framebuffer_prepare() call below so @@ -664,9 +664,9 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane, */ drm_gem_plane_helper_prepare_fb(plane, new_state); - if (pstate->aspace) { + if (pstate->vm) { ret = msm_framebuffer_prepare(new_state->fb, - pstate->aspace, pstate->needs_dirtyfb); + pstate->vm, pstate->needs_dirtyfb); if (ret) { DPU_ERROR("failed to prepare framebuffer\n"); return ret; @@ -689,7 +689,7 @@ static void dpu_plane_cleanup_fb(struct drm_plane *plane, DPU_DEBUG_PLANE(pdpu, "FB[%u]\n", 
old_state->fb->base.id); - msm_framebuffer_cleanup(old_state->fb, old_pstate->aspace, + msm_framebuffer_cleanup(old_state->fb, old_pstate->vm, old_pstate->needs_dirtyfb); } @@ -1349,7 +1349,7 @@ static void dpu_plane_sspp_atomic_update(struct drm_plane *plane, pstate->needs_qos_remap |= (is_rt_pipe != pdpu->is_rt_pipe); pdpu->is_rt_pipe = is_rt_pipe; - dpu_format_populate_addrs(pstate->aspace, new_state->fb, &pstate->layout); + dpu_format_populate_addrs(pstate->vm, new_state->fb, &pstate->layout); DPU_DEBUG_PLANE(pdpu, "FB[%u] " DRM_RECT_FP_FMT "->crtc%u " DRM_RECT_FMT ", %p4cc ubwc %d\n", fb->base.id, DRM_RECT_FP_ARG(&state->src), diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h index acd5725175cd..3578f52048a5 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h @@ -17,7 +17,7 @@ /** * struct dpu_plane_state: Define dpu extension of drm plane state object * @base: base drm plane state object - * @aspace: pointer to address space for input/output buffers + * @vm: pointer to address space for input/output buffers * @pipe: software pipe description * @r_pipe: software pipe description of the second pipe * @pipe_cfg: software pipe configuration @@ -34,7 +34,7 @@ */ struct dpu_plane_state { struct drm_plane_state base; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct dpu_sw_pipe pipe; struct dpu_sw_pipe r_pipe; struct dpu_sw_pipe_cfg pipe_cfg; diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c index b8610aa806ea..0133c0c01a0b 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c @@ -120,7 +120,7 @@ static void unref_cursor_worker(struct drm_flip_work *work, void *val) struct mdp4_kms *mdp4_kms = get_kms(&mdp4_crtc->base); struct msm_kms *kms = &mdp4_kms->base.base; - msm_gem_unpin_iova(val, kms->aspace); + msm_gem_unpin_iova(val, kms->vm); drm_gem_object_put(val); } @@ -369,7 +369,7 @@ static void update_cursor(struct drm_crtc *crtc) if (next_bo) { /* take a obj ref + iova ref when we start scanning out: */ drm_gem_object_get(next_bo); - msm_gem_get_and_pin_iova(next_bo, kms->aspace, &iova); + msm_gem_get_and_pin_iova(next_bo, kms->vm, &iova); /* enable cursor: */ mdp4_write(mdp4_kms, REG_MDP4_DMA_CURSOR_SIZE(dma), @@ -427,7 +427,7 @@ static int mdp4_crtc_cursor_set(struct drm_crtc *crtc, } if (cursor_bo) { - ret = msm_gem_get_and_pin_iova(cursor_bo, kms->aspace, &iova); + ret = msm_gem_get_and_pin_iova(cursor_bo, kms->vm, &iova); if (ret) goto fail; } else { diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c index c469e66cfc11..94fbc20b2fbd 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c @@ -120,15 +120,15 @@ static void mdp4_destroy(struct msm_kms *kms) { struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms)); struct device *dev = mdp4_kms->dev->dev; - struct msm_gem_address_space *aspace = kms->aspace; + struct msm_gem_vm *vm = kms->vm; if (mdp4_kms->blank_cursor_iova) - msm_gem_unpin_iova(mdp4_kms->blank_cursor_bo, kms->aspace); + msm_gem_unpin_iova(mdp4_kms->blank_cursor_bo, kms->vm); drm_gem_object_put(mdp4_kms->blank_cursor_bo); - if (aspace) { - aspace->mmu->funcs->detach(aspace->mmu); - msm_gem_address_space_put(aspace); + if (vm) { + vm->mmu->funcs->detach(vm->mmu); + msm_gem_vm_put(vm); } if (mdp4_kms->rpm_enabled) @@ -380,7 +380,7 @@ static int mdp4_kms_init(struct 
drm_device *dev) struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(priv->kms)); struct msm_kms *kms = NULL; struct msm_mmu *mmu; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; int ret; u32 major, minor; unsigned long max_clk; @@ -449,19 +449,19 @@ static int mdp4_kms_init(struct drm_device *dev) } else if (!mmu) { DRM_DEV_INFO(dev->dev, "no iommu, fallback to phys " "contig buffers for scanout\n"); - aspace = NULL; + vm = NULL; } else { - aspace = msm_gem_address_space_create(mmu, + vm = msm_gem_vm_create(mmu, "mdp4", 0x1000, 0x100000000 - 0x1000); - if (IS_ERR(aspace)) { + if (IS_ERR(vm)) { if (!IS_ERR(mmu)) mmu->funcs->destroy(mmu); - ret = PTR_ERR(aspace); + ret = PTR_ERR(vm); goto fail; } - kms->aspace = aspace; + kms->vm = vm; } ret = modeset_init(mdp4_kms); @@ -478,7 +478,7 @@ static int mdp4_kms_init(struct drm_device *dev) goto fail; } - ret = msm_gem_get_and_pin_iova(mdp4_kms->blank_cursor_bo, kms->aspace, + ret = msm_gem_get_and_pin_iova(mdp4_kms->blank_cursor_bo, kms->vm, &mdp4_kms->blank_cursor_iova); if (ret) { DRM_DEV_ERROR(dev->dev, "could not pin blank-cursor bo: %d\n", ret); diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c index 3fefb2088008..7743be6167f8 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c @@ -87,7 +87,7 @@ static int mdp4_plane_prepare_fb(struct drm_plane *plane, drm_gem_plane_helper_prepare_fb(plane, new_state); - return msm_framebuffer_prepare(new_state->fb, kms->aspace, false); + return msm_framebuffer_prepare(new_state->fb, kms->vm, false); } static void mdp4_plane_cleanup_fb(struct drm_plane *plane, @@ -102,7 +102,7 @@ static void mdp4_plane_cleanup_fb(struct drm_plane *plane, return; DBG("%s: cleanup: FB[%u]", mdp4_plane->name, fb->base.id); - msm_framebuffer_cleanup(fb, kms->aspace, false); + msm_framebuffer_cleanup(fb, kms->vm, false); } @@ -153,13 +153,13 @@ static void mdp4_plane_set_scanout(struct drm_plane *plane, MDP4_PIPE_SRC_STRIDE_B_P3(fb->pitches[3])); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP0_BASE(pipe), - msm_framebuffer_iova(fb, kms->aspace, 0)); + msm_framebuffer_iova(fb, kms->vm, 0)); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP1_BASE(pipe), - msm_framebuffer_iova(fb, kms->aspace, 1)); + msm_framebuffer_iova(fb, kms->vm, 1)); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP2_BASE(pipe), - msm_framebuffer_iova(fb, kms->aspace, 2)); + msm_framebuffer_iova(fb, kms->vm, 2)); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP3_BASE(pipe), - msm_framebuffer_iova(fb, kms->aspace, 3)); + msm_framebuffer_iova(fb, kms->vm, 3)); } static void mdp4_write_csc_config(struct mdp4_kms *mdp4_kms, diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c index 0f653e62b4a0..298861f373b0 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c @@ -169,7 +169,7 @@ static void unref_cursor_worker(struct drm_flip_work *work, void *val) struct mdp5_kms *mdp5_kms = get_kms(&mdp5_crtc->base); struct msm_kms *kms = &mdp5_kms->base.base; - msm_gem_unpin_iova(val, kms->aspace); + msm_gem_unpin_iova(val, kms->vm); drm_gem_object_put(val); } @@ -993,7 +993,7 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc, if (!cursor_bo) return -ENOENT; - ret = msm_gem_get_and_pin_iova(cursor_bo, kms->aspace, + ret = msm_gem_get_and_pin_iova(cursor_bo, kms->vm, &mdp5_crtc->cursor.iova); if (ret) { drm_gem_object_put(cursor_bo); diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c 
b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c index 3fcca7a3d82e..9dca0385a42d 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c @@ -198,11 +198,11 @@ static void mdp5_destroy(struct mdp5_kms *mdp5_kms); static void mdp5_kms_destroy(struct msm_kms *kms) { struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms)); - struct msm_gem_address_space *aspace = kms->aspace; + struct msm_gem_vm *vm = kms->vm; - if (aspace) { - aspace->mmu->funcs->detach(aspace->mmu); - msm_gem_address_space_put(aspace); + if (vm) { + vm->mmu->funcs->detach(vm->mmu); + msm_gem_vm_put(vm); } mdp_kms_destroy(&mdp5_kms->base); @@ -500,7 +500,7 @@ static int mdp5_kms_init(struct drm_device *dev) struct mdp5_kms *mdp5_kms; struct mdp5_cfg *config; struct msm_kms *kms = priv->kms; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; int i, ret; ret = mdp5_init(to_platform_device(dev->dev), dev); @@ -534,13 +534,13 @@ static int mdp5_kms_init(struct drm_device *dev) } mdelay(16); - aspace = msm_kms_init_aspace(mdp5_kms->dev); - if (IS_ERR(aspace)) { - ret = PTR_ERR(aspace); + vm = msm_kms_init_vm(mdp5_kms->dev); + if (IS_ERR(vm)) { + ret = PTR_ERR(vm); goto fail; } - kms->aspace = aspace; + kms->vm = vm; pm_runtime_put_sync(&pdev->dev); diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c index bb1601921938..9f68a4747203 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c @@ -144,7 +144,7 @@ static int mdp5_plane_prepare_fb(struct drm_plane *plane, drm_gem_plane_helper_prepare_fb(plane, new_state); - return msm_framebuffer_prepare(new_state->fb, kms->aspace, needs_dirtyfb); + return msm_framebuffer_prepare(new_state->fb, kms->vm, needs_dirtyfb); } static void mdp5_plane_cleanup_fb(struct drm_plane *plane, @@ -159,7 +159,7 @@ static void mdp5_plane_cleanup_fb(struct drm_plane *plane, return; DBG("%s: cleanup: FB[%u]", plane->name, fb->base.id); - msm_framebuffer_cleanup(fb, kms->aspace, needed_dirtyfb); + msm_framebuffer_cleanup(fb, kms->vm, needed_dirtyfb); } static int mdp5_plane_atomic_check_with_state(struct drm_crtc_state *crtc_state, @@ -478,13 +478,13 @@ static void set_scanout_locked(struct mdp5_kms *mdp5_kms, MDP5_PIPE_SRC_STRIDE_B_P3(fb->pitches[3])); mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC0_ADDR(pipe), - msm_framebuffer_iova(fb, kms->aspace, 0)); + msm_framebuffer_iova(fb, kms->vm, 0)); mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC1_ADDR(pipe), - msm_framebuffer_iova(fb, kms->aspace, 1)); + msm_framebuffer_iova(fb, kms->vm, 1)); mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC2_ADDR(pipe), - msm_framebuffer_iova(fb, kms->aspace, 2)); + msm_framebuffer_iova(fb, kms->vm, 2)); mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC3_ADDR(pipe), - msm_framebuffer_iova(fb, kms->aspace, 3)); + msm_framebuffer_iova(fb, kms->vm, 3)); } /* Note: mdp5_plane->pipe_lock must be locked */ diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c index 4d75529c0e85..16335ebd21e4 100644 --- a/drivers/gpu/drm/msm/dsi/dsi_host.c +++ b/drivers/gpu/drm/msm/dsi/dsi_host.c @@ -143,7 +143,7 @@ struct msm_dsi_host { /* DSI 6G TX buffer*/ struct drm_gem_object *tx_gem_obj; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; /* DSI v2 TX buffer */ void *tx_buf; @@ -1146,10 +1146,10 @@ int dsi_tx_buf_alloc_6g(struct msm_dsi_host *msm_host, int size) uint64_t iova; u8 *data; - msm_host->aspace = msm_gem_address_space_get(priv->kms->aspace); + msm_host->vm = 
msm_gem_vm_get(priv->kms->vm); data = msm_gem_kernel_new(dev, size, MSM_BO_WC, - msm_host->aspace, + msm_host->vm, &msm_host->tx_gem_obj, &iova); if (IS_ERR(data)) { @@ -1193,10 +1193,10 @@ void msm_dsi_tx_buf_free(struct mipi_dsi_host *host) return; if (msm_host->tx_gem_obj) { - msm_gem_kernel_put(msm_host->tx_gem_obj, msm_host->aspace); - msm_gem_address_space_put(msm_host->aspace); + msm_gem_kernel_put(msm_host->tx_gem_obj, msm_host->vm); + msm_gem_vm_put(msm_host->vm); msm_host->tx_gem_obj = NULL; - msm_host->aspace = NULL; + msm_host->vm = NULL; } if (msm_host->tx_buf) @@ -1327,7 +1327,7 @@ int dsi_dma_base_get_6g(struct msm_dsi_host *msm_host, uint64_t *dma_base) return -EINVAL; return msm_gem_get_and_pin_iova(msm_host->tx_gem_obj, - priv->kms->aspace, dma_base); + priv->kms->vm, dma_base); } int dsi_dma_base_get_v2(struct msm_dsi_host *msm_host, uint64_t *dma_base) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 29ca24548c67..903abf3532e0 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -345,7 +345,7 @@ static int context_init(struct drm_device *dev, struct drm_file *file) kref_init(&ctx->ref); msm_submitqueue_init(dev, ctx); - ctx->aspace = msm_gpu_create_private_address_space(priv->gpu, current); + ctx->vm = msm_gpu_create_private_vm(priv->gpu, current); file->driver_priv = ctx; ctx->seqno = atomic_inc_return(&ident); @@ -523,7 +523,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev, * Don't pin the memory here - just get an address so that userspace can * be productive */ - return msm_gem_get_iova(obj, ctx->aspace, iova); + return msm_gem_get_iova(obj, ctx->vm, iova); } static int msm_ioctl_gem_info_set_iova(struct drm_device *dev, @@ -537,13 +537,13 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev, return -EINVAL; /* Only supported if per-process address space is supported: */ - if (priv->gpu->aspace == ctx->aspace) + if (priv->gpu->vm == ctx->vm) return UERR(EOPNOTSUPP, dev, "requires per-process pgtables"); if (should_fail(&fail_gem_iova, obj->size)) return -ENOMEM; - return msm_gem_set_iova(obj, ctx->aspace, iova); + return msm_gem_set_iova(obj, ctx->vm, iova); } static int msm_ioctl_gem_info_set_metadata(struct drm_gem_object *obj, diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index a65077855201..0e675c9a7f83 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -48,7 +48,7 @@ struct msm_rd_state; struct msm_perf_state; struct msm_gem_submit; struct msm_fence_context; -struct msm_gem_address_space; +struct msm_gem_vm; struct msm_gem_vma; struct msm_disp_state; @@ -241,7 +241,7 @@ void msm_crtc_disable_vblank(struct drm_crtc *crtc); int msm_register_mmu(struct drm_device *dev, struct msm_mmu *mmu); void msm_unregister_mmu(struct drm_device *dev, struct msm_mmu *mmu); -struct msm_gem_address_space *msm_kms_init_aspace(struct drm_device *dev); +struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev); bool msm_use_mmu(struct drm_device *dev); int msm_ioctl_gem_submit(struct drm_device *dev, void *data, @@ -263,11 +263,11 @@ int msm_gem_prime_pin(struct drm_gem_object *obj); void msm_gem_prime_unpin(struct drm_gem_object *obj); int msm_framebuffer_prepare(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, bool needs_dirtyfb); + struct msm_gem_vm *vm, bool needs_dirtyfb); void msm_framebuffer_cleanup(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, bool needed_dirtyfb); + struct msm_gem_vm *vm, 
bool needed_dirtyfb); uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, int plane); + struct msm_gem_vm *vm, int plane); struct drm_gem_object *msm_framebuffer_bo(struct drm_framebuffer *fb, int plane); const struct msm_format *msm_framebuffer_format(struct drm_framebuffer *fb); struct drm_framebuffer *msm_framebuffer_create(struct drm_device *dev, diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c index 09268e416843..6df318b73534 100644 --- a/drivers/gpu/drm/msm/msm_fb.c +++ b/drivers/gpu/drm/msm/msm_fb.c @@ -76,7 +76,7 @@ void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m) /* prepare/pin all the fb's bo's for scanout. */ int msm_framebuffer_prepare(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, + struct msm_gem_vm *vm, bool needs_dirtyfb) { struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb); @@ -88,7 +88,7 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb, atomic_inc(&msm_fb->prepare_count); for (i = 0; i < n; i++) { - ret = msm_gem_get_and_pin_iova(fb->obj[i], aspace, &msm_fb->iova[i]); + ret = msm_gem_get_and_pin_iova(fb->obj[i], vm, &msm_fb->iova[i]); drm_dbg_state(fb->dev, "FB[%u]: iova[%d]: %08llx (%d)\n", fb->base.id, i, msm_fb->iova[i], ret); if (ret) @@ -99,7 +99,7 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb, } void msm_framebuffer_cleanup(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, + struct msm_gem_vm *vm, bool needed_dirtyfb) { struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb); @@ -109,14 +109,14 @@ void msm_framebuffer_cleanup(struct drm_framebuffer *fb, refcount_dec(&msm_fb->dirtyfb); for (i = 0; i < n; i++) - msm_gem_unpin_iova(fb->obj[i], aspace); + msm_gem_unpin_iova(fb->obj[i], vm); if (!atomic_dec_return(&msm_fb->prepare_count)) memset(msm_fb->iova, 0, sizeof(msm_fb->iova)); } uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, int plane) + struct msm_gem_vm *vm, int plane) { struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb); return msm_fb->iova[plane] + fb->offsets[plane]; diff --git a/drivers/gpu/drm/msm/msm_fbdev.c b/drivers/gpu/drm/msm/msm_fbdev.c index c62249b1ab3d..b5969374d53f 100644 --- a/drivers/gpu/drm/msm/msm_fbdev.c +++ b/drivers/gpu/drm/msm/msm_fbdev.c @@ -122,7 +122,7 @@ int msm_fbdev_driver_fbdev_probe(struct drm_fb_helper *helper, * in panic (ie. 
lock-safe, etc) we could avoid pinning the * buffer now: */ - ret = msm_gem_get_and_pin_iova(bo, priv->kms->aspace, &paddr); + ret = msm_gem_get_and_pin_iova(bo, priv->kms->vm, &paddr); if (ret) { DRM_DEV_ERROR(dev->dev, "failed to get buffer obj iova: %d\n", ret); goto fail; diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index fdeb6cf7eeb5..07a30d29248c 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -402,14 +402,14 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj) } static struct msm_gem_vma *add_vma(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; msm_gem_assert_locked(obj); - vma = msm_gem_vma_new(aspace); + vma = msm_gem_vma_new(vm); if (!vma) return ERR_PTR(-ENOMEM); @@ -419,7 +419,7 @@ static struct msm_gem_vma *add_vma(struct drm_gem_object *obj, } static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; @@ -427,7 +427,7 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, msm_gem_assert_locked(obj); list_for_each_entry(vma, &msm_obj->vmas, list) { - if (vma->aspace == aspace) + if (vma->vm == vm) return vma; } @@ -458,7 +458,7 @@ put_iova_spaces(struct drm_gem_object *obj, bool close) msm_gem_assert_locked(obj); list_for_each_entry(vma, &msm_obj->vmas, list) { - if (vma->aspace) { + if (vma->vm) { msm_gem_vma_purge(vma); if (close) msm_gem_vma_close(vma); @@ -481,19 +481,19 @@ put_iova_vmas(struct drm_gem_object *obj) } static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, + struct msm_gem_vm *vm, u64 range_start, u64 range_end) { struct msm_gem_vma *vma; msm_gem_assert_locked(obj); - vma = lookup_vma(obj, aspace); + vma = lookup_vma(obj, vm); if (!vma) { int ret; - vma = add_vma(obj, aspace); + vma = add_vma(obj, vm); if (IS_ERR(vma)) return vma; @@ -569,13 +569,13 @@ void msm_gem_unpin_active(struct drm_gem_object *obj) } struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { - return get_vma_locked(obj, aspace, 0, U64_MAX); + return get_vma_locked(obj, vm, 0, U64_MAX); } static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova, + struct msm_gem_vm *vm, uint64_t *iova, u64 range_start, u64 range_end) { struct msm_gem_vma *vma; @@ -583,7 +583,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, msm_gem_assert_locked(obj); - vma = get_vma_locked(obj, aspace, range_start, range_end); + vma = get_vma_locked(obj, vm, range_start, range_end); if (IS_ERR(vma)) return PTR_ERR(vma); @@ -601,13 +601,13 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, * limits iova to specified range (in pages) */ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova, + struct msm_gem_vm *vm, uint64_t *iova, u64 range_start, u64 range_end) { int ret; msm_gem_lock(obj); - ret = get_and_pin_iova_range_locked(obj, aspace, iova, range_start, range_end); + ret = get_and_pin_iova_range_locked(obj, vm, iova, range_start, range_end); msm_gem_unlock(obj); return ret; @@ -615,9 +615,9 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, /* get 
iova and pin it. Should have a matching put */ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova) + struct msm_gem_vm *vm, uint64_t *iova) { - return msm_gem_get_and_pin_iova_range(obj, aspace, iova, 0, U64_MAX); + return msm_gem_get_and_pin_iova_range(obj, vm, iova, 0, U64_MAX); } /* @@ -625,13 +625,13 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, * valid for the life of the object */ int msm_gem_get_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova) + struct msm_gem_vm *vm, uint64_t *iova) { struct msm_gem_vma *vma; int ret = 0; msm_gem_lock(obj); - vma = get_vma_locked(obj, aspace, 0, U64_MAX); + vma = get_vma_locked(obj, vm, 0, U64_MAX); if (IS_ERR(vma)) { ret = PTR_ERR(vma); } else { @@ -643,9 +643,9 @@ int msm_gem_get_iova(struct drm_gem_object *obj, } static int clear_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { - struct msm_gem_vma *vma = lookup_vma(obj, aspace); + struct msm_gem_vma *vma = lookup_vma(obj, vm); if (!vma) return 0; @@ -665,20 +665,20 @@ static int clear_iova(struct drm_gem_object *obj, * Setting an iova of zero will clear the vma. */ int msm_gem_set_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t iova) + struct msm_gem_vm *vm, uint64_t iova) { int ret = 0; msm_gem_lock(obj); if (!iova) { - ret = clear_iova(obj, aspace); + ret = clear_iova(obj, vm); } else { struct msm_gem_vma *vma; - vma = get_vma_locked(obj, aspace, iova, iova + obj->size); + vma = get_vma_locked(obj, vm, iova, iova + obj->size); if (IS_ERR(vma)) { ret = PTR_ERR(vma); } else if (GEM_WARN_ON(vma->iova != iova)) { - clear_iova(obj, aspace); + clear_iova(obj, vm); ret = -EBUSY; } } @@ -693,12 +693,12 @@ int msm_gem_set_iova(struct drm_gem_object *obj, * to get rid of it */ void msm_gem_unpin_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { struct msm_gem_vma *vma; msm_gem_lock(obj); - vma = lookup_vma(obj, aspace); + vma = lookup_vma(obj, vm); if (!GEM_WARN_ON(!vma)) { msm_gem_unpin_locked(obj); } @@ -1016,23 +1016,23 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m, list_for_each_entry(vma, &msm_obj->vmas, list) { const char *name, *comm; - if (vma->aspace) { - struct msm_gem_address_space *aspace = vma->aspace; + if (vma->vm) { + struct msm_gem_vm *vm = vma->vm; struct task_struct *task = - get_pid_task(aspace->pid, PIDTYPE_PID); + get_pid_task(vm->pid, PIDTYPE_PID); if (task) { comm = kstrdup(task->comm, GFP_KERNEL); put_task_struct(task); } else { comm = NULL; } - name = aspace->name; + name = vm->name; } else { name = comm = NULL; } - seq_printf(m, " [%s%s%s: aspace=%p, %08llx,%s]", + seq_printf(m, " [%s%s%s: vm=%p, %08llx,%s]", name, comm ? ":" : "", comm ? comm : "", - vma->aspace, vma->iova, + vma->vm, vma->iova, vma->mapped ? 
"mapped" : "unmapped"); kfree(comm); } @@ -1357,7 +1357,7 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, } void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_address_space *aspace, + uint32_t flags, struct msm_gem_vm *vm, struct drm_gem_object **bo, uint64_t *iova) { void *vaddr; @@ -1368,14 +1368,14 @@ void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, return ERR_CAST(obj); if (iova) { - ret = msm_gem_get_and_pin_iova(obj, aspace, iova); + ret = msm_gem_get_and_pin_iova(obj, vm, iova); if (ret) goto err; } vaddr = msm_gem_get_vaddr(obj); if (IS_ERR(vaddr)) { - msm_gem_unpin_iova(obj, aspace); + msm_gem_unpin_iova(obj, vm); ret = PTR_ERR(vaddr); goto err; } @@ -1392,13 +1392,13 @@ void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, } void msm_gem_kernel_put(struct drm_gem_object *bo, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { if (IS_ERR_OR_NULL(bo)) return; msm_gem_put_vaddr(bo); - msm_gem_unpin_iova(bo, aspace); + msm_gem_unpin_iova(bo, vm); drm_gem_object_put(bo); } diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 85f0257e83da..d2f39a371373 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -22,7 +22,7 @@ #define MSM_BO_STOLEN 0x10000000 /* try to use stolen/splash memory */ #define MSM_BO_MAP_PRIV 0x20000000 /* use IOMMU_PRIV when mapping */ -struct msm_gem_address_space { +struct msm_gem_vm { const char *name; /* NOTE: mm managed at the page level, size is in # of pages * and position mm_node->start is in # of pages: @@ -47,13 +47,13 @@ struct msm_gem_address_space { uint64_t va_size; }; -struct msm_gem_address_space * -msm_gem_address_space_get(struct msm_gem_address_space *aspace); +struct msm_gem_vm * +msm_gem_vm_get(struct msm_gem_vm *vm); -void msm_gem_address_space_put(struct msm_gem_address_space *aspace); +void msm_gem_vm_put(struct msm_gem_vm *vm); -struct msm_gem_address_space * -msm_gem_address_space_create(struct msm_mmu *mmu, const char *name, +struct msm_gem_vm * +msm_gem_vm_create(struct msm_mmu *mmu, const char *name, u64 va_start, u64 size); struct msm_fence_context; @@ -61,12 +61,12 @@ struct msm_fence_context; struct msm_gem_vma { struct drm_mm_node node; uint64_t iova; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct list_head list; /* node in msm_gem_object::vmas */ bool mapped; }; -struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace); +struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm); int msm_gem_vma_init(struct msm_gem_vma *vma, int size, u64 range_start, u64 range_end); void msm_gem_vma_purge(struct msm_gem_vma *vma); @@ -127,18 +127,18 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma); void msm_gem_unpin_locked(struct drm_gem_object *obj); void msm_gem_unpin_active(struct drm_gem_object *obj); struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace); + struct msm_gem_vm *vm); int msm_gem_get_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova); + struct msm_gem_vm *vm, uint64_t *iova); int msm_gem_set_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t iova); + struct msm_gem_vm *vm, uint64_t iova); int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova, + struct msm_gem_vm *vm, uint64_t *iova, u64 range_start, u64 
range_end); int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova); + struct msm_gem_vm *vm, uint64_t *iova); void msm_gem_unpin_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace); + struct msm_gem_vm *vm); void msm_gem_pin_obj_locked(struct drm_gem_object *obj); struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj); void msm_gem_unpin_pages_locked(struct drm_gem_object *obj); @@ -160,10 +160,10 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file, struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32_t flags); void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_address_space *aspace, + uint32_t flags, struct msm_gem_vm *vm, struct drm_gem_object **bo, uint64_t *iova); void msm_gem_kernel_put(struct drm_gem_object *bo, - struct msm_gem_address_space *aspace); + struct msm_gem_vm *vm); struct drm_gem_object *msm_gem_import(struct drm_device *dev, struct dma_buf *dmabuf, struct sg_table *sgt); __printf(2, 3) @@ -257,7 +257,7 @@ struct msm_gem_submit { struct kref ref; struct drm_device *dev; struct msm_gpu *gpu; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct list_head node; /* node in ring submit list */ struct drm_exec exec; uint32_t seqno; /* Sequence number of the submit on the ring */ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 16ca6cfac967..95da4714fffb 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -63,7 +63,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev, kref_init(&submit->ref); submit->dev = dev; - submit->aspace = queue->ctx->aspace; + submit->vm = queue->ctx->vm; submit->gpu = gpu; submit->cmd = (void *)&submit->bos[nr_bos]; submit->queue = queue; @@ -302,7 +302,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit) struct msm_gem_vma *vma; /* if locking succeeded, pin bo: */ - vma = msm_gem_get_vma_locked(obj, submit->aspace); + vma = msm_gem_get_vma_locked(obj, submit->vm); if (IS_ERR(vma)) { ret = PTR_ERR(vma); break; @@ -659,7 +659,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (args->pad) return -EINVAL; - if (unlikely(!ctx->aspace) && !capable(CAP_SYS_RAWIO)) { + if (unlikely(!ctx->vm) && !capable(CAP_SYS_RAWIO)) { DRM_ERROR_RATELIMITED("IOMMU support or CAP_SYS_RAWIO required!\n"); return -EPERM; } diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index 11e842dda73c..9419692f0cc8 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -10,45 +10,44 @@ #include "msm_mmu.h" static void -msm_gem_address_space_destroy(struct kref *kref) +msm_gem_vm_destroy(struct kref *kref) { - struct msm_gem_address_space *aspace = container_of(kref, - struct msm_gem_address_space, kref); - - drm_mm_takedown(&aspace->mm); - if (aspace->mmu) - aspace->mmu->funcs->destroy(aspace->mmu); - put_pid(aspace->pid); - kfree(aspace); + struct msm_gem_vm *vm = container_of(kref, struct msm_gem_vm, kref); + + drm_mm_takedown(&vm->mm); + if (vm->mmu) + vm->mmu->funcs->destroy(vm->mmu); + put_pid(vm->pid); + kfree(vm); } -void msm_gem_address_space_put(struct msm_gem_address_space *aspace) +void msm_gem_vm_put(struct msm_gem_vm *vm) { - if (aspace) - kref_put(&aspace->kref, msm_gem_address_space_destroy); + if (vm) + kref_put(&vm->kref, msm_gem_vm_destroy); } -struct 
msm_gem_address_space * -msm_gem_address_space_get(struct msm_gem_address_space *aspace) +struct msm_gem_vm * +msm_gem_vm_get(struct msm_gem_vm *vm) { - if (!IS_ERR_OR_NULL(aspace)) - kref_get(&aspace->kref); + if (!IS_ERR_OR_NULL(vm)) + kref_get(&vm->kref); - return aspace; + return vm; } /* Actually unmap memory for the vma */ void msm_gem_vma_purge(struct msm_gem_vma *vma) { - struct msm_gem_address_space *aspace = vma->aspace; + struct msm_gem_vm *vm = vma->vm; unsigned size = vma->node.size; /* Don't do anything if the memory isn't mapped */ if (!vma->mapped) return; - aspace->mmu->funcs->unmap(aspace->mmu, vma->iova, size); + vm->mmu->funcs->unmap(vm->mmu, vma->iova, size); vma->mapped = false; } @@ -58,7 +57,7 @@ int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size) { - struct msm_gem_address_space *aspace = vma->aspace; + struct msm_gem_vm *vm = vma->vm; int ret; if (GEM_WARN_ON(!vma->iova)) @@ -69,7 +68,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, vma->mapped = true; - if (!aspace) + if (!vm) return 0; /* @@ -81,7 +80,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. */ - ret = aspace->mmu->funcs->map(aspace->mmu, vma->iova, sgt, size, prot); + ret = vm->mmu->funcs->map(vm->mmu, vma->iova, sgt, size, prot); if (ret) { vma->mapped = false; @@ -93,21 +92,21 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, /* Close an iova. Warn if it is still in use */ void msm_gem_vma_close(struct msm_gem_vma *vma) { - struct msm_gem_address_space *aspace = vma->aspace; + struct msm_gem_vm *vm = vma->vm; GEM_WARN_ON(vma->mapped); - spin_lock(&aspace->lock); + spin_lock(&vm->lock); if (vma->iova) drm_mm_remove_node(&vma->node); - spin_unlock(&aspace->lock); + spin_unlock(&vm->lock); vma->iova = 0; - msm_gem_address_space_put(aspace); + msm_gem_vm_put(vm); } -struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace) +struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm) { struct msm_gem_vma *vma; @@ -115,7 +114,7 @@ struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace) if (!vma) return NULL; - vma->aspace = aspace; + vma->vm = vm; return vma; } @@ -124,20 +123,20 @@ struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace) int msm_gem_vma_init(struct msm_gem_vma *vma, int size, u64 range_start, u64 range_end) { - struct msm_gem_address_space *aspace = vma->aspace; + struct msm_gem_vm *vm = vma->vm; int ret; - if (GEM_WARN_ON(!aspace)) + if (GEM_WARN_ON(!vm)) return -EINVAL; if (GEM_WARN_ON(vma->iova)) return -EBUSY; - spin_lock(&aspace->lock); - ret = drm_mm_insert_node_in_range(&aspace->mm, &vma->node, + spin_lock(&vm->lock); + ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, size, PAGE_SIZE, 0, range_start, range_end, 0); - spin_unlock(&aspace->lock); + spin_unlock(&vm->lock); if (ret) return ret; @@ -145,33 +144,33 @@ int msm_gem_vma_init(struct msm_gem_vma *vma, int size, vma->iova = vma->node.start; vma->mapped = false; - kref_get(&aspace->kref); + kref_get(&vm->kref); return 0; } -struct msm_gem_address_space * -msm_gem_address_space_create(struct msm_mmu *mmu, const char *name, +struct msm_gem_vm * +msm_gem_vm_create(struct msm_mmu *mmu, const char *name, u64 va_start, u64 size) { - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; if (IS_ERR(mmu)) return ERR_CAST(mmu); - aspace = kzalloc(sizeof(*aspace), GFP_KERNEL); - if (!aspace) + vm = kzalloc(sizeof(*vm), 
GFP_KERNEL); + if (!vm) return ERR_PTR(-ENOMEM); - spin_lock_init(&aspace->lock); - aspace->name = name; - aspace->mmu = mmu; - aspace->va_start = va_start; - aspace->va_size = size; + spin_lock_init(&vm->lock); + vm->name = name; + vm->mmu = mmu; + vm->va_start = va_start; + vm->va_size = size; - drm_mm_init(&aspace->mm, va_start, size); + drm_mm_init(&vm->mm, va_start, size); - kref_init(&aspace->kref); + kref_init(&vm->kref); - return aspace; + return vm; } diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index d786fcfad62f..0d466a2e9b32 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -283,7 +283,7 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, if (state->fault_info.ttbr0) { struct msm_gpu_fault_info *info = &state->fault_info; - struct msm_mmu *mmu = submit->aspace->mmu; + struct msm_mmu *mmu = submit->vm->mmu; msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0, &info->asid); @@ -386,8 +386,8 @@ static void recover_worker(struct kthread_work *work) /* Increment the fault counts */ submit->queue->faults++; - if (submit->aspace) - submit->aspace->faults++; + if (submit->vm) + submit->vm->faults++; get_comm_cmdline(submit, &comm, &cmd); @@ -492,7 +492,7 @@ static void fault_worker(struct kthread_work *work) resume_smmu: memset(&gpu->fault_info, 0, sizeof(gpu->fault_info)); - gpu->aspace->mmu->funcs->resume_translation(gpu->aspace->mmu); + gpu->vm->mmu->funcs->resume_translation(gpu->vm->mmu); mutex_unlock(&gpu->lock); } @@ -829,10 +829,10 @@ static int get_clocks(struct platform_device *pdev, struct msm_gpu *gpu) } /* Return a new address space for a msm_drm_private instance */ -struct msm_gem_address_space * -msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_struct *task) +struct msm_gem_vm * +msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task) { - struct msm_gem_address_space *aspace = NULL; + struct msm_gem_vm *vm = NULL; if (!gpu) return NULL; @@ -840,16 +840,16 @@ msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_struct *ta * If the target doesn't support private address spaces then return * the global one */ - if (gpu->funcs->create_private_address_space) { - aspace = gpu->funcs->create_private_address_space(gpu); - if (!IS_ERR(aspace)) - aspace->pid = get_pid(task_pid(task)); + if (gpu->funcs->create_private_vm) { + vm = gpu->funcs->create_private_vm(gpu); + if (!IS_ERR(vm)) + vm->pid = get_pid(task_pid(task)); } - if (IS_ERR_OR_NULL(aspace)) - aspace = msm_gem_address_space_get(gpu->aspace); + if (IS_ERR_OR_NULL(vm)) + vm = msm_gem_vm_get(gpu->vm); - return aspace; + return vm; } int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, @@ -945,18 +945,18 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, msm_devfreq_init(gpu); - gpu->aspace = gpu->funcs->create_address_space(gpu, pdev); + gpu->vm = gpu->funcs->create_vm(gpu, pdev); - if (gpu->aspace == NULL) + if (gpu->vm == NULL) DRM_DEV_INFO(drm->dev, "%s: no IOMMU, fallback to VRAM carveout!\n", name); - else if (IS_ERR(gpu->aspace)) { - ret = PTR_ERR(gpu->aspace); + else if (IS_ERR(gpu->vm)) { + ret = PTR_ERR(gpu->vm); goto fail; } memptrs = msm_gem_kernel_new(drm, sizeof(struct msm_rbmemptrs) * nr_rings, - check_apriv(gpu, MSM_BO_WC), gpu->aspace, &gpu->memptrs_bo, + check_apriv(gpu, MSM_BO_WC), gpu->vm, &gpu->memptrs_bo, &memptrs_iova); if (IS_ERR(memptrs)) { @@ -1000,7 +1000,7 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, 
gpu->rb[i] = NULL; } - msm_gem_kernel_put(gpu->memptrs_bo, gpu->aspace); + msm_gem_kernel_put(gpu->memptrs_bo, gpu->vm); platform_set_drvdata(pdev, NULL); return ret; @@ -1017,11 +1017,11 @@ void msm_gpu_cleanup(struct msm_gpu *gpu) gpu->rb[i] = NULL; } - msm_gem_kernel_put(gpu->memptrs_bo, gpu->aspace); + msm_gem_kernel_put(gpu->memptrs_bo, gpu->vm); - if (!IS_ERR_OR_NULL(gpu->aspace)) { - gpu->aspace->mmu->funcs->detach(gpu->aspace->mmu); - msm_gem_address_space_put(gpu->aspace); + if (!IS_ERR_OR_NULL(gpu->vm)) { + gpu->vm->mmu->funcs->detach(gpu->vm->mmu); + msm_gem_vm_put(gpu->vm); } if (gpu->worker) { diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index c699ce0c557b..1f26ba00f773 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -78,10 +78,8 @@ struct msm_gpu_funcs { /* note: gpu_set_freq() can assume that we have been pm_resumed */ void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp, bool suspended); - struct msm_gem_address_space *(*create_address_space) - (struct msm_gpu *gpu, struct platform_device *pdev); - struct msm_gem_address_space *(*create_private_address_space) - (struct msm_gpu *gpu); + struct msm_gem_vm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev); + struct msm_gem_vm *(*create_private_vm)(struct msm_gpu *gpu); uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring); /** @@ -236,7 +234,7 @@ struct msm_gpu { void __iomem *mmio; int irq; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; /* Power Control: */ struct regulator *gpu_reg, *gpu_cx; @@ -364,8 +362,8 @@ struct msm_context { */ int queueid; - /** @aspace: the per-process GPU address-space */ - struct msm_gem_address_space *aspace; + /** @vm: the per-process GPU address-space */ + struct msm_gem_vm *vm; /** @kref: the reference count */ struct kref ref; @@ -675,8 +673,8 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, struct msm_gpu *gpu, const struct msm_gpu_funcs *funcs, const char *name, struct msm_gpu_config *config); -struct msm_gem_address_space * -msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_struct *task); +struct msm_gem_vm * +msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task); void msm_gpu_cleanup(struct msm_gpu *gpu); diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c index 35d5397e73b4..88504c4b842f 100644 --- a/drivers/gpu/drm/msm/msm_kms.c +++ b/drivers/gpu/drm/msm/msm_kms.c @@ -176,9 +176,9 @@ static int msm_kms_fault_handler(void *arg, unsigned long iova, int flags, void return -ENOSYS; } -struct msm_gem_address_space *msm_kms_init_aspace(struct drm_device *dev) +struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev) { - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct msm_mmu *mmu; struct device *mdp_dev = dev->dev; struct device *mdss_dev = mdp_dev->parent; @@ -204,17 +204,17 @@ struct msm_gem_address_space *msm_kms_init_aspace(struct drm_device *dev) return NULL; } - aspace = msm_gem_address_space_create(mmu, "mdp_kms", + vm = msm_gem_vm_create(mmu, "mdp_kms", 0x1000, 0x100000000 - 0x1000); - if (IS_ERR(aspace)) { - dev_err(mdp_dev, "aspace create, error %pe\n", aspace); + if (IS_ERR(vm)) { + dev_err(mdp_dev, "vm create, error %pe\n", vm); mmu->funcs->destroy(mmu); - return aspace; + return vm; } - msm_mmu_set_fault_handler(aspace->mmu, kms, msm_kms_fault_handler); + msm_mmu_set_fault_handler(vm->mmu, kms, msm_kms_fault_handler); - return aspace; + return vm; 
} void msm_drm_kms_uninit(struct device *dev) diff --git a/drivers/gpu/drm/msm/msm_kms.h b/drivers/gpu/drm/msm/msm_kms.h index 43b58d052ee6..f45996a03e15 100644 --- a/drivers/gpu/drm/msm/msm_kms.h +++ b/drivers/gpu/drm/msm/msm_kms.h @@ -139,7 +139,7 @@ struct msm_kms { atomic_t fault_snapshot_capture; /* mapper-id used to request GEM buffer mapped for scanout: */ - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; /* disp snapshot support */ struct kthread_worker *dump_worker; diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c index c5651c39ac2a..bbf8503f6bb5 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.c +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c @@ -84,7 +84,7 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id, ring->start = msm_gem_kernel_new(gpu->dev, MSM_GPU_RINGBUFFER_SZ, check_apriv(gpu, MSM_BO_WC | MSM_BO_GPU_READONLY), - gpu->aspace, &ring->bo, &ring->iova); + gpu->vm, &ring->bo, &ring->iova); if (IS_ERR(ring->start)) { ret = PTR_ERR(ring->start); @@ -131,7 +131,7 @@ void msm_ringbuffer_destroy(struct msm_ringbuffer *ring) msm_fence_context_free(ring->fctx); - msm_gem_kernel_put(ring->bo, ring->gpu->aspace); + msm_gem_kernel_put(ring->bo, ring->gpu->vm); kfree(ring); } diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c index 1acc0fe36353..6298233c3568 100644 --- a/drivers/gpu/drm/msm/msm_submitqueue.c +++ b/drivers/gpu/drm/msm/msm_submitqueue.c @@ -59,7 +59,7 @@ void __msm_context_destroy(struct kref *kref) kfree(ctx->entities[i]); } - msm_gem_address_space_put(ctx->aspace); + msm_gem_vm_put(ctx->vm); kfree(ctx->comm); kfree(ctx->cmdline); kfree(ctx);

From patchwork Wed Mar 19 14:52:20 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Rob Clark , Sean Paul , Konrad Dybcio , Abhinav Kumar , Dmitry Baryshkov , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 08/34] drm/msm: Remove vram carveout support
Date: Wed, 19 Mar 2025 07:52:20 -0700
Message-ID: <20250319145425.51935-9-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>

From: Rob Clark

It is standing in the way of
drm_gpuvm / VM_BIND support. Not to mention frequently broken and rarely tested. And I think only needed for a 10yr old not quite upstream SoC (msm8974). Maybe we can add support back in later, but I'm doubtful. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 6 +- drivers/gpu/drm/msm/adreno/a3xx_gpu.c | 13 +- drivers/gpu/drm/msm/adreno/a4xx_gpu.c | 13 +- drivers/gpu/drm/msm/adreno/adreno_device.c | 4 - drivers/gpu/drm/msm/adreno/adreno_gpu.h | 1 - drivers/gpu/drm/msm/msm_drv.c | 117 +----------------- drivers/gpu/drm/msm/msm_drv.h | 11 -- drivers/gpu/drm/msm/msm_gem.c | 131 ++------------------- drivers/gpu/drm/msm/msm_gem.h | 5 - drivers/gpu/drm/msm/msm_gem_submit.c | 5 - 10 files changed, 19 insertions(+), 287 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c index 5eb063ed0b46..db1aa281ce47 100644 --- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c @@ -553,10 +553,8 @@ struct msm_gpu *a2xx_gpu_init(struct drm_device *dev) if (!gpu->vm) { dev_err(dev->dev, "No memory protection without MMU\n"); - if (!allow_vram_carveout) { - ret = -ENXIO; - goto fail; - } + ret = -ENXIO; + goto fail; } return gpu; diff --git a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c index 434e6ededf83..49ba1ce77144 100644 --- a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c @@ -582,18 +582,9 @@ struct msm_gpu *a3xx_gpu_init(struct drm_device *dev) } if (!gpu->vm) { - /* TODO we think it is possible to configure the GPU to - * restrict access to VRAM carveout. But the required - * registers are unknown. For now just bail out and - * limp along with just modesetting. If it turns out - * to not be possible to restrict access, then we must - * implement a cmdstream validator. - */ DRM_DEV_ERROR(dev->dev, "No memory protection without IOMMU\n"); - if (!allow_vram_carveout) { - ret = -ENXIO; - goto fail; - } + ret = -ENXIO; + goto fail; } icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem"); diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c index 2c75debcfd84..4faf8570aec7 100644 --- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c @@ -696,18 +696,9 @@ struct msm_gpu *a4xx_gpu_init(struct drm_device *dev) adreno_gpu->uche_trap_base = 0xffff0000ffff0000ull; if (!gpu->vm) { - /* TODO we think it is possible to configure the GPU to - * restrict access to VRAM carveout. But the required - * registers are unknown. For now just bail out and - * limp along with just modesetting. If it turns out - * to not be possible to restrict access, then we must - * implement a cmdstream validator. 
- */ DRM_DEV_ERROR(dev->dev, "No memory protection without IOMMU\n"); - if (!allow_vram_carveout) { - ret = -ENXIO; - goto fail; - } + ret = -ENXIO; + goto fail; } icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem"); diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c index f4552b8c6767..6b0390c38bff 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_device.c +++ b/drivers/gpu/drm/msm/adreno/adreno_device.c @@ -16,10 +16,6 @@ bool snapshot_debugbus = false; MODULE_PARM_DESC(snapshot_debugbus, "Include debugbus sections in GPU devcoredump (if not fused off)"); module_param_named(snapshot_debugbus, snapshot_debugbus, bool, 0600); -bool allow_vram_carveout = false; -MODULE_PARM_DESC(allow_vram_carveout, "Allow using VRAM Carveout, in place of IOMMU"); -module_param_named(allow_vram_carveout, allow_vram_carveout, bool, 0600); - int enable_preemption = -1; MODULE_PARM_DESC(enable_preemption, "Enable preemption (A7xx only) (1=on , 0=disable, -1=auto (default))"); module_param(enable_preemption, int, 0600); diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h index eaebcb108b5e..7dbe09817edc 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h @@ -18,7 +18,6 @@ #include "adreno_pm4.xml.h" extern bool snapshot_debugbus; -extern bool allow_vram_carveout; enum { ADRENO_FW_PM4 = 0, diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 903abf3532e0..978f1d355b42 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -46,12 +46,6 @@ #define MSM_VERSION_MINOR 12 #define MSM_VERSION_PATCHLEVEL 0 -static void msm_deinit_vram(struct drm_device *ddev); - -static char *vram = "16m"; -MODULE_PARM_DESC(vram, "Configure VRAM size (for devices without IOMMU/GPUMMU)"); -module_param(vram, charp, 0); - bool dumpstate; MODULE_PARM_DESC(dumpstate, "Dump KMS state on errors"); module_param(dumpstate, bool, 0600); @@ -97,8 +91,6 @@ static int msm_drm_uninit(struct device *dev) if (priv->kms) msm_drm_kms_uninit(dev); - msm_deinit_vram(ddev); - component_unbind_all(dev, ddev); ddev->dev_private = NULL; @@ -109,107 +101,6 @@ static int msm_drm_uninit(struct device *dev) return 0; } -bool msm_use_mmu(struct drm_device *dev) -{ - struct msm_drm_private *priv = dev->dev_private; - - /* - * a2xx comes with its own MMU - * On other platforms IOMMU can be declared specified either for the - * MDP/DPU device or for its parent, MDSS device. - */ - return priv->is_a2xx || - device_iommu_mapped(dev->dev) || - device_iommu_mapped(dev->dev->parent); -} - -static int msm_init_vram(struct drm_device *dev) -{ - struct msm_drm_private *priv = dev->dev_private; - struct device_node *node; - unsigned long size = 0; - int ret = 0; - - /* In the device-tree world, we could have a 'memory-region' - * phandle, which gives us a link to our "vram". Allocating - * is all nicely abstracted behind the dma api, but we need - * to know the entire size to allocate it all in one go. There - * are two cases: - * 1) device with no IOMMU, in which case we need exclusive - * access to a VRAM carveout big enough for all gpu - * buffers - * 2) device with IOMMU, but where the bootloader puts up - * a splash screen. In this case, the VRAM carveout - * need only be large enough for fbdev fb. 
But we need - * exclusive access to the buffer to avoid the kernel - * using those pages for other purposes (which appears - * as corruption on screen before we have a chance to - * load and do initial modeset) - */ - - node = of_parse_phandle(dev->dev->of_node, "memory-region", 0); - if (node) { - struct resource r; - ret = of_address_to_resource(node, 0, &r); - of_node_put(node); - if (ret) - return ret; - size = r.end - r.start + 1; - DRM_INFO("using VRAM carveout: %lx@%pa\n", size, &r.start); - - /* if we have no IOMMU, then we need to use carveout allocator. - * Grab the entire DMA chunk carved out in early startup in - * mach-msm: - */ - } else if (!msm_use_mmu(dev)) { - DRM_INFO("using %s VRAM carveout\n", vram); - size = memparse(vram, NULL); - } - - if (size) { - unsigned long attrs = 0; - void *p; - - priv->vram.size = size; - - drm_mm_init(&priv->vram.mm, 0, (size >> PAGE_SHIFT) - 1); - spin_lock_init(&priv->vram.lock); - - attrs |= DMA_ATTR_NO_KERNEL_MAPPING; - attrs |= DMA_ATTR_WRITE_COMBINE; - - /* note that for no-kernel-mapping, the vaddr returned - * is bogus, but non-null if allocation succeeded: - */ - p = dma_alloc_attrs(dev->dev, size, - &priv->vram.paddr, GFP_KERNEL, attrs); - if (!p) { - DRM_DEV_ERROR(dev->dev, "failed to allocate VRAM\n"); - priv->vram.paddr = 0; - return -ENOMEM; - } - - DRM_DEV_INFO(dev->dev, "VRAM: %08x->%08x\n", - (uint32_t)priv->vram.paddr, - (uint32_t)(priv->vram.paddr + size)); - } - - return ret; -} - -static void msm_deinit_vram(struct drm_device *ddev) -{ - struct msm_drm_private *priv = ddev->dev_private; - unsigned long attrs = DMA_ATTR_NO_KERNEL_MAPPING; - - if (!priv->vram.paddr) - return; - - drm_mm_takedown(&priv->vram.mm); - dma_free_attrs(ddev->dev, priv->vram.size, NULL, priv->vram.paddr, - attrs); -} - static int msm_drm_init(struct device *dev, const struct drm_driver *drv) { struct msm_drm_private *priv = dev_get_drvdata(dev); @@ -256,16 +147,12 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv) goto err_destroy_wq; } - ret = msm_init_vram(ddev); - if (ret) - goto err_destroy_wq; - dma_set_max_seg_size(dev, UINT_MAX); /* Bind all our sub-components: */ ret = component_bind_all(dev, ddev); if (ret) - goto err_deinit_vram; + goto err_destroy_wq; ret = msm_gem_shrinker_init(ddev); if (ret) @@ -302,8 +189,6 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv) return ret; -err_deinit_vram: - msm_deinit_vram(ddev); err_destroy_wq: destroy_workqueue(priv->wq); err_put_dev: diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 0e675c9a7f83..ad509403f072 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -183,17 +183,6 @@ struct msm_drm_private { struct msm_drm_thread event_thread[MAX_CRTCS]; - /* VRAM carveout, used when no IOMMU: */ - struct { - unsigned long size; - dma_addr_t paddr; - /* NOTE: mm managed at the page level, size is in # of pages - * and position mm_node->start is in # of pages: - */ - struct drm_mm mm; - spinlock_t lock; /* Protects drm_mm node allocation/removal */ - } vram; - struct notifier_block vmap_notifier; struct shrinker *shrinker; diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 07a30d29248c..621fb4e17a2e 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -17,24 +17,8 @@ #include #include "msm_drv.h" -#include "msm_fence.h" #include "msm_gem.h" #include "msm_gpu.h" -#include "msm_mmu.h" - -static dma_addr_t physaddr(struct drm_gem_object 
*obj) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_drm_private *priv = obj->dev->dev_private; - return (((dma_addr_t)msm_obj->vram_node->start) << PAGE_SHIFT) + - priv->vram.paddr; -} - -static bool use_pages(struct drm_gem_object *obj) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - return !msm_obj->vram_node; -} static int pgprot = 0; module_param(pgprot, int, 0600); @@ -139,36 +123,6 @@ static void update_lru(struct drm_gem_object *obj) mutex_unlock(&priv->lru.lock); } -/* allocate pages from VRAM carveout, used when no IOMMU: */ -static struct page **get_pages_vram(struct drm_gem_object *obj, int npages) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_drm_private *priv = obj->dev->dev_private; - dma_addr_t paddr; - struct page **p; - int ret, i; - - p = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL); - if (!p) - return ERR_PTR(-ENOMEM); - - spin_lock(&priv->vram.lock); - ret = drm_mm_insert_node(&priv->vram.mm, msm_obj->vram_node, npages); - spin_unlock(&priv->vram.lock); - if (ret) { - kvfree(p); - return ERR_PTR(ret); - } - - paddr = physaddr(obj); - for (i = 0; i < npages; i++) { - p[i] = pfn_to_page(__phys_to_pfn(paddr)); - paddr += PAGE_SIZE; - } - - return p; -} - static struct page **get_pages(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); @@ -180,10 +134,7 @@ static struct page **get_pages(struct drm_gem_object *obj) struct page **p; int npages = obj->size >> PAGE_SHIFT; - if (use_pages(obj)) - p = drm_gem_get_pages(obj); - else - p = get_pages_vram(obj, npages); + p = drm_gem_get_pages(obj); if (IS_ERR(p)) { DRM_DEV_ERROR(dev->dev, "could not get pages: %ld\n", @@ -216,18 +167,6 @@ static struct page **get_pages(struct drm_gem_object *obj) return msm_obj->pages; } -static void put_pages_vram(struct drm_gem_object *obj) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_drm_private *priv = obj->dev->dev_private; - - spin_lock(&priv->vram.lock); - drm_mm_remove_node(msm_obj->vram_node); - spin_unlock(&priv->vram.lock); - - kvfree(msm_obj->pages); -} - static void put_pages(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); @@ -248,10 +187,7 @@ static void put_pages(struct drm_gem_object *obj) update_device_mem(obj->dev->dev_private, -obj->size); - if (use_pages(obj)) - drm_gem_put_pages(obj, msm_obj->pages, true, false); - else - put_pages_vram(obj); + drm_gem_put_pages(obj, msm_obj->pages, true, false); msm_obj->pages = NULL; update_lru(obj); @@ -1215,19 +1151,10 @@ struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32 struct msm_drm_private *priv = dev->dev_private; struct msm_gem_object *msm_obj; struct drm_gem_object *obj = NULL; - bool use_vram = false; int ret; size = PAGE_ALIGN(size); - if (!msm_use_mmu(dev)) - use_vram = true; - else if ((flags & (MSM_BO_STOLEN | MSM_BO_SCANOUT)) && priv->vram.size) - use_vram = true; - - if (GEM_WARN_ON(use_vram && !priv->vram.size)) - return ERR_PTR(-EINVAL); - /* Disallow zero sized objects as they make the underlying * infrastructure grumpy */ @@ -1240,44 +1167,16 @@ struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32 msm_obj = to_msm_bo(obj); - if (use_vram) { - struct msm_gem_vma *vma; - struct page **pages; - - drm_gem_private_object_init(dev, obj, size); - - msm_gem_lock(obj); - - vma = add_vma(obj, NULL); - msm_gem_unlock(obj); - if (IS_ERR(vma)) { - ret = PTR_ERR(vma); - goto fail; - } - - to_msm_bo(obj)->vram_node = &vma->node; - - 
msm_gem_lock(obj); - pages = get_pages(obj); - msm_gem_unlock(obj); - if (IS_ERR(pages)) { - ret = PTR_ERR(pages); - goto fail; - } - - vma->iova = physaddr(obj); - } else { - ret = drm_gem_object_init(dev, obj, size); - if (ret) - goto fail; - /* - * Our buffers are kept pinned, so allocating them from the - * MOVABLE zone is a really bad idea, and conflicts with CMA. - * See comments above new_inode() why this is required _and_ - * expected if you're going to pin these pages. - */ - mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER); - } + ret = drm_gem_object_init(dev, obj, size); + if (ret) + goto fail; + /* + * Our buffers are kept pinned, so allocating them from the + * MOVABLE zone is a really bad idea, and conflicts with CMA. + * See comments above new_inode() why this is required _and_ + * expected if you're going to pin these pages. + */ + mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER); drm_gem_lru_move_tail(&priv->lru.unbacked, obj); @@ -1305,12 +1204,6 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, uint32_t size; int ret, npages; - /* if we don't have IOMMU, don't bother pretending we can import: */ - if (!msm_use_mmu(dev)) { - DRM_DEV_ERROR(dev->dev, "cannot import without IOMMU\n"); - return ERR_PTR(-EINVAL); - } - size = PAGE_ALIGN(dmabuf->size); ret = msm_gem_new_impl(dev, size, MSM_BO_WC, &obj); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index d2f39a371373..c16b11182831 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -102,11 +102,6 @@ struct msm_gem_object { struct list_head vmas; /* list of msm_gem_vma */ - /* For physically contiguous buffers. Used when we don't have - * an IOMMU. Also used for stolen/splashscreen buffer. - */ - struct drm_mm_node *vram_node; - char name[32]; /* Identifier to print for the debugfs files */ /* userspace metadata backchannel */ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 95da4714fffb..a186b7dfea35 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -659,11 +659,6 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (args->pad) return -EINVAL; - if (unlikely(!ctx->vm) && !capable(CAP_SYS_RAWIO)) { - DRM_ERROR_RATELIMITED("IOMMU support or CAP_SYS_RAWIO required!\n"); - return -EPERM; - } - /* for now, we just have 3d pipe.. 
eventually this would need to * be more clever to dispatch to appropriate gpu module: */

From patchwork Wed Mar 19 14:52:21 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 09/34] drm/msm: Collapse vma allocation and initialization
Date: Wed, 19 Mar 2025 07:52:21 -0700
Message-ID: <20250319145425.51935-10-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>

From: Rob Clark

Now that we've dropped vram carveout support, we can collapse vma allocation and initialization. This better matches how things work with drm_gpuvm.
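To sketch the caller-side change (a simplified restatement of the get_vma_locked() hunk below, not additional code in this patch):

   /* before: allocate, then initialize, with a manual cleanup path */
   vma = add_vma(obj, vm);
   if (IS_ERR(vma))
           return vma;
   ret = msm_gem_vma_init(vma, obj->size, range_start, range_end);
   if (ret) {
           del_vma(vma);
           return ERR_PTR(ret);
   }

   /* after: a single call allocates the vma and reserves its iova */
   vma = msm_gem_vma_new(vm, obj, range_start, range_end);
   if (IS_ERR(vma))
           return vma;
   list_add_tail(&vma->list, &msm_obj->vmas);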
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 30 +++----------------------- drivers/gpu/drm/msm/msm_gem.h | 4 ++-- drivers/gpu/drm/msm/msm_gem_vma.c | 36 +++++++++++++------------------ 3 files changed, 20 insertions(+), 50 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 621fb4e17a2e..29247911f048 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -337,23 +337,6 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj) return offset; } -static struct msm_gem_vma *add_vma(struct drm_gem_object *obj, - struct msm_gem_vm *vm) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_gem_vma *vma; - - msm_gem_assert_locked(obj); - - vma = msm_gem_vma_new(vm); - if (!vma) - return ERR_PTR(-ENOMEM); - - list_add_tail(&vma->list, &msm_obj->vmas); - - return vma; -} - static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, struct msm_gem_vm *vm) { @@ -420,6 +403,7 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, struct msm_gem_vm *vm, u64 range_start, u64 range_end) { + struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; msm_gem_assert_locked(obj); @@ -427,18 +411,10 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, vma = lookup_vma(obj, vm); if (!vma) { - int ret; - - vma = add_vma(obj, vm); + vma = msm_gem_vma_new(vm, obj, range_start, range_end); if (IS_ERR(vma)) return vma; - - ret = msm_gem_vma_init(vma, obj->size, - range_start, range_end); - if (ret) { - del_vma(vma); - return ERR_PTR(ret); - } + list_add_tail(&vma->list, &msm_obj->vmas); } else { GEM_WARN_ON(vma->iova < range_start); GEM_WARN_ON((vma->iova + obj->size) > range_end); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index c16b11182831..9bd78642671c 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -66,8 +66,8 @@ struct msm_gem_vma { bool mapped; }; -struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm); -int msm_gem_vma_init(struct msm_gem_vma *vma, int size, +struct msm_gem_vma * +msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, u64 range_start, u64 range_end); void msm_gem_vma_purge(struct msm_gem_vma *vma); int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size); diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index 9419692f0cc8..6d18364f321c 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -106,47 +106,41 @@ void msm_gem_vma_close(struct msm_gem_vma *vma) msm_gem_vm_put(vm); } -struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm) +/* Create a new vma and allocate an iova for it */ +struct msm_gem_vma * +msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, + u64 range_start, u64 range_end) { struct msm_gem_vma *vma; + int ret; vma = kzalloc(sizeof(*vma), GFP_KERNEL); if (!vma) - return NULL; + return ERR_PTR(-ENOMEM); vma->vm = vm; - return vma; -} - -/* Initialize a new vma and allocate an iova for it */ -int msm_gem_vma_init(struct msm_gem_vma *vma, int size, - u64 range_start, u64 range_end) -{ - struct msm_gem_vm *vm = vma->vm; - int ret; - - if (GEM_WARN_ON(!vm)) - return -EINVAL; - - if (GEM_WARN_ON(vma->iova)) - return -EBUSY; - spin_lock(&vm->lock); ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, - size, PAGE_SIZE, 0, + obj->size, PAGE_SIZE, 0, range_start, range_end, 0); spin_unlock(&vm->lock); if (ret) - return ret; + goto 
err_free_vma; vma->iova = vma->node.start; vma->mapped = false; + INIT_LIST_HEAD(&vma->list); + kref_get(&vm->kref); - return 0; + return vma; + +err_free_vma: + kfree(vma); + return ERR_PTR(ret); } struct msm_gem_vm *

From patchwork Wed Mar 19 14:52:22 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 10/34] drm/msm: Collapse vma close and delete
Date: Wed, 19 Mar 2025 07:52:22 -0700
Message-ID: <20250319145425.51935-11-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>

From: Rob Clark

This fits better with drm_gpuvm/drm_gpuva.

Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 16 +++------------- drivers/gpu/drm/msm/msm_gem_vma.c | 2 ++ 2 files changed, 5 insertions(+), 13 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 29247911f048..4c10eca404e0 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -353,15 +353,6 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, return NULL; } -static void del_vma(struct msm_gem_vma *vma) -{ - if (!vma) - return; - - list_del(&vma->list); - kfree(vma); -} - /* * If close is true, this also closes the VMA (releasing the allocated * iova range) in addition to removing the iommu mapping.
In the eviction @@ -372,11 +363,11 @@ static void put_iova_spaces(struct drm_gem_object *obj, bool close) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_gem_vma *vma; + struct msm_gem_vma *vma, *tmp; msm_gem_assert_locked(obj); - list_for_each_entry(vma, &msm_obj->vmas, list) { + list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) { if (vma->vm) { msm_gem_vma_purge(vma); if (close) @@ -395,7 +386,7 @@ put_iova_vmas(struct drm_gem_object *obj) msm_gem_assert_locked(obj); list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) { - del_vma(vma); + msm_gem_vma_close(vma); } } @@ -564,7 +555,6 @@ static int clear_iova(struct drm_gem_object *obj, msm_gem_vma_purge(vma); msm_gem_vma_close(vma); - del_vma(vma); return 0; } diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index 6d18364f321c..ca29e81d79d2 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -102,8 +102,10 @@ void msm_gem_vma_close(struct msm_gem_vma *vma) spin_unlock(&vm->lock); vma->iova = 0; + list_del(&vma->list); msm_gem_vm_put(vm); + kfree(vma); } /* Create a new vma and allocate an iova for it */
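A note on why put_iova_spaces() above moves to the _safe iterator: with close and delete collapsed, msm_gem_vma_close() now does list_del() plus kfree() on the element the loop cursor points at, so a plain list_for_each_entry() would advance through freed memory. A minimal sketch of the pattern the patch adopts:

	struct msm_gem_vma *vma, *tmp;

	list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) {
		/* close() unlinks and frees 'vma'; 'tmp' already caches the next node */
		msm_gem_vma_close(vma);
	}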
From patchwork Wed Mar 19 14:52:23 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, Konrad Dybcio, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 11/34] drm/msm: drm_gpuvm conversion
Date: Wed, 19 Mar 2025 07:52:23 -0700
Message-ID: <20250319145425.51935-12-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>

From: Rob Clark

Now that we've realigned deletion and allocation, switch over to using drm_gpuvm/drm_gpuva. This allows us to support multiple VMAs per BO per VM, so that different parts of a single BO can be mapped at different virtual addresses, which is a key requirement for sparse/VM_BIND.

This prepares us for using drm_gpuvm to translate a batch of MAP/MAP_NULL/UNMAP operations from userspace into a sequence of map/remap/unmap steps for updating the page tables.
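For readers unfamiliar with drm_gpuvm's split/merge machinery: a single userspace MAP that overlaps existing VMAs is decomposed by the core into per-VMA driver callbacks. A rough sketch of the callback wiring this eventually enables (the msm_vm_op_* handler names are hypothetical, not something this patch adds):

	static const struct drm_gpuvm_ops msm_vm_bind_ops = {
		/* install the newly requested mapping */
		.sm_step_map = msm_vm_op_map,
		/* split an existing VMA that partially overlaps the request */
		.sm_step_remap = msm_vm_op_remap,
		/* tear down existing VMAs wholly covered by the request */
		.sm_step_unmap = msm_vm_op_unmap,
	};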
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/Kconfig | 1 + drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 3 +- drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 6 +- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 5 +- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 7 +- drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 5 +- drivers/gpu/drm/msm/msm_drv.c | 1 + drivers/gpu/drm/msm/msm_gem.c | 142 ++++++++++++++--------- drivers/gpu/drm/msm/msm_gem.h | 87 +++++++++++--- drivers/gpu/drm/msm/msm_gem_submit.c | 2 +- drivers/gpu/drm/msm/msm_gem_vma.c | 139 +++++++++++++++------- drivers/gpu/drm/msm/msm_kms.c | 4 +- 12 files changed, 268 insertions(+), 134 deletions(-) diff --git a/drivers/gpu/drm/msm/Kconfig b/drivers/gpu/drm/msm/Kconfig index 974bc7c0ea76..4af7e896c1d4 100644 --- a/drivers/gpu/drm/msm/Kconfig +++ b/drivers/gpu/drm/msm/Kconfig @@ -21,6 +21,7 @@ config DRM_MSM select DRM_DISPLAY_HELPER select DRM_BRIDGE_CONNECTOR select DRM_EXEC + select DRM_GPUVM select DRM_KMS_HELPER select DRM_PANEL select DRM_BRIDGE diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c index db1aa281ce47..94c49ed057cd 100644 --- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c @@ -472,8 +472,7 @@ a2xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) struct msm_mmu *mmu = a2xx_gpummu_new(&pdev->dev, gpu); struct msm_gem_vm *vm; - vm = msm_gem_vm_create(mmu, "gpu", SZ_16M, - 0xfff * SZ_64K); + vm = msm_gem_vm_create(gpu->dev, mmu, "gpu", SZ_16M, 0xfff * SZ_64K, true); if (IS_ERR(vm) && !IS_ERR(mmu)) mmu->funcs->destroy(mmu); diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c index 4c459ae25cba..259a589a827d 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c @@ -1311,7 +1311,7 @@ static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo, return 0; } -static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu) +static int a6xx_gmu_memory_probe(struct drm_device *drm, struct a6xx_gmu *gmu) { struct msm_mmu *mmu; @@ -1321,7 +1321,7 @@ static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu) if (IS_ERR(mmu)) return PTR_ERR(mmu); - gmu->vm = msm_gem_vm_create(mmu, "gmu", 0x0, 0x80000000); + gmu->vm = msm_gem_vm_create(drm, mmu, "gmu", 0x0, 0x80000000, true); if (IS_ERR(gmu->vm)) return PTR_ERR(gmu->vm); @@ -1940,7 +1940,7 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node) if (ret) goto err_put_device; - ret = a6xx_gmu_memory_probe(gmu); + ret = a6xx_gmu_memory_probe(adreno_gpu->base.dev, gmu); if (ret) goto err_put_device; diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index fa63149bf73f..a124249f7a1d 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -2271,9 +2271,8 @@ a6xx_create_private_vm(struct msm_gpu *gpu) if (IS_ERR(mmu)) return ERR_CAST(mmu); - return msm_gem_vm_create(mmu, - "gpu", 0x100000000ULL, - adreno_private_vm_size(gpu)); + return msm_gem_vm_create(gpu->dev, mmu, "gpu", 0x100000000ULL, + adreno_private_vm_size(gpu), true); } static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring) diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index ffbbf3d5ce2f..0ba1819833ab 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -224,7 +224,8 @@ adreno_iommu_create_vm(struct msm_gpu *gpu, start = max_t(u64, SZ_16M, geometry->aperture_start); 
size = geometry->aperture_end - start + 1; - vm = msm_gem_vm_create(mmu, "gpu", start & GENMASK_ULL(48, 0), size); + vm = msm_gem_vm_create(gpu->dev, mmu, "gpu", start & GENMASK_ULL(48, 0), + size, true); if (IS_ERR(vm) && !IS_ERR(mmu)) mmu->funcs->destroy(mmu); @@ -403,12 +404,12 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, case MSM_PARAM_VA_START: if (ctx->vm == gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value = ctx->vm->va_start; + *value = ctx->vm->base.mm_start; return 0; case MSM_PARAM_VA_SIZE: if (ctx->vm == gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value = ctx->vm->va_size; + *value = ctx->vm->base.mm_range; return 0; case MSM_PARAM_HIGHEST_BANK_BIT: *value = adreno_gpu->ubwc_config.highest_bank_bit; diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c index 94fbc20b2fbd..d5b5628bee24 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c @@ -451,8 +451,9 @@ static int mdp4_kms_init(struct drm_device *dev) "contig buffers for scanout\n"); vm = NULL; } else { - vm = msm_gem_vm_create(mmu, - "mdp4", 0x1000, 0x100000000 - 0x1000); + vm = msm_gem_vm_create(dev, mmu, "mdp4", + 0x1000, 0x100000000 - 0x1000, + true); if (IS_ERR(vm)) { if (!IS_ERR(mmu)) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 978f1d355b42..6ef29bc48bb0 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -776,6 +776,7 @@ static const struct file_operations fops = { static const struct drm_driver msm_driver = { .driver_features = DRIVER_GEM | + DRIVER_GEM_GPUVA | DRIVER_RENDER | DRIVER_ATOMIC | DRIVER_MODESET | diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 4c10eca404e0..7901871c66cc 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -47,9 +47,32 @@ static int msm_gem_open(struct drm_gem_object *obj, struct drm_file *file) return 0; } +static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close); + static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file) { + struct msm_context *ctx = file->driver_priv; + update_ctx_mem(file, -obj->size); + + /* + * If VM isn't created yet, nothing to cleanup. And in fact calling + * put_iova_spaces() with vm=NULL would be bad, in that it will tear- + * down the mappings of shared buffers in other contexts. + */ + if (!ctx->vm) + return; + + /* + * TODO we might need to kick this to a queue to avoid blocking + * in CLOSE ioctl + */ + dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_READ, false, + msecs_to_jiffies(1000)); + + msm_gem_lock(obj); + put_iova_spaces(obj, &ctx->vm->base, true); + msm_gem_unlock(obj); } /* @@ -171,6 +194,13 @@ static void put_pages(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); + /* + * Skip gpuvm in the object free path to avoid a WARN_ON() splat. 
+ * See explaination in msm_gem_assert_locked() + */ + if (kref_read(&obj->refcount)) + drm_gpuvm_bo_gem_evict(obj, true); + if (msm_obj->pages) { if (msm_obj->sgt) { /* For non-cached buffers, ensure the new @@ -338,16 +368,25 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj) } static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, - struct msm_gem_vm *vm) + struct msm_gem_vm *vm) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_gem_vma *vma; + struct drm_gpuvm_bo *vm_bo; msm_gem_assert_locked(obj); - list_for_each_entry(vma, &msm_obj->vmas, list) { - if (vma->vm == vm) - return vma; + drm_gem_for_each_gpuvm_bo (vm_bo, obj) { + struct drm_gpuva *vma; + + drm_gpuvm_bo_for_each_va (vma, vm_bo) { + if (vma->vm == &vm->base) { + /* lookup_vma() should only be used in paths + * with at most one vma per vm + */ + GEM_WARN_ON(!list_is_singular(&vm_bo->list.gpuva)); + + return to_msm_vma(vma); + } + } } return NULL; @@ -360,33 +399,29 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, * mapping. */ static void -put_iova_spaces(struct drm_gem_object *obj, bool close) +put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_gem_vma *vma, *tmp; + struct drm_gpuvm_bo *vm_bo, *tmp; msm_gem_assert_locked(obj); - list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) { - if (vma->vm) { - msm_gem_vma_purge(vma); - if (close) - msm_gem_vma_close(vma); - } - } -} + drm_gem_for_each_gpuvm_bo_safe (vm_bo, tmp, obj) { + struct drm_gpuva *vma, *vmatmp; -/* Called with msm_obj locked */ -static void -put_iova_vmas(struct drm_gem_object *obj) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_gem_vma *vma, *tmp; + if (vm && vm_bo->vm != vm) + continue; - msm_gem_assert_locked(obj); + drm_gpuvm_bo_get(vm_bo); - list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) { - msm_gem_vma_close(vma); + drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) { + struct msm_gem_vma *msm_vma = to_msm_vma(vma); + + msm_gem_vma_purge(msm_vma); + if (close) + msm_gem_vma_close(msm_vma); + } + + drm_gpuvm_bo_put(vm_bo); } } @@ -394,7 +429,6 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, struct msm_gem_vm *vm, u64 range_start, u64 range_end) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; msm_gem_assert_locked(obj); @@ -403,12 +437,9 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, if (!vma) { vma = msm_gem_vma_new(vm, obj, range_start, range_end); - if (IS_ERR(vma)) - return vma; - list_add_tail(&vma->list, &msm_obj->vmas); } else { - GEM_WARN_ON(vma->iova < range_start); - GEM_WARN_ON((vma->iova + obj->size) > range_end); + GEM_WARN_ON(vma->base.va.addr < range_start); + GEM_WARN_ON((vma->base.va.addr + obj->size) > range_end); } return vma; @@ -492,7 +523,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, ret = msm_gem_pin_vma_locked(obj, vma); if (!ret) { - *iova = vma->iova; + *iova = vma->base.va.addr; pin_obj_locked(obj); } @@ -538,7 +569,7 @@ int msm_gem_get_iova(struct drm_gem_object *obj, if (IS_ERR(vma)) { ret = PTR_ERR(vma); } else { - *iova = vma->iova; + *iova = vma->base.va.addr; } msm_gem_unlock(obj); @@ -579,7 +610,7 @@ int msm_gem_set_iova(struct drm_gem_object *obj, vma = get_vma_locked(obj, vm, iova, iova + obj->size); if (IS_ERR(vma)) { ret = PTR_ERR(vma); - } else if (GEM_WARN_ON(vma->iova != iova)) { + } else if (GEM_WARN_ON(vma->base.va.addr != 
iova)) { clear_iova(obj, vm); ret = -EBUSY; } @@ -763,7 +794,7 @@ void msm_gem_purge(struct drm_gem_object *obj) GEM_WARN_ON(!is_purgeable(msm_obj)); /* Get rid of any iommu mapping(s): */ - put_iova_spaces(obj, true); + put_iova_spaces(obj, NULL, true); msm_gem_vunmap(obj); @@ -771,8 +802,6 @@ void msm_gem_purge(struct drm_gem_object *obj) put_pages(obj); - put_iova_vmas(obj); - mutex_lock(&priv->lru.lock); /* A one-way transition: */ msm_obj->madv = __MSM_MADV_PURGED; @@ -803,7 +832,7 @@ void msm_gem_evict(struct drm_gem_object *obj) GEM_WARN_ON(is_unevictable(msm_obj)); /* Get rid of any iommu mapping(s): */ - put_iova_spaces(obj, false); + put_iova_spaces(obj, NULL, false); drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping); @@ -869,7 +898,6 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m, { struct msm_gem_object *msm_obj = to_msm_bo(obj); struct dma_resv *robj = obj->resv; - struct msm_gem_vma *vma; uint64_t off = drm_vma_node_start(&obj->vma_node); const char *madv; @@ -912,14 +940,17 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m, seq_printf(m, " %08zu %9s %-32s\n", obj->size, madv, msm_obj->name); - if (!list_empty(&msm_obj->vmas)) { + if (!list_empty(&obj->gpuva.list)) { + struct drm_gpuvm_bo *vm_bo; seq_puts(m, " vmas:"); - list_for_each_entry(vma, &msm_obj->vmas, list) { - const char *name, *comm; - if (vma->vm) { - struct msm_gem_vm *vm = vma->vm; + drm_gem_for_each_gpuvm_bo (vm_bo, obj) { + struct drm_gpuva *vma; + + drm_gpuvm_bo_for_each_va (vma, vm_bo) { + const char *name, *comm; + struct msm_gem_vm *vm = to_msm_vm(vma->vm); struct task_struct *task = get_pid_task(vm->pid, PIDTYPE_PID); if (task) { @@ -928,15 +959,14 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m, } else { comm = NULL; } - name = vm->name; - } else { - name = comm = NULL; + name = vm->base.name; + + seq_printf(m, " [%s%s%s: vm=%p, %08llx,%smapped]", + name, comm ? ":" : "", comm ? comm : "", + vma->vm, vma->va.addr, + to_msm_vma(vma)->mapped ? "" : "un"); + kfree(comm); } - seq_printf(m, " [%s%s%s: vm=%p, %08llx,%s]", - name, comm ? ":" : "", comm ? comm : "", - vma->vm, vma->iova, - vma->mapped ? 
"mapped" : "unmapped"); - kfree(comm); } seq_puts(m, "\n"); @@ -982,7 +1012,7 @@ static void msm_gem_free_object(struct drm_gem_object *obj) list_del(&msm_obj->node); mutex_unlock(&priv->obj_lock); - put_iova_spaces(obj, true); + put_iova_spaces(obj, NULL, true); if (obj->import_attach) { GEM_WARN_ON(msm_obj->vaddr); @@ -992,13 +1022,10 @@ static void msm_gem_free_object(struct drm_gem_object *obj) */ kvfree(msm_obj->pages); - put_iova_vmas(obj); - drm_prime_gem_destroy(obj, msm_obj->sgt); } else { msm_gem_vunmap(obj); put_pages(obj); - put_iova_vmas(obj); } drm_gem_object_release(obj); @@ -1104,7 +1131,6 @@ static int msm_gem_new_impl(struct drm_device *dev, msm_obj->madv = MSM_MADV_WILLNEED; INIT_LIST_HEAD(&msm_obj->node); - INIT_LIST_HEAD(&msm_obj->vmas); *obj = &msm_obj->base; (*obj)->funcs = &msm_gem_object_funcs; diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 9bd78642671c..5091892bbe2e 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -10,6 +10,7 @@ #include #include #include "drm/drm_exec.h" +#include "drm/drm_gpuvm.h" #include "drm/gpu_scheduler.h" #include "msm_drv.h" @@ -22,30 +23,67 @@ #define MSM_BO_STOLEN 0x10000000 /* try to use stolen/splash memory */ #define MSM_BO_MAP_PRIV 0x20000000 /* use IOMMU_PRIV when mapping */ +/** + * struct msm_gem_vm - VM object + * + * A VM object representing a GPU (or display or GMU or ...) virtual address + * space. + * + * In the case of GPU, if per-process address spaces are supported, the address + * space is split into two VMs, which map to TTBR0 and TTBR1 in the SMMU. TTBR0 + * is used for userspace objects, and is unique per msm_context/drm_file, while + * TTBR1 is the same for all processes. (The kernel controlled ringbuffer and + * a few other kernel controlled buffers live in TTBR1.) + * + * The GPU TTBR0 vm can be managed by userspace or by the kernel, depending on + * whether userspace supports VM_BIND. All other vm's are managed by the kernel. + * (Managed by kernel means the kernel is responsible for VA allocation.) + * + * Note that because VM_BIND allows a given BO to be mapped multiple times in + * a VM, and therefore have multiple VMA's in a VM, there is an extra object + * provided by drm_gpuvm infrastructure.. the drm_gpuvm_bo, which is not + * embedded in any larger driver structure. The GEM object holds a list of + * drm_gpuvm_bo, which in turn holds a list of msm_gem_vma. A linked vma + * holds a reference to the vm_bo, and drops it when the vma is unlinked. + * So we just need to call drm_gpuvm_bo_obtain() to return a ref to an + * existing vm_bo, or create a new one. Once the vma is linked, the ref + * to the vm_bo can be dropped (since the vma is holding one). + */ struct msm_gem_vm { - const char *name; - /* NOTE: mm managed at the page level, size is in # of pages - * and position mm_node->start is in # of pages: + /** @base: Inherit from drm_gpuvm. */ + struct drm_gpuvm base; + + /** + * @mm: Memory management for kernel managed VA allocations + * + * Only used for kernel managed VMs, unused for user managed VMs. + * + * Protected by @mm_lock. 
*/ struct drm_mm mm; - spinlock_t lock; /* Protects drm_mm node allocation/removal */ + + /** @mm_lock: protects @mm node allocation/removal */ + struct spinlock mm_lock; + + /** @vm_lock: protects gpuvm insert/remove/traverse */ + struct mutex vm_lock; + + /** @mmu: The mmu object which manages the pgtables */ struct msm_mmu *mmu; - struct kref kref; - /* For address spaces associated with a specific process, this + /** + * @pid: For address spaces associated with a specific process, this * will be non-NULL: */ struct pid *pid; - /* @faults: the number of GPU hangs associated with this address space */ + /** @faults: the number of GPU hangs associated with this address space */ int faults; - /** @va_start: lowest possible address to allocate */ - uint64_t va_start; - - /** @va_size: the size of the address space (in bytes) */ - uint64_t va_size; + /** @managed: is this a kernel managed VM? */ + bool managed; }; +#define to_msm_vm(x) container_of(x, struct msm_gem_vm, base) struct msm_gem_vm * msm_gem_vm_get(struct msm_gem_vm *vm); @@ -53,18 +91,31 @@ msm_gem_vm_get(struct msm_gem_vm *vm); void msm_gem_vm_put(struct msm_gem_vm *vm); struct msm_gem_vm * -msm_gem_vm_create(struct msm_mmu *mmu, const char *name, - u64 va_start, u64 size); +msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, + u64 va_start, u64 va_size, bool managed); struct msm_fence_context; +/** + * struct msm_gem_vma - a VMA mapping + * + * Represents a combination of a GEM object plus a VM. + */ struct msm_gem_vma { + /** @base: inherit from drm_gpuva */ + struct drm_gpuva base; + + /** + * @node: mm node for VA allocation + * + * Only used by kernel managed VMs + */ struct drm_mm_node node; - uint64_t iova; - struct msm_gem_vm *vm; - struct list_head list; /* node in msm_gem_object::vmas */ + + /** @mapped: Is this VMA mapped? 
*/ bool mapped; }; +#define to_msm_vma(x) container_of(x, struct msm_gem_vma, base) struct msm_gem_vma * msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, @@ -100,8 +151,6 @@ struct msm_gem_object { struct sg_table *sgt; void *vaddr; - struct list_head vmas; /* list of msm_gem_vma */ - char name[32]; /* Identifier to print for the debugfs files */ /* userspace metadata backchannel */ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index a186b7dfea35..e8a670566147 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -312,7 +312,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit) if (ret) break; - submit->bos[i].iova = vma->iova; + submit->bos[i].iova = vma->base.va.addr; } /* diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index ca29e81d79d2..56221dfdf551 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -5,14 +5,13 @@ */ #include "msm_drv.h" -#include "msm_fence.h" #include "msm_gem.h" #include "msm_mmu.h" static void -msm_gem_vm_destroy(struct kref *kref) +msm_gem_vm_free(struct drm_gpuvm *gpuvm) { - struct msm_gem_vm *vm = container_of(kref, struct msm_gem_vm, kref); + struct msm_gem_vm *vm = container_of(gpuvm, struct msm_gem_vm, base); drm_mm_takedown(&vm->mm); if (vm->mmu) @@ -25,14 +24,14 @@ msm_gem_vm_destroy(struct kref *kref) void msm_gem_vm_put(struct msm_gem_vm *vm) { if (vm) - kref_put(&vm->kref, msm_gem_vm_destroy); + drm_gpuvm_put(&vm->base); } struct msm_gem_vm * msm_gem_vm_get(struct msm_gem_vm *vm) { if (!IS_ERR_OR_NULL(vm)) - kref_get(&vm->kref); + drm_gpuvm_get(&vm->base); return vm; } @@ -40,14 +39,14 @@ msm_gem_vm_get(struct msm_gem_vm *vm) /* Actually unmap memory for the vma */ void msm_gem_vma_purge(struct msm_gem_vma *vma) { - struct msm_gem_vm *vm = vma->vm; - unsigned size = vma->node.size; + struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); + unsigned size = vma->base.va.range; /* Don't do anything if the memory isn't mapped */ if (!vma->mapped) return; - vm->mmu->funcs->unmap(vm->mmu, vma->iova, size); + vm->mmu->funcs->unmap(vm->mmu, vma->base.va.addr, size); vma->mapped = false; } @@ -57,10 +56,10 @@ int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size) { - struct msm_gem_vm *vm = vma->vm; + struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); int ret; - if (GEM_WARN_ON(!vma->iova)) + if (GEM_WARN_ON(!vma->base.va.addr)) return -EINVAL; if (vma->mapped) @@ -68,9 +67,6 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, vma->mapped = true; - if (!vm) - return 0; - /* * NOTE: iommu/io-pgtable can allocate pages, so we cannot hold * a lock across map/unmap which is also used in the job_run() @@ -80,7 +76,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. */ - ret = vm->mmu->funcs->map(vm->mmu, vma->iova, sgt, size, prot); + ret = vm->mmu->funcs->map(vm->mmu, vma->base.va.addr, sgt, size, prot); if (ret) { vma->mapped = false; @@ -92,19 +88,20 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, /* Close an iova. 
Warn if it is still in use */ void msm_gem_vma_close(struct msm_gem_vma *vma) { - struct msm_gem_vm *vm = vma->vm; + struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); GEM_WARN_ON(vma->mapped); - spin_lock(&vm->lock); - if (vma->iova) + spin_lock(&vm->mm_lock); + if (vma->base.va.addr) drm_mm_remove_node(&vma->node); - spin_unlock(&vm->lock); + spin_unlock(&vm->mm_lock); - vma->iova = 0; - list_del(&vma->list); + mutex_lock(&vm->vm_lock); + drm_gpuva_remove(&vma->base); + drm_gpuva_unlink(&vma->base); + mutex_unlock(&vm->vm_lock); - msm_gem_vm_put(vm); kfree(vma); } @@ -113,6 +110,7 @@ struct msm_gem_vma * msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, u64 range_start, u64 range_end) { + struct drm_gpuvm_bo *vm_bo; struct msm_gem_vma *vma; int ret; @@ -120,36 +118,82 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, if (!vma) return ERR_PTR(-ENOMEM); - vma->vm = vm; + if (vm->managed) { + spin_lock(&vm->mm_lock); + ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, + obj->size, PAGE_SIZE, 0, + range_start, range_end, 0); + spin_unlock(&vm->mm_lock); - spin_lock(&vm->lock); - ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, - obj->size, PAGE_SIZE, 0, - range_start, range_end, 0); - spin_unlock(&vm->lock); + if (ret) + goto err_free_vma; - if (ret) - goto err_free_vma; + range_start = vma->node.start; + range_end = range_start + obj->size; + } - vma->iova = vma->node.start; + GEM_WARN_ON((range_end - range_start) > obj->size); + + drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0); vma->mapped = false; - INIT_LIST_HEAD(&vma->list); + mutex_lock(&vm->vm_lock); + ret = drm_gpuva_insert(&vm->base, &vma->base); + mutex_unlock(&vm->vm_lock); + if (ret) + goto err_free_range; - kref_get(&vm->kref); + vm_bo = drm_gpuvm_bo_obtain(&vm->base, obj); + if (IS_ERR(vm_bo)) { + ret = PTR_ERR(vm_bo); + goto err_va_remove; + } + + mutex_lock(&vm->vm_lock); + drm_gpuva_link(&vma->base, vm_bo); + mutex_unlock(&vm->vm_lock); + GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo)); return vma; +err_va_remove: + mutex_lock(&vm->vm_lock); + drm_gpuva_remove(&vma->base); + mutex_unlock(&vm->vm_lock); +err_free_range: + if (vm->managed) + drm_mm_remove_node(&vma->node); err_free_vma: kfree(vma); return ERR_PTR(ret); } +static const struct drm_gpuvm_ops msm_gpuvm_ops = { + .vm_free = msm_gem_vm_free, +}; + +/** + * msm_gem_vm_create() - Create and initialize a &msm_gem_vm + * @drm: the drm device + * @mmu: the backing MMU objects handling mapping/unmapping + * @name: the name of the VM + * @va_start: the start offset of the GPU VA space + * @va_size: the size of the GPU VA space + * @managed: is it a kernel managed VM? + * + * In a kernel managed VM, the kernel handles address allocation, and only + * synchronous operations are supported. In a user managed VM, userspace + * handles virtual address allocation, and both async and sync operations + * are supported. + */ struct msm_gem_vm * -msm_gem_vm_create(struct msm_mmu *mmu, const char *name, - u64 va_start, u64 size) +msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, + u64 va_start, u64 va_size, bool managed) { + enum drm_gpuvm_flags flags = managed ? 
DRM_GPUVM_VA_WEAK_REF : 0; struct msm_gem_vm *vm; struct drm_gem_object *dummy_gem; int ret = 0; if (IS_ERR(mmu)) return ERR_CAST(mmu); @@ -158,15 +202,28 @@ msm_gem_vm_create(struct msm_mmu *mmu, const char *name, if (!vm) return ERR_PTR(-ENOMEM); - spin_lock_init(&vm->lock); - vm->name = name; - vm->mmu = mmu; - vm->va_start = va_start; - vm->va_size = size; + dummy_gem = drm_gpuvm_resv_object_alloc(drm); + if (!dummy_gem) { + ret = -ENOMEM; + goto err_free_vm; + } + + drm_gpuvm_init(&vm->base, name, flags, drm, dummy_gem, + va_start, va_size, 0, 0, &msm_gpuvm_ops); + drm_gem_object_put(dummy_gem); + + spin_lock_init(&vm->mm_lock); + mutex_init(&vm->vm_lock); - drm_mm_init(&vm->mm, va_start, size); + vm->mmu = mmu; + vm->managed = managed; - kref_init(&vm->kref); + drm_mm_init(&vm->mm, va_start, va_size); return vm; + +err_free_vm: + kfree(vm); + return ERR_PTR(ret); + } diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c index 88504c4b842f..6458bd82a0cd 100644 --- a/drivers/gpu/drm/msm/msm_kms.c +++ b/drivers/gpu/drm/msm/msm_kms.c @@ -204,8 +204,8 @@ struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev) return NULL; } - vm = msm_gem_vm_create(mmu, "mdp_kms", - 0x1000, 0x100000000 - 0x1000); + vm = msm_gem_vm_create(dev, mmu, "mdp_kms", + 0x1000, 0x100000000 - 0x1000, true); if (IS_ERR(vm)) { dev_err(mdp_dev, "vm create, error %pe\n", vm); mmu->funcs->destroy(mmu);
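To recap the constructor shape at this point in the series: msm_gem_vm_create() now takes the drm_device and a 'managed' flag. A sketch based on the a2xx call site shown earlier (illustrative; the values are copied from that diff):

	struct msm_gem_vm *vm;

	vm = msm_gem_vm_create(gpu->dev, mmu,	/* drm device + backing MMU */
			       "gpu",		/* name, shown in debugfs output */
			       SZ_16M,		/* va_start */
			       0xfff * SZ_64K,	/* va_size */
			       true);		/* kernel-managed VA allocation */
	if (IS_ERR(vm) && !IS_ERR(mmu))
		mmu->funcs->destroy(mmu);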
From patchwork Wed Mar 19 14:52:24 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter, Jessica Zhang, Jani Nikula, Barnabás Czémán, Arnd Bergmann, Jonathan Marek, Krzysztof Kozlowski, Eugene Lepshy, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 12/34] drm/msm: Use drm_gpuvm types more
Date: Wed, 19 Mar 2025 07:52:24 -0700
Message-ID: <20250319145425.51935-13-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>

From: Rob Clark

Most of the driver code doesn't need to reach into msm-specific fields, so just use the drm_gpuvm/drm_gpuva types directly.
This should hopefully improve commonality with other drivers and make the code easier to understand. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 6 +- drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 6 +- drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 6 +- drivers/gpu/drm/msm/adreno/a6xx_gmu.h | 2 +- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 14 ++-- drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 2 +- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 21 +++-- drivers/gpu/drm/msm/adreno/adreno_gpu.h | 4 +- .../drm/msm/disp/dpu1/dpu_encoder_phys_wb.c | 4 +- drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c | 6 +- drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h | 2 +- drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 6 +- drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h | 2 +- drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 11 +-- drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c | 11 +-- drivers/gpu/drm/msm/dsi/dsi_host.c | 6 +- drivers/gpu/drm/msm/msm_drv.h | 19 ++--- drivers/gpu/drm/msm/msm_fb.c | 14 ++-- drivers/gpu/drm/msm/msm_gem.c | 82 +++++++++---------- drivers/gpu/drm/msm/msm_gem.h | 53 ++++++------ drivers/gpu/drm/msm/msm_gem_submit.c | 4 +- drivers/gpu/drm/msm/msm_gem_vma.c | 72 +++++++--------- drivers/gpu/drm/msm/msm_gpu.c | 21 +++-- drivers/gpu/drm/msm/msm_gpu.h | 10 +-- drivers/gpu/drm/msm/msm_kms.c | 6 +- drivers/gpu/drm/msm/msm_kms.h | 2 +- drivers/gpu/drm/msm/msm_submitqueue.c | 2 +- 27 files changed, 192 insertions(+), 202 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c index 94c49ed057cd..c4c723a0bf1a 100644 --- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c @@ -113,7 +113,7 @@ static int a2xx_hw_init(struct msm_gpu *gpu) uint32_t *ptr, len; int i, ret; - a2xx_gpummu_params(gpu->vm->mmu, &pt_base, &tran_error); + a2xx_gpummu_params(to_msm_vm(gpu->vm)->mmu, &pt_base, &tran_error); DBG("%s", gpu->name); @@ -466,11 +466,11 @@ static struct msm_gpu_state *a2xx_gpu_state_get(struct msm_gpu *gpu) return state; } -static struct msm_gem_vm * +static struct drm_gpuvm * a2xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) { struct msm_mmu *mmu = a2xx_gpummu_new(&pdev->dev, gpu); - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; vm = msm_gem_vm_create(gpu->dev, mmu, "gpu", SZ_16M, 0xfff * SZ_64K, true); diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c index cce95ad3cfb8..9dd7dea84a4a 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c @@ -1786,8 +1786,10 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev) return ERR_PTR(ret); } - if (gpu->vm) - msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler); + if (gpu->vm) { + msm_mmu_set_fault_handler(to_msm_vm(gpu->vm)->mmu, gpu, + a5xx_fault_handler); + } /* Set up the preemption specific bits and pieces for each ringbuffer */ a5xx_preempt_init(gpu); diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c index 259a589a827d..32711c4967f7 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c @@ -1259,6 +1259,8 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu) static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu) { + struct msm_mmu *mmu = to_msm_vm(gmu->vm)->mmu; + msm_gem_kernel_put(gmu->hfi.obj, gmu->vm); msm_gem_kernel_put(gmu->debug.obj, gmu->vm); msm_gem_kernel_put(gmu->icache.obj, gmu->vm); @@ -1266,8 +1268,8 @@ static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu) msm_gem_kernel_put(gmu->dummy.obj, 
gmu->vm); msm_gem_kernel_put(gmu->log.obj, gmu->vm); - gmu->vm->mmu->funcs->detach(gmu->vm->mmu); - msm_gem_vm_put(gmu->vm); + mmu->funcs->detach(mmu); + drm_gpuvm_put(gmu->vm); } static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo, diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h index cceda7d9c33a..5da36226b93d 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h @@ -62,7 +62,7 @@ struct a6xx_gmu { /* For serializing communication with the GMU: */ struct mutex lock; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; void __iomem *mmio; void __iomem *rscc; diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index a124249f7a1d..4811be5a7c29 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -120,7 +120,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu, if (ctx->seqno == ring->cur_ctx_seqno) return; - if (msm_iommu_pagetable_params(ctx->vm->mmu, &ttbr, &asid)) + if (msm_iommu_pagetable_params(to_msm_vm(ctx->vm)->mmu, &ttbr, &asid)) return; if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) { @@ -2243,7 +2243,7 @@ static void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp, mutex_unlock(&a6xx_gpu->gmu.lock); } -static struct msm_gem_vm * +static struct drm_gpuvm * a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) { struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); @@ -2261,12 +2261,12 @@ a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) return adreno_iommu_create_vm(gpu, pdev, quirks); } -static struct msm_gem_vm * +static struct drm_gpuvm * a6xx_create_private_vm(struct msm_gpu *gpu) { struct msm_mmu *mmu; - mmu = msm_iommu_pagetable_create(gpu->vm->mmu); + mmu = msm_iommu_pagetable_create(to_msm_vm(gpu->vm)->mmu); if (IS_ERR(mmu)) return ERR_CAST(mmu); @@ -2546,8 +2546,10 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev) adreno_gpu->uche_trap_base = 0x1fffffffff000ull; - if (gpu->vm) - msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler); + if (gpu->vm) { + msm_mmu_set_fault_handler(to_msm_vm(gpu->vm)->mmu, gpu, + a6xx_fault_handler); + } a6xx_calc_ubwc_config(adreno_gpu); /* Set up the preemption specific bits and pieces for each ringbuffer */ diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c index 41229c60aa06..bd40d0f26e2c 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c @@ -376,7 +376,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu, struct a7xx_cp_smmu_info *smmu_info_ptr = ptr; - msm_iommu_pagetable_params(gpu->vm->mmu, &ttbr, &asid); + msm_iommu_pagetable_params(to_msm_vm(gpu->vm)->mmu, &ttbr, &asid); smmu_info_ptr->magic = GEN7_CP_SMMU_INFO_MAGIC; smmu_info_ptr->ttbr0 = ttbr; diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index 0ba1819833ab..0f71703f6ec7 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -191,21 +191,21 @@ int adreno_zap_shader_load(struct msm_gpu *gpu, u32 pasid) return zap_shader_load_mdt(gpu, adreno_gpu->info->zapfw, pasid); } -struct msm_gem_vm * +struct drm_gpuvm * adreno_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) { return adreno_iommu_create_vm(gpu, pdev, 0); } -struct msm_gem_vm * +struct drm_gpuvm * adreno_iommu_create_vm(struct msm_gpu *gpu, struct platform_device 
*pdev, unsigned long quirks) { struct iommu_domain_geometry *geometry; struct msm_mmu *mmu; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; u64 start, size; mmu = msm_iommu_gpu_new(&pdev->dev, gpu, quirks); @@ -259,9 +259,10 @@ void adreno_check_and_reenable_stall(struct adreno_gpu *adreno_gpu) if (!adreno_gpu->stall_enabled && ktime_after(ktime_get(), adreno_gpu->stall_reenable_time) && !READ_ONCE(gpu->crashstate)) { + struct msm_mmu *mmu = to_msm_vm(gpu->vm)->mmu; adreno_gpu->stall_enabled = true; - gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, true); + mmu->funcs->set_stall(mmu, true); } spin_unlock_irqrestore(&adreno_gpu->fault_stall_lock, flags); } @@ -275,6 +276,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags, struct adreno_smmu_fault_info *info, const char *block, u32 scratch[4]) { + struct msm_mmu *mmu = to_msm_vm(gpu->vm)->mmu; struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); const char *type = "UNKNOWN"; bool do_devcoredump = info && (info->fsr & ARM_SMMU_FSR_SS) && @@ -287,9 +289,10 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags, */ spin_lock_irqsave(&adreno_gpu->fault_stall_lock, irq_flags); if (adreno_gpu->stall_enabled) { + adreno_gpu->stall_enabled = false; - gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, false); + mmu->funcs->set_stall(mmu, false); } adreno_gpu->stall_reenable_time = ktime_add_ms(ktime_get(), 500); spin_unlock_irqrestore(&adreno_gpu->fault_stall_lock, irq_flags); @@ -299,7 +302,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags, * it now. */ if (!do_devcoredump) { - gpu->vm->mmu->funcs->resume_translation(gpu->vm->mmu); + mmu->funcs->resume_translation(mmu); } /* @@ -394,7 +397,7 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, return 0; case MSM_PARAM_FAULTS: if (ctx->vm) - *value = gpu->global_faults + ctx->vm->faults; + *value = gpu->global_faults + to_msm_vm(ctx->vm)->faults; else *value = gpu->global_faults; return 0; @@ -404,12 +407,12 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, case MSM_PARAM_VA_START: if (ctx->vm == gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value = ctx->vm->base.mm_start; + *value = ctx->vm->mm_start; return 0; case MSM_PARAM_VA_SIZE: if (ctx->vm == gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value = ctx->vm->base.mm_range; + *value = ctx->vm->mm_range; return 0; case MSM_PARAM_HIGHEST_BANK_BIT: *value = adreno_gpu->ubwc_config.highest_bank_bit; diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h index 7dbe09817edc..a76f4c62deee 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h @@ -642,11 +642,11 @@ void adreno_show_object(struct drm_printer *p, void **ptr, int len, * Common helper function to initialize the default address space for arm-smmu * attached targets */ -struct msm_gem_vm * +struct drm_gpuvm * adreno_create_vm(struct msm_gpu *gpu, struct platform_device *pdev); -struct msm_gem_vm * +struct drm_gpuvm * adreno_iommu_create_vm(struct msm_gpu *gpu, struct platform_device *pdev, unsigned long quirks); diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c index 32e208ee946d..3b02f4d1a7a5 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c @@ -566,7 +566,7 @@ static void 
dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc struct drm_writeback_job *job) { const struct msm_format *format; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; struct dpu_hw_wb_cfg *wb_cfg; int ret; struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc); @@ -619,7 +619,7 @@ static void dpu_encoder_phys_wb_cleanup_wb_job(struct dpu_encoder_phys *phys_enc struct drm_writeback_job *job) { struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc); - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; if (!job->fb) return; diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c index d115b79af771..6aef29590a3d 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c @@ -274,7 +274,7 @@ int dpu_format_populate_plane_sizes( return _dpu_format_populate_plane_sizes_linear(fmt, fb, layout); } -static void _dpu_format_populate_addrs_ubwc(struct msm_gem_vm *vm, +static void _dpu_format_populate_addrs_ubwc(struct drm_gpuvm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { @@ -355,7 +355,7 @@ static void _dpu_format_populate_addrs_ubwc(struct msm_gem_vm *vm, } } -static void _dpu_format_populate_addrs_linear(struct msm_gem_vm *vm, +static void _dpu_format_populate_addrs_linear(struct drm_gpuvm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { @@ -373,7 +373,7 @@ static void _dpu_format_populate_addrs_linear(struct msm_gem_vm *vm, * @fb: framebuffer pointer * @layout: format layout structure to populate */ -void dpu_format_populate_addrs(struct msm_gem_vm *vm, +void dpu_format_populate_addrs(struct drm_gpuvm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h index 989f3e13c497..127bf4f586db 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h @@ -31,7 +31,7 @@ static inline bool dpu_find_format(u32 format, const u32 *supported_formats, return false; } -void dpu_format_populate_addrs(struct msm_gem_vm *vm, +void dpu_format_populate_addrs(struct drm_gpuvm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout); diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c index bb5db6da636a..a9cd215cfd33 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c @@ -1098,17 +1098,17 @@ static void _dpu_kms_mmu_destroy(struct dpu_kms *dpu_kms) if (!dpu_kms->base.vm) return; - mmu = dpu_kms->base.vm->mmu; + mmu = to_msm_vm(dpu_kms->base.vm)->mmu; mmu->funcs->detach(mmu); - msm_gem_vm_put(dpu_kms->base.vm); + drm_gpuvm_put(dpu_kms->base.vm); dpu_kms->base.vm = NULL; } static int _dpu_kms_mmu_init(struct dpu_kms *dpu_kms) { - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; vm = msm_kms_init_vm(dpu_kms->dev); if (IS_ERR(vm)) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h index 3578f52048a5..fbf9c1fd6cfb 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h @@ -34,7 +34,7 @@ */ struct dpu_plane_state { struct drm_plane_state base; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; struct dpu_sw_pipe pipe; struct dpu_sw_pipe r_pipe; struct dpu_sw_pipe_cfg pipe_cfg; diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c index 
d5b5628bee24..9326ed3aab04 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c @@ -120,15 +120,16 @@ static void mdp4_destroy(struct msm_kms *kms) { struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms)); struct device *dev = mdp4_kms->dev->dev; - struct msm_gem_vm *vm = kms->vm; if (mdp4_kms->blank_cursor_iova) msm_gem_unpin_iova(mdp4_kms->blank_cursor_bo, kms->vm); drm_gem_object_put(mdp4_kms->blank_cursor_bo); - if (vm) { - vm->mmu->funcs->detach(vm->mmu); - msm_gem_vm_put(vm); + if (kms->vm) { + struct msm_mmu *mmu = to_msm_vm(kms->vm)->mmu; + + mmu->funcs->detach(mmu); + drm_gpuvm_put(kms->vm); } if (mdp4_kms->rpm_enabled) @@ -380,7 +381,7 @@ static int mdp4_kms_init(struct drm_device *dev) struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(priv->kms)); struct msm_kms *kms = NULL; struct msm_mmu *mmu; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; int ret; u32 major, minor; unsigned long max_clk; diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c index 9dca0385a42d..b6e6bd1f95ee 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c @@ -198,11 +198,12 @@ static void mdp5_destroy(struct mdp5_kms *mdp5_kms); static void mdp5_kms_destroy(struct msm_kms *kms) { struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms)); - struct msm_gem_vm *vm = kms->vm; - if (vm) { - vm->mmu->funcs->detach(vm->mmu); - msm_gem_vm_put(vm); + if (kms->vm) { + struct msm_mmu *mmu = to_msm_vm(kms->vm)->mmu; + + mmu->funcs->detach(mmu); + drm_gpuvm_put(kms->vm); } mdp_kms_destroy(&mdp5_kms->base); @@ -500,7 +501,7 @@ static int mdp5_kms_init(struct drm_device *dev) struct mdp5_kms *mdp5_kms; struct mdp5_cfg *config; struct msm_kms *kms = priv->kms; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; int i, ret; ret = mdp5_init(to_platform_device(dev->dev), dev); diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c index 16335ebd21e4..2d1699b7dc93 100644 --- a/drivers/gpu/drm/msm/dsi/dsi_host.c +++ b/drivers/gpu/drm/msm/dsi/dsi_host.c @@ -143,7 +143,7 @@ struct msm_dsi_host { /* DSI 6G TX buffer*/ struct drm_gem_object *tx_gem_obj; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; /* DSI v2 TX buffer */ void *tx_buf; @@ -1146,7 +1146,7 @@ int dsi_tx_buf_alloc_6g(struct msm_dsi_host *msm_host, int size) uint64_t iova; u8 *data; - msm_host->vm = msm_gem_vm_get(priv->kms->vm); + msm_host->vm = drm_gpuvm_get(priv->kms->vm); data = msm_gem_kernel_new(dev, size, MSM_BO_WC, msm_host->vm, @@ -1194,7 +1194,7 @@ void msm_dsi_tx_buf_free(struct mipi_dsi_host *host) if (msm_host->tx_gem_obj) { msm_gem_kernel_put(msm_host->tx_gem_obj, msm_host->vm); - msm_gem_vm_put(msm_host->vm); + drm_gpuvm_put(msm_host->vm); msm_host->tx_gem_obj = NULL; msm_host->vm = NULL; } diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index ad509403f072..b77fd2c531c3 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -48,8 +48,6 @@ struct msm_rd_state; struct msm_perf_state; struct msm_gem_submit; struct msm_fence_context; -struct msm_gem_vm; -struct msm_gem_vma; struct msm_disp_state; #define MAX_CRTCS 8 @@ -230,7 +228,7 @@ void msm_crtc_disable_vblank(struct drm_crtc *crtc); int msm_register_mmu(struct drm_device *dev, struct msm_mmu *mmu); void msm_unregister_mmu(struct drm_device *dev, struct msm_mmu *mmu); -struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev); +struct drm_gpuvm *msm_kms_init_vm(struct drm_device 
*dev); bool msm_use_mmu(struct drm_device *dev); int msm_ioctl_gem_submit(struct drm_device *dev, void *data, @@ -251,13 +249,14 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev, int msm_gem_prime_pin(struct drm_gem_object *obj); void msm_gem_prime_unpin(struct drm_gem_object *obj); -int msm_framebuffer_prepare(struct drm_framebuffer *fb, - struct msm_gem_vm *vm, bool needs_dirtyfb); -void msm_framebuffer_cleanup(struct drm_framebuffer *fb, - struct msm_gem_vm *vm, bool needed_dirtyfb); -uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, - struct msm_gem_vm *vm, int plane); -struct drm_gem_object *msm_framebuffer_bo(struct drm_framebuffer *fb, int plane); +int msm_framebuffer_prepare(struct drm_framebuffer *fb, struct drm_gpuvm *vm, + bool needs_dirtyfb); +void msm_framebuffer_cleanup(struct drm_framebuffer *fb, struct drm_gpuvm *vm, + bool needed_dirtyfb); +uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, struct drm_gpuvm *vm, + int plane); +struct drm_gem_object *msm_framebuffer_bo(struct drm_framebuffer *fb, + int plane); const struct msm_format *msm_framebuffer_format(struct drm_framebuffer *fb); struct drm_framebuffer *msm_framebuffer_create(struct drm_device *dev, struct drm_file *file, const struct drm_mode_fb_cmd2 *mode_cmd); diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c index 6df318b73534..d267aa1cb218 100644 --- a/drivers/gpu/drm/msm/msm_fb.c +++ b/drivers/gpu/drm/msm/msm_fb.c @@ -75,9 +75,8 @@ void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m) /* prepare/pin all the fb's bo's for scanout. */ -int msm_framebuffer_prepare(struct drm_framebuffer *fb, - struct msm_gem_vm *vm, - bool needs_dirtyfb) +int msm_framebuffer_prepare(struct drm_framebuffer *fb, struct drm_gpuvm *vm, + bool needs_dirtyfb) { struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb); int ret, i, n = fb->format->num_planes; @@ -98,9 +97,8 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb, return 0; } -void msm_framebuffer_cleanup(struct drm_framebuffer *fb, - struct msm_gem_vm *vm, - bool needed_dirtyfb) +void msm_framebuffer_cleanup(struct drm_framebuffer *fb, struct drm_gpuvm *vm, + bool needed_dirtyfb) { struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb); int i, n = fb->format->num_planes; @@ -115,8 +113,8 @@ void msm_framebuffer_cleanup(struct drm_framebuffer *fb, memset(msm_fb->iova, 0, sizeof(msm_fb->iova)); } -uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, - struct msm_gem_vm *vm, int plane) +uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, struct drm_gpuvm *vm, + int plane) { struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb); return msm_fb->iova[plane] + fb->offsets[plane]; diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 7901871c66cc..a0c15cca9245 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -71,7 +71,7 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file) msecs_to_jiffies(1000)); msm_gem_lock(obj); - put_iova_spaces(obj, &ctx->vm->base, true); + put_iova_spaces(obj, ctx->vm, true); msm_gem_unlock(obj); } @@ -367,8 +367,8 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj) return offset; } -static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, - struct msm_gem_vm *vm) +static struct drm_gpuva *lookup_vma(struct drm_gem_object *obj, + struct drm_gpuvm *vm) { struct drm_gpuvm_bo *vm_bo; @@ -378,13 +378,13 @@ static struct msm_gem_vma *lookup_vma(struct 
drm_gem_object *obj, struct drm_gpuva *vma; drm_gpuvm_bo_for_each_va (vma, vm_bo) { - if (vma->vm == &vm->base) { + if (vma->vm == vm) { /* lookup_vma() should only be used in paths * with at most one vma per vm */ GEM_WARN_ON(!list_is_singular(&vm_bo->list.gpuva)); - return to_msm_vma(vma); + return vma; } } } @@ -414,22 +414,20 @@ put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close) drm_gpuvm_bo_get(vm_bo); drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) { - struct msm_gem_vma *msm_vma = to_msm_vma(vma); - - msm_gem_vma_purge(msm_vma); + msm_gem_vma_purge(vma); if (close) - msm_gem_vma_close(msm_vma); + msm_gem_vma_close(vma); } drm_gpuvm_bo_put(vm_bo); } } -static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_vm *vm, - u64 range_start, u64 range_end) +static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj, + struct drm_gpuvm *vm, u64 range_start, + u64 range_end) { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; msm_gem_assert_locked(obj); @@ -438,14 +436,14 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, if (!vma) { vma = msm_gem_vma_new(vm, obj, range_start, range_end); } else { - GEM_WARN_ON(vma->base.va.addr < range_start); - GEM_WARN_ON((vma->base.va.addr + obj->size) > range_end); + GEM_WARN_ON(vma->va.addr < range_start); + GEM_WARN_ON((vma->va.addr + obj->size) > range_end); } return vma; } -int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma) +int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma) { struct msm_gem_object *msm_obj = to_msm_bo(obj); struct page **pages; @@ -502,17 +500,17 @@ void msm_gem_unpin_active(struct drm_gem_object *obj) update_lru_active(obj); } -struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_vm *vm) +struct drm_gpuva *msm_gem_get_vma_locked(struct drm_gem_object *obj, + struct drm_gpuvm *vm) { return get_vma_locked(obj, vm, 0, U64_MAX); } static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova, - u64 range_start, u64 range_end) + struct drm_gpuvm *vm, uint64_t *iova, + u64 range_start, u64 range_end) { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; int ret; msm_gem_assert_locked(obj); @@ -523,7 +521,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, ret = msm_gem_pin_vma_locked(obj, vma); if (!ret) { - *iova = vma->base.va.addr; + *iova = vma->va.addr; pin_obj_locked(obj); } @@ -535,8 +533,8 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, * limits iova to specified range (in pages) */ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova, - u64 range_start, u64 range_end) + struct drm_gpuvm *vm, uint64_t *iova, + u64 range_start, u64 range_end) { int ret; @@ -548,8 +546,8 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, } /* get iova and pin it. Should have a matching put */ -int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova) +int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, + uint64_t *iova) { return msm_gem_get_and_pin_iova_range(obj, vm, iova, 0, U64_MAX); } @@ -558,10 +556,10 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, * Get an iova but don't pin it. 
Doesn't need a put because iovas are currently * valid for the life of the object */ -int msm_gem_get_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova) +int msm_gem_get_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, + uint64_t *iova) { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; int ret = 0; msm_gem_lock(obj); @@ -569,7 +567,7 @@ int msm_gem_get_iova(struct drm_gem_object *obj, if (IS_ERR(vma)) { ret = PTR_ERR(vma); } else { - *iova = vma->base.va.addr; + *iova = vma->va.addr; } msm_gem_unlock(obj); @@ -577,9 +575,9 @@ int msm_gem_get_iova(struct drm_gem_object *obj, } static int clear_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm) + struct drm_gpuvm *vm) { - struct msm_gem_vma *vma = lookup_vma(obj, vm); + struct drm_gpuva *vma = lookup_vma(obj, vm); if (!vma) return 0; @@ -598,7 +596,7 @@ static int clear_iova(struct drm_gem_object *obj, * Setting an iova of zero will clear the vma. */ int msm_gem_set_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t iova) + struct drm_gpuvm *vm, uint64_t iova) { int ret = 0; @@ -606,11 +604,11 @@ int msm_gem_set_iova(struct drm_gem_object *obj, if (!iova) { ret = clear_iova(obj, vm); } else { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; vma = get_vma_locked(obj, vm, iova, iova + obj->size); if (IS_ERR(vma)) { ret = PTR_ERR(vma); - } else if (GEM_WARN_ON(vma->base.va.addr != iova)) { + } else if (GEM_WARN_ON(vma->va.addr != iova)) { clear_iova(obj, vm); ret = -EBUSY; } @@ -625,10 +623,9 @@ int msm_gem_set_iova(struct drm_gem_object *obj, * purged until something else (shrinker, mm_notifier, destroy, etc) decides * to get rid of it */ -void msm_gem_unpin_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm) +void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm) { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; msm_gem_lock(obj); vma = lookup_vma(obj, vm); @@ -1241,9 +1238,9 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, return ERR_PTR(ret); } -void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_vm *vm, - struct drm_gem_object **bo, uint64_t *iova) +void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, uint32_t flags, + struct drm_gpuvm *vm, struct drm_gem_object **bo, + uint64_t *iova) { void *vaddr; struct drm_gem_object *obj = msm_gem_new(dev, size, flags); @@ -1276,8 +1273,7 @@ void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, } -void msm_gem_kernel_put(struct drm_gem_object *bo, - struct msm_gem_vm *vm) +void msm_gem_kernel_put(struct drm_gem_object *bo, struct drm_gpuvm *vm) { if (IS_ERR_OR_NULL(bo)) return; diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 5091892bbe2e..acb976722580 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -85,12 +85,7 @@ struct msm_gem_vm { }; #define to_msm_vm(x) container_of(x, struct msm_gem_vm, base) -struct msm_gem_vm * -msm_gem_vm_get(struct msm_gem_vm *vm); - -void msm_gem_vm_put(struct msm_gem_vm *vm); - -struct msm_gem_vm * +struct drm_gpuvm * msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, u64 va_start, u64 va_size, bool managed); @@ -117,12 +112,12 @@ struct msm_gem_vma { }; #define to_msm_vma(x) container_of(x, struct msm_gem_vma, base) -struct msm_gem_vma * -msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, +struct drm_gpuva * +msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj, u64 range_start, u64 
range_end); -void msm_gem_vma_purge(struct msm_gem_vma *vma); -int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size); -void msm_gem_vma_close(struct msm_gem_vma *vma); +void msm_gem_vma_purge(struct drm_gpuva *vma); +int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt, int size); +void msm_gem_vma_close(struct drm_gpuva *vma); struct msm_gem_object { struct drm_gem_object base; @@ -167,22 +162,21 @@ struct msm_gem_object { #define to_msm_bo(x) container_of(x, struct msm_gem_object, base) uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj); -int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma); +int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma); void msm_gem_unpin_locked(struct drm_gem_object *obj); void msm_gem_unpin_active(struct drm_gem_object *obj); -struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_vm *vm); -int msm_gem_get_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova); -int msm_gem_set_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t iova); +struct drm_gpuva *msm_gem_get_vma_locked(struct drm_gem_object *obj, + struct drm_gpuvm *vm); +int msm_gem_get_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, + uint64_t *iova); +int msm_gem_set_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, + uint64_t iova); int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova, - u64 range_start, u64 range_end); -int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova); -void msm_gem_unpin_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm); + struct drm_gpuvm *vm, uint64_t *iova, + u64 range_start, u64 range_end); +int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, + uint64_t *iova); +void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm); void msm_gem_pin_obj_locked(struct drm_gem_object *obj); struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj); void msm_gem_unpin_pages_locked(struct drm_gem_object *obj); @@ -203,11 +197,10 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file, uint32_t size, uint32_t flags, uint32_t *handle, char *name); struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32_t flags); -void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_vm *vm, - struct drm_gem_object **bo, uint64_t *iova); -void msm_gem_kernel_put(struct drm_gem_object *bo, - struct msm_gem_vm *vm); +void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, uint32_t flags, + struct drm_gpuvm *vm, struct drm_gem_object **bo, + uint64_t *iova); +void msm_gem_kernel_put(struct drm_gem_object *bo, struct drm_gpuvm *vm); struct drm_gem_object *msm_gem_import(struct drm_device *dev, struct dma_buf *dmabuf, struct sg_table *sgt); __printf(2, 3) @@ -301,7 +294,7 @@ struct msm_gem_submit { struct kref ref; struct drm_device *dev; struct msm_gpu *gpu; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; struct list_head node; /* node in ring submit list */ struct drm_exec exec; uint32_t seqno; /* Sequence number of the submit on the ring */ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index e8a670566147..998cedb24941 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -299,7 +299,7 @@ static 
int submit_pin_objects(struct msm_gem_submit *submit) for (i = 0; i < submit->nr_bos; i++) { struct drm_gem_object *obj = submit->bos[i].obj; - struct msm_gem_vma *vma; + struct drm_gpuva *vma; /* if locking succeeded, pin bo: */ vma = msm_gem_get_vma_locked(obj, submit->vm); @@ -312,7 +312,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit) if (ret) break; - submit->bos[i].iova = vma->base.va.addr; + submit->bos[i].iova = vma->va.addr; } /* diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index 56221dfdf551..0bc22618e9f0 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -20,52 +20,38 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm) kfree(vm); } - -void msm_gem_vm_put(struct msm_gem_vm *vm) -{ - if (vm) - drm_gpuvm_put(&vm->base); -} - -struct msm_gem_vm * -msm_gem_vm_get(struct msm_gem_vm *vm) -{ - if (!IS_ERR_OR_NULL(vm)) - drm_gpuvm_get(&vm->base); - - return vm; -} - /* Actually unmap memory for the vma */ -void msm_gem_vma_purge(struct msm_gem_vma *vma) +void msm_gem_vma_purge(struct drm_gpuva *vma) { - struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); - unsigned size = vma->base.va.range; + struct msm_gem_vma *msm_vma = to_msm_vma(vma); + struct msm_gem_vm *vm = to_msm_vm(vma->vm); + unsigned size = vma->va.range; /* Don't do anything if the memory isn't mapped */ - if (!vma->mapped) + if (!msm_vma->mapped) return; - vm->mmu->funcs->unmap(vm->mmu, vma->base.va.addr, size); + vm->mmu->funcs->unmap(vm->mmu, vma->va.addr, size); - vma->mapped = false; + msm_vma->mapped = false; } /* Map and pin vma: */ int -msm_gem_vma_map(struct msm_gem_vma *vma, int prot, +msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt, int size) { - struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); + struct msm_gem_vma *msm_vma = to_msm_vma(vma); + struct msm_gem_vm *vm = to_msm_vm(vma->vm); int ret; - if (GEM_WARN_ON(!vma->base.va.addr)) + if (GEM_WARN_ON(!vma->va.addr)) return -EINVAL; - if (vma->mapped) + if (msm_vma->mapped) return 0; - vma->mapped = true; + msm_vma->mapped = true; /* * NOTE: iommu/io-pgtable can allocate pages, so we cannot hold @@ -76,40 +62,44 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. */ - ret = vm->mmu->funcs->map(vm->mmu, vma->base.va.addr, sgt, size, prot); + ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, size, prot); if (ret) { - vma->mapped = false; + msm_vma->mapped = false; } return ret; } /* Close an iova. 
Warn if it is still in use */ -void msm_gem_vma_close(struct msm_gem_vma *vma) +void msm_gem_vma_close(struct drm_gpuva *vma) { - struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); + struct msm_gem_vm *vm = to_msm_vm(vma->vm); + struct msm_gem_vma *msm_vma = to_msm_vma(vma); - GEM_WARN_ON(vma->mapped); + GEM_WARN_ON(msm_vma->mapped); spin_lock(&vm->mm_lock); - if (vma->base.va.addr) - drm_mm_remove_node(&vma->node); + if (vma->va.addr && vm->managed) + drm_mm_remove_node(&msm_vma->node); spin_unlock(&vm->mm_lock); + dma_resv_lock(drm_gpuvm_resv(vma->vm), NULL); mutex_lock(&vm->vm_lock); - drm_gpuva_remove(&vma->base); - drm_gpuva_unlink(&vma->base); + drm_gpuva_remove(vma); + drm_gpuva_unlink(vma); mutex_unlock(&vm->vm_lock); + dma_resv_unlock(drm_gpuvm_resv(vma->vm)); kfree(vma); } /* Create a new vma and allocate an iova for it */ -struct msm_gem_vma * -msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, +struct drm_gpuva * +msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj, u64 range_start, u64 range_end) { + struct msm_gem_vm *vm = to_msm_vm(_vm); struct drm_gpuvm_bo *vm_bo; struct msm_gem_vma *vma; int ret; @@ -154,7 +144,7 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, mutex_unlock(&vm->vm_lock); GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo)); - return vma; + return &vma->base; err_va_remove: mutex_lock(&vm->vm_lock); @@ -186,7 +176,7 @@ static const struct drm_gpuvm_ops msm_gpuvm_ops = { * handles virtual address allocation, and both async and sync operations * are supported. */ -struct msm_gem_vm * +struct drm_gpuvm * msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, u64 va_start, u64 va_size, bool managed) { @@ -220,7 +210,7 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, drm_mm_init(&vm->mm, va_start, va_size); - return vm; + return &vm->base; err_free_vm: kfree(vm); diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 0d466a2e9b32..4d24dcf62064 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -283,7 +283,7 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, if (state->fault_info.ttbr0) { struct msm_gpu_fault_info *info = &state->fault_info; - struct msm_mmu *mmu = submit->vm->mmu; + struct msm_mmu *mmu = to_msm_vm(submit->vm)->mmu; msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0, &info->asid); @@ -387,7 +387,7 @@ static void recover_worker(struct kthread_work *work) /* Increment the fault counts */ submit->queue->faults++; if (submit->vm) - submit->vm->faults++; + to_msm_vm(submit->vm)->faults++; get_comm_cmdline(submit, &comm, &cmd); @@ -463,6 +463,7 @@ static void fault_worker(struct kthread_work *work) { struct msm_gpu *gpu = container_of(work, struct msm_gpu, fault_work); struct msm_gem_submit *submit; + struct msm_mmu *mmu = to_msm_vm(gpu->vm)->mmu; struct msm_ringbuffer *cur_ring = gpu->funcs->active_ring(gpu); char *comm = NULL, *cmd = NULL; @@ -492,7 +493,7 @@ static void fault_worker(struct kthread_work *work) resume_smmu: memset(&gpu->fault_info, 0, sizeof(gpu->fault_info)); - gpu->vm->mmu->funcs->resume_translation(gpu->vm->mmu); + mmu->funcs->resume_translation(mmu); mutex_unlock(&gpu->lock); } @@ -829,10 +830,11 @@ static int get_clocks(struct platform_device *pdev, struct msm_gpu *gpu) } /* Return a new address space for a msm_drm_private instance */ -struct msm_gem_vm * +struct drm_gpuvm * msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task) { - struct msm_gem_vm 
*vm = NULL; + struct drm_gpuvm *vm = NULL; + if (!gpu) return NULL; @@ -843,11 +845,11 @@ msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task) if (gpu->funcs->create_private_vm) { vm = gpu->funcs->create_private_vm(gpu); if (!IS_ERR(vm)) - vm->pid = get_pid(task_pid(task)); + to_msm_vm(vm)->pid = get_pid(task_pid(task)); } if (IS_ERR_OR_NULL(vm)) - vm = msm_gem_vm_get(gpu->vm); + vm = drm_gpuvm_get(gpu->vm); return vm; } @@ -1020,8 +1022,9 @@ void msm_gpu_cleanup(struct msm_gpu *gpu) msm_gem_kernel_put(gpu->memptrs_bo, gpu->vm); if (!IS_ERR_OR_NULL(gpu->vm)) { - gpu->vm->mmu->funcs->detach(gpu->vm->mmu); - msm_gem_vm_put(gpu->vm); + struct msm_mmu *mmu = to_msm_vm(gpu->vm)->mmu; + mmu->funcs->detach(mmu); + drm_gpuvm_put(gpu->vm); } if (gpu->worker) { diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 1f26ba00f773..d8425e6d7f5a 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -78,8 +78,8 @@ struct msm_gpu_funcs { /* note: gpu_set_freq() can assume that we have been pm_resumed */ void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp, bool suspended); - struct msm_gem_vm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev); - struct msm_gem_vm *(*create_private_vm)(struct msm_gpu *gpu); + struct drm_gpuvm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev); + struct drm_gpuvm *(*create_private_vm)(struct msm_gpu *gpu); uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring); /** @@ -234,7 +234,7 @@ struct msm_gpu { void __iomem *mmio; int irq; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; /* Power Control: */ struct regulator *gpu_reg, *gpu_cx; @@ -363,7 +363,7 @@ struct msm_context { int queueid; /** @vm: the per-process GPU address-space */ - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; /** @kref: the reference count */ struct kref ref; @@ -673,7 +673,7 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, struct msm_gpu *gpu, const struct msm_gpu_funcs *funcs, const char *name, struct msm_gpu_config *config); -struct msm_gem_vm * +struct drm_gpuvm * msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task); void msm_gpu_cleanup(struct msm_gpu *gpu); diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c index 6458bd82a0cd..e82b8569a468 100644 --- a/drivers/gpu/drm/msm/msm_kms.c +++ b/drivers/gpu/drm/msm/msm_kms.c @@ -176,9 +176,9 @@ static int msm_kms_fault_handler(void *arg, unsigned long iova, int flags, void return -ENOSYS; } -struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev) +struct drm_gpuvm *msm_kms_init_vm(struct drm_device *dev) { - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; struct msm_mmu *mmu; struct device *mdp_dev = dev->dev; struct device *mdss_dev = mdp_dev->parent; @@ -212,7 +212,7 @@ struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev) return vm; } - msm_mmu_set_fault_handler(vm->mmu, kms, msm_kms_fault_handler); + msm_mmu_set_fault_handler(to_msm_vm(vm)->mmu, kms, msm_kms_fault_handler); return vm; } diff --git a/drivers/gpu/drm/msm/msm_kms.h b/drivers/gpu/drm/msm/msm_kms.h index f45996a03e15..7cdb2eb67700 100644 --- a/drivers/gpu/drm/msm/msm_kms.h +++ b/drivers/gpu/drm/msm/msm_kms.h @@ -139,7 +139,7 @@ struct msm_kms { atomic_t fault_snapshot_capture; /* mapper-id used to request GEM buffer mapped for scanout: */ - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; /* disp snapshot support */ struct kthread_worker *dump_worker; diff --git 
a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c
index 6298233c3568..8ced49c7557b 100644
--- a/drivers/gpu/drm/msm/msm_submitqueue.c
+++ b/drivers/gpu/drm/msm/msm_submitqueue.c
@@ -59,7 +59,7 @@ void __msm_context_destroy(struct kref *kref)
 		kfree(ctx->entities[i]);
 	}
 
-	msm_gem_vm_put(ctx->vm);
+	drm_gpuvm_put(ctx->vm);
 	kfree(ctx->comm);
 	kfree(ctx->cmdline);
 	kfree(ctx);

From patchwork Wed Mar 19 14:52:25 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 874764
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 13/34] drm/msm: Split submit_pin_objects()
Date: Wed, 19 Mar 2025 07:52:25 -0700
Message-ID: <20250319145425.51935-14-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>

From: Rob Clark

For VM_BIND, in the first step, we just want to get the backing pages, but defer creating the vma until the map/unmap/ops are evaluated.
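The split means the fallible work (VMA lookup and creation, which can trigger get_pages()) happens before the LRU lock is taken, and the bookkeeping that cannot fail happens after. A condensed sketch of the resulting call order in msm_ioctl_gem_submit(), simplified from the diff below with error handling elided:

    /* Phase 1: resolve and pin VMAs; may allocate backing pages, may fail */
    ret = submit_pin_vmas(submit);
    if (ret)
            goto out;

    /* Phase 2: mark all BOs pinned under a single LRU lock acquisition;
     * this step cannot fail, so no unwind logic is needed */
    submit_pin_objects(submit);

Keeping the second phase infallible is what lets it return void and run entirely under priv->lru.lock.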
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem_submit.c | 27 +++++++++++++++++++--------
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 998cedb24941..c65f3a6a5256 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -292,12 +292,16 @@ static int submit_fence_sync(struct msm_gem_submit *submit)
 	return ret;
 }
 
-static int submit_pin_objects(struct msm_gem_submit *submit)
+static int submit_pin_vmas(struct msm_gem_submit *submit)
 {
-	struct msm_drm_private *priv = submit->dev->dev_private;
-	int i, ret = 0;
+	int ret = 0;
 
-	for (i = 0; i < submit->nr_bos; i++) {
+	/*
+	 * First loop, before holding the LRU lock, avoids holding the
+	 * LRU lock while calling msm_gem_pin_vma_locked (which could
+	 * trigger get_pages())
+	 */
+	for (int i = 0; i < submit->nr_bos; i++) {
 		struct drm_gem_object *obj = submit->bos[i].obj;
 		struct drm_gpuva *vma;
 
@@ -315,6 +319,13 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
 		submit->bos[i].iova = vma->va.addr;
 	}
 
+	return ret;
+}
+
+static void submit_pin_objects(struct msm_gem_submit *submit)
+{
+	struct msm_drm_private *priv = submit->dev->dev_private;
+
 	/*
 	 * A second loop while holding the LRU lock (a) avoids acquiring/dropping
 	 * the LRU lock for each individual bo, while (b) avoiding holding the
@@ -323,14 +334,12 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
 	 * could trigger deadlock with the shrinker).
 	 */
 	mutex_lock(&priv->lru.lock);
-	for (i = 0; i < submit->nr_bos; i++) {
+	for (int i = 0; i < submit->nr_bos; i++) {
 		msm_gem_pin_obj_locked(submit->bos[i].obj);
 	}
 	mutex_unlock(&priv->lru.lock);
 
 	submit->bos_pinned = true;
-
-	return ret;
 }
 
 static void submit_unpin_objects(struct msm_gem_submit *submit)
@@ -760,10 +769,12 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 		goto out;
 	}
 
-	ret = submit_pin_objects(submit);
+	ret = submit_pin_vmas(submit);
 	if (ret)
 		goto out;
 
+	submit_pin_objects(submit);
+
 	for (i = 0; i < args->nr_cmds; i++) {
 		struct drm_gem_object *obj;
 		uint64_t iova;

From patchwork Wed Mar 19 14:52:26 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 874763
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 14/34] drm/msm: Lazily create context VM
Date: Wed, 19 Mar 2025 07:52:26 -0700 Message-ID: <20250319145425.51935-15-robdclark@gmail.com> X-Mailer: git-send-email 2.48.1 In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com> References: <20250319145425.51935-1-robdclark@gmail.com> Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Rob Clark In the next commit, a way for userspace to opt-in to userspace managed VM is added. For this to work, we need to defer creation of the VM until it is needed. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 3 ++- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 14 +++++++----- drivers/gpu/drm/msm/msm_drv.c | 29 ++++++++++++++++++++----- drivers/gpu/drm/msm/msm_gem_submit.c | 2 +- drivers/gpu/drm/msm/msm_gpu.h | 9 +++++++- 5 files changed, 43 insertions(+), 14 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index 4811be5a7c29..0b1e2ba3539e 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -112,6 +112,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu, { bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1; struct msm_context *ctx = submit->queue->ctx; + struct drm_gpuvm *vm = msm_context_vm(submit->dev, ctx); struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; phys_addr_t ttbr; u32 asid; @@ -120,7 +121,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu, if (ctx->seqno == ring->cur_ctx_seqno) return; - if (msm_iommu_pagetable_params(to_msm_vm(ctx->vm)->mmu, &ttbr, &asid)) + if (msm_iommu_pagetable_params(to_msm_vm(vm)->mmu, &ttbr, &asid)) return; if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) { diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index 0f71703f6ec7..e4d895dda051 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -351,6 +351,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, { struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); struct drm_device *drm = gpu->dev; + /* Note ctx can be NULL when called from rd_open(): */ + struct drm_gpuvm *vm = ctx ? 
msm_context_vm(drm, ctx) : NULL; /* No pointer params yet */ if (*len != 0) @@ -396,8 +398,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, *value = 0; return 0; case MSM_PARAM_FAULTS: - if (ctx->vm) - *value = gpu->global_faults + to_msm_vm(ctx->vm)->faults; + if (vm) + *value = gpu->global_faults + to_msm_vm(vm)->faults; else *value = gpu->global_faults; return 0; @@ -405,14 +407,14 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, *value = gpu->suspend_count; return 0; case MSM_PARAM_VA_START: - if (ctx->vm == gpu->vm) + if (vm == gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value = ctx->vm->mm_start; + *value = vm->mm_start; return 0; case MSM_PARAM_VA_SIZE: - if (ctx->vm == gpu->vm) + if (vm == gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value = ctx->vm->mm_range; + *value = vm->mm_range; return 0; case MSM_PARAM_HIGHEST_BANK_BIT: *value = adreno_gpu->ubwc_config.highest_bank_bit; diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 6ef29bc48bb0..6fd981ee6aee 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -214,10 +214,29 @@ static void load_gpu(struct drm_device *dev) mutex_unlock(&init_lock); } +/** + * msm_context_vm - lazily create the context's VM + * + * @dev: the drm device + * @ctx: the context + * + * The VM is lazily created, so that userspace has a chance to opt-in to having + * a userspace managed VM before the VM is created. + * + * Note that this does not return a reference to the VM. Once the VM is created, + * it exists for the lifetime of the context. + */ +struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx) +{ + struct msm_drm_private *priv = dev->dev_private; + if (!ctx->vm) + ctx->vm = msm_gpu_create_private_vm(priv->gpu, current); + return ctx->vm; +} + static int context_init(struct drm_device *dev, struct drm_file *file) { static atomic_t ident = ATOMIC_INIT(0); - struct msm_drm_private *priv = dev->dev_private; struct msm_context *ctx; ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); @@ -230,7 +249,6 @@ static int context_init(struct drm_device *dev, struct drm_file *file) kref_init(&ctx->ref); msm_submitqueue_init(dev, ctx); - ctx->vm = msm_gpu_create_private_vm(priv->gpu, current); file->driver_priv = ctx; ctx->seqno = atomic_inc_return(&ident); @@ -408,7 +426,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev, * Don't pin the memory here - just get an address so that userspace can * be productive */ - return msm_gem_get_iova(obj, ctx->vm, iova); + return msm_gem_get_iova(obj, msm_context_vm(dev, ctx), iova); } static int msm_ioctl_gem_info_set_iova(struct drm_device *dev, @@ -417,18 +435,19 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev, { struct msm_drm_private *priv = dev->dev_private; struct msm_context *ctx = file->driver_priv; + struct drm_gpuvm *vm = msm_context_vm(dev, ctx); if (!priv->gpu) return -EINVAL; /* Only supported if per-process address space is supported: */ - if (priv->gpu->vm == ctx->vm) + if (priv->gpu->vm == vm) return UERR(EOPNOTSUPP, dev, "requires per-process pgtables"); if (should_fail(&fail_gem_iova, obj->size)) return -ENOMEM; - return msm_gem_set_iova(obj, ctx->vm, iova); + return msm_gem_set_iova(obj, vm, iova); } static int msm_ioctl_gem_info_set_metadata(struct drm_gem_object *obj, diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index c65f3a6a5256..9731ad7993cf 100644 --- 
a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -63,7 +63,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
 	kref_init(&submit->ref);
 	submit->dev = dev;
-	submit->vm = queue->ctx->vm;
+	submit->vm = msm_context_vm(dev, queue->ctx);
 	submit->gpu = gpu;
 	submit->cmd = (void *)&submit->bos[nr_bos];
 	submit->queue = queue;

diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index d8425e6d7f5a..c15aad288552 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -362,7 +362,12 @@ struct msm_context {
 	 */
 	int queueid;
 
-	/** @vm: the per-process GPU address-space */
+	/**
+	 * @vm:
+	 *
+	 * The per-process GPU address-space.  Do not access directly, use
+	 * msm_context_vm().
+	 */
 	struct drm_gpuvm *vm;
 
 	/** @kref: the reference count */
@@ -447,6 +452,8 @@ struct msm_context {
 	atomic64_t ctx_mem;
 };
 
+struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx);
+
 /**
  * msm_gpu_convert_priority - Map userspace priority to ring # and sched priority
  *

From patchwork Wed Mar 19 14:52:27 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 875053
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 15/34] drm/msm: Add opt-in for VM_BIND
Date: Wed, 19 Mar 2025 07:52:27 -0700
Message-ID: <20250319145425.51935-16-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>

From: Rob Clark

Add a SET_PARAM for userspace to request to manage the VM itself, instead of getting a kernel managed VM. In order to transition to a userspace managed VM, this param must be set before any mappings are created.
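To make the ordering requirement concrete, the sketch below shows roughly how userspace would opt in, using the DRM_MSM_SET_PARAM ioctl and the MSM_PARAM_EN_VM_BIND param added by this patch; the helper name is hypothetical and error handling is minimal:

    #include <xf86drm.h>
    #include "msm_drm.h"

    /*
     * Opt in to a userspace-managed VM (VM_BIND).  Per the commit message,
     * this must be done before any mappings are created, i.e. before the
     * kernel lazily instantiates a kernel-managed VM for this context.
     */
    static int enable_vm_bind(int fd)
    {
            struct drm_msm_param req = {
                    .pipe  = MSM_PIPE_3D0,
                    .param = MSM_PARAM_EN_VM_BIND,
                    .value = 1,
            };

            /* 0 on success, or -errno (e.g. -EBUSY if the VM already exists) */
            return drmCommandWrite(fd, DRM_MSM_SET_PARAM, &req, sizeof(req));
    }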
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c   |  4 ++--
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 15 +++++++++++++
 drivers/gpu/drm/msm/msm_drv.c           | 13 +++++++++--
 drivers/gpu/drm/msm/msm_gem.c           |  8 +++++++
 drivers/gpu/drm/msm/msm_gpu.c           |  5 +++--
 drivers/gpu/drm/msm/msm_gpu.h           | 29 +++++++++++++++++++++++--
 include/uapi/drm/msm_drm.h              | 24 ++++++++++++++++++++
 7 files changed, 90 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 0b1e2ba3539e..ca3247f845b5 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2263,7 +2263,7 @@ a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
 }
 
 static struct drm_gpuvm *
-a6xx_create_private_vm(struct msm_gpu *gpu)
+a6xx_create_private_vm(struct msm_gpu *gpu, bool kernel_managed)
 {
 	struct msm_mmu *mmu;
 
@@ -2273,7 +2273,7 @@ a6xx_create_private_vm(struct msm_gpu *gpu)
 		return ERR_CAST(mmu);
 
 	return msm_gem_vm_create(gpu->dev, mmu, "gpu", 0x100000000ULL,
-				adreno_private_vm_size(gpu), true);
+				adreno_private_vm_size(gpu), kernel_managed);
 }
 
 static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)

diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index e4d895dda051..739161df3e3c 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -483,6 +483,21 @@ int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		if (!capable(CAP_SYS_ADMIN))
 			return UERR(EPERM, drm, "invalid permissions");
 		return msm_context_set_sysprof(ctx, gpu, value);
+	case MSM_PARAM_EN_VM_BIND:
+		/* We can only support VM_BIND with per-process pgtables: */
+		if (ctx->vm == gpu->vm)
+			return UERR(EINVAL, drm, "requires per-process pgtables");
+
+		/*
+		 * We can only switch to VM_BIND mode if the VM has not yet
+		 * been created:
+		 */
+		if (ctx->vm)
+			return UERR(EBUSY, drm, "VM already created");
+
+		ctx->userspace_managed_vm = value;
+
+		return 0;
 	default:
 		return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param);
 	}

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 6fd981ee6aee..5b5a64c8dddb 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -229,8 +229,11 @@ static void load_gpu(struct drm_device *dev)
 struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx)
 {
 	struct msm_drm_private *priv = dev->dev_private;
-	if (!ctx->vm)
-		ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
+	if (!ctx->vm) {
+		ctx->vm = msm_gpu_create_private_vm(
+			priv->gpu, current, !ctx->userspace_managed_vm);
+
+	}
 	return ctx->vm;
 }
 
@@ -419,6 +422,9 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
 	if (!priv->gpu)
 		return -EINVAL;
 
+	if (msm_context_is_vmbind(ctx))
+		return UERR(EINVAL, dev, "VM_BIND is enabled");
+
 	if (should_fail(&fail_gem_iova, obj->size))
 		return -ENOMEM;
 
@@ -440,6 +446,9 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
 	if (!priv->gpu)
 		return -EINVAL;
 
+	if (msm_context_is_vmbind(ctx))
+		return UERR(EINVAL, dev, "VM_BIND is enabled");
+
 	/* Only supported if per-process address space is supported: */
 	if (priv->gpu->vm == vm)
 		return UERR(EOPNOTSUPP, dev, "requires per-process pgtables");

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index a0c15cca9245..5a5220b6f21d 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -63,6 +63,14 @@ static void
msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
 	if (!ctx->vm)
 		return;
 
+	/*
+	 * VM_BIND does not depend on implicit teardown of VMAs on handle
+	 * close, but instead on implicit teardown of the VM when the device
+	 * is closed (see msm_gem_vm_close())
+	 */
+	if (msm_context_is_vmbind(ctx))
+		return;
+
 	/*
 	 * TODO we might need to kick this to a queue to avoid blocking
 	 * in CLOSE ioctl

diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 4d24dcf62064..503e4dcc5a6f 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -831,7 +831,8 @@ static int get_clocks(struct platform_device *pdev, struct msm_gpu *gpu)
 
 /* Return a new address space for a msm_drm_private instance */
 struct drm_gpuvm *
-msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task)
+msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task,
+			  bool kernel_managed)
 {
 	struct drm_gpuvm *vm = NULL;
 
@@ -843,7 +844,7 @@ msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task)
 	 * the global one
 	 */
 	if (gpu->funcs->create_private_vm) {
-		vm = gpu->funcs->create_private_vm(gpu);
+		vm = gpu->funcs->create_private_vm(gpu, kernel_managed);
 		if (!IS_ERR(vm))
 			to_msm_vm(vm)->pid = get_pid(task_pid(task));
 	}

diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index c15aad288552..20f52d9636b0 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -79,7 +79,7 @@ struct msm_gpu_funcs {
 	void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp,
 			     bool suspended);
 	struct drm_gpuvm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev);
-	struct drm_gpuvm *(*create_private_vm)(struct msm_gpu *gpu);
+	struct drm_gpuvm *(*create_private_vm)(struct msm_gpu *gpu, bool kernel_managed);
 	uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring);
 
 	/**
@@ -362,6 +362,14 @@ struct msm_context {
 	 */
 	int queueid;
 
+	/**
+	 * @userspace_managed_vm:
+	 *
+	 * Has userspace opted-in to userspace managed VM (ie. VM_BIND) via
+	 * MSM_PARAM_EN_VM_BIND?
+	 */
+	bool userspace_managed_vm;
+
 	/**
 	 * @vm:
 	 *
@@ -454,6 +462,22 @@ struct msm_context {
 
 struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx);
 
+/**
+ * msm_context_is_vmbind() - has userspace opted in to VM_BIND?
+ *
+ * @ctx: the drm_file context
+ *
+ * See MSM_PARAM_EN_VM_BIND.  If userspace is managing the VM, it can
+ * do sparse binding including having multiple, potentially partial,
+ * mappings in the VM.  Therefore certain legacy uabi (ie. GET_IOVA,
+ * SET_IOVA) are rejected because they don't have a sensible meaning.
+ */
+static inline bool
+msm_context_is_vmbind(struct msm_context *ctx)
+{
+	return ctx->userspace_managed_vm;
+}
+
 /**
  * msm_gpu_convert_priority - Map userspace priority to ring # and sched priority
  *
@@ -681,7 +705,8 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
 		const char *name, struct msm_gpu_config *config);
 
 struct drm_gpuvm *
-msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task);
+msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task,
+			  bool kernel_managed);
 
 void msm_gpu_cleanup(struct msm_gpu *gpu);

diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index 2342cb90857e..072e82a80607 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -91,6 +91,30 @@ struct drm_msm_timespec {
 #define MSM_PARAM_UBWC_SWIZZLE 0x12 /* RO */
 #define MSM_PARAM_MACROTILE_MODE 0x13 /* RO */
 #define MSM_PARAM_UCHE_TRAP_BASE 0x14 /* RO */
+/* MSM_PARAM_EN_VM_BIND is set to 1 to enable VM_BIND ops.
+ *
+ * With VM_BIND enabled, userspace is required to allocate iova and use the
+ * VM_BIND ops for map/unmap ioctls.  MSM_INFO_SET_IOVA and MSM_INFO_GET_IOVA
+ * will be rejected.  (The latter does not have a sensible meaning when a BO
+ * can have multiple and/or partial mappings.)
+ *
+ * With VM_BIND enabled, userspace does not include a submit_bo table in the
+ * SUBMIT ioctl (this will be rejected), the resident set is determined by
+ * the VM_BIND ops.
+ *
+ * Enabling VM_BIND will fail on devices which do not have per-process pgtables.
+ * And it is not allowed to disable VM_BIND once it has been enabled.
+ *
+ * Enabling VM_BIND should be done (attempted) prior to allocating any BOs or
+ * submitqueues of type MSM_SUBMITQUEUE_VM_BIND.
+ *
+ * Relatedly, when VM_BIND mode is enabled, the kernel will not try to recover
+ * from GPU faults or failed async VM_BIND ops, in particular because it is
+ * difficult to communicate to userspace which op failed so that userspace
+ * could rewind and try again.  When the VM is marked unusable, the SUBMIT
+ * ioctl will throw -EPIPE.
+ */
+#define MSM_PARAM_EN_VM_BIND 0x15 /* WO, once */
 
 /* For backwards compat.
The original support for preemption was based on * a single ring per priority level so # of priority levels equals the # From patchwork Wed Mar 19 14:52:28 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 874762 Received: from mail-pl1-f179.google.com (mail-pl1-f179.google.com [209.85.214.179]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F33A11DE88D; Wed, 19 Mar 2025 14:55:38 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.179 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1742396140; cv=none; b=mthxBEqF/IaHUHYyQUd/89YDzdAaEdfTe57Fe6jXTGt+wqGe+YDa7vhomYMB3EYIotZUNR+4pNu3rv3VgG7Kmz1xK5h7hSuL3Nq8qySd4nVNT4XKdhQSACKhHqq9e1u314fiiZoVYpq26jGxV9Bsge/Bq+0Ge8BpiMkxk90RARc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1742396140; c=relaxed/simple; bh=PYjhvIGMZOmHXUrOkWbR44l0lsxY9HfY6WnLu+ZllKA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=lo3vfwqfAa60RP0rdReodSmTA+4xLEYKEnIevyjim/yphot3f5LdFSflcEX2ZWeyUSUYt87fs8iLpTDL83Gq6/Q+D8dpGLxC2frpdue0ui//5/rfn+Fv1dAQdMtM3jcWVH1uNqFQN6xru5/GMbxephDbs47TwK1M89bm5jcdNDU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=IkHaytvH; arc=none smtp.client-ip=209.85.214.179 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="IkHaytvH" Received: by mail-pl1-f179.google.com with SMTP id d9443c01a7336-2260c91576aso64656815ad.3; Wed, 19 Mar 2025 07:55:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1742396138; x=1743000938; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Nim9kjrkjGdIN6eNBZAhPnqJ/xfEPzPRp5Qv43ty7PU=; b=IkHaytvHBs6fxtTzxk4cBgfSJPd8Vi3i8cKq4EWm71Vi0/VY2+srjDDCxsbg3/M1Ly EU5fBadDe8ebb+9EEUeiFfk/dbc0cPqWt7NPEsd6YJZEqEZWIkjKH9OCar5dqHzJQBYQ vQdaLlqC/RTxd5+UIF5cYqzTfTMWgLmYkqy45au5AOdDwIushMPnq4eoQ9WeOu55FyT6 B9Iw6dH80k/5Iic4Obf2Z8VIfuLdfynJfwpsYpA6XqRD4gthhipPk5CPZBA1jOg5AAuz uFbAuQ+CvWldmhPgggKS9aUEQROzsCFrOJem1aWatdUdmzVUxNHXRK17191jbV02qF+M Q2Lw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1742396138; x=1743000938; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Nim9kjrkjGdIN6eNBZAhPnqJ/xfEPzPRp5Qv43ty7PU=; b=jqdpFrYW0bo9bz+27gJioIQvDHI8tTbSaIhNSbpFdXBYJVP2O050YQCbvM9bo1dQ1/ qOzpClrFbKlNw42dusxtbJ0D2ZjZYECpP9k4HWt0W7Ac3SCXS34VOoq4L0yp15hMMyeS 9IWnJAzpRM2Pw4e49OcCtZW/4sikztRxq0ra/c1kTqv9bMHBo96H7kRb04IbLtP7zMEm baAX4Ow8LeAnVvfQF3XGq6h/FaruKaiOcM071dKjVmN0RYkxmQGhNazpiFhvXIUy3Kfj fRYTf02TFNW0DBoiMqPJrZFcbHmOQ+rzrA+NcR1P7CT+eGmANoBVsrbihsbGcufM7fQ9 H/Mg== X-Forwarded-Encrypted: i=1; 
From patchwork Wed Mar 19 14:52:28 2025 X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 874762 From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , Marijn Suijten , David Airlie , Simona Vetter , Konrad Dybcio , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 16/34] drm/msm: Mark VM as unusable on faults Date: Wed, 19 Mar 2025 07:52:28 -0700 Message-ID: <20250319145425.51935-17-robdclark@gmail.com> In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com> References: <20250319145425.51935-1-robdclark@gmail.com> From: Rob Clark If userspace has opted-in to VM_BIND, then GPU faults and VM_BIND errors will mark the VM as unusable. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.h | 17 +++++++++++++++++ drivers/gpu/drm/msm/msm_gem_submit.c | 3 +++ drivers/gpu/drm/msm/msm_gpu.c | 16 ++++++++++++++-- 3 files changed, 34 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index acb976722580..7cb720137548 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -82,6 +82,23 @@ struct msm_gem_vm { /** @managed: is this a kernel managed VM? */ bool managed; + + /** + * @unusable: True if the VM has turned unusable because something + * bad happened during an asynchronous request. + * + * We don't try to recover from such failures, because this implies + * informing userspace about the specific operation that failed, and + * hoping the userspace driver can replay things from there. This all + * sounds very complicated for little gain. + * + * Instead, we should just flag the VM as unusable, and fail any + * further request targeting this VM. + * + * As an analogy, this would be mapped to a VK_ERROR_DEVICE_LOST + * situation, where the logical device needs to be re-created.
+ */ + bool unusable; }; #define to_msm_vm(x) container_of(x, struct msm_gem_vm, base) diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 9731ad7993cf..9cef308a0ad1 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -668,6 +668,9 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (args->pad) return -EINVAL; + if (to_msm_vm(ctx->vm)->unusable) + return UERR(EPIPE, dev, "context is unusable"); + /* for now, we just have 3d pipe.. eventually this would need to * be more clever to dispatch to appropriate gpu module: */ diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 503e4dcc5a6f..4831f4e42fd9 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -386,8 +386,20 @@ static void recover_worker(struct kthread_work *work) /* Increment the fault counts */ submit->queue->faults++; - if (submit->vm) - to_msm_vm(submit->vm)->faults++; + if (submit->vm) { + struct msm_gem_vm *vm = to_msm_vm(submit->vm); + + vm->faults++; + + /* + * If userspace has opted-in to VM_BIND (and therefore userspace + * management of the VM), faults mark the VM as unusable. This + * matches Vulkan expectations (Vulkan is the main target for + * VM_BIND) + */ + if (!vm->managed) + vm->unusable = true; + } get_comm_cmdline(submit, &comm, &cmd);
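As an illustration of the userspace side (a hypothetical sketch, not code from this series), a Vulkan driver would treat the resulting -EPIPE as device loss, mirroring the VK_ERROR_DEVICE_LOST analogy in the comment above:

    /* After a failed SUBMIT, check whether the kernel gave up on this VM: */
    if (drmIoctl(fd, DRM_IOCTL_MSM_GEM_SUBMIT, &args) && errno == EPIPE) {
            /*
             * GPU fault or failed async VM_BIND op: nothing on this
             * drm_file can be recovered, so report device loss and let
             * the application re-create the logical device.
             */
            return VK_ERROR_DEVICE_LOST;
    }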
From patchwork Wed Mar 19 14:52:29 2025 X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 875052 From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , Marijn Suijten , David Airlie , Simona Vetter , Konrad Dybcio , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Sumit Semwal , Christian König , linux-kernel@vger.kernel.org (open list), linux-media@vger.kernel.org (open list: DMA BUFFER SHARING FRAMEWORK), linaro-mm-sig@lists.linaro.org (moderated list: DMA BUFFER SHARING FRAMEWORK) Subject: [PATCH v2 17/34] drm/msm: Extend SUBMIT ioctl for VM_BIND Date: Wed, 19 Mar 2025 07:52:29 -0700 Message-ID: <20250319145425.51935-18-robdclark@gmail.com> In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com> References: <20250319145425.51935-1-robdclark@gmail.com>
From: Rob Clark This is a bit different than the path taken by other clean-slate drivers. But there is a lot in common with BO pinning in the legacy "EXEC" path and the "VM_BIND" MAP path. Also, we want the same fence and syncobj handling. (Why bother with fence fd's? Because for virtgpu nctx, guest syncobj's exist only as dma_fence's between the guest kernel and host.) Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.h | 10 ++--- drivers/gpu/drm/msm/msm_gem_submit.c | 65 ++++++++++++++++++++++++---- include/uapi/drm/msm_drm.h | 49 ++++++++++++++++++--- 3 files changed, 103 insertions(+), 21 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 7cb720137548..8e29e36ca9c5 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -345,13 +345,13 @@ struct msm_gem_submit { uint32_t nr_relocs; struct drm_msm_gem_submit_reloc *relocs; } *cmd; /* array of size nr_cmds */ - struct { + struct msm_gem_submit_bo { uint32_t flags; - union { - struct drm_gem_object *obj; - uint32_t handle; - }; + uint32_t handle; + struct drm_gem_object *obj; uint64_t iova; + uint64_t bo_offset; + uint64_t range; } bos[]; }; diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 9cef308a0ad1..bb61231ab8ba 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -115,23 +115,37 @@ void __msm_gem_submit_destroy(struct kref *kref) kfree(submit); } +static bool invalid_alignment(uint64_t addr) +{ + /* + * Technically this is about GPU alignment, not CPU alignment. But + * I've not seen any qcom SoC where the SMMU does not support the + * CPU's smallest page size.
+ */ + return !PAGE_ALIGNED(addr); +} + static int submit_lookup_objects(struct msm_gem_submit *submit, struct drm_msm_gem_submit *args, struct drm_file *file) { - unsigned i; + unsigned i, bo_stride = args->bos_stride; int ret = 0; + if (!bo_stride) + bo_stride = sizeof(struct drm_msm_gem_submit_bo); + for (i = 0; i < args->nr_bos; i++) { - struct drm_msm_gem_submit_bo submit_bo; + struct drm_msm_gem_submit_bo_v2 submit_bo = {0}; void __user *userptr = - u64_to_user_ptr(args->bos + (i * sizeof(submit_bo))); + u64_to_user_ptr(args->bos + (i * bo_stride)); + unsigned copy_sz = min(bo_stride, sizeof(submit_bo)); /* make sure we don't have garbage flags, in case we hit * error path before flags is initialized: */ submit->bos[i].flags = 0; - if (copy_from_user(&submit_bo, userptr, sizeof(submit_bo))) { + if (copy_from_user(&submit_bo, userptr, copy_sz)) { ret = -EFAULT; i = 0; goto out; @@ -141,14 +155,27 @@ static int submit_lookup_objects(struct msm_gem_submit *submit, #define MANDATORY_FLAGS (MSM_SUBMIT_BO_READ | MSM_SUBMIT_BO_WRITE) if ((submit_bo.flags & ~MSM_SUBMIT_BO_FLAGS) || - !(submit_bo.flags & MANDATORY_FLAGS)) { + !(submit_bo.flags & MANDATORY_FLAGS)) ret = SUBMIT_ERROR(EINVAL, submit, "invalid flags: %x\n", submit_bo.flags); + + if (invalid_alignment(submit_bo.address)) + ret = SUBMIT_ERROR(EINVAL, submit, "invalid address: %016llx\n", submit_bo.address); + + if (invalid_alignment(submit_bo.bo_offset)) + ret = SUBMIT_ERROR(EINVAL, submit, "invalid bo_offset: %016llx\n", submit_bo.bo_offset); + + if (invalid_alignment(submit_bo.range)) + ret = SUBMIT_ERROR(EINVAL, submit, "invalid range: %016llx\n", submit_bo.range); + + if (ret) { i = 0; goto out; } submit->bos[i].handle = submit_bo.handle; submit->bos[i].flags = submit_bo.flags; + submit->bos[i].bo_offset = submit_bo.bo_offset; + submit->bos[i].range = submit_bo.range; } spin_lock(&file->table_lock); @@ -167,6 +194,15 @@ static int submit_lookup_objects(struct msm_gem_submit *submit, drm_gem_object_get(obj); + if (submit->bos[i].bo_offset > obj->size) + ret = SUBMIT_ERROR(EINVAL, submit, "bo_offset too large: %016llx\n", submit->bos[i].bo_offset); + + if ((submit->bos[i].bo_offset + submit->bos[i].range) > obj->size) + ret = SUBMIT_ERROR(EINVAL, submit, "range too large: %016llx\n", submit->bos[i].range); + + if (ret) + goto out_unlock; + submit->bos[i].obj = obj; } @@ -182,6 +218,7 @@ static int submit_lookup_cmds(struct msm_gem_submit *submit, struct drm_msm_gem_submit *args, struct drm_file *file) { + struct msm_context *ctx = file->driver_priv; unsigned i; size_t sz; int ret = 0; @@ -213,6 +250,19 @@ static int submit_lookup_cmds(struct msm_gem_submit *submit, goto out; } + if (msm_context_is_vmbind(ctx)) { + if (submit_cmd.nr_relocs) { + ret = SUBMIT_ERROR(EINVAL, submit, "nr_relocs must be zero"); + goto out; + } + if (submit_cmd.submit_idx || submit_cmd.submit_offset) { + ret = SUBMIT_ERROR(EINVAL, submit, "submit_idx/offset must be zero"); + goto out; + } + + submit->cmd[i].iova = submit_cmd.iova; + } + submit->cmd[i].type = submit_cmd.type; submit->cmd[i].size = submit_cmd.size / 4; submit->cmd[i].offset = submit_cmd.submit_offset / 4; @@ -665,9 +715,6 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (!gpu) return -ENXIO; - if (args->pad) - return -EINVAL; - if (to_msm_vm(ctx->vm)->unusable) return UERR(EPIPE, dev, "context is unusable"); @@ -677,7 +724,7 @@ if
(MSM_PIPE_ID(args->flags) != MSM_PIPE_3D0) return UERR(EINVAL, dev, "invalid pipe"); - if (MSM_PIPE_FLAGS(args->flags) & ~MSM_SUBMIT_FLAGS) + if (MSM_PIPE_FLAGS(args->flags) & ~MSM_SUBMIT_EXEC_FLAGS) return UERR(EINVAL, dev, "invalid flags"); if (args->flags & MSM_SUBMIT_SUDO) { diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h index 072e82a80607..1a948d49c610 100644 --- a/include/uapi/drm/msm_drm.h +++ b/include/uapi/drm/msm_drm.h @@ -245,7 +245,10 @@ struct drm_msm_gem_submit_cmd { __u32 size; /* in, cmdstream size */ __u32 pad; __u32 nr_relocs; /* in, number of submit_reloc's */ - __u64 relocs; /* in, ptr to array of submit_reloc's */ + union { + __u64 relocs; /* in, ptr to array of submit_reloc's */ + __u64 iova; /* cmdstream address (for VM_BIND contexts) */ + }; }; /* Each buffer referenced elsewhere in the cmdstream submit (ie. the @@ -264,6 +267,19 @@ struct drm_msm_gem_submit_cmd { #define MSM_SUBMIT_BO_DUMP 0x0004 #define MSM_SUBMIT_BO_NO_IMPLICIT 0x0008 +/* Map OP for submits to a VM_BIND submitqueue: + * - MAP: map a specified range of the BO into the VM + * - MAP_NULL: map a NULL page into the specified range of the VM, handle + * and bo_offset MBZ. A NULL range will return zero on reads + * and discard writes + * see: VkPhysicalDeviceSparseProperties::residencyNonResidentStrict + * - UNMAP: unmap a specified VM range, handle and bo_offset MBZ + */ +#define MSM_SUBMIT_BO_OP_MASK 0xf000 +#define MSM_SUBMIT_BO_OP_MAP 0x0000 +#define MSM_SUBMIT_BO_OP_MAP_NULL 0x1000 +#define MSM_SUBMIT_BO_OP_UNMAP 0x2000 + #define MSM_SUBMIT_BO_FLAGS (MSM_SUBMIT_BO_READ | \ MSM_SUBMIT_BO_WRITE | \ MSM_SUBMIT_BO_DUMP | \ @@ -272,7 +288,16 @@ struct drm_msm_gem_submit_cmd { struct drm_msm_gem_submit_bo { __u32 flags; /* in, mask of MSM_SUBMIT_BO_x */ __u32 handle; /* in, GEM handle */ - __u64 presumed; /* in/out, presumed buffer address */ + __u64 address; /* in/out, presumed buffer address */ }; + +struct drm_msm_gem_submit_bo_v2 { + __u32 flags; /* in, mask of MSM_SUBMIT_BO_x */ + __u32 handle; /* in, GEM handle */ + __u64 address; /* in/out, presumed buffer address */ + /* Remaining fields are only used with MSM_SUBMIT_OP_VM_BIND/_ASYNC: */ + __u64 bo_offset; + __u64 range; }; /* Valid submit ioctl flags: */ @@ -283,7 +308,8 @@ struct drm_msm_gem_submit_bo { #define MSM_SUBMIT_SYNCOBJ_IN 0x08000000 /* enable input syncobj */ #define MSM_SUBMIT_SYNCOBJ_OUT 0x04000000 /* enable output syncobj */ #define MSM_SUBMIT_FENCE_SN_IN 0x02000000 /* userspace passes in seqno fence */ -#define MSM_SUBMIT_FLAGS ( \ + +#define MSM_SUBMIT_EXEC_FLAGS ( \ MSM_SUBMIT_NO_IMPLICIT | \ MSM_SUBMIT_FENCE_FD_IN | \ MSM_SUBMIT_FENCE_FD_OUT | \ @@ -293,6 +319,13 @@ struct drm_msm_gem_submit_bo { MSM_SUBMIT_FENCE_SN_IN | \ 0) +#define MSM_SUBMIT_VM_BIND_FLAGS ( \ + MSM_SUBMIT_FENCE_FD_IN | \ + MSM_SUBMIT_FENCE_FD_OUT | \ + MSM_SUBMIT_SYNCOBJ_IN | \ + MSM_SUBMIT_SYNCOBJ_OUT | \ + 0) + #define MSM_SUBMIT_SYNCOBJ_RESET 0x00000001 /* Reset syncobj after wait. */ #define MSM_SUBMIT_SYNCOBJ_FLAGS ( \ MSM_SUBMIT_SYNCOBJ_RESET | \ @@ -307,14 +340,17 @@ struct drm_msm_gem_submit_syncobj { /* Each cmdstream submit consists of a table of buffers involved, and * one or more cmdstream buffers. This allows for conditional execution * (context-restore), and IB buffers needed for per tile/bin draw cmds. + * + * For MSM_SUBMIT_VM_BIND/_ASYNC operations, the queue must have been + * created with the MSM_SUBMITQUEUE_VM_BIND flag.
*/ struct drm_msm_gem_submit { __u32 flags; /* MSM_PIPE_x | MSM_SUBMIT_x */ __u32 fence; /* out (or in with MSM_SUBMIT_FENCE_SN_IN flag) */ __u32 nr_bos; /* in, number of submit_bo's */ - __u32 nr_cmds; /* in, number of submit_cmd's */ + __u32 nr_cmds; /* in, number of submit_cmd's, MBZ for VM_BIND queue */ __u64 bos; /* in, ptr to array of submit_bo's */ - __u64 cmds; /* in, ptr to array of submit_cmd's */ + __u64 cmds; /* in, ptr to array of submit_cmd's, MBZ for VM_BIND queue */ __s32 fence_fd; /* in/out fence fd (see MSM_SUBMIT_FENCE_FD_IN/OUT) */ __u32 queueid; /* in, submitqueue id */ __u64 in_syncobjs; /* in, ptr to array of drm_msm_gem_submit_syncobj */ @@ -322,8 +358,7 @@ struct drm_msm_gem_submit { __u32 nr_in_syncobjs; /* in, number of entries in in_syncobj */ __u32 nr_out_syncobjs; /* in, number of entries in out_syncobj. */ __u32 syncobj_stride; /* in, stride of syncobj arrays. */ - __u32 pad; /*in, reserved for future use, always 0. */ - + __u32 bos_stride; /* in, stride of bos array; if zero, 16 bytes is used. */ }; #define MSM_WAIT_FENCE_BOOST 0x00000001
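To tie the extended UAPI together, a hypothetical sketch (not from the patch) of issuing a single MAP op through the new BO table; the handle, iova, and size values are placeholders, and queue creation is shown with the next patch:

    struct drm_msm_gem_submit_bo_v2 bo = {
            .flags     = MSM_SUBMIT_BO_OP_MAP,  /* OP lives in the upper flag bits */
            .handle    = bo_handle,             /* GEM handle to map */
            .address   = gpu_va,                /* userspace-allocated iova */
            .bo_offset = 0,                     /* page-aligned offset into the BO */
            .range     = bo_size,               /* page-aligned length */
    };
    struct drm_msm_gem_submit submit = {
            .flags      = MSM_PIPE_3D0,
            .queueid    = vm_bind_queue_id,     /* queue created with MSM_SUBMITQUEUE_VM_BIND */
            .nr_bos     = 1,
            .bos        = (__u64)(uintptr_t)&bo,
            .bos_stride = sizeof(bo),
            /* nr_cmds/cmds must be zero on a VM_BIND queue */
    };
    int ret = drmIoctl(fd, DRM_IOCTL_MSM_GEM_SUBMIT, &submit);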
From patchwork Wed Mar 19 14:52:30 2025 X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 874761 From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , Marijn Suijten , David Airlie , Simona Vetter , Konrad Dybcio , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Sumit Semwal , Christian König , linux-kernel@vger.kernel.org (open list), linux-media@vger.kernel.org (open list: DMA BUFFER SHARING FRAMEWORK), linaro-mm-sig@lists.linaro.org (moderated list: DMA BUFFER SHARING FRAMEWORK) Subject: [PATCH v2 18/34] drm/msm: Add VM_BIND submitqueue Date: Wed, 19 Mar 2025 07:52:30 -0700 Message-ID: <20250319145425.51935-19-robdclark@gmail.com> In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com> References: <20250319145425.51935-1-robdclark@gmail.com> From: Rob Clark This submitqueue type isn't tied to a hw ringbuffer, but instead executes on the CPU for performing async VM_BIND ops.
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 3 +- drivers/gpu/drm/msm/msm_gem.h | 10 ++ drivers/gpu/drm/msm/msm_gem_submit.c | 128 +++++++++++++++++++++++--- drivers/gpu/drm/msm/msm_gem_vma.c | 107 +++++++++++++++++++++ drivers/gpu/drm/msm/msm_gpu.h | 3 + drivers/gpu/drm/msm/msm_submitqueue.c | 57 +++++++++--- include/uapi/drm/msm_drm.h | 9 +- 7 files changed, 287 insertions(+), 30 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 5a5220b6f21d..4c68a3dd3fed 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -232,8 +232,7 @@ static void put_pages(struct drm_gem_object *obj) } } -static struct page **msm_gem_get_pages_locked(struct drm_gem_object *obj, - unsigned madv) +struct page **msm_gem_get_pages_locked(struct drm_gem_object *obj, unsigned madv) { struct msm_gem_object *msm_obj = to_msm_bo(obj); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 8e29e36ca9c5..d427ead2dce0 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -53,6 +53,13 @@ struct msm_gem_vm { /** @base: Inherit from drm_gpuvm. */ struct drm_gpuvm base; + /** + * @sched: Scheduler used for asynchronous VM_BIND requests. + * + * Unused for kernel managed VMs (where all operations are synchronous). + */ + struct drm_gpu_scheduler sched; + /** * @mm: Memory management for kernel managed VA allocations * @@ -106,6 +113,8 @@ struct drm_gpuvm * msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, u64 va_start, u64 va_size, bool managed); +void msm_gem_vm_close(struct drm_gpuvm *gpuvm); + struct msm_fence_context; /** @@ -195,6 +204,7 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, uint64_t *iova); void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm); void msm_gem_pin_obj_locked(struct drm_gem_object *obj); +struct page **msm_gem_get_pages_locked(struct drm_gem_object *obj, unsigned madv); struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj); void msm_gem_unpin_pages_locked(struct drm_gem_object *obj); int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev, diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index bb61231ab8ba..39a6e0418bdf 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -23,6 +23,11 @@ #define SUBMIT_ERROR(err, submit, fmt, ...)
\ UERR(err, (submit)->dev, fmt, ##__VA_ARGS__) +static bool submit_is_vmbind(struct msm_gem_submit *submit) +{ + return !!(submit->queue->flags & MSM_SUBMITQUEUE_VM_BIND); +} + /* * Cmdstream submission: */ @@ -115,6 +120,17 @@ void __msm_gem_submit_destroy(struct kref *kref) kfree(submit); } +static bool invalid_bo_flags(bool vm_bind, uint32_t flags) +{ + if (vm_bind) { + return flags & ~(MSM_SUBMIT_BO_DUMP | MSM_SUBMIT_BO_OP_MASK); + } else { + /* at least one of READ and/or WRITE flags should be set: */ + return (flags & ~MSM_SUBMIT_BO_FLAGS) || + !(flags & (MSM_SUBMIT_BO_READ | MSM_SUBMIT_BO_WRITE)); + } +} + static bool invalid_alignment(uint64_t addr) { /* @@ -129,9 +145,10 @@ static int submit_lookup_objects(struct msm_gem_submit *submit, struct drm_msm_gem_submit *args, struct drm_file *file) { unsigned i, bo_stride = args->bos_stride; + bool vm_bind = submit_is_vmbind(submit); int ret = 0; - if (!bo_stride) + if (!bo_stride || !vm_bind) bo_stride = sizeof(struct drm_msm_gem_submit_bo); for (i = 0; i < args->nr_bos; i++) { @@ -151,11 +168,7 @@ static int submit_lookup_objects(struct msm_gem_submit *submit, goto out; } -/* at least one of READ and/or WRITE flags should be set: */ -#define MANDATORY_FLAGS (MSM_SUBMIT_BO_READ | MSM_SUBMIT_BO_WRITE) - - if ((submit_bo.flags & ~MSM_SUBMIT_BO_FLAGS) || - !(submit_bo.flags & MANDATORY_FLAGS)) + if (invalid_bo_flags(vm_bind, submit_bo.flags)) ret = SUBMIT_ERROR(EINVAL, submit, "invalid flags: %x\n", submit_bo.flags); if (invalid_alignment(submit_bo.address)) @@ -174,6 +187,7 @@ static int submit_lookup_objects(struct msm_gem_submit *submit, submit->bos[i].handle = submit_bo.handle; submit->bos[i].flags = submit_bo.flags; + submit->bos[i].iova = submit_bo.address; submit->bos[i].bo_offset = submit_bo.bo_offset; submit->bos[i].range = submit_bo.range; } @@ -183,6 +197,12 @@ static int submit_lookup_objects(struct msm_gem_submit *submit, for (i = 0; i < args->nr_bos; i++) { struct drm_gem_object *obj; + if (vm_bind) { + unsigned op = submit->bos[i].flags & MSM_SUBMIT_BO_OP_MASK; + if (op != MSM_SUBMIT_BO_OP_MAP) + continue; + } + /* normally use drm_gem_object_lookup(), but for bulk lookup * all under single table_lock just hit object_idr directly: */ @@ -297,13 +317,22 @@ static int submit_lookup_cmds(struct msm_gem_submit *submit, /* This is where we make sure all the bo's are reserved and pin'd: */ static int submit_lock_objects(struct msm_gem_submit *submit) { + bool vm_bind = submit_is_vmbind(submit); + unsigned flags = DRM_EXEC_INTERRUPTIBLE_WAIT; int ret; - drm_exec_init(&submit->exec, DRM_EXEC_INTERRUPTIBLE_WAIT, submit->nr_bos); + if (vm_bind) + flags |= DRM_EXEC_IGNORE_DUPLICATES; + + drm_exec_init(&submit->exec, flags, submit->nr_bos); drm_exec_until_all_locked (&submit->exec) { for (unsigned i = 0; i < submit->nr_bos; i++) { struct drm_gem_object *obj = submit->bos[i].obj; + + if (!obj) + continue; + ret = drm_exec_prepare_obj(&submit->exec, obj, 1); drm_exec_retry_on_contention(&submit->exec); if (ret) @@ -372,6 +401,28 @@ static int submit_pin_vmas(struct msm_gem_submit *submit) return ret; } +static int submit_get_pages(struct msm_gem_submit *submit) +{ + /* + * First loop, before holding the LRU lock, avoids holding the + * LRU lock while calling msm_gem_pin_vma_locked (which could + * trigger get_pages()) + */ + for (int i = 0; i < submit->nr_bos; i++) { + struct drm_gem_object *obj = submit->bos[i].obj; + struct page **pages; + + if (!obj) + continue; + + pages = msm_gem_get_pages_locked(obj, MSM_MADV_WILLNEED); + if 
(IS_ERR(pages)) + return PTR_ERR(pages); + } + + return 0; +} + static void submit_pin_objects(struct msm_gem_submit *submit) { struct msm_drm_private *priv = submit->dev->dev_private; @@ -385,7 +436,12 @@ static void submit_pin_objects(struct msm_gem_submit *submit) */ mutex_lock(&priv->lru.lock); for (int i = 0; i < submit->nr_bos; i++) { - msm_gem_pin_obj_locked(submit->bos[i].obj); + struct drm_gem_object *obj = submit->bos[i].obj; + + if (!obj) + continue; + + msm_gem_pin_obj_locked(obj); } mutex_unlock(&priv->lru.lock); @@ -400,6 +456,9 @@ static void submit_unpin_objects(struct msm_gem_submit *submit) for (int i = 0; i < submit->nr_bos; i++) { struct drm_gem_object *obj = submit->bos[i].obj; + if (!obj) + continue; + msm_gem_unpin_locked(obj); } @@ -413,6 +472,9 @@ static void submit_attach_object_fences(struct msm_gem_submit *submit) for (i = 0; i < submit->nr_bos; i++) { struct drm_gem_object *obj = submit->bos[i].obj; + if (!obj) + continue; + if (submit->bos[i].flags & MSM_SUBMIT_BO_WRITE) dma_resv_add_fence(obj->resv, submit->user_fence, DMA_RESV_USAGE_WRITE); @@ -708,6 +770,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, struct msm_ringbuffer *ring; struct msm_submit_post_dep *post_deps = NULL; struct drm_syncobj **syncobjs_to_reset = NULL; + unsigned cmds_to_parse; int out_fence_fd = -1; unsigned i; int ret; @@ -724,9 +787,6 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (MSM_PIPE_ID(args->flags) != MSM_PIPE_3D0) return UERR(EINVAL, dev, "invalid pipe"); - if (MSM_PIPE_FLAGS(args->flags) & ~MSM_SUBMIT_EXEC_FLAGS) - return UERR(EINVAL, dev, "invalid flags"); - if (args->flags & MSM_SUBMIT_SUDO) { if (!IS_ENABLED(CONFIG_DRM_MSM_GPU_SUDO) || !capable(CAP_SYS_RAWIO)) @@ -737,6 +797,26 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (!queue) return -ENOENT; + if (queue->flags & MSM_SUBMITQUEUE_VM_BIND) { + if (args->nr_cmds || args->cmds) { + ret = UERR(EINVAL, dev, "nr_cmds should be zero for VM_BIND queue"); + goto out_post_unlock; + } + if (MSM_PIPE_FLAGS(args->flags) & ~MSM_SUBMIT_VM_BIND_FLAGS) { + ret = UERR(EINVAL, dev, "invalid flags"); + goto out_post_unlock; + } + } else { + if (msm_context_is_vmbind(ctx) && (args->nr_bos || args->bos)) { + ret = UERR(EINVAL, dev, "nr_bos should be zero for VM_BIND contexts"); + goto out_post_unlock; + } + if (MSM_PIPE_FLAGS(args->flags) & ~MSM_SUBMIT_EXEC_FLAGS) { + ret = UERR(EINVAL, dev, "invalid flags"); + goto out_post_unlock; + } + } + ring = gpu->rb[queue->ring_nr]; if (args->flags & MSM_SUBMIT_FENCE_FD_OUT) { @@ -813,19 +893,37 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (ret) goto out; - if (!(args->flags & MSM_SUBMIT_NO_IMPLICIT)) { + if (msm_context_is_vmbind(ctx) && !submit_is_vmbind(submit)) { + /* + * If we are not using VM_BIND, submit_pin_vmas() will validate + * just the BOs attached to the submit. In that case we don't + * need to validate the _entire_ vm, because userspace tracked + * what BOs are associated with the submit. + */ + ret = drm_gpuvm_validate(submit->vm, &submit->exec); + if (ret) + goto out; + } + + if (!(args->flags & MSM_SUBMIT_NO_IMPLICIT) && !submit_is_vmbind(submit)) { ret = submit_fence_sync(submit); if (ret) goto out; } - ret = submit_pin_vmas(submit); + if (submit_is_vmbind(submit)) { + ret = submit_get_pages(submit); + } else { + ret = submit_pin_vmas(submit); + } if (ret) goto out; submit_pin_objects(submit); - for (i = 0; i < args->nr_cmds; i++) { + cmds_to_parse = msm_context_is_vmbind(ctx) ? 
0 : args->nr_cmds; + + for (i = 0; i < cmds_to_parse; i++) { struct drm_gem_object *obj; uint64_t iova; @@ -856,7 +954,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, goto out; } - submit->nr_cmds = i; + submit->nr_cmds = args->nr_cmds; idr_preload(GFP_KERNEL); diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index 0bc22618e9f0..8c780dd6a936 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -162,6 +162,70 @@ static const struct drm_gpuvm_ops msm_gpuvm_ops = { .vm_free = msm_gem_vm_free, }; +static int +run_bo_op(struct msm_gem_submit *submit, const struct msm_gem_submit_bo *bo) +{ + unsigned op = bo->flags & MSM_SUBMIT_BO_OP_MASK; + + switch (op) { + case MSM_SUBMIT_BO_OP_MAP: + case MSM_SUBMIT_BO_OP_MAP_NULL: + return drm_gpuvm_sm_map(submit->vm, submit->vm, bo->iova, + bo->range, bo->obj, bo->bo_offset); + break; + case MSM_SUBMIT_BO_OP_UNMAP: + return drm_gpuvm_sm_unmap(submit->vm, submit->vm, bo->iova, + bo->bo_offset); + } + + return -EINVAL; +} + +static struct dma_fence * +msm_vma_job_run(struct drm_sched_job *job) +{ + struct msm_gem_submit *submit = to_msm_submit(job); + + for (unsigned i = 0; i < submit->nr_bos; i++) { + int ret = run_bo_op(submit, &submit->bos[i]); + if (ret) { + to_msm_vm(submit->vm)->unusable = true; + return ERR_PTR(ret); + } + } + + /* VM_BIND ops run on CPU, so we are done now: */ + msm_submit_retire(submit); + + for (int i = 0; i < submit->nr_bos; i++) { + struct drm_gem_object *obj = submit->bos[i].obj; + + if (!obj) + continue; + + msm_gem_lock(obj); + msm_gem_unpin_locked(obj); + msm_gem_unlock(obj); + } + + /* VM_BIND ops are synchronous, so no fence to wait on: */ + return NULL; +} + +static void +msm_vma_job_free(struct drm_sched_job *job) +{ + struct msm_gem_submit *submit = to_msm_submit(job); + + drm_sched_job_cleanup(job); + msm_gem_submit_put(submit); +} + +static const struct drm_sched_backend_ops msm_vm_bind_ops = { + .run_job = msm_vma_job_run, + .free_job = msm_vma_job_free +}; + /** * msm_gem_vm_create() - Create and initialize a &msm_gem_vm * @drm: the drm device @@ -198,6 +262,21 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, goto err_free_vm; } + if (!managed) { + struct drm_sched_init_args args = { + .ops = &msm_vm_bind_ops, + .num_rqs = 1, + .credit_limit = 1, + .timeout = MAX_SCHEDULE_TIMEOUT, + .name = "msm-vm-bind", + .dev = drm->dev, + }; + + ret = drm_sched_init(&vm->sched, &args); + if (ret) + goto err_free_dummy; + } + drm_gpuvm_init(&vm->base, name, flags, drm, dummy_gem, va_start, va_size, 0, 0, &msm_gpuvm_ops); drm_gem_object_put(dummy_gem); @@ -212,8 +291,36 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, return &vm->base; +err_free_dummy: + drm_gem_object_put(dummy_gem); + err_free_vm: kfree(vm); return ERR_PTR(ret); } + +/** + * msm_gem_vm_close() - Close a VM + * @gpuvm: The VM to close + * + * Called when the drm device file is closed, to tear down VM related resources + * (which will drop refcounts to GEM objects that were still mapped into the + * VM at the time). + */ +void +msm_gem_vm_close(struct drm_gpuvm *gpuvm) +{ + struct msm_gem_vm *vm = to_msm_vm(gpuvm); + + /* + * For kernel managed VMs, the VMAs are torn down when the handle is + * closed, so nothing more to do. 
+ */ + if (vm->managed) + return; + + /* Kill the scheduler now, so we aren't racing with it for cleanup: */ + drm_sched_stop(&vm->sched, NULL); + drm_sched_fini(&vm->sched); +} diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 20f52d9636b0..49c8862ada13 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -562,6 +562,9 @@ struct msm_gpu_submitqueue { struct mutex lock; struct kref ref; struct drm_sched_entity *entity; + + /** @_vm_bind_entity: used for @entity pointer for VM_BIND queues */ + struct drm_sched_entity _vm_bind_entity[0]; }; struct msm_gpu_state_bo { diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c index 8ced49c7557b..99ab780d5d7b 100644 --- a/drivers/gpu/drm/msm/msm_submitqueue.c +++ b/drivers/gpu/drm/msm/msm_submitqueue.c @@ -72,6 +72,9 @@ void msm_submitqueue_destroy(struct kref *kref) idr_destroy(&queue->fence_idr); + if (queue->entity == &queue->_vm_bind_entity[0]) + drm_sched_entity_destroy(queue->entity); + msm_context_put(queue->ctx); kfree(queue); @@ -115,6 +118,11 @@ void msm_submitqueue_close(struct msm_context *ctx) list_del(&entry->node); msm_submitqueue_put(entry); } + + if (!ctx->vm) + return; + + msm_gem_vm_close(ctx->vm); } static struct drm_sched_entity * @@ -160,8 +168,6 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_context *ctx, struct msm_drm_private *priv = drm->dev_private; struct msm_gpu_submitqueue *queue; enum drm_sched_priority sched_prio; - extern int enable_preemption; - bool preemption_supported; unsigned ring_nr; int ret; @@ -171,26 +177,53 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_context *ctx, if (!priv->gpu) return -ENODEV; - preemption_supported = priv->gpu->nr_rings == 1 && enable_preemption != 0; + if (flags & MSM_SUBMITQUEUE_VM_BIND) { + unsigned sz; - if (flags & MSM_SUBMITQUEUE_ALLOW_PREEMPT && preemption_supported) - return -EINVAL; + /* Not allowed for kernel managed VMs (ie. 
kernel allocs VA) */ + if (!msm_context_is_vmbind(ctx)) + return -EINVAL; - ret = msm_gpu_convert_priority(priv->gpu, prio, &ring_nr, &sched_prio); - if (ret) - return ret; + if (prio) + return -EINVAL; + + sz = struct_size(queue, _vm_bind_entity, 1); + queue = kzalloc(sz, GFP_KERNEL); + } else { + extern int enable_preemption; + bool preemption_supported = + priv->gpu->nr_rings == 1 && enable_preemption != 0; + + if (flags & MSM_SUBMITQUEUE_ALLOW_PREEMPT && preemption_supported) + return -EINVAL; - queue = kzalloc(sizeof(*queue), GFP_KERNEL); + ret = msm_gpu_convert_priority(priv->gpu, prio, &ring_nr, &sched_prio); + if (ret) + return ret; + + queue = kzalloc(sizeof(*queue), GFP_KERNEL); + } if (!queue) return -ENOMEM; kref_init(&queue->ref); queue->flags = flags; - queue->ring_nr = ring_nr; - queue->entity = get_sched_entity(ctx, priv->gpu->rb[ring_nr], - ring_nr, sched_prio); + if (flags & MSM_SUBMITQUEUE_VM_BIND) { + struct drm_gpu_scheduler *sched = &to_msm_vm(msm_context_vm(drm, ctx))->sched; + + queue->entity = &queue->_vm_bind_entity[0]; + + drm_sched_entity_init(queue->entity, DRM_SCHED_PRIORITY_KERNEL, + &sched, 1, NULL); + } else { + queue->ring_nr = ring_nr; + + queue->entity = get_sched_entity(ctx, priv->gpu->rb[ring_nr], + ring_nr, sched_prio); + } + if (IS_ERR(queue->entity)) { ret = PTR_ERR(queue->entity); kfree(queue); diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h index 1a948d49c610..39b55c8d7413 100644 --- a/include/uapi/drm/msm_drm.h +++ b/include/uapi/drm/msm_drm.h @@ -404,12 +404,19 @@ struct drm_msm_gem_madvise { /* * Draw queues allow the user to set specific submission parameter. Command * submissions specify a specific submitqueue to use. ID 0 is reserved for - * backwards compatibility as a "default" submitqueue + * backwards compatibility as a "default" submitqueue. + * + * Because VM_BIND async updates happen on the CPU, they must run on a + * virtual queue created with the flag MSM_SUBMITQUEUE_VM_BIND. If we had + * a way to do pgtable updates on the GPU, we could drop this restriction. 
*/ #define MSM_SUBMITQUEUE_ALLOW_PREEMPT 0x00000001 +#define MSM_SUBMITQUEUE_VM_BIND 0x00000002 /* virtual queue for VM_BIND ops */ + #define MSM_SUBMITQUEUE_FLAGS ( \ MSM_SUBMITQUEUE_ALLOW_PREEMPT | \ + MSM_SUBMITQUEUE_VM_BIND | \ 0)
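As a usage illustration (an assumption about how userspace would drive this, not part of the series): after enabling MSM_PARAM_EN_VM_BIND, the dedicated queue is created with this flag before any VM_BIND submits are issued:

    struct drm_msm_submitqueue req = {
            .flags = MSM_SUBMITQUEUE_VM_BIND,
            .prio  = 0,   /* must be 0 for VM_BIND queues */
    };

    if (drmIoctl(fd, DRM_IOCTL_MSM_SUBMITQUEUE_NEW, &req) == 0)
            vm_bind_queue_id = req.id;   /* used as queueid in VM_BIND submits */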
From patchwork Wed Mar 19 14:52:31 2025 X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 875051 From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , Marijn Suijten , David Airlie , Simona Vetter , Konrad Dybcio , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Sumit Semwal , Christian König , linux-kernel@vger.kernel.org (open list), linux-media@vger.kernel.org (open list: DMA BUFFER SHARING FRAMEWORK), linaro-mm-sig@lists.linaro.org (moderated list: DMA BUFFER SHARING FRAMEWORK) Subject: [PATCH v2 19/34] drm/msm: Add _NO_SHARE flag Date: Wed, 19 Mar 2025 07:52:31 -0700 Message-ID: <20250319145425.51935-20-robdclark@gmail.com> In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com> References: <20250319145425.51935-1-robdclark@gmail.com> From: Rob Clark Buffers that are not shared between contexts can share a single resv object. This way drm_gpuvm will not track them as external objects, and submit-time validation overhead will be O(1) for all N non-shared BOs, instead of O(n).
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_drv.h | 1 + drivers/gpu/drm/msm/msm_gem.c | 23 +++++++++++++++++++++++ drivers/gpu/drm/msm/msm_gem_prime.c | 15 +++++++++++++++ include/uapi/drm/msm_drm.h | 14 ++++++++++++++ 4 files changed, 53 insertions(+) diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index b77fd2c531c3..b0add236cbb3 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -246,6 +246,7 @@ int msm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map); void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map); struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sg); +struct dma_buf *msm_gem_prime_export(struct drm_gem_object *obj, int flags); int msm_gem_prime_pin(struct drm_gem_object *obj); void msm_gem_prime_unpin(struct drm_gem_object *obj); diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 4c68a3dd3fed..9d4f7b76471f 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -522,6 +522,9 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, msm_gem_assert_locked(obj); + if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE) + return -EINVAL; + vma = get_vma_locked(obj, vm, range_start, range_end); if (IS_ERR(vma)) return PTR_ERR(vma); @@ -1032,6 +1035,16 @@ static void msm_gem_free_object(struct drm_gem_object *obj) put_pages(obj); } + if (msm_obj->flags & MSM_BO_NO_SHARE) { + struct drm_gem_object *r_obj = + container_of(obj->resv, struct drm_gem_object, _resv); + + BUG_ON(obj->resv == &obj->_resv); + + /* Drop reference we hold to shared resv obj: */ + drm_gem_object_put(r_obj); + } + drm_gem_object_release(obj); kfree(msm_obj->metadata); @@ -1064,6 +1077,15 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file, if (name) msm_gem_object_set_name(obj, "%s", name); + if (flags & MSM_BO_NO_SHARE) { + struct msm_context *ctx = file->driver_priv; + struct drm_gem_object *r_obj = drm_gpuvm_resv_obj(ctx->vm); + + drm_gem_object_get(r_obj); + + obj->resv = r_obj->resv; + } + ret = drm_gem_handle_create(file, obj, handle); /* drop reference from allocate - handle holds it now */ @@ -1096,6 +1118,7 @@ static const struct drm_gem_object_funcs msm_gem_object_funcs = { .free = msm_gem_free_object, .open = msm_gem_open, .close = msm_gem_close, + .export = msm_gem_prime_export, .pin = msm_gem_prime_pin, .unpin = msm_gem_prime_unpin, .get_sg_table = msm_gem_prime_get_sg_table, diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c index ee267490c935..1a6d8099196a 100644 --- a/drivers/gpu/drm/msm/msm_gem_prime.c +++ b/drivers/gpu/drm/msm/msm_gem_prime.c @@ -16,6 +16,9 @@ struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj) struct msm_gem_object *msm_obj = to_msm_bo(obj); int npages = obj->size >> PAGE_SHIFT; + if (msm_obj->flags & MSM_BO_NO_SHARE) + return ERR_PTR(-EINVAL); + if (WARN_ON(!msm_obj->pages)) /* should have already pinned! 
*/ return ERR_PTR(-ENOMEM); @@ -45,6 +48,15 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev, return msm_gem_import(dev, attach->dmabuf, sg); } + +struct dma_buf *msm_gem_prime_export(struct drm_gem_object *obj, int flags) +{ + if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE) + return ERR_PTR(-EPERM); + + return drm_gem_prime_export(obj, flags); +} + int msm_gem_prime_pin(struct drm_gem_object *obj) { struct page **pages; @@ -53,6 +65,9 @@ int msm_gem_prime_pin(struct drm_gem_object *obj) if (obj->import_attach) return 0; + if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE) + return -EINVAL; + pages = msm_gem_pin_pages_locked(obj); if (IS_ERR(pages)) ret = PTR_ERR(pages); diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h index 39b55c8d7413..a7e48ee1dd95 100644 --- a/include/uapi/drm/msm_drm.h +++ b/include/uapi/drm/msm_drm.h @@ -138,6 +138,19 @@ struct drm_msm_param { #define MSM_BO_SCANOUT 0x00000001 /* scanout capable */ #define MSM_BO_GPU_READONLY 0x00000002 +/* Private buffers do not need to be explicitly listed in the SUBMIT + * ioctl, unless referenced by a drm_msm_gem_submit_cmd. Private + * buffers may NOT be imported/exported or used for scanout (or any + * other situation where buffers can be indefinitely pinned, but + * cases other than scanout are all kernel owned BOs which are not + * visible to userspace). + * + * In exchange for those constraints, all private BOs associated with + * a single context (drm_file) share a single dma_resv, and if there + * has been no eviction since the last submit, there is no per-BO + * bookkeeping to do, significantly cutting the SUBMIT overhead. + */ +#define MSM_BO_NO_SHARE 0x00000004 #define MSM_BO_CACHE_MASK 0x000f0000 /* cache modes */ #define MSM_BO_CACHED 0x00010000 @@ -147,6 +160,7 @@ struct drm_msm_param { #define MSM_BO_FLAGS (MSM_BO_SCANOUT | \ MSM_BO_GPU_READONLY | \ + MSM_BO_NO_SHARE | \ MSM_BO_CACHE_MASK) struct drm_msm_gem_new {
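By way of example (hypothetical usage, not from the patch), a VM_BIND-mode driver would allocate its internal, never-exported buffers with the new flag so they share the per-context resv:

    struct drm_msm_gem_new gem = {
            .size  = size,
            .flags = MSM_BO_WC | MSM_BO_NO_SHARE,   /* private to this drm_file */
    };

    if (drmIoctl(fd, DRM_IOCTL_MSM_GEM_NEW, &gem) == 0)
            handle = gem.handle;   /* cannot be exported or used for scanout */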
From patchwork Wed Mar 19 14:52:32 2025 X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 874760 From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Rob Clark , Sean Paul , Konrad Dybcio , Abhinav Kumar , Dmitry Baryshkov , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 20/34] drm/msm: Split out helper to get iommu prot flags Date: Wed, 19 Mar 2025 07:52:32 -0700 Message-ID: <20250319145425.51935-21-robdclark@gmail.com> In-Reply-To:
<20250319145425.51935-1-robdclark@gmail.com> References: <20250319145425.51935-1-robdclark@gmail.com> Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Rob Clark We'll re-use this in the vm_bind path. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 12 ++++++++++-- drivers/gpu/drm/msm/msm_gem.h | 1 + 2 files changed, 11 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 9d4f7b76471f..632f560c81ec 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -450,10 +450,9 @@ static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj, return vma; } -int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma) +int msm_gem_prot(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct page **pages; int prot = IOMMU_READ; if (!(msm_obj->flags & MSM_BO_GPU_READONLY)) @@ -469,6 +468,15 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma) else if (prot == 2) prot |= IOMMU_USE_LLC_NWA; + return prot; +} + +int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma) +{ + struct msm_gem_object *msm_obj = to_msm_bo(obj); + struct page **pages; + int prot = msm_gem_prot(obj); + msm_gem_assert_locked(obj); pages = msm_gem_get_pages_locked(obj, MSM_MADV_WILLNEED); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index d427ead2dce0..7ccdf15476b9 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -188,6 +188,7 @@ struct msm_gem_object { #define to_msm_bo(x) container_of(x, struct msm_gem_object, base) uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj); +int msm_gem_prot(struct drm_gem_object *obj); int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma); void msm_gem_unpin_locked(struct drm_gem_object *obj); void msm_gem_unpin_active(struct drm_gem_object *obj); From patchwork Wed Mar 19 14:52:33 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 875050 Received: from mail-pl1-f170.google.com (mail-pl1-f170.google.com [209.85.214.170]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8445C211715; Wed, 19 Mar 2025 14:55:48 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.170 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1742396150; cv=none; b=i3APBjE6r9xVNg0mlMFeaAI/9DfA0jkYXLlaWByReRNh9dl6xZO/8X5egcc0iaTBH6hZqZW6hh5aKcbDNzOV/85F3SwpWUQmWWzceUNE8rO34FVxMUcLm7awDU9BYZgLYWiwVXFBChqCafdLUqxE/cTmlMWYt80wLP1EvFYe5LA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1742396150; c=relaxed/simple; bh=VywkRH36PUkOix/NxRNrDuKvZQG7BxYGapOk1mZMPJo=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=QJazie43+ch5d0meMnsmRSYna5FYmOfSliHfGSPRLGqcDw5qjo56N1MylQA6ppakpcQetJYYjuF8oBekvDaACBo1DBCrPV7VYseh5MGSTH1yDIk4tdkUK0qNyQygL7lv6LlsCMdLNaLSXswYb1SDaiZinvhwgYC0oL6XO20s8wo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=nnlB2JSV; arc=none smtp.client-ip=209.85.214.170 
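To illustrate why the helper is split out, a vm_bind-style caller might use
it roughly like this; the surrounding function is an assumption for
illustration (and note that at this point in the series msm_gem_vma_map()
still takes a size argument):

  /* Illustrative only: resolve prot flags once, independent of pinning */
  static int example_map_vma(struct drm_gem_object *obj, struct drm_gpuva *vma)
  {
          struct msm_gem_object *msm_obj = to_msm_bo(obj);
          int prot = msm_gem_prot(obj);   /* READ/WRITE + cacheability bits */

          return msm_gem_vma_map(vma, prot, msm_obj->sgt, obj->size);
  }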
From patchwork Wed Mar 19 14:52:33 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 875050
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov,
 Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 21/34] drm/msm: Add mmu support for non-zero offset
Date: Wed, 19 Mar 2025 07:52:33 -0700
Message-ID: <20250319145425.51935-22-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>

This only needs to be supported for the iopgtable MMU; the other cases are
either only used for kernel managed mappings (where the offset is always
zero) or are devices which do not support sparse bindings.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/a2xx_gpummu.c |  5 ++++-
 drivers/gpu/drm/msm/msm_gem.c            |  4 ++--
 drivers/gpu/drm/msm/msm_gem.h            |  4 ++--
 drivers/gpu/drm/msm/msm_gem_vma.c        | 13 +++++++------
 drivers/gpu/drm/msm/msm_iommu.c          | 22 ++++++++++++++++++++--
 drivers/gpu/drm/msm/msm_mmu.h            |  2 +-
 6 files changed, 36 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c
index 39641551eeb6..6124336af2ec 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c
@@ -29,13 +29,16 @@ static void a2xx_gpummu_detach(struct msm_mmu *mmu)
 }
 
 static int a2xx_gpummu_map(struct msm_mmu *mmu, uint64_t iova,
-		struct sg_table *sgt, size_t len, int prot)
+		struct sg_table *sgt, size_t off, size_t len,
+		int prot)
 {
 	struct a2xx_gpummu *gpummu = to_a2xx_gpummu(mmu);
 	unsigned idx = (iova - GPUMMU_VA_START) / GPUMMU_PAGE_SIZE;
 	struct sg_dma_page_iter dma_iter;
 	unsigned prot_bits = 0;
 
+	WARN_ON(off != 0);
+
 	if (prot & IOMMU_WRITE)
 		prot_bits |= 1;
 	if (prot & IOMMU_READ)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 632f560c81ec..577da3c54c8c 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -441,7 +441,7 @@ static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj,
 	vma = lookup_vma(obj, vm);
 
 	if (!vma) {
-		vma = msm_gem_vma_new(vm, obj, range_start, range_end);
+		vma = msm_gem_vma_new(vm, obj, 0, range_start, range_end);
 	} else {
 		GEM_WARN_ON(vma->va.addr < range_start);
 		GEM_WARN_ON((vma->va.addr + obj->size) > range_end);
@@ -483,7 +483,7 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma)
 	if (IS_ERR(pages))
 		return PTR_ERR(pages);
 
-	return msm_gem_vma_map(vma, prot, msm_obj->sgt, obj->size);
+	return msm_gem_vma_map(vma, prot, msm_obj->sgt);
 }
 
 void msm_gem_unpin_locked(struct drm_gem_object *obj)
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 7ccdf15476b9..3919b384d599 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -140,9 +140,9 @@ struct msm_gem_vma {
 
 struct drm_gpuva *
 msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj,
-		u64 range_start, u64 range_end);
+		u64 offset, u64 range_start, u64 range_end);
 void msm_gem_vma_purge(struct drm_gpuva *vma);
-int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt, int size);
+int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt);
 void msm_gem_vma_close(struct drm_gpuva *vma);
 
 struct msm_gem_object {
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 8c780dd6a936..d51d54c0da33 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -38,8 +38,7 @@ void msm_gem_vma_purge(struct drm_gpuva *vma)
 
 /* Map and pin vma: */
 int
-msm_gem_vma_map(struct drm_gpuva *vma, int prot,
-		struct sg_table *sgt, int size)
+msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt)
 {
 	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
 	struct msm_gem_vm *vm = to_msm_vm(vma->vm);
@@ -62,8 +61,9 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot,
 	 * Revisit this if we can come up with a scheme to pre-alloc pages
 	 * for the pgtable in map/unmap ops.
 	 */
-	ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, size, prot);
-
+	ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt,
+				  vma->gem.offset, vma->va.range,
+				  prot);
 	if (ret) {
 		msm_vma->mapped = false;
 	}
@@ -97,7 +97,7 @@ void msm_gem_vma_close(struct drm_gpuva *vma)
 /* Create a new vma and allocate an iova for it */
 struct drm_gpuva *
 msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
-		u64 range_start, u64 range_end)
+		u64 offset, u64 range_start, u64 range_end)
 {
 	struct msm_gem_vm *vm = to_msm_vm(_vm);
 	struct drm_gpuvm_bo *vm_bo;
@@ -109,6 +109,7 @@ msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
 		return ERR_PTR(-ENOMEM);
 
 	if (vm->managed) {
+		BUG_ON(offset != 0);
 		spin_lock(&vm->mm_lock);
 		ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
 						  obj->size, PAGE_SIZE, 0,
@@ -124,7 +125,7 @@ msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
 
 	GEM_WARN_ON((range_end - range_start) > obj->size);
 
-	drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0);
+	drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, offset);
 	vma->mapped = false;
 
 	mutex_lock(&vm->vm_lock);
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index e70088a91283..2fd48e66bc98 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -113,7 +113,8 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova,
 }
 
 static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
-		struct sg_table *sgt, size_t len, int prot)
+		struct sg_table *sgt, size_t off, size_t len,
+		int prot)
 {
 	struct msm_iommu_pagetable *pagetable = to_pagetable(mmu);
 	struct io_pgtable_ops *ops = pagetable->pgtbl_ops;
@@ -125,6 +126,19 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
 		size_t size = sg->length;
 		phys_addr_t phys = sg_phys(sg);
 
+		if (!len)
+			break;
+
+		if (size <= off) {
+			off -= size;
+			continue;
+		}
+
+		phys += off;
+		size -= off;
+		size = min_t(size_t, size, len);
+		off = 0;
+
 		while (size) {
 			size_t pgsize, count, mapped = 0;
 			int ret;
@@ -140,6 +154,7 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
 			phys += mapped;
 			addr += mapped;
 			size -= mapped;
+			len -= mapped;
 
 			if (ret) {
 				msm_iommu_pagetable_unmap(mmu, iova, addr - iova);
@@ -400,11 +415,14 @@ static void msm_iommu_detach(struct msm_mmu *mmu)
 }
 
 static int msm_iommu_map(struct msm_mmu *mmu, uint64_t iova,
-		struct sg_table *sgt, size_t len, int prot)
+		struct sg_table *sgt, size_t off, size_t len,
+		int prot)
 {
 	struct msm_iommu *iommu = to_msm_iommu(mmu);
 	size_t ret;
 
+	WARN_ON(off != 0);
+
 	/* The arm-smmu driver expects the addresses to be sign extended */
 	if (iova & BIT_ULL(48))
 		iova |= GENMASK_ULL(63, 49);
diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h
index c33247e459d6..c874852b7331 100644
--- a/drivers/gpu/drm/msm/msm_mmu.h
+++ b/drivers/gpu/drm/msm/msm_mmu.h
@@ -12,7 +12,7 @@ struct msm_mmu_funcs {
 	void (*detach)(struct msm_mmu *mmu);
 	int (*map)(struct msm_mmu *mmu, uint64_t iova, struct sg_table *sgt,
-			size_t len, int prot);
+			size_t off, size_t len, int prot);
 	int (*unmap)(struct msm_mmu *mmu, uint64_t iova, size_t len);
 	void (*destroy)(struct msm_mmu *mmu);
 	void (*resume_translation)(struct msm_mmu *mmu);
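As a worked example of the new off/len handling, the skip logic above can be
modelled in isolation; the segment sizes and offsets below are made up for
illustration, and the real code walks an sg_table and calls ops->map_pages():

  #include <stdio.h>

  int main(void)
  {
          size_t seg[2] = { 16384, 16384 };       /* two 16K segments */
          size_t off = 20480, len = 8192;         /* map 8K starting 20K in */

          for (int i = 0; i < 2 && len; i++) {
                  size_t size = seg[i];

                  if (size <= off) {              /* skip whole segment */
                          off -= size;
                          continue;
                  }
                  size -= off;                    /* start mid-segment */
                  if (size > len)
                          size = len;
                  off = 0;
                  printf("map %zu bytes from segment %d\n", size, i);
                  len -= size;
          }
          return 0;
  }

This prints "map 8192 bytes from segment 1": the first 16K segment is skipped
entirely, the map starts 4K into the second segment, and exactly len bytes
are mapped.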
From patchwork Wed Mar 19 14:52:34 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 874759
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov,
 Marijn Suijten, David Airlie, Simona Vetter, Maarten Lankhorst,
 Maxime Ripard, Thomas Zimmermann, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 22/34] drm/msm: Add PRR support
Date: Wed, 19 Mar 2025 07:52:34 -0700
Message-ID: <20250319145425.51935-23-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>

Add PRR (Partially Resident Region) support.  PRR is a bypass address which
makes GPU writes go to /dev/null and reads return zero; this is used to
implement Vulkan sparse residency.

To support PRR/NULL mappings, we allocate a page to reserve a physical
address which we know will not be used as part of a GEM object, and
configure the SMMU to use this address for PRR/NULL mappings.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 10 ++++
 drivers/gpu/drm/msm/msm_iommu.c         | 62 ++++++++++++++++++++++++-
 include/uapi/drm/msm_drm.h              |  2 +
 3 files changed, 73 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 739161df3e3c..bac6cd3afe37 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -346,6 +346,13 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
 	return 0;
 }
 
+static bool
+adreno_smmu_has_prr(struct msm_gpu *gpu)
+{
+	struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(&gpu->pdev->dev);
+	return adreno_smmu && adreno_smmu->set_prr_addr;
+}
+
 int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t *value, uint32_t *len)
 {
@@ -431,6 +438,9 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 	case MSM_PARAM_UCHE_TRAP_BASE:
 		*value = adreno_gpu->uche_trap_base;
 		return 0;
+	case MSM_PARAM_HAS_PRR:
+		*value = adreno_smmu_has_prr(gpu);
+		return 0;
 	default:
 		return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param);
 	}
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index 2fd48e66bc98..756bd55ee94f 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -13,6 +13,7 @@ struct msm_iommu {
 	struct msm_mmu base;
 	struct iommu_domain *domain;
 	atomic_t pagetables;
+	struct page *prr_page;
 };
 
 #define to_msm_iommu(x) container_of(x, struct msm_iommu, base)
@@ -112,6 +113,36 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova,
 	return (size == 0) ? 0 : -EINVAL;
 }
 
+static int msm_iommu_pagetable_map_prr(struct msm_mmu *mmu, u64 iova, size_t len, int prot)
+{
+	struct msm_iommu_pagetable *pagetable = to_pagetable(mmu);
+	struct io_pgtable_ops *ops = pagetable->pgtbl_ops;
+	struct msm_iommu *iommu = to_msm_iommu(pagetable->parent);
+	phys_addr_t phys = page_to_phys(iommu->prr_page);
+	u64 addr = iova;
+
+	while (len) {
+		size_t mapped = 0;
+		size_t size = PAGE_SIZE;
+		int ret;
+
+		ret = ops->map_pages(ops, addr, phys, size, 1, prot, GFP_KERNEL, &mapped);
+
+		/* map_pages could fail after mapping some of the pages,
+		 * so update the counters before error handling.
+		 */
+		addr += mapped;
+		len -= mapped;
+
+		if (ret) {
+			msm_iommu_pagetable_unmap(mmu, iova, addr - iova);
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
 static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
 		struct sg_table *sgt, size_t off, size_t len,
 		int prot)
@@ -122,6 +153,9 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
 	u64 addr = iova;
 	unsigned int i;
 
+	if (!sgt)
+		return msm_iommu_pagetable_map_prr(mmu, iova, len, prot);
+
 	for_each_sgtable_sg(sgt, sg, i) {
 		size_t size = sg->length;
 		phys_addr_t phys = sg_phys(sg);
@@ -177,9 +211,16 @@ static void msm_iommu_pagetable_destroy(struct msm_mmu *mmu)
 	 * If this is the last attached pagetable for the parent,
 	 * disable TTBR0 in the arm-smmu driver
 	 */
-	if (atomic_dec_return(&iommu->pagetables) == 0)
+	if (atomic_dec_return(&iommu->pagetables) == 0) {
 		adreno_smmu->set_ttbr0_cfg(adreno_smmu->cookie, NULL);
 
+		if (adreno_smmu->set_prr_bit) {
+			adreno_smmu->set_prr_bit(adreno_smmu->cookie, false);
+			__free_page(iommu->prr_page);
+			iommu->prr_page = NULL;
+		}
+	}
+
 	free_io_pgtable_ops(pagetable->pgtbl_ops);
 	kfree(pagetable);
 }
@@ -336,6 +377,25 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent)
 		kfree(pagetable);
 		return ERR_PTR(ret);
 	}
+
+	BUG_ON(iommu->prr_page);
+	if (adreno_smmu->set_prr_bit) {
+		/*
+		 * We need a zero'd page for two reasons:
+		 *
+		 * 1) Reserve a known physical address to use when
+		 *    mapping NULL / sparsely resident regions
+		 * 2) Read back zero
+		 *
+		 * It appears the hw drops writes to the PRR region
+		 * on the floor, but reads actually return whatever
+		 * is in the PRR page.
+		 */
+		iommu->prr_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+		adreno_smmu->set_prr_addr(adreno_smmu->cookie,
+					  page_to_phys(iommu->prr_page));
+		adreno_smmu->set_prr_bit(adreno_smmu->cookie, true);
+	}
 }
 
 	/* Needed later for TLB flush */
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index a7e48ee1dd95..48bc0374e2ae 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -115,6 +115,8 @@ struct drm_msm_timespec {
  * ioctl will throw -EPIPE.
  */
 #define MSM_PARAM_EN_VM_BIND 0x15  /* WO, once */
+/* PRR (Partially Resident Region) is required for sparse residency: */
+#define MSM_PARAM_HAS_PRR    0x16  /* RO */
 
 /* For backwards compat.  The original support for preemption was based on
  * a single ring per priority level so # of priority levels equals the #
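A userspace driver (for example a Vulkan implementation wanting to advertise
sparse residency) would be expected to probe the new param roughly as in the
sketch below; the helper and error handling are assumptions for illustration,
not code from this series:

  #include <stdbool.h>
  #include <xf86drm.h>
  #include "drm/msm_drm.h"

  /* Sketch: ask the kernel whether the SMMU supports PRR */
  static bool has_prr(int fd)
  {
          struct drm_msm_param req = {
                  .pipe  = MSM_PIPE_3D0,
                  .param = MSM_PARAM_HAS_PRR,
          };

          if (drmCommandWriteRead(fd, DRM_MSM_GET_PARAM, &req, sizeof(req)))
                  return false;   /* old kernel: param not known */

          return req.value != 0;
  }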
From patchwork Wed Mar 19 14:52:35 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 875049
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten,
 David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 23/34] drm/msm: Rename msm_gem_vma_purge() -> _unmap()
Date: Wed, 19 Mar 2025 07:52:35 -0700
Message-ID: <20250319145425.51935-24-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>

This is a more descriptive name.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c     | 4 ++--
 drivers/gpu/drm/msm/msm_gem.h     | 2 +-
 drivers/gpu/drm/msm/msm_gem_vma.c | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 577da3c54c8c..dcf5f6a25f87 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -421,7 +421,7 @@ put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close)
 		drm_gpuvm_bo_get(vm_bo);
 
 		drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) {
-			msm_gem_vma_purge(vma);
+			msm_gem_vma_unmap(vma);
 			if (close)
 				msm_gem_vma_close(vma);
 		}
@@ -600,7 +600,7 @@ static int clear_iova(struct drm_gem_object *obj,
 	if (!vma)
 		return 0;
 
-	msm_gem_vma_purge(vma);
+	msm_gem_vma_unmap(vma);
 	msm_gem_vma_close(vma);
 
 	return 0;
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 3919b384d599..1622d557ea1f 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -141,7 +141,7 @@ struct msm_gem_vma {
 struct drm_gpuva *
 msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj,
 		u64 offset, u64 range_start, u64 range_end);
-void msm_gem_vma_purge(struct drm_gpuva *vma);
+void msm_gem_vma_unmap(struct drm_gpuva *vma);
 int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt);
 void msm_gem_vma_close(struct drm_gpuva *vma);
 
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index d51d54c0da33..baa5c6a0ff22 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -21,7 +21,7 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm)
 }
 
 /* Actually unmap memory for the vma */
-void msm_gem_vma_purge(struct drm_gpuva *vma)
+void msm_gem_vma_unmap(struct drm_gpuva *vma)
 {
 	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
 	struct msm_gem_vm *vm = to_msm_vm(vma->vm);
From patchwork Wed Mar 19 14:52:36 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 874758
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten,
 David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 24/34] drm/msm: Split msm_gem_vma_new()
Date: Wed, 19 Mar 2025 07:52:36 -0700
Message-ID: <20250319145425.51935-25-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>
Split memory allocation from vma initialization.  Async vm-bind happens in
the fence signalling path, so it will need to use pre-allocated memory.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem_vma.c | 67 ++++++++++++++++++++++---------
 1 file changed, 49 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index baa5c6a0ff22..7d40b151aa95 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -71,40 +71,54 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt)
 	return ret;
 }
 
-/* Close an iova.  Warn if it is still in use */
-void msm_gem_vma_close(struct drm_gpuva *vma)
+static void __vma_close(struct drm_gpuva *vma)
 {
 	struct msm_gem_vm *vm = to_msm_vm(vma->vm);
 	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
 
 	GEM_WARN_ON(msm_vma->mapped);
+	GEM_WARN_ON(!mutex_is_locked(&vm->vm_lock));
 
 	spin_lock(&vm->mm_lock);
 	if (vma->va.addr && vm->managed)
 		drm_mm_remove_node(&msm_vma->node);
 	spin_unlock(&vm->mm_lock);
 
-	dma_resv_lock(drm_gpuvm_resv(vma->vm), NULL);
-	mutex_lock(&vm->vm_lock);
 	drm_gpuva_remove(vma);
 	drm_gpuva_unlink(vma);
-	mutex_unlock(&vm->vm_lock);
-	dma_resv_unlock(drm_gpuvm_resv(vma->vm));
 
 	kfree(vma);
 }
 
-/* Create a new vma and allocate an iova for it */
-struct drm_gpuva *
-msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
-		u64 offset, u64 range_start, u64 range_end)
+/* Close an iova.  Warn if it is still in use */
+void msm_gem_vma_close(struct drm_gpuva *vma)
+{
+	struct msm_gem_vm *vm = to_msm_vm(vma->vm);
+
+	/*
+	 * Only used in legacy (kernel managed) VM, if userspace is managing
+	 * the VM, the legacy paths should be disallowed:
+	 */
+	GEM_WARN_ON(!vm->managed);
+
+	dma_resv_lock(drm_gpuvm_resv(vma->vm), NULL);
+	mutex_lock(&vm->vm_lock);
+	__vma_close(vma);
+	mutex_unlock(&vm->vm_lock);
+	dma_resv_unlock(drm_gpuvm_resv(vma->vm));
+}
+
+static struct drm_gpuva *
+__vma_init(struct msm_gem_vma *vma, struct drm_gpuvm *_vm,
+	   struct drm_gem_object *obj, u64 offset,
+	   u64 range_start, u64 range_end)
 {
 	struct msm_gem_vm *vm = to_msm_vm(_vm);
 	struct drm_gpuvm_bo *vm_bo;
-	struct msm_gem_vma *vma;
 	int ret;
 
-	vma = kzalloc(sizeof(*vma), GFP_KERNEL);
+	GEM_WARN_ON(!mutex_is_locked(&vm->vm_lock));
+
 	if (!vma)
 		return ERR_PTR(-ENOMEM);
 
@@ -128,9 +142,7 @@ msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
 	drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, offset);
 	vma->mapped = false;
 
-	mutex_lock(&vm->vm_lock);
 	ret = drm_gpuva_insert(&vm->base, &vma->base);
-	mutex_unlock(&vm->vm_lock);
 	if (ret)
 		goto err_free_range;
 
@@ -140,17 +152,13 @@ msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
 		goto err_va_remove;
 	}
 
-	mutex_lock(&vm->vm_lock);
 	drm_gpuva_link(&vma->base, vm_bo);
-	mutex_unlock(&vm->vm_lock);
 	GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo));
 
 	return &vma->base;
 
 err_va_remove:
-	mutex_lock(&vm->vm_lock);
 	drm_gpuva_remove(&vma->base);
-	mutex_unlock(&vm->vm_lock);
 err_free_range:
 	if (vm->managed)
 		drm_mm_remove_node(&vma->node);
@@ -159,6 +167,29 @@ msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
 	return ERR_PTR(ret);
 }
 
+/* Create a new vma and allocate an iova for it */
+struct drm_gpuva *
+msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
+		u64 offset, u64 range_start, u64 range_end)
+{
+	struct msm_gem_vm *vm = to_msm_vm(_vm);
+	struct msm_gem_vma *vma;
+
+	/*
+	 * Only used in legacy (kernel managed) VM, if userspace is managing
+	 * the VM, the legacy paths should be disallowed:
+	 */
+	GEM_WARN_ON(!vm->managed);
+
+	vma = kzalloc(sizeof(*vma), GFP_KERNEL);
+
+	mutex_lock(&vm->vm_lock);
+	__vma_init(vma, _vm, obj, offset, range_start, range_end);
+	mutex_unlock(&vm->vm_lock);
+
+	return &vma->base;
+}
+
 static const struct drm_gpuvm_ops msm_gpuvm_ops = {
 	.vm_free = msm_gem_vm_free,
 };
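The shape this split enables is a two-phase pattern, sketched below; later
patches in the series wire this up for VM_BIND, so the exact call site here
is illustrative only:

  /* ioctl path: allocation may block or recurse into reclaim */
  struct msm_gem_vma *vma = kzalloc(sizeof(*vma), GFP_KERNEL);

  /* fence signalling path: only touches pre-allocated memory */
  mutex_lock(&vm->vm_lock);
  __vma_init(vma, _vm, obj, offset, range_start, range_end);
  mutex_unlock(&vm->vm_lock);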
From patchwork Wed Mar 19 14:52:37 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 875048
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten,
 David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 25/34] drm/msm: Pre-allocate VMAs
Date: Wed, 19 Mar 2025 07:52:37 -0700
Message-ID: <20250319145425.51935-26-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>

Pre-allocate the VMA objects that we will need in the vm bind job.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.h        |  9 +++++
 drivers/gpu/drm/msm/msm_gem_submit.c |  5 +++
 drivers/gpu/drm/msm/msm_gem_vma.c    | 60 ++++++++++++++++++++++++++++
 3 files changed, 74 insertions(+)

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 1622d557ea1f..cb76959fa8a8 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -115,6 +115,9 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
 
 void msm_gem_vm_close(struct drm_gpuvm *gpuvm);
 
+void msm_vma_job_prepare(struct msm_gem_submit *submit);
+void msm_vma_job_cleanup(struct msm_gem_submit *submit);
+
 struct msm_fence_context;
 
 /**
@@ -339,6 +342,12 @@ struct msm_gem_submit {
 	int fence_id;         /* key into queue->fence_idr */
 	struct msm_gpu_submitqueue *queue;
+
+	/* List of pre-allocated msm_gem_vma's, used to avoid memory allocation
+	 * in fence signalling path.
+	 */
+	struct list_head preallocated_vmas;
+
 	struct pid *pid;    /* submitting process */
 	bool bos_pinned : 1;
 	bool fault_dumped:1;/* Limit devcoredump dumping to one per submit */
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 39a6e0418bdf..a9b3e6692db3 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -80,6 +80,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
 	submit->ident = atomic_inc_return(&ident) - 1;
 
 	INIT_LIST_HEAD(&submit->node);
+	INIT_LIST_HEAD(&submit->preallocated_vmas);
 
 	return submit;
 }
@@ -584,6 +585,9 @@ void msm_submit_retire(struct msm_gem_submit *submit)
 {
 	int i;
 
+	if (submit_is_vmbind(submit))
+		msm_vma_job_cleanup(submit);
+
 	for (i = 0; i < submit->nr_bos; i++) {
 		struct drm_gem_object *obj = submit->bos[i].obj;
 
@@ -912,6 +916,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 	}
 
 	if (submit_is_vmbind(submit)) {
+		msm_vma_job_prepare(submit);
 		ret = submit_get_pages(submit);
 	} else {
 		ret = submit_pin_vmas(submit);
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 7d40b151aa95..5c7d44b004fb 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -258,6 +258,66 @@ static const struct drm_sched_backend_ops msm_vm_bind_ops = {
 	.free_job = msm_vma_job_free
 };
 
+/**
+ * msm_vma_job_prepare() - VM_BIND job setup
+ * @submit: the VM_BIND job
+ *
+ * Prepare for a VM_BIND job by pre-allocating various memory that will
+ * be required once the job runs.  Memory allocations cannot happen in
+ * the fence signalling path (ie. from job->run()) as that could recurse
+ * into the shrinker and potentially block waiting on the fence that is
+ * signalled when this job completes (ie. deadlock).
+ *
+ * Called after BOs are locked.
+ */
+void
+msm_vma_job_prepare(struct msm_gem_submit *submit)
+{
+	unsigned num_prealloc_vmas = 0;
+
+	for (int i = 0; i < submit->nr_bos; i++) {
+		unsigned op = submit->bos[i].flags & MSM_SUBMIT_BO_OP_MASK;
+
+		if (submit->bos[i].obj)
+			msm_gem_assert_locked(submit->bos[i].obj);
+
+		/*
+		 * OP_MAP/OP_MAP_NULL has one new VMA for the new mapping,
+		 * and potentially remaps with a prev and next VMA, for a
+		 * total of 3 new VMAs.
+		 *
+		 * OP_UNMAP could trigger a remap with either a prev or
+		 * next VMA, but not both.
+		 */
+		num_prealloc_vmas += (op == MSM_SUBMIT_BO_OP_UNMAP) ? 1 : 3;
+	}
+
+	while (num_prealloc_vmas-- > 0) {
+		struct msm_gem_vma *vma = kzalloc(sizeof(*vma), GFP_KERNEL);
+		list_add_tail(&vma->base.rb.entry, &submit->preallocated_vmas);
+	}
+}
+
+/**
+ * msm_vma_job_cleanup() - cleanup after a VM_BIND job
+ * @submit: the VM_BIND job
+ *
+ * The counterpart to msm_vma_job_prepare().
+ */
+void
+msm_vma_job_cleanup(struct msm_gem_submit *submit)
+{
+	struct drm_gpuva *vma;
+
+	while (!list_empty(&submit->preallocated_vmas)) {
+		vma = list_first_entry(&submit->preallocated_vmas,
+				       struct drm_gpuva,
+				       rb.entry);
+		list_del(&vma->rb.entry);
+		kfree(to_msm_vma(vma));
+	}
+}
+
 /**
  * msm_gem_vm_create() - Create and initialize a &msm_gem_vm
  * @drm: the drm device
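As a worked example of the accounting in msm_vma_job_prepare() (the numbers
are illustrative): a job with two OP_MAP ops and one OP_UNMAP op
pre-allocates 2 * 3 + 1 = 7 VMAs, covering the worst case where each map
splits an existing mapping into prev/new/next and the unmap remaps one
neighbor.  Any entries left unused are simply freed again by
msm_vma_job_cleanup() when the submit is retired.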
From patchwork Wed Mar 19 14:52:38 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 874757
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten,
 David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 26/34] drm/msm: Pre-allocate vm_bo objects
Date: Wed, 19 Mar 2025 07:52:38 -0700
Message-ID: <20250319145425.51935-27-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>

Use drm_gpuvm_bo_obtain() in the synchronous part of the VM_BIND submit, to
hold a reference to the vm_bo for the duration of the submit.  This ensures
that the vm_bo already exists before the async part of the job, which is in
the fence signalling path (and therefore cannot allocate memory).
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.h     |  1 +
 drivers/gpu/drm/msm/msm_gem_vma.c | 19 +++++++++++++++++--
 2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index cb76959fa8a8..d2ffaa11ec1a 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -369,6 +369,7 @@ struct msm_gem_submit {
 		uint32_t flags;
 		uint32_t handle;
 		struct drm_gem_object *obj;
+		struct drm_gpuvm_bo *vm_bo;
 		uint64_t iova;
 		uint64_t bo_offset;
 		uint64_t range;
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 5c7d44b004fb..b1808d95002f 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -278,8 +278,18 @@ msm_vma_job_prepare(struct msm_gem_submit *submit)
 	for (int i = 0; i < submit->nr_bos; i++) {
 		unsigned op = submit->bos[i].flags & MSM_SUBMIT_BO_OP_MASK;
 
-		if (submit->bos[i].obj)
-			msm_gem_assert_locked(submit->bos[i].obj);
+		if (submit->bos[i].obj) {
+			struct drm_gem_object *obj = submit->bos[i].obj;
+
+			msm_gem_assert_locked(obj);
+
+			/*
+			 * Ensure the vm_bo is already allocated by
+			 * holding a ref until the submit is retired
+			 */
+			submit->bos[i].vm_bo =
+				drm_gpuvm_bo_obtain(submit->vm, obj);
+		}
 
 		/*
 		 * OP_MAP/OP_MAP_NULL has one new VMA for the new mapping,
@@ -309,6 +319,11 @@ msm_vma_job_cleanup(struct msm_gem_submit *submit)
 {
 	struct drm_gpuva *vma;
 
+	for (int i = 0; i < submit->nr_bos; i++) {
+		/* If we're holding an extra ref to the vm_bo, drop it now: */
+		drm_gpuvm_bo_put(submit->bos[i].vm_bo);
+	}
+
 	while (!list_empty(&submit->preallocated_vmas)) {
 		vma = list_first_entry(&submit->preallocated_vmas,
 				       struct drm_gpuva,
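The reference lifetime this relies on can be sketched as follows
(illustrative outline of the contract, not new driver code):

  /* ioctl path (may allocate): creates or finds the vm_bo, takes a ref */
  submit->bos[i].vm_bo = drm_gpuvm_bo_obtain(submit->vm, obj);

  /* run_job() / fence signalling path (must not allocate): any lookup
   * of the (vm, obj) pair now finds the existing vm_bo, so no
   * allocation can occur here.
   */

  /* retire path: drop the submit's extra ref */
  drm_gpuvm_bo_put(submit->bos[i].vm_bo);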
From patchwork Wed Mar 19 14:52:39 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Subject: [PATCH v2 27/34] drm/msm: Pre-allocate pages for pgtable entries
Date: Wed, 19 Mar 2025 07:52:39 -0700
Message-ID: <20250319145425.51935-28-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
Introduce a mechanism to count the worst case # of pages required in a
VM_BIND op.

Note that previously we would have had to somehow account for
allocations in unmap, when splitting a block. This behavior was removed
in commit 33729a5fc0ca ("iommu/io-pgtable-arm: Remove split on unmap
behavior").

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c |   2 +-
 drivers/gpu/drm/msm/msm_gem.h         |   7 +-
 drivers/gpu/drm/msm/msm_gem_submit.c  |   5 +-
 drivers/gpu/drm/msm/msm_gem_vma.c     |  18 ++-
 drivers/gpu/drm/msm/msm_iommu.c       | 201 +++++++++++++++++++++++++-
 drivers/gpu/drm/msm/msm_mmu.h         |  36 ++++-
 6 files changed, 260 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index ca3247f845b5..9f66ad5bf0dc 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2267,7 +2267,7 @@ a6xx_create_private_vm(struct msm_gpu *gpu, bool kernel_managed)
 {
         struct msm_mmu *mmu;

-        mmu = msm_iommu_pagetable_create(to_msm_vm(gpu->vm)->mmu);
+        mmu = msm_iommu_pagetable_create(to_msm_vm(gpu->vm)->mmu, kernel_managed);

         if (IS_ERR(mmu))
                 return ERR_CAST(mmu);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index d2ffaa11ec1a..117f0e35e628 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -7,6 +7,7 @@
 #ifndef __MSM_GEM_H__
 #define __MSM_GEM_H__

+#include "msm_mmu.h"
 #include
 #include "drm/drm_exec.h"
@@ -115,7 +116,7 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,

 void msm_gem_vm_close(struct drm_gpuvm *gpuvm);

-void msm_vma_job_prepare(struct msm_gem_submit *submit);
+int msm_vma_job_prepare(struct msm_gem_submit *submit);
 void msm_vma_job_cleanup(struct msm_gem_submit *submit);

 struct msm_fence_context;
@@ -348,6 +349,10 @@ struct msm_gem_submit {
          */
         struct list_head preallocated_vmas;

+        /* Tracking for pre-allocated pgtable pages.
+         */
+        struct msm_mmu_prealloc prealloc;
+
         struct pid *pid;    /* submitting process */
         bool bos_pinned : 1;
         bool fault_dumped:1;/* Limit devcoredump dumping to one per submit */
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index a9b3e6692db3..ed0265ac4e1d 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -916,8 +916,9 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
         }

         if (submit_is_vmbind(submit)) {
-                msm_vma_job_prepare(submit);
-                ret = submit_get_pages(submit);
+                ret = msm_vma_job_prepare(submit);
+                if (!ret)
+                        ret = submit_get_pages(submit);
         } else {
                 ret = submit_pin_vmas(submit);
         }
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index b1808d95002f..554ec93456a0 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -270,9 +270,10 @@ static const struct drm_sched_backend_ops msm_vm_bind_ops = {
  *
  * Called after BOs are locked.
  */
-void
+int
 msm_vma_job_prepare(struct msm_gem_submit *submit)
 {
+        struct msm_mmu *mmu = to_msm_vm(submit->vm)->mmu;
         unsigned num_prealloc_vmas = 0;

         for (int i = 0; i < submit->nr_bos; i++) {
@@ -299,13 +300,23 @@ msm_vma_job_prepare(struct msm_gem_submit *submit)
                  * OP_UNMAP could trigger a remap with either a prev or
                  * next VMA, but not both.
                  */
-                num_prealloc_vmas += (op == MSM_SUBMIT_BO_OP_UNMAP) ? 1 : 3;
+                if (op != MSM_SUBMIT_BO_OP_UNMAP) {
+                        num_prealloc_vmas += 3;
+
+                        mmu->funcs->prealloc_count(mmu, &submit->prealloc,
+                                                   submit->bos[i].iova,
+                                                   submit->bos[i].range);
+                } else {
+                        num_prealloc_vmas += 1;
+                }
         }

         while (num_prealloc_vmas-- > 0) {
                 struct msm_gem_vma *vma = kzalloc(sizeof(*vma), GFP_KERNEL);
                 list_add_tail(&vma->base.rb.entry, &submit->preallocated_vmas);
         }
+
+        return mmu->funcs->prealloc_allocate(mmu, &submit->prealloc);
 }

 /**
@@ -317,6 +328,7 @@ msm_vma_job_prepare(struct msm_gem_submit *submit)
 void
 msm_vma_job_cleanup(struct msm_gem_submit *submit)
 {
+        struct msm_mmu *mmu = to_msm_vm(submit->vm)->mmu;
         struct drm_gpuva *vma;

         for (int i = 0; i < submit->nr_bos; i++) {
@@ -331,6 +343,8 @@ msm_vma_job_cleanup(struct msm_gem_submit *submit)
                 list_del(&vma->rb.entry);
                 kfree(to_msm_vma(vma));
         }
+
+        mmu->funcs->prealloc_cleanup(mmu, &submit->prealloc);
 }

 /**
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index 756bd55ee94f..ff04f2451d1d 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -6,6 +6,7 @@

 #include
 #include
+#include

 #include "msm_drv.h"
 #include "msm_mmu.h"
@@ -14,6 +15,8 @@ struct msm_iommu {
         struct iommu_domain *domain;
         atomic_t pagetables;
         struct page *prr_page;
+
+        struct kmem_cache *pt_cache;
 };

 #define to_msm_iommu(x) container_of(x, struct msm_iommu, base)
@@ -27,6 +30,12 @@ struct msm_iommu_pagetable {
         unsigned long pgsize_bitmap;    /* Bitmap of page sizes in use */
         phys_addr_t ttbr;
         u32 asid;
+
+        /** @root_page_table: Stores the root page table pointer. */
+        void *root_page_table;
+
+        /** @tblsz: pt table entry size */
+        size_t tblsz;
 };
 static struct msm_iommu_pagetable *to_pagetable(struct msm_mmu *mmu)
 {
@@ -273,7 +282,145 @@ msm_iommu_pagetable_walk(struct msm_mmu *mmu, unsigned long iova, uint64_t ptes[
         return 0;
 }

+static void
+msm_iommu_pagetable_prealloc_count(struct msm_mmu *mmu, struct msm_mmu_prealloc *p,
+                                   uint64_t iova, size_t len)
+{
+        u64 pt_count;
+
+        /*
+         * L1, L2 and L3 page tables.
+         *
+         * We could optimize L3 allocation by iterating over the sgt and merging
+         * 2M contiguous blocks, but it's simpler to over-provision and return
+         * the pages if they're not used.
+         *
+         * The first level descriptor (v8 / v7-lpae page table format) encodes
+         * 30 bits of address.  The second level encodes 29.  For the 3rd it is
+         * 39.
+         *
+         * https://developer.arm.com/documentation/ddi0406/c/System-Level-Architecture/Virtual-Memory-System-Architecture--VMSA-/Long-descriptor-translation-table-format/Long-descriptor-translation-table-format-descriptors?lang=en#BEIHEFFB
+         */
+        pt_count = ((ALIGN(iova + len, 1ull << 39) - ALIGN_DOWN(iova, 1ull << 39)) >> 39) +
+                   ((ALIGN(iova + len, 1ull << 30) - ALIGN_DOWN(iova, 1ull << 30)) >> 30) +
+                   ((ALIGN(iova + len, 1ull << 21) - ALIGN_DOWN(iova, 1ull << 21)) >> 21);
+
+        p->count += pt_count;
+}
+
+static struct kmem_cache *
+get_pt_cache(struct msm_mmu *mmu)
+{
+        struct msm_iommu_pagetable *pagetable = to_pagetable(mmu);
+        return to_msm_iommu(pagetable->parent)->pt_cache;
+}
+
+static int
+msm_iommu_pagetable_prealloc_allocate(struct msm_mmu *mmu, struct msm_mmu_prealloc *p)
+{
+        struct kmem_cache *pt_cache = get_pt_cache(mmu);
+        int ret;
+
+        p->pages = kcalloc(p->count, sizeof(p->pages), GFP_KERNEL);
+        if (!p->pages)
+                return -ENOMEM;
+
+        ret = kmem_cache_alloc_bulk(pt_cache, GFP_KERNEL, p->count, p->pages);
+        if (ret != p->count) {
+                p->count = ret;
+                return -ENOMEM;
+        }
+
+        return 0;
+}
+
+static void
+msm_iommu_pagetable_prealloc_cleanup(struct msm_mmu *mmu, struct msm_mmu_prealloc *p)
+{
+        struct kmem_cache *pt_cache = get_pt_cache(mmu);
+        uint32_t remaining_pt_count = p->count - p->ptr;
+
+        kmem_cache_free_bulk(pt_cache, remaining_pt_count, &p->pages[p->ptr]);
+        kfree(p->pages);
+}
+
+/**
+ * alloc_pt() - Custom page table allocator
+ * @cookie: Cookie passed at page table allocation time.
+ * @size: Size of the page table. This size should be fixed,
+ * and determined at creation time based on the granule size.
+ * @gfp: GFP flags.
+ *
+ * We want a custom allocator so we can use a cache for page table
+ * allocations and amortize the cost of the over-reservation that's
+ * done to allow asynchronous VM operations.
+ *
+ * Return: non-NULL on success, NULL if the allocation failed for any
+ * reason.
+ */
+static void *
+msm_iommu_pagetable_alloc_pt(void *cookie, size_t size, gfp_t gfp)
+{
+        struct msm_iommu_pagetable *pagetable = cookie;
+        struct msm_mmu_prealloc *p = pagetable->base.prealloc;
+        void *page;
+
+        /* Allocation of the root page table happening during init. */
+        if (unlikely(!pagetable->root_page_table)) {
+                struct page *p;
+
+                p = alloc_pages_node(dev_to_node(pagetable->iommu_dev),
+                                     gfp | __GFP_ZERO, get_order(size));
+                page = p ? page_address(p) : NULL;
+                pagetable->root_page_table = page;
+                return page;
+        }
+
+        if (WARN_ON(!p) || WARN_ON(p->ptr >= p->count))
+                return NULL;
+
+        page = p->pages[p->ptr++];
+        memset(page, 0, size);
+
+        /*
+         * Page table entries don't use virtual addresses, which trips out
+         * kmemleak. kmemleak_alloc_phys() might work, but physical addresses
+         * are mixed with other fields, and I fear kmemleak won't detect that
+         * either.
+         *
+         * Let's just ignore memory passed to the page-table driver for now.
+         */
+        kmemleak_ignore(page);
+
+        return page;
+}
+
+
+/**
+ * free_pt() - Custom page table free function
+ * @cookie: Cookie passed at page table allocation time.
+ * @data: Page table to free.
+ * @size: Size of the page table. This size should be fixed,
+ * and determined at creation time based on the granule size.
+ */
+static void
+msm_iommu_pagetable_free_pt(void *cookie, void *data, size_t size)
+{
+        struct msm_iommu_pagetable *pagetable = cookie;
+
+        if (unlikely(pagetable->root_page_table == data)) {
+                free_pages((unsigned long)data, get_order(size));
+                pagetable->root_page_table = NULL;
+                return;
+        }
+
+        kmem_cache_free(get_pt_cache(&pagetable->base), data);
+}
+
 static const struct msm_mmu_funcs pagetable_funcs = {
+        .prealloc_count = msm_iommu_pagetable_prealloc_count,
+        .prealloc_allocate = msm_iommu_pagetable_prealloc_allocate,
+        .prealloc_cleanup = msm_iommu_pagetable_prealloc_cleanup,
         .map = msm_iommu_pagetable_map,
         .unmap = msm_iommu_pagetable_unmap,
         .destroy = msm_iommu_pagetable_destroy,
@@ -324,7 +471,19 @@ static const struct iommu_flush_ops tlb_ops = {
 static int msm_gpu_fault_handler(struct iommu_domain *domain, struct device *dev,
                 unsigned long iova, int flags, void *arg);

-struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent)
+
+static size_t get_tblsz(const struct io_pgtable_cfg *cfg)
+{
+        int pg_shift, bits_per_level;
+
+        pg_shift = __ffs(cfg->pgsize_bitmap);
+        /* arm_lpae_iopte is u64: */
+        bits_per_level = pg_shift - ilog2(sizeof(u64));
+
+        return sizeof(u64) << bits_per_level;
+}
+
+struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_managed)
 {
         struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(parent->dev);
         struct msm_iommu *iommu = to_msm_iommu(parent);
@@ -358,6 +517,36 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent)
         ttbr0_cfg.quirks &= ~IO_PGTABLE_QUIRK_ARM_TTBR1;
         ttbr0_cfg.tlb = &tlb_ops;

+        if (!kernel_managed) {
+                /*
+                 * With userspace managed VM (aka VM_BIND), we need to pre-
+                 * allocate pages ahead of time for map/unmap operations,
+                 * handing them to io-pgtable via custom alloc/free ops as
+                 * needed:
+                 */
+                ttbr0_cfg.alloc = msm_iommu_pagetable_alloc_pt;
+                ttbr0_cfg.free = msm_iommu_pagetable_free_pt;
+
+                pagetable->tblsz = get_tblsz(&ttbr0_cfg);
+
+                /*
+                 * Restrict to single page granules.  Otherwise we may run
+                 * into a situation where userspace wants to unmap/remap
+                 * only a part of a larger block mapping, which is not
+                 * possible without unmapping the entire block.  Which in
+                 * turn could cause faults if the GPU is accessing other
+                 * parts of the block mapping.
+                 *
+                 * Note that prior to commit 33729a5fc0ca ("iommu/io-pgtable-arm:
+                 * Remove split on unmap behavior") this was handled in
+                 * io-pgtable-arm.  But this apparently does not work
+                 * correctly on SMMUv3.
+                 */
+                WARN_ON(!(ttbr0_cfg.pgsize_bitmap & PAGE_SIZE));
+                ttbr0_cfg.pgsize_bitmap = PAGE_SIZE;
+        }
+
+        pagetable->iommu_dev = ttbr1_cfg->iommu_dev;
         pagetable->pgtbl_ops = alloc_io_pgtable_ops(ARM_64_LPAE_S1,
                                                     &ttbr0_cfg, pagetable);
@@ -401,7 +590,6 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent)

         /* Needed later for TLB flush */
         pagetable->parent = parent;
         pagetable->tlb = ttbr1_cfg->tlb;
-        pagetable->iommu_dev = ttbr1_cfg->iommu_dev;
         pagetable->pgsize_bitmap = ttbr0_cfg.pgsize_bitmap;
         pagetable->ttbr = ttbr0_cfg.arm_lpae_s1_cfg.ttbr;

@@ -509,6 +697,7 @@ static void msm_iommu_destroy(struct msm_mmu *mmu)
 {
         struct msm_iommu *iommu = to_msm_iommu(mmu);
         iommu_domain_free(iommu->domain);
+        kmem_cache_destroy(iommu->pt_cache);
         kfree(iommu);
 }

@@ -583,6 +772,14 @@ struct msm_mmu *msm_iommu_gpu_new(struct device *dev, struct msm_gpu *gpu, unsig
                 return mmu;

         iommu = to_msm_iommu(mmu);
+        if (adreno_smmu && adreno_smmu->cookie) {
+                const struct io_pgtable_cfg *cfg =
+                        adreno_smmu->get_ttbr1_cfg(adreno_smmu->cookie);
+                size_t tblsz = get_tblsz(cfg);
+
+                iommu->pt_cache =
+                        kmem_cache_create("msm-mmu-pt", tblsz, tblsz, 0, NULL);
+        }
         iommu_set_fault_handler(iommu->domain, msm_gpu_fault_handler, iommu);

         /* Enable stall on iommu fault: */
diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h
index c874852b7331..24ef04d267a6 100644
--- a/drivers/gpu/drm/msm/msm_mmu.h
+++ b/drivers/gpu/drm/msm/msm_mmu.h
@@ -9,8 +9,16 @@

 #include

+struct msm_mmu_prealloc;
+struct msm_mmu;
+struct msm_gpu;
+
 struct msm_mmu_funcs {
         void (*detach)(struct msm_mmu *mmu);
+        void (*prealloc_count)(struct msm_mmu *mmu, struct msm_mmu_prealloc *p,
+                               uint64_t iova, size_t len);
+        int (*prealloc_allocate)(struct msm_mmu *mmu, struct msm_mmu_prealloc *p);
+        void (*prealloc_cleanup)(struct msm_mmu *mmu, struct msm_mmu_prealloc *p);
         int (*map)(struct msm_mmu *mmu, uint64_t iova, struct sg_table *sgt,
                    size_t off, size_t len, int prot);
         int (*unmap)(struct msm_mmu *mmu, uint64_t iova, size_t len);
@@ -25,12 +33,38 @@ enum msm_mmu_type {
         MSM_MMU_IOMMU_PAGETABLE,
 };

+/**
+ * struct msm_mmu_prealloc - Tracking for pre-allocated pages for MMU updates.
+ */
+struct msm_mmu_prealloc {
+        /** @count: Number of pages reserved. */
+        uint32_t count;
+        /** @ptr: Index of first unused page in @pages */
+        uint32_t ptr;
+        /**
+         * @pages: Array of pages preallocated for MMU table updates.
+         *
+         * After a VM operation, there might be free pages remaining in this
+         * array (since the amount allocated is a worst-case).  These are
+         * returned to the pt_cache at mmu->prealloc_cleanup().
+         */
+        void **pages;
+};
+
 struct msm_mmu {
         const struct msm_mmu_funcs *funcs;
         struct device *dev;
         int (*handler)(void *arg, unsigned long iova, int flags, void *data);
         void *arg;
         enum msm_mmu_type type;
+
+        /**
+         * @prealloc: pre-allocated pages for pgtable
+         *
+         * Set while a VM_BIND job is running, serialized under
+         * msm_gem_vm::op_lock.
+         */
+        struct msm_mmu_prealloc *prealloc;
 };

 static inline void msm_mmu_init(struct msm_mmu *mmu, struct device *dev,
@@ -52,7 +86,7 @@ static inline void msm_mmu_set_fault_handler(struct msm_mmu *mmu, void *arg,
         mmu->handler = handler;
 }

-struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent);
+struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_managed);

 int msm_iommu_pagetable_params(struct msm_mmu *mmu, phys_addr_t *ttbr, int *asid);
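To make the worst-case counting in msm_iommu_pagetable_prealloc_count()
concrete, a worked example (the iova/len values are arbitrary, chosen only
for illustration):

        iova = 0x40000000 (a 1G boundary), len = 0x300000 (3MB)

        (ALIGN(0x40300000, 1ull << 39) - ALIGN_DOWN(0x40000000, 1ull << 39)) >> 39 = 1
        (ALIGN(0x40300000, 1ull << 30) - ALIGN_DOWN(0x40000000, 1ull << 30)) >> 30 = 1
        (ALIGN(0x40300000, 1ull << 21) - ALIGN_DOWN(0x40000000, 1ull << 21)) >> 21 = 2

so p->count grows by 4: one table at each of the two upper levels, plus
two last-level tables because the 3MB range crosses a 2M boundary. Some
of those tables may already exist for neighboring mappings; that is the
over-provisioning which prealloc_cleanup() later returns to the pt_cache.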
From patchwork Wed Mar 19 14:52:40 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Subject: [PATCH v2 28/34] drm/msm: Wire up gpuvm ops
Date: Wed, 19 Mar 2025 07:52:40 -0700
Message-ID: <20250319145425.51935-29-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>

Hook up the map/remap/unmap ops to apply MAP/UNMAP operations. The
MAP/UNMAP operations are split up by drm_gpuvm into a series of
map/remap/unmap ops, for example an UNMAP operation which spans multiple
VMAs will get split up into a sequence of unmap (and possibly remap) ops
which each apply to a single VMA.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.h     |  12 ++
 drivers/gpu/drm/msm/msm_gem_vma.c | 329 ++++++++++++++++++++++++++++--
 2 files changed, 320 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 117f0e35e628..7f6315a66751 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -76,6 +76,16 @@ struct msm_gem_vm {
         /** @vm_lock: protects gpuvm insert/remove/traverse */
         struct mutex vm_lock;

+        /**
+         * @op_lock:
+         *
+         * Serializes VM operations.  Typically operations are serialized
+         * by virtue of running on the VM_BIND queue, but in the cleanup
+         * path (or if multiple VM_BIND queues) the @op_lock provides the
+         * needed serialization.
+         */
+        struct mutex op_lock;
+
         /** @mmu: The mmu object which manages the pgtables */
         struct msm_mmu *mmu;

@@ -121,6 +131,8 @@ void msm_vma_job_cleanup(struct msm_gem_submit *submit);

 struct msm_fence_context;

+#define MSM_VMA_DUMP (DRM_GPUVA_USERBITS << 0)
+
 /**
  * struct msm_gem_vma - a VMA mapping
  *
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 554ec93456a0..09d4746248c2 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -8,6 +8,8 @@
 #include "msm_gem.h"
 #include "msm_mmu.h"

+#define vm_dbg(fmt, ...) pr_debug("%s:%d: "fmt"\n", __func__, __LINE__, ##__VA_ARGS__)
+
 static void
 msm_gem_vm_free(struct drm_gpuvm *gpuvm)
 {
@@ -20,18 +22,29 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm)
         kfree(vm);
 }

+static void
+msm_gem_vma_unmap_range(struct drm_gpuva *vma, uint64_t unmap_start, uint64_t unmap_range)
+{
+        struct msm_gem_vm *vm = to_msm_vm(vma->vm);
+
+        vm_dbg("%p:%p: %016llx %016llx", vma->vm, vma, unmap_start, unmap_start + unmap_range);
+
+        if (vma->gem.obj)
+                msm_gem_assert_locked(vma->gem.obj);
+
+        vm->mmu->funcs->unmap(vm->mmu, unmap_start, unmap_range);
+}
+
 /* Actually unmap memory for the vma */
 void msm_gem_vma_unmap(struct drm_gpuva *vma)
 {
         struct msm_gem_vma *msm_vma = to_msm_vma(vma);
-        struct msm_gem_vm *vm = to_msm_vm(vma->vm);
-        unsigned size = vma->va.range;

         /* Don't do anything if the memory isn't mapped */
         if (!msm_vma->mapped)
                 return;

-        vm->mmu->funcs->unmap(vm->mmu, vma->va.addr, size);
+        msm_gem_vma_unmap_range(vma, vma->va.addr, vma->va.range);

         msm_vma->mapped = false;
 }
@@ -52,6 +65,11 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt)

         msm_vma->mapped = true;

+        vm_dbg("%p: %016llx %016llx", vma, vma->va.addr, vma->va.range);
+
+        if (vma->gem.obj)
+                msm_gem_assert_locked(vma->gem.obj);
+
         /*
          * NOTE: iommu/io-pgtable can allocate pages, so we cannot hold
          * a lock across map/unmap which is also used in the job_run()
@@ -79,10 +97,11 @@ static void __vma_close(struct drm_gpuva *vma)
         GEM_WARN_ON(msm_vma->mapped);
         GEM_WARN_ON(!mutex_is_locked(&vm->vm_lock));

-        spin_lock(&vm->mm_lock);
-        if (vma->va.addr && vm->managed)
+        if (vma->va.addr && vm->managed) {
+                spin_lock(&vm->mm_lock);
                 drm_mm_remove_node(&msm_vma->node);
-        spin_unlock(&vm->mm_lock);
+                spin_unlock(&vm->mm_lock);
+        }

         drm_gpuva_remove(vma);
         drm_gpuva_unlink(vma);
@@ -101,11 +120,9 @@ void msm_gem_vma_close(struct drm_gpuva *vma)
          */
         GEM_WARN_ON(!vm->managed);

-        dma_resv_lock(drm_gpuvm_resv(vma->vm), NULL);
         mutex_lock(&vm->vm_lock);
         __vma_close(vma);
         mutex_unlock(&vm->vm_lock);
-        dma_resv_unlock(drm_gpuvm_resv(vma->vm));
 }

 static struct drm_gpuva *
@@ -124,6 +141,7 @@ __vma_init(struct msm_gem_vma *vma, struct drm_gpuvm *_vm,

         if (vm->managed) {
                 BUG_ON(offset != 0);
+                BUG_ON(!obj);  /* NULL mappings not valid for kernel managed VM */
                 spin_lock(&vm->mm_lock);
                 ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
                                                   obj->size, PAGE_SIZE, 0,
@@ -137,7 +155,8 @@ __vma_init(struct msm_gem_vma *vma, struct drm_gpuvm *_vm,
                 range_end = range_start + obj->size;
         }

-        GEM_WARN_ON((range_end - range_start) > obj->size);
+        if (obj)
+                GEM_WARN_ON((range_end - range_start) > obj->size);

         drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, offset);
         vma->mapped = false;
@@ -146,6 +165,9 @@ __vma_init(struct msm_gem_vma *vma, struct drm_gpuvm *_vm,
         if (ret)
                 goto err_free_range;

+        if (!obj)
+                return &vma->base;
+
         vm_bo = drm_gpuvm_bo_obtain(&vm->base, obj);
         if (IS_ERR(vm_bo)) {
                 ret = PTR_ERR(vm_bo);
@@ -190,39 +212,289 @@ msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
         return &vma->base;
 }

+static int
+msm_gem_vm_bo_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec)
+{
+        struct drm_gem_object *obj = vm_bo->obj;
+        struct drm_gpuva *vma;
+        int ret;
+
+        msm_gem_assert_locked(obj);
+
+        drm_gpuvm_bo_for_each_va (vma, vm_bo) {
+                ret = msm_gem_pin_vma_locked(obj, vma);
+                if (ret)
+                        return ret;
+        }
+
+        return 0;
+}
+
+struct op_arg {
+        unsigned flags;
+        struct drm_gpuvm *vm;
+        struct msm_gem_submit *submit;
+};
+
+static struct drm_gpuva *
+vma_from_op(struct op_arg *arg, struct drm_gpuva_op_map *op)
+{
+        struct msm_gem_vma *vma;
+
+        if (WARN_ON(list_empty(&arg->submit->preallocated_vmas)))
+                return NULL;
+
+        vma = list_first_entry(&arg->submit->preallocated_vmas,
+                               struct msm_gem_vma, base.rb.entry);
+
+        list_del(&vma->base.rb.entry);
+
+        return __vma_init(vma, arg->vm, op->gem.obj, op->gem.offset,
+                          op->va.addr, op->va.addr + op->va.range);
+}
+
+static int
+msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *arg)
+{
+        struct drm_gem_object *obj = op->map.gem.obj;
+        struct drm_gpuva *vma;
+        struct sg_table *sgt;
+        unsigned prot;
+
+        vma = vma_from_op(arg, &op->map);
+        if (WARN_ON(IS_ERR_OR_NULL(vma)))
+                return vma ? PTR_ERR(vma) : -ENOMEM;
+
+        vm_dbg("%p:%p: %016llx %016llx", vma->vm, vma, vma->va.addr, vma->va.range);
+
+        if (obj) {
+                sgt = to_msm_bo(obj)->sgt;
+                prot = msm_gem_prot(obj);
+        } else {
+                sgt = NULL;
+                prot = IOMMU_READ | IOMMU_WRITE;
+        }
+
+        vma->flags = ((struct op_arg *)arg)->flags;
+
+        return msm_gem_vma_map(vma, prot, sgt);
+}
+
+static int
+msm_gem_vm_sm_step_remap(struct drm_gpuva_op *op, void *arg)
+{
+        struct drm_gpuvm *vm = ((struct op_arg *)arg)->vm;
+        struct drm_gpuva *orig_vma = op->remap.unmap->va;
+        struct drm_gpuva *prev_vma = NULL, *next_vma = NULL;
+        uint64_t unmap_start, unmap_range;
+        unsigned flags;
+
+        vm_dbg("orig_vma: %p:%p: %016llx %016llx", vm, orig_vma, orig_vma->va.addr, orig_vma->va.range);
+
+        drm_gpuva_op_remap_to_unmap_range(&op->remap, &unmap_start, &unmap_range);
+
+        msm_gem_vma_unmap_range(op->remap.unmap->va, unmap_start, unmap_range);
+
+        /*
+         * Part of this GEM obj is still mapped, but we're going to kill the
+         * existing VMA and replace it with one or two new ones (ie. two if
+         * the unmapped range is in the middle of the existing (unmap) VMA).
+         * So just set the state to unmapped:
+         */
+        to_msm_vma(orig_vma)->mapped = false;
+
+        /*
+         * The prev_vma and/or next_vma are replacing the unmapped vma, and
+         * therefore should preserve its flags:
+         */
+        flags = orig_vma->flags;
+
+        __vma_close(orig_vma);
+
+        if (op->remap.prev) {
+                prev_vma = vma_from_op(arg, op->remap.prev);
+                if (WARN_ON(IS_ERR(prev_vma)))
+                        return PTR_ERR(prev_vma);
+
+                vm_dbg("prev_vma: %p:%p: %016llx %016llx", vm, prev_vma, prev_vma->va.addr, prev_vma->va.range);
+                to_msm_vma(prev_vma)->mapped = true;
+                prev_vma->flags = flags;
+        }
+
+        if (op->remap.next) {
+                next_vma = vma_from_op(arg, op->remap.next);
+                if (WARN_ON(IS_ERR(next_vma)))
+                        return PTR_ERR(next_vma);
+
+                vm_dbg("next_vma: %p:%p: %016llx %016llx", vm, next_vma, next_vma->va.addr, next_vma->va.range);
+                to_msm_vma(next_vma)->mapped = true;
+                next_vma->flags = flags;
+        }
+
+        return 0;
+}
+
+static int
+msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *priv)
+{
+        struct drm_gpuva *vma = op->unmap.va;
+
+        vm_dbg("%p:%p: %016llx %016llx", vma->vm, vma, vma->va.addr, vma->va.range);
+
+        msm_gem_vma_unmap(vma);
+        __vma_close(vma);
+
+        return 0;
+}
+
 static const struct drm_gpuvm_ops msm_gpuvm_ops = {
         .vm_free = msm_gem_vm_free,
+        .vm_bo_validate = msm_gem_vm_bo_validate,
+        .sm_step_map = msm_gem_vm_sm_step_map,
+        .sm_step_remap = msm_gem_vm_sm_step_remap,
+        .sm_step_unmap = msm_gem_vm_sm_step_unmap,
 };

+static void
+cond_lock(struct drm_gem_object *obj)
+{
+        if (!obj)
+                return;
+
+        /*
+         * Hold a ref while we have the obj locked, so drm_gpuvm doesn't
+         * manage to drop the last ref to the obj while it is locked:
+         */
+        drm_gem_object_get(obj);
+        msm_gem_lock(obj);
+}
+
+static void
+cond_unlock(struct drm_gem_object *obj)
+{
+        if (!obj)
+                return;
+
+        msm_gem_unlock(obj);
+        /* Drop the ref obtained in cond_lock(): */
+        drm_gem_object_put(obj);
+}
+
+static int
+run_bo_unmap(struct op_arg *arg, u64 req_addr, u64 req_range)
+{
+        struct drm_gpuva *vma, *next;
+        struct drm_gpuvm *vm = arg->vm;
+        int ret = 0;
+        u64 req_end = req_addr + req_range;
+
+        GEM_WARN_ON(!mutex_is_locked(&to_msm_vm(vm)->op_lock));
+
+        /*
+         * There are two locks at play when it comes to inserting/
+         * removing objs into a VM.  There is vm->vm_lock and the
+         * obj resv lock.  Because there are N objs and M VMs, and
+         * an obj can be attached to an arbitrary # of VMs, the only
+         * alternative is a single global lock.
+         *
+         * With two locks at play, there are two possible orderings
+         * of locking:
+         *
+         * 1) locking obj first, and then vm_lock.  The problem with
+         *    this ordering is that OP_UNMAP can be touching an
+         *    arbitrary # of VMAs, with their own objs.  So we have
+         *    no way to know which objs to lock without traversing
+         *    the VM.  Which requires vm_lock!
+         *
+         * 2) vm_lock first, and then obj lock.  The problem with this
+         *    ordering is that non-VM_BIND (ie. kernel managed VM)
+         *    userspace relies on implicit VMA teardown when a BO is
+         *    freed.  Which means there are paths where we need to
+         *    traverse the obj's gpuva list to find the VM(s).  Which
+         *    requires the obj lock!
+         *
+         * To resolve this we rely on the fact that, with a VM_BIND
+         * userspace the legacy paths are disallowed.  And all paths
+         * which modify the VM also hold op_lock.  So we can safely
+         * reach into the VM to find the first object to lock.  But
+         * it means having to bypass drm_gpuvm_sm_unmap(), and pull
+         * out the impacted VMAs ourself.
+         *
+         * (The other option would be to use drm_gpuvm_sm_unmap_ops_create(),
+         * but that requires memory allocation.. which we can't do
+         * here.)
+         */
+        drm_gpuvm_for_each_va_range_safe(vma, next, vm, req_addr, req_end) {
+                struct drm_gem_object *obj = vma->gem.obj;
+
+                cond_lock(obj);
+                mutex_lock(&to_msm_vm(vm)->vm_lock);
+
+                ret = drm_gpuvm_sm_unmap_va(vma, arg,
+                                            vma->va.addr,
+                                            vma->va.range);
+
+                mutex_unlock(&to_msm_vm(vm)->vm_lock);
+                cond_unlock(obj);
+
+                if (ret)
+                        break;
+        }
+
+        return ret;
+}
+
 static int
 run_bo_op(struct msm_gem_submit *submit, const struct msm_gem_submit_bo *bo)
 {
+        struct msm_gem_vm *vm = to_msm_vm(submit->vm);
+        struct op_arg arg = {
+                .vm = submit->vm,
+                .submit = submit,
+        };
         unsigned op = bo->flags & MSM_SUBMIT_BO_OP_MASK;
+        int ret;

-        switch (op) {
-        case MSM_SUBMIT_BO_OP_MAP:
-        case MSM_SUBMIT_BO_OP_MAP_NULL:
-                return drm_gpuvm_sm_map(submit->vm, submit->vm, bo->iova,
-                                        bo->range, bo->obj, bo->bo_offset);
-                break;
-        case MSM_SUBMIT_BO_OP_UNMAP:
-                return drm_gpuvm_sm_unmap(submit->vm, submit->vm, bo->iova,
-                                          bo->bo_offset);
+        if (op == MSM_SUBMIT_BO_OP_UNMAP) {
+                vm_dbg("UNMAP: %p: %016llx %016llx", vm, bo->iova, bo->range);
+                ret = run_bo_unmap(&arg, bo->iova, bo->range);
+        } else {
+                if ((op == MSM_SUBMIT_BO_OP_MAP) &&
+                    (bo->flags & MSM_SUBMIT_BO_DUMP))
+                        arg.flags |= MSM_VMA_DUMP;
+
+                cond_lock(bo->obj);
+                mutex_lock(&vm->vm_lock);
+
+                vm_dbg("MAP: %p: %016llx %016llx", vm, bo->iova, bo->range);
+                ret = drm_gpuvm_sm_map(submit->vm, &arg,
+                                       bo->iova, bo->range,
+                                       bo->obj, bo->bo_offset);
+
+                mutex_unlock(&vm->vm_lock);
+                cond_unlock(bo->obj);
         }

-        return -EINVAL;
+        return ret;
 }

 static struct dma_fence *
 msm_vma_job_run(struct drm_sched_job *job)
 {
         struct msm_gem_submit *submit = to_msm_submit(job);
+        struct msm_gem_vm *vm = to_msm_vm(submit->vm);
+        int ret = 0;
+
+        vm_dbg("");
+
+        mutex_lock(&vm->op_lock);
+        vm->mmu->prealloc = &submit->prealloc;

         for (unsigned i = 0; i < submit->nr_bos; i++) {
-                int ret = run_bo_op(submit, &submit->bos[i]);
+                ret = run_bo_op(submit, &submit->bos[i]);
                 if (ret) {
                         to_msm_vm(submit->vm)->unusable = true;
-                        return ERR_PTR(ret);
+                        break;
                 }
         }
@@ -240,8 +512,11 @@ msm_vma_job_run(struct drm_sched_job *job)
                 msm_gem_unlock(obj);
         }

+        vm->mmu->prealloc = NULL;
+        mutex_unlock(&vm->op_lock);
+
         /* VM_BIND ops are synchronous, so no fence to wait on: */
-        return NULL;
+        return ERR_PTR(ret);
 }

 static void
@@ -404,6 +679,7 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,

         spin_lock_init(&vm->mm_lock);
         mutex_init(&vm->vm_lock);
+        mutex_init(&vm->op_lock);

         vm->mmu = mmu;
         vm->managed = managed;
@@ -433,6 +709,9 @@ void
 msm_gem_vm_close(struct drm_gpuvm *gpuvm)
 {
         struct msm_gem_vm *vm = to_msm_vm(gpuvm);
+        struct op_arg arg = {
+                .vm = gpuvm,
+        };

         /*
          * For kernel managed VMs, the VMAs are torn down when the handle is
@@ -444,4 +723,12 @@ msm_gem_vm_close(struct drm_gpuvm *gpuvm)
         /* Kill the scheduler now, so we aren't racing with it for cleanup: */
         drm_sched_stop(&vm->sched, NULL);
         drm_sched_fini(&vm->sched);
+
+        /* Serialize against vm scheduler thread: */
+        mutex_lock(&vm->op_lock);
+
+        /* Tear down any remaining mappings: */
+        run_bo_unmap(&arg, gpuvm->mm_start, gpuvm->mm_range);
+
+        mutex_unlock(&vm->op_lock);
 }
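For reference, a sketch of the remap case that the sm_step_remap() comment
above describes, with made-up addresses (unmapping 2MB out of the middle
of an existing 8MB VMA):

        before:  [0x1000000 ...................... 0x1800000)  one mapped VMA
        unmap:              [0x1200000 .. 0x1400000)

drm_gpuvm turns this into a single remap op with prev = [0x1000000,
0x1200000), next = [0x1400000, 0x1800000), plus an unmap of the original
VMA. msm_gem_vm_sm_step_remap() then only tears down pgtable entries for
the middle range, closes the original VMA, and materializes prev/next from
the preallocated VMA list, both inheriting the original VMA's flags and
marked as already mapped.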
From patchwork Wed Mar 19 14:52:41 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Subject: [PATCH v2 29/34] drm/msm: Wire up drm_gpuvm debugfs
Date: Wed, 19 Mar 2025 07:52:41 -0700
Message-ID: <20250319145425.51935-30-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>

Core drm already provides a helper to dump VM state. We just need to
wire up tracking of VMs and give userspace VMs a suitable name.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c |  2 +-
 drivers/gpu/drm/msm/msm_debugfs.c     | 20 ++++++++++++++++++++
 drivers/gpu/drm/msm/msm_drv.c         |  3 +++
 drivers/gpu/drm/msm/msm_drv.h         |  4 ++++
 drivers/gpu/drm/msm/msm_gem.h         |  8 ++++++++
 drivers/gpu/drm/msm/msm_gem_vma.c     | 23 +++++++++++++++++++++++
 6 files changed, 59 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 9f66ad5bf0dc..3189a6f75d74 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2272,7 +2272,7 @@ a6xx_create_private_vm(struct msm_gpu *gpu, bool kernel_managed)
         if (IS_ERR(mmu))
                 return ERR_CAST(mmu);

-        return msm_gem_vm_create(gpu->dev, mmu, "gpu", 0x100000000ULL,
+        return msm_gem_vm_create(gpu->dev, mmu, NULL, 0x100000000ULL,
                                  adreno_private_vm_size(gpu), kernel_managed);
 }

diff --git a/drivers/gpu/drm/msm/msm_debugfs.c b/drivers/gpu/drm/msm/msm_debugfs.c
index 7ab607252d18..bde25981254f 100644
--- a/drivers/gpu/drm/msm/msm_debugfs.c
+++ b/drivers/gpu/drm/msm/msm_debugfs.c
@@ -10,6 +10,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -238,6 +239,24 @@ static int msm_mm_show(struct seq_file *m, void *arg)
         return 0;
 }

+static int msm_gpuvas_show(struct seq_file *m, void *arg)
+{
+        struct drm_info_node *node = m->private;
+        struct drm_device *dev = node->minor->dev;
+        struct msm_drm_private *priv = dev->dev_private;
+        struct msm_gem_vm *vm;
+
+        mutex_lock(&priv->vm_lock);
+        list_for_each_entry(vm, &priv->vms, node) {
+                mutex_lock(&vm->op_lock);
+                drm_debugfs_gpuva_info(m, &vm->base);
+                mutex_unlock(&vm->op_lock);
+        }
+        mutex_unlock(&priv->vm_lock);
+
+        return 0;
+}
+
 static int msm_fb_show(struct seq_file *m, void *arg)
 {
         struct drm_info_node *node = m->private;
@@ -266,6 +285,7 @@ static int msm_fb_show(struct seq_file *m, void *arg)
 static struct drm_info_list msm_debugfs_list[] = {
         {"gem", msm_gem_show},
         { "mm", msm_mm_show },
+        DRM_DEBUGFS_GPUVA_INFO(msm_gpuvas_show, NULL),
 };

 static struct drm_info_list msm_kms_debugfs_list[] = {
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 5b5a64c8dddb..70c3a3712a3e 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -124,6 +124,9 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
                 goto err_put_dev;
         }

+        INIT_LIST_HEAD(&priv->vms);
+        mutex_init(&priv->vm_lock);
+
         INIT_LIST_HEAD(&priv->objects);
         mutex_init(&priv->obj_lock);

diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index b0add236cbb3..83d2a480cfcf 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -112,6 +112,10 @@ struct msm_drm_private {
          */
         atomic64_t total_mem;

+        /** @vms: List of all VMs, protected by @vm_lock */
+        struct list_head vms;
+        struct mutex vm_lock;
+
         /**
          * List of all GEM objects (mainly for debugfs, protected by obj_lock
          * (acquire before per GEM object lock)
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 7f6315a66751..0409d35ebb32 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -54,6 +54,9 @@ struct msm_gem_vm {
         /** @base: Inherit from drm_gpuvm. */
         struct drm_gpuvm base;

+        /** @name: Storage for dynamically generated VM name for user VMs */
+        char name[32];
+
         /**
          * @sched: Scheduler used for asynchronous VM_BIND request.
          *
@@ -95,6 +98,11 @@ struct msm_gem_vm {
          */
         struct pid *pid;

+        /**
+         * @node: List node in msm_drm_private.vms list
+         */
+        struct list_head node;
+
         /** @faults: the number of GPU hangs associated with this address space */
         int faults;

diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 09d4746248c2..8d0c4d3afa13 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -14,6 +14,11 @@ static void
 msm_gem_vm_free(struct drm_gpuvm *gpuvm)
 {
         struct msm_gem_vm *vm = container_of(gpuvm, struct msm_gem_vm, base);
+        struct msm_drm_private *priv = gpuvm->drm->dev_private;
+
+        mutex_lock(&priv->vm_lock);
+        list_del(&vm->node);
+        mutex_unlock(&priv->vm_lock);

         drm_mm_takedown(&vm->mm);
         if (vm->mmu)
@@ -640,6 +645,7 @@ struct drm_gpuvm *
 msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
                   u64 va_start, u64 va_size, bool managed)
 {
+        struct msm_drm_private *priv = drm->dev_private;
         enum drm_gpuvm_flags flags = managed ? DRM_GPUVM_VA_WEAK_REF : 0;
         struct msm_gem_vm *vm;
         struct drm_gem_object *dummy_gem;
@@ -673,6 +679,19 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
                 goto err_free_dummy;
         }

+        /* For userspace pgtables, generate a VM name based on comm and PID nr: */
+        if (!name) {
+                char tmpname[TASK_COMM_LEN];
+                struct pid *pid = get_pid(task_tgid(current));
+
+                get_task_comm(tmpname, current);
+                rcu_read_lock();
+                snprintf(vm->name, sizeof(vm->name), "%s[%d]", tmpname, pid_nr(pid));
+                rcu_read_unlock();
+
+                name = vm->name;
+        }
+
         drm_gpuvm_init(&vm->base, name, flags, drm, dummy_gem,
                        va_start, va_size, 0, 0, &msm_gpuvm_ops);
         drm_gem_object_put(dummy_gem);
@@ -686,6 +705,10 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,

         drm_mm_init(&vm->mm, va_start, va_size);

+        mutex_lock(&priv->vm_lock);
+        list_add_tail(&vm->node, &priv->vms);
+        mutex_unlock(&priv->vm_lock);
+
         return &vm->base;

 err_free_dummy:
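With this wired up, VM state for all VMs can be read back via the standard
gpuva debugfs entry (assuming debugfs is mounted in the usual place and
the GPU is DRM minor 0; DRM_DEBUGFS_GPUVA_INFO registers the file under
the name "gpuvas"):

        # cat /sys/kernel/debug/dri/0/gpuvas

Taking vm->op_lock around drm_debugfs_gpuva_info() means a dump cannot
observe a VM mid-update from the VM_BIND queue.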
From patchwork Wed Mar 19 14:52:42 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Subject: [PATCH v2 30/34] drm/msm: Crashdump prep for sparse mappings
Date: Wed, 19 Mar 2025 07:52:42 -0700
Message-ID: <20250319145425.51935-31-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>

With sparse mappings, userspace could request dumping partial GEM obj
mappings. Also drop use of the should_dump() helper, which really only
makes sense in the old submit->bos[] table world.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gpu.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 4831f4e42fd9..e35125d88466 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -219,13 +219,14 @@ static void msm_gpu_devcoredump_free(void *data)
 }

 static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
-                struct drm_gem_object *obj, u64 iova, bool full)
+                struct drm_gem_object *obj, u64 iova,
+                bool full, size_t offset, size_t size)
 {
         struct msm_gpu_state_bo *state_bo = &state->bos[state->nr_bos];
         struct msm_gem_object *msm_obj = to_msm_bo(obj);

         /* Don't record write only objects */
-        state_bo->size = obj->size;
+        state_bo->size = size;
         state_bo->flags = msm_obj->flags;
         state_bo->iova = iova;

@@ -236,7 +237,7 @@ static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
         if (full) {
                 void *ptr;

-                state_bo->data = kvmalloc(obj->size, GFP_KERNEL);
+                state_bo->data = kvmalloc(size, GFP_KERNEL);
                 if (!state_bo->data)
                         goto out;

@@ -249,7 +250,7 @@ static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
                         goto out;
                 }

-                memcpy(state_bo->data, ptr, obj->size);
+                memcpy(state_bo->data, ptr + offset, size);
                 msm_gem_put_vaddr(obj);
         }
 out:
@@ -279,6 +280,7 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
         state->fault_info = gpu->fault_info;

         if (submit) {
+                extern bool rd_full;
                 int i;

                 if (state->fault_info.ttbr0) {
@@ -294,9 +296,10 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
                                 sizeof(struct msm_gpu_state_bo), GFP_KERNEL);

                 for (i = 0; state->bos && i < submit->nr_bos; i++) {
-                        msm_gpu_crashstate_get_bo(state, submit->bos[i].obj,
-                                                  submit->bos[i].iova,
-                                                  should_dump(submit, i));
+                        struct drm_gem_object *obj = submit->bos[i].obj;
+                        bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+
+                        msm_gpu_crashstate_get_bo(state, obj, submit->bos[i].iova,
+                                                  dump, 0, obj->size);
                 }
         }
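The offset/size plumbing is what later allows dumping just the mapped
window of a larger BO. A hypothetical caller, once per-VMA ranges are
available (the vma below is a drm_gpuva; nothing in this patch issues
such a call yet):

        /* Hypothetical: dump only the range of the BO that the VMA maps */
        msm_gpu_crashstate_get_bo(state, obj, vma->va.addr, true,
                                  vma->gem.offset, vma->va.range);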
From patchwork Wed Mar 19 14:52:43 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 31/34] drm/msm: rd dumping prep for sparse mappings
Date: Wed, 19 Mar 2025 07:52:43 -0700
Message-ID: <20250319145425.51935-32-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>
From: Rob Clark

Similar to the previous commit, add support for dumping partial mappings.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.h | 10 ---------
 drivers/gpu/drm/msm/msm_rd.c  | 38 ++++++++++++++++-------------------
 2 files changed, 17 insertions(+), 31 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 0409d35ebb32..bdd9b09b8ca9 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -420,14 +420,4 @@ static inline void msm_gem_submit_put(struct msm_gem_submit *submit)
 
 void msm_submit_retire(struct msm_gem_submit *submit);
 
-/* helper to determine of a buffer in submit should be dumped, used for both
- * devcoredump and debugfs cmdstream dumping:
- */
-static inline bool
-should_dump(struct msm_gem_submit *submit, int idx)
-{
-	extern bool rd_full;
-	return rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP);
-}
-
 #endif /* __MSM_GEM_H__ */
diff --git a/drivers/gpu/drm/msm/msm_rd.c b/drivers/gpu/drm/msm/msm_rd.c
index 39138e190cb9..edbcb93410a9 100644
--- a/drivers/gpu/drm/msm/msm_rd.c
+++ b/drivers/gpu/drm/msm/msm_rd.c
@@ -308,21 +308,11 @@ void msm_rd_debugfs_cleanup(struct msm_drm_private *priv)
 	priv->hangrd = NULL;
 }
 
-static void snapshot_buf(struct msm_rd_state *rd,
-		struct msm_gem_submit *submit, int idx,
-		uint64_t iova, uint32_t size, bool full)
+static void snapshot_buf(struct msm_rd_state *rd, struct drm_gem_object *obj,
+		uint64_t iova, bool full, size_t offset, size_t size)
 {
-	struct drm_gem_object *obj = submit->bos[idx].obj;
-	unsigned offset = 0;
 	const char *buf;
 
-	if (iova) {
-		offset = iova - submit->bos[idx].iova;
-	} else {
-		iova = submit->bos[idx].iova;
-		size = obj->size;
-	}
-
 	/*
 	 * Always write the GPUADDR header so can get a complete list of all the
 	 * buffers in the cmd
@@ -333,10 +323,6 @@ static void snapshot_buf(struct msm_rd_state *rd,
 	if (!full)
 		return;
 
-	/* But only dump the contents of buffers marked READ */
-	if (!(submit->bos[idx].flags & MSM_SUBMIT_BO_READ))
-		return;
-
 	buf = msm_gem_get_vaddr_active(obj);
 	if (IS_ERR(buf))
 		return;
@@ -352,6 +338,7 @@ static void snapshot_buf(struct msm_rd_state *rd,
 
 void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit,
 		const char *fmt, ...)
 {
+	extern bool rd_full;
 	struct task_struct *task;
 	char msg[256];
 	int i, n;
@@ -385,16 +372,25 @@ void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit,
 
 	rd_write_section(rd, RD_CMD, msg, ALIGN(n, 4));
 
-	for (i = 0; i < submit->nr_bos; i++)
-		snapshot_buf(rd, submit, i, 0, 0, should_dump(submit, i));
+	for (i = 0; i < submit->nr_bos; i++) {
+		struct drm_gem_object *obj = submit->bos[i].obj;
+		bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+
+		snapshot_buf(rd, obj, submit->bos[i].iova, dump, 0, obj->size);
+	}
 
 	for (i = 0; i < submit->nr_cmds; i++) {
 		uint32_t szd = submit->cmd[i].size; /* in dwords */
+		int idx = submit->cmd[i].idx;
+		bool dump = rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP);
 
 		/* snapshot cmdstream bo's (if we haven't already): */
-		if (!should_dump(submit, i)) {
-			snapshot_buf(rd, submit, submit->cmd[i].idx,
-					submit->cmd[i].iova, szd * 4, true);
+		if (!dump) {
+			struct drm_gem_object *obj = submit->bos[idx].obj;
+			size_t offset = submit->cmd[i].iova - submit->bos[idx].iova;
+
+			snapshot_buf(rd, obj, submit->cmd[i].iova, true,
+					offset, szd * 4);
 		}
 	}
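To make the new calling convention concrete, the two call shapes after this change (both taken from the hunks above) are:

	/* whole-BO snapshot, from the submit->bos[] loop: */
	snapshot_buf(rd, obj, submit->bos[i].iova, dump, 0, obj->size);

	/* partial snapshot of just the cmdstream range inside a larger BO: */
	size_t offset = submit->cmd[i].iova - submit->bos[idx].iova;
	snapshot_buf(rd, obj, submit->cmd[i].iova, true, offset, szd * 4);

The caller now chooses the (offset, size) window explicitly, instead of snapshot_buf() reverse-engineering it from the submit->bos[] table.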
From patchwork Wed Mar 19 14:52:44 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 32/34] drm/msm: Crashdec support for sparse
Date: Wed, 19 Mar 2025 07:52:44 -0700
Message-ID: <20250319145425.51935-33-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>

From: Rob Clark

For VM_BIND contexts, we need to iterate the VMAs looking for ones with the MSM_VMA_DUMP flag set.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gpu.c | 73 +++++++++++++++++++++++++----------
 1 file changed, 52 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index e35125d88466..aca943dc0cd7 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -257,6 +257,50 @@ static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
 	state->nr_bos++;
 }
 
+static void crashstate_get_bos(struct msm_gpu_state *state, struct msm_gem_submit *submit)
+{
+	extern bool rd_full;
+
+	if (!submit)
+		return;
+
+	if (msm_context_is_vmbind(submit->queue->ctx)) {
+		struct drm_gpuva *vma;
+		unsigned cnt = 0;
+
+		mutex_lock(&to_msm_vm(submit->vm)->vm_lock);
+
+		drm_gpuvm_for_each_va (vma, submit->vm)
+			cnt++;
+
+		state->bos = kcalloc(cnt, sizeof(struct msm_gpu_state_bo), GFP_KERNEL);
+
+		drm_gpuvm_for_each_va (vma, submit->vm) {
+			bool dump = rd_full || (vma->flags & MSM_VMA_DUMP);
+
+			/* Skip MAP_NULL/PRR VMAs: */
+			if (!vma->gem.obj)
+				continue;
+
+			msm_gpu_crashstate_get_bo(state, vma->gem.obj, vma->va.addr,
+						  dump, vma->gem.offset, vma->va.range);
+		}
+
+		mutex_unlock(&to_msm_vm(submit->vm)->vm_lock);
+	} else {
+		state->bos = kcalloc(submit->nr_bos,
+				sizeof(struct msm_gpu_state_bo), GFP_KERNEL);
+
+		for (int i = 0; state->bos && i < submit->nr_bos; i++) {
+			struct drm_gem_object *obj = submit->bos[i].obj;
+			bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+
+			msm_gpu_crashstate_get_bo(state, obj, submit->bos[i].iova,
+						  dump, 0, obj->size);
+		}
+	}
+}
+
 static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
 		struct msm_gem_submit *submit, char *comm, char *cmd)
 {
@@ -279,30 +323,17 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
 	state->cmd = kstrdup(cmd, GFP_KERNEL);
 	state->fault_info = gpu->fault_info;
 
-	if (submit) {
-		extern bool rd_full;
-		int i;
-
-		if (state->fault_info.ttbr0) {
-			struct msm_gpu_fault_info *info = &state->fault_info;
-			struct msm_mmu *mmu = to_msm_vm(submit->vm)->mmu;
-
-			msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0,
-						   &info->asid);
-			msm_iommu_pagetable_walk(mmu, info->iova, info->ptes);
-		}
+	if (submit && state->fault_info.ttbr0) {
+		struct msm_gpu_fault_info *info = &state->fault_info;
+		struct msm_mmu *mmu = to_msm_vm(submit->vm)->mmu;
 
-		state->bos = kcalloc(submit->nr_bos,
-				sizeof(struct msm_gpu_state_bo), GFP_KERNEL);
-
-		for (i = 0; state->bos && i < submit->nr_bos; i++) {
-			struct drm_gem_object *obj = submit->bos[i].obj;
-			bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
-			msm_gpu_crashstate_get_bo(state, obj, submit->bos[i].iova,
-						  dump, 0, obj->size);
-		}
+		msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0,
+					   &info->asid);
+		msm_iommu_pagetable_walk(mmu, info->iova, info->ptes);
 	}
 
+	crashstate_get_bos(state, submit);
+
 	/* Set the active crash state to be dumped on failure */
 	gpu->crashstate = state;
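A hypothetical helper (not part of the patch, the name is made up) summarizing the dump policy that both the devcoredump walk above and the rd walk in the next patch implement for vmbind contexts:

	/* Illustration only: whether a VMA's contents should be captured.
	 * MAP_NULL/PRR VMAs have no backing GEM object and are skipped
	 * entirely; otherwise contents are dumped when rd_full is set or
	 * when userspace marked the mapping with MSM_VMA_DUMP. */
	static bool vma_wants_dump(struct drm_gpuva *vma)
	{
		extern bool rd_full;

		if (!vma->gem.obj)
			return false;

		return rd_full || (vma->flags & MSM_VMA_DUMP);
	}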
From patchwork Wed Mar 19 14:52:45 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 33/34] drm/msm: rd dumping support for sparse
Date: Wed, 19 Mar 2025 07:52:45 -0700
Message-ID: <20250319145425.51935-34-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>

From: Rob Clark

As with devcoredump, we need to iterate the VMAs to figure out what to dump.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_rd.c | 48 +++++++++++++++++++++++++-----------
 1 file changed, 33 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_rd.c b/drivers/gpu/drm/msm/msm_rd.c
index edbcb93410a9..1876b789c924 100644
--- a/drivers/gpu/drm/msm/msm_rd.c
+++ b/drivers/gpu/drm/msm/msm_rd.c
@@ -372,25 +372,43 @@ void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit,
 
 	rd_write_section(rd, RD_CMD, msg, ALIGN(n, 4));
 
-	for (i = 0; i < submit->nr_bos; i++) {
-		struct drm_gem_object *obj = submit->bos[i].obj;
-		bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+	if (msm_context_is_vmbind(submit->queue->ctx)) {
+		struct drm_gpuva *vma;
 
-		snapshot_buf(rd, obj, submit->bos[i].iova, dump, 0, obj->size);
-	}
+		mutex_lock(&to_msm_vm(submit->vm)->vm_lock);
+		drm_gpuvm_for_each_va (vma, submit->vm) {
+			bool dump = rd_full || (vma->flags & MSM_VMA_DUMP);
 
-	for (i = 0; i < submit->nr_cmds; i++) {
-		uint32_t szd = submit->cmd[i].size; /* in dwords */
-		int idx = submit->cmd[i].idx;
-		bool dump = rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP);
+			/* Skip MAP_NULL/PRR VMAs: */
+			if (!vma->gem.obj)
+				continue;
+
+			snapshot_buf(rd, vma->gem.obj, vma->va.addr, dump,
+				     vma->gem.offset, vma->va.range);
+		}
+		mutex_unlock(&to_msm_vm(submit->vm)->vm_lock);
+
+	} else {
+		for (i = 0; i < submit->nr_bos; i++) {
+			struct drm_gem_object *obj = submit->bos[i].obj;
+			bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+
+			snapshot_buf(rd, obj, submit->bos[i].iova, dump, 0, obj->size);
+		}
+
+		for (i = 0; i < submit->nr_cmds; i++) {
+			uint32_t szd = submit->cmd[i].size; /* in dwords */
+			int idx = submit->cmd[i].idx;
+			bool dump = rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP);
 
-		/* snapshot cmdstream bo's (if we haven't already): */
-		if (!dump) {
-			struct drm_gem_object *obj = submit->bos[idx].obj;
-			size_t offset = submit->cmd[i].iova - submit->bos[idx].iova;
+			/* snapshot cmdstream bo's (if we haven't already): */
+			if (!dump) {
+				struct drm_gem_object *obj = submit->bos[idx].obj;
+				size_t offset = submit->cmd[i].iova - submit->bos[idx].iova;
 
-			snapshot_buf(rd, obj, submit->cmd[i].iova, true,
-				     offset, szd * 4);
+				snapshot_buf(rd, obj, submit->cmd[i].iova, true,
+					     offset, szd * 4);
+			}
 		}
 	}
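For readers less familiar with drm_gpuvm, a small sketch (illustration only, using just the drm_gpuva fields the hunks above touch) of how a sparse VMA relates GPU VA space to its backing object:

	/* A drm_gpuva maps the GPU VA range [va.addr, va.addr + va.range)
	 * onto the backing GEM object starting at byte offset gem.offset;
	 * MAP_NULL/PRR VMAs have gem.obj == NULL. */
	static void describe_vma(struct drm_gpuva *vma)
	{
		if (!vma->gem.obj) {
			pr_debug("VA %llx +%llx: no backing object\n",
				 vma->va.addr, vma->va.range);
			return;
		}

		pr_debug("VA %llx +%llx -> BO offset %llx\n",
			 vma->va.addr, vma->va.range, vma->gem.offset);
	}

This is why snapshot_buf() is handed (vma->gem.offset, vma->va.range) rather than (0, obj->size): a sparse mapping may cover only a slice of the object.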
From patchwork Wed Mar 19 14:52:46 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 34/34] drm/msm: Bump UAPI version
Date: Wed, 19 Mar 2025 07:52:46 -0700
Message-ID: <20250319145425.51935-35-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>

From: Rob Clark

Bump the version to signal to userspace that VM_BIND is supported.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_drv.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 70c3a3712a3e..ee5a1e3d5f3b 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -41,9 +41,10 @@
  * - 1.10.0 - Add MSM_SUBMIT_BO_NO_IMPLICIT
  * - 1.11.0 - Add wait boost (MSM_WAIT_FENCE_BOOST, MSM_PREP_BOOST)
  * - 1.12.0 - Add MSM_INFO_SET_METADATA and MSM_INFO_GET_METADATA
+ * - 1.13.0 - Add VM_BIND
  */
 #define MSM_VERSION_MAJOR	1
-#define MSM_VERSION_MINOR	12
+#define MSM_VERSION_MINOR	13
 #define MSM_VERSION_PATCHLEVEL	0
 
 bool dumpstate;
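For completeness, a userspace-side sketch (assuming libdrm's drmGetVersion(); the helper name is hypothetical) of how a driver might gate VM_BIND usage on the bumped minor version:

	#include <stdbool.h>
	#include <xf86drm.h>

	/* Returns true if the msm kernel driver advertises VM_BIND
	 * support, i.e. UAPI version >= 1.13. */
	static bool msm_has_vm_bind(int fd)
	{
		drmVersionPtr v = drmGetVersion(fd);
		bool ok = v && (v->version_major > 1 ||
				(v->version_major == 1 && v->version_minor >= 13));
		drmFreeVersion(v);
		return ok;
	}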