From patchwork Fri May 2 16:56:42 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 887591
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
	Connor Abbott, Rob Clark, Rob Clark, Sean Paul, Konrad Dybcio,
	Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie,
	Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v4 15/33] drm/msm: Add mmu support for non-zero offset
Date: Fri, 2 May 2025 09:56:42 -0700
Message-ID: <20250502165831.44850-16-robdclark@gmail.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250502165831.44850-1-robdclark@gmail.com>
References: <20250502165831.44850-1-robdclark@gmail.com>

From: Rob Clark

This only needs to be supported for the iopgtable MMU; the other cases
are either used only for kernel-managed mappings (where the offset is
always zero) or serve devices which do not support sparse bindings.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/a2xx_gpummu.c |  5 ++++-
 drivers/gpu/drm/msm/msm_gem.c            |  4 ++--
 drivers/gpu/drm/msm/msm_gem.h            |  4 ++--
 drivers/gpu/drm/msm/msm_gem_vma.c        | 13 +++++++------
 drivers/gpu/drm/msm/msm_iommu.c          | 22 ++++++++++++++++++++--
 drivers/gpu/drm/msm/msm_mmu.h            |  2 +-
 6 files changed, 36 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c
index 39641551eeb6..6124336af2ec 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c
@@ -29,13 +29,16 @@ static void a2xx_gpummu_detach(struct msm_mmu *mmu)
 }
 
 static int a2xx_gpummu_map(struct msm_mmu *mmu, uint64_t iova,
-		struct sg_table *sgt, size_t len, int prot)
+		struct sg_table *sgt, size_t off, size_t len,
+		int prot)
 {
 	struct a2xx_gpummu *gpummu = to_a2xx_gpummu(mmu);
 	unsigned idx = (iova - GPUMMU_VA_START) / GPUMMU_PAGE_SIZE;
 	struct sg_dma_page_iter dma_iter;
 	unsigned prot_bits = 0;
 
+	WARN_ON(off != 0);
+
 	if (prot & IOMMU_WRITE)
 		prot_bits |= 1;
 	if (prot & IOMMU_READ)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index ecafc6b4a6b4..9cca5997f45c 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -435,7 +435,7 @@ static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj,
 	vma = lookup_vma(obj, vm);
 
 	if (!vma) {
-		vma = msm_gem_vma_new(vm, obj, range_start, range_end);
+		vma = msm_gem_vma_new(vm, obj, 0, range_start, range_end);
 	} else {
 		GEM_WARN_ON(vma->va.addr < range_start);
 		GEM_WARN_ON((vma->va.addr + obj->size) > range_end);
@@ -477,7 +477,7 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma)
 	if (IS_ERR(pages))
 		return PTR_ERR(pages);
 
-	return msm_gem_vma_map(vma, prot, msm_obj->sgt, obj->size);
+	return msm_gem_vma_map(vma, prot, msm_obj->sgt);
 }
 
 void msm_gem_unpin_locked(struct drm_gem_object *obj)
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 3a853fcb8944..0d755b9d5f26 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -110,9 +110,9 @@ struct msm_gem_vma {
 
 struct drm_gpuva *
 msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj,
-		u64 range_start, u64 range_end);
+		u64 offset, u64 range_start, u64 range_end);
 void msm_gem_vma_purge(struct drm_gpuva *vma);
-int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt, int size);
+int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt);
 void msm_gem_vma_close(struct drm_gpuva *vma);
 
 struct msm_gem_object {
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 89e8f6e21b8d..c3bd89243a71 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -38,8 +38,7 @@ void msm_gem_vma_purge(struct drm_gpuva *vma)
 
 /* Map and pin vma: */
 int
-msm_gem_vma_map(struct drm_gpuva *vma, int prot,
-		struct sg_table *sgt, int size)
+msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt)
 {
 	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
 	struct msm_gem_vm *vm = to_msm_vm(vma->vm);
@@ -62,8 +61,9 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot,
 	 * Revisit this if we can come up with a scheme to pre-alloc pages
 	 * for the pgtable in map/unmap ops.
 	 */
-	ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, size, prot);
-
+	ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt,
+				  vma->gem.offset, vma->va.range,
+				  prot);
 	if (ret) {
 		msm_vma->mapped = false;
 	}
@@ -93,7 +93,7 @@ void msm_gem_vma_close(struct drm_gpuva *vma)
 /* Create a new vma and allocate an iova for it */
 struct drm_gpuva *
 msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
-		u64 range_start, u64 range_end)
+		u64 offset, u64 range_start, u64 range_end)
 {
 	struct msm_gem_vm *vm = to_msm_vm(_vm);
 	struct drm_gpuvm_bo *vm_bo;
@@ -107,6 +107,7 @@ msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
 		return ERR_PTR(-ENOMEM);
 
 	if (vm->managed) {
+		BUG_ON(offset != 0);
 		ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
 						obj->size, PAGE_SIZE, 0,
 						range_start, range_end, 0);
@@ -120,7 +121,7 @@ msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
 
 	GEM_WARN_ON((range_end - range_start) > obj->size);
 
-	drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0);
+	drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, offset);
 	vma->mapped = false;
 
 	ret = drm_gpuva_insert(&vm->base, &vma->base);
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index e70088a91283..2fd48e66bc98 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -113,7 +113,8 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova,
 }
 
 static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
-		struct sg_table *sgt, size_t len, int prot)
+		struct sg_table *sgt, size_t off, size_t len,
+		int prot)
 {
 	struct msm_iommu_pagetable *pagetable = to_pagetable(mmu);
 	struct io_pgtable_ops *ops = pagetable->pgtbl_ops;
@@ -125,6 +126,19 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
 		size_t size = sg->length;
 		phys_addr_t phys = sg_phys(sg);
 
+		if (!len)
+			break;
+
+		if (size <= off) {
+			off -= size;
+			continue;
+		}
+
+		phys += off;
+		size -= off;
+		size = min_t(size_t, size, len);
+		off = 0;
+
 		while (size) {
 			size_t pgsize, count, mapped = 0;
 			int ret;
@@ -140,6 +154,7 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
 			phys += mapped;
 			addr += mapped;
 			size -= mapped;
+			len -= mapped;
 
 			if (ret) {
 				msm_iommu_pagetable_unmap(mmu, iova, addr - iova);
@@ -400,11 +415,14 @@ static void msm_iommu_detach(struct msm_mmu *mmu)
 }
 
 static int msm_iommu_map(struct msm_mmu *mmu, uint64_t iova,
-		struct sg_table *sgt, size_t len, int prot)
+		struct sg_table *sgt, size_t off, size_t len,
+		int prot)
 {
 	struct msm_iommu *iommu = to_msm_iommu(mmu);
 	size_t ret;
 
+	WARN_ON(off != 0);
+
 	/* The arm-smmu driver expects the addresses to be sign extended */
 	if (iova & BIT_ULL(48))
 		iova |= GENMASK_ULL(63, 49);
diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h
index c33247e459d6..c874852b7331 100644
--- a/drivers/gpu/drm/msm/msm_mmu.h
+++ b/drivers/gpu/drm/msm/msm_mmu.h
@@ -12,7 +12,7 @@ struct msm_mmu_funcs {
 	void (*detach)(struct msm_mmu *mmu);
 	int (*map)(struct msm_mmu *mmu, uint64_t iova, struct sg_table *sgt,
-			size_t len, int prot);
+			size_t off, size_t len, int prot);
 	int (*unmap)(struct msm_mmu *mmu, uint64_t iova, size_t len);
 	void (*destroy)(struct msm_mmu *mmu);
 	void (*resume_translation)(struct msm_mmu *mmu);
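
The interesting piece of the patch is the off/len windowing that
msm_iommu_pagetable_map() now performs over the scatterlist: skip whole
segments that fall entirely below the offset, trim the first segment that
does get mapped, and clamp every mapped chunk against the remaining
length. As a minimal, self-contained sketch of that walk (userspace C,
for illustration only: 'struct segment' and map_range() are hypothetical
stand-ins for the sg_table entries and the io-pgtable map call, not
kernel APIs):

/*
 * Simplified userspace analogue of the off/len windowing added to
 * msm_iommu_pagetable_map() above.
 */
#include <inttypes.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct segment {
	uint64_t phys;   /* start of a physically contiguous chunk */
	size_t length;   /* chunk length in bytes */
};

/* Hypothetical stand-in for the io-pgtable map call. */
static void map_range(uint64_t iova, uint64_t phys, size_t size)
{
	printf("map iova=0x%" PRIx64 " -> phys=0x%" PRIx64 " size=0x%zx\n",
	       iova, phys, size);
}

/* Map 'len' bytes starting 'off' bytes into the segment list at 'iova'. */
static void map_with_offset(const struct segment *segs, unsigned int nsegs,
			    uint64_t iova, size_t off, size_t len)
{
	uint64_t addr = iova;

	for (unsigned int i = 0; i < nsegs; i++) {
		uint64_t phys = segs[i].phys;
		size_t size = segs[i].length;

		if (!len)
			break;

		/* Segments entirely below 'off' are skipped outright: */
		if (size <= off) {
			off -= size;
			continue;
		}

		/* Trim the residual offset off the first mapped segment,
		 * then clamp against the remaining 'len': */
		phys += off;
		size -= off;
		if (size > len)
			size = len;
		off = 0;

		map_range(addr, phys, size);
		addr += size;
		len -= size;
	}
}

int main(void)
{
	const struct segment segs[] = {
		{ 0x1000, 0x1000 },   /* bytes 0x0000-0x0fff of the BO */
		{ 0x8000, 0x2000 },   /* bytes 0x1000-0x2fff */
		{ 0x4000, 0x1000 },   /* bytes 0x3000-0x3fff */
	};

	/* Bind 0x2000 bytes starting at BO offset 0x1800: the first
	 * segment is skipped, mapping starts 0x800 into the second. */
	map_with_offset(segs, 3, 0x100000, 0x1800, 0x2000);
	return 0;
}

Running the example maps 0x1800 bytes out of the second segment and
0x800 out of the third; the kernel hunk preserves the same invariant
with its new 'len -= mapped' accounting inside the per-segment loop.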