From patchwork Thu Dec 20 13:14:23 2012
X-Patchwork-Submitter: Daniel Vetter
X-Patchwork-Id: 13674
From: Daniel Vetter
To: DRI Development, linaro-mm-sig@lists.linaro.org, linux-media@vger.kernel.org
Cc: LKML
Date: Thu, 20 Dec 2012 14:14:23 +0100
Message-Id: <1356009263-15822-1-git-send-email-daniel.vetter@ffwll.ch>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <50D255F5.5030602@nvidia.com>
References: <50D255F5.5030602@nvidia.com>
Subject: [Linaro-mm-sig] [PATCH] [RFC] dma-buf: implement vmap refcounting in the interface logic

All drivers which implement this need to have some sort of refcount to
allow concurrent vmap usage. Hence implement this in the dma-buf core.

To protect against concurrent calls we need a lock, which potentially
causes new funny locking inversions. But this shouldn't be a problem
for exporters with statically allocated backing storage, and more
dynamic drivers have decent issues already anyway.

Inspired by some refactoring patches from Aaron Plattner, who
implemented the same idea, but only for drm/prime drivers.

v2: Check in dma_buf_release that no dangling vmaps are left, suggested
by Aaron Plattner. We might want to do similar checks for attachments,
but that's for another patch. Also fix up the ERR_PTR return for vmap.

v3: Check whether the passed-in vmap address matches with the cached
one for vunmap.
Eventually we might want to remove that parameter - compared to the
kmap functions there's no need for the vaddr for unmapping. Suggested
by Chris Wilson.

v4: Fix a brown-paper-bag bug spotted by Aaron Plattner.

Cc: Aaron Plattner
Signed-off-by: Daniel Vetter
Reviewed-by: Aaron Plattner
Tested-by: Aaron Plattner
Signed-off-by: Rob Clark
Reviewed-by: Rob Clark
---
 Documentation/dma-buf-sharing.txt |  6 +++++-
 drivers/base/dma-buf.c            | 43 ++++++++++++++++++++++++++++++++++-----
 include/linux/dma-buf.h           |  4 +++-
 3 files changed, 46 insertions(+), 7 deletions(-)

diff --git a/Documentation/dma-buf-sharing.txt b/Documentation/dma-buf-sharing.txt
index 0188903..4966b1b 100644
--- a/Documentation/dma-buf-sharing.txt
+++ b/Documentation/dma-buf-sharing.txt
@@ -302,7 +302,11 @@ Access to a dma_buf from the kernel context involves three steps:
       void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
 
    The vmap call can fail if there is no vmap support in the exporter, or if it
-   runs out of vmalloc space. Fallback to kmap should be implemented.
+   runs out of vmalloc space. Fallback to kmap should be implemented. Note that
+   the dma-buf layer keeps a reference count for all vmap access and calls down
+   into the exporter's vmap function only when no vmapping exists, and only
+   unmaps it once. Protection against concurrent vmap/vunmap calls is provided
+   by taking the dma_buf->lock mutex.
 
 3. Finish access
 
diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
index a3f79c4..26b68de 100644
--- a/drivers/base/dma-buf.c
+++ b/drivers/base/dma-buf.c
@@ -39,6 +39,8 @@ static int dma_buf_release(struct inode *inode, struct file *file)
 
 	dmabuf = file->private_data;
 
+	BUG_ON(dmabuf->vmapping_counter);
+
 	dmabuf->ops->release(dmabuf);
 	kfree(dmabuf);
 	return 0;
@@ -482,12 +484,34 @@ EXPORT_SYMBOL_GPL(dma_buf_mmap);
  */
 void *dma_buf_vmap(struct dma_buf *dmabuf)
 {
+	void *ptr;
+
 	if (WARN_ON(!dmabuf))
 		return NULL;
 
-	if (dmabuf->ops->vmap)
-		return dmabuf->ops->vmap(dmabuf);
-	return NULL;
+	if (!dmabuf->ops->vmap)
+		return NULL;
+
+	mutex_lock(&dmabuf->lock);
+	if (dmabuf->vmapping_counter) {
+		dmabuf->vmapping_counter++;
+		BUG_ON(!dmabuf->vmap_ptr);
+		ptr = dmabuf->vmap_ptr;
+		goto out_unlock;
+	}
+
+	BUG_ON(dmabuf->vmap_ptr);
+
+	ptr = dmabuf->ops->vmap(dmabuf);
+	if (IS_ERR_OR_NULL(ptr))
+		goto out_unlock;
+
+	dmabuf->vmap_ptr = ptr;
+	dmabuf->vmapping_counter = 1;
+
+out_unlock:
+	mutex_unlock(&dmabuf->lock);
+	return ptr;
 }
 EXPORT_SYMBOL_GPL(dma_buf_vmap);
 
@@ -501,7 +525,16 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
 	if (WARN_ON(!dmabuf))
 		return;
 
-	if (dmabuf->ops->vunmap)
-		dmabuf->ops->vunmap(dmabuf, vaddr);
+	BUG_ON(!dmabuf->vmap_ptr);
+	BUG_ON(dmabuf->vmapping_counter == 0);
+	BUG_ON(dmabuf->vmap_ptr != vaddr);
+
+	mutex_lock(&dmabuf->lock);
+	if (--dmabuf->vmapping_counter == 0) {
+		if (dmabuf->ops->vunmap)
+			dmabuf->ops->vunmap(dmabuf, vaddr);
+		dmabuf->vmap_ptr = NULL;
+	}
+	mutex_unlock(&dmabuf->lock);
 }
 EXPORT_SYMBOL_GPL(dma_buf_vunmap);
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index bd2e52c..e3bf2f6 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -119,8 +119,10 @@ struct dma_buf {
 	struct file *file;
 	struct list_head attachments;
 	const struct dma_buf_ops *ops;
-	/* mutex to serialize list manipulation and attach/detach */
+	/* mutex to serialize list manipulation, attach/detach and vmap/unmap */
 	struct mutex lock;
+	unsigned vmapping_counter;
+	void *vmap_ptr;
 	void *priv;
 };
 
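
For illustration only, not part of the patch: a minimal importer-side
sketch under the semantics documented above. The function name
example_cpu_access and the -ENOMEM fallback are assumptions; only the
dma_buf_vmap()/dma_buf_vunmap() calls themselves follow the signatures
shown in the diff.

#include <linux/dma-buf.h>
#include <linux/err.h>

/* Hypothetical importer: nested vmaps of the same buffer are safe now,
 * the exporter's ->vmap() only runs for the first mapping. */
static int example_cpu_access(struct dma_buf *dmabuf)
{
	void *vaddr;

	vaddr = dma_buf_vmap(dmabuf);
	if (IS_ERR_OR_NULL(vaddr))	/* no vmap support or out of vmalloc space */
		return vaddr ? PTR_ERR(vaddr) : -ENOMEM;

	/* ... access the buffer through vaddr ... */

	dma_buf_vunmap(dmabuf, vaddr);
	return 0;
}

A real importer would presumably fall back to dma_buf_kmap() here
instead of failing outright, as the documentation hunk above suggests.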
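
Likewise only a sketch, not taken from the patch: with the refcount in
the core, an exporter with statically allocated backing pages can keep
its callbacks trivial, since ->vmap()/->vunmap() are reached only for
the first map and the last unmap. struct example_buffer and its fields
are hypothetical.

#include <linux/dma-buf.h>
#include <linux/vmalloc.h>

struct example_buffer {		/* hypothetical exporter private data */
	struct page **pages;
	unsigned int num_pages;
};

/* Reached only when no vmapping of this buffer exists yet. */
static void *example_vmap(struct dma_buf *dmabuf)
{
	struct example_buffer *buf = dmabuf->priv;

	return vmap(buf->pages, buf->num_pages, VM_MAP, PAGE_KERNEL);
}

/* Reached only after the last dma_buf_vunmap() call. */
static void example_vunmap(struct dma_buf *dmabuf, void *vaddr)
{
	vunmap(vaddr);
}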