From patchwork Tue Feb 2 09:51:05 2021
X-Patchwork-Id: 374878
From: Christoph Hellwig <hch@lst.de>
To: Mauro Carvalho Chehab, Marek Szyprowski, Tomasz Figa,
    Ricardo Ribalda, Sergey Senozhatsky, iommu@lists.linux-foundation.org
Cc: Robin Murphy, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-media@vger.kernel.org
Subject: [PATCH 2/7] dma-mapping: add a dma_mmap_pages helper
Date: Tue, 2 Feb 2021 10:51:05 +0100
Message-Id: <20210202095110.1215346-3-hch@lst.de>
In-Reply-To: <20210202095110.1215346-1-hch@lst.de>
References: <20210202095110.1215346-1-hch@lst.de>

Add a helper to map memory allocated using dma_alloc_pages into a user
address space, similar to the dma_mmap_attrs function for coherent
allocations.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 Documentation/core-api/dma-api.rst | 10 ++++++++++
 include/linux/dma-mapping.h        |  2 ++
 kernel/dma/mapping.c               | 13 +++++++++++++
 3 files changed, 25 insertions(+)

diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
index e6d23f117308df..157a474ae54416 100644
--- a/Documentation/core-api/dma-api.rst
+++ b/Documentation/core-api/dma-api.rst
@@ -563,6 +563,16 @@ Free a region of memory previously allocated using dma_alloc_pages().
 dev, size, dma_handle and dir must all be the same as those passed into
 dma_alloc_pages(). page must be the pointer returned by dma_alloc_pages().
 
+::
+
+	int
+	dma_mmap_pages(struct device *dev, struct vm_area_struct *vma,
+		       size_t size, struct page *page)
+
+Map an allocation returned from dma_alloc_pages() into a user address space.
+dev and size must be the same as those passed into dma_alloc_pages().
+page must be the pointer returned by dma_alloc_pages().
+
 ::
 
 	void *
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index fbfa3f5abd9498..4977a748cb9483 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -263,6 +263,8 @@ struct page *dma_alloc_pages(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp);
 void dma_free_pages(struct device *dev, size_t size, struct page *page,
 		dma_addr_t dma_handle, enum dma_data_direction dir);
+int dma_mmap_pages(struct device *dev, struct vm_area_struct *vma,
+		size_t size, struct page *page);
 
 static inline void *dma_alloc_noncoherent(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp)
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 68992e35c8c3a7..c1e515496c067b 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -515,6 +515,19 @@ void dma_free_pages(struct device *dev, size_t size, struct page *page,
 }
 EXPORT_SYMBOL_GPL(dma_free_pages);
 
+int dma_mmap_pages(struct device *dev, struct vm_area_struct *vma,
+		size_t size, struct page *page)
+{
+	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+
+	if (vma->vm_pgoff >= count || vma_pages(vma) > count - vma->vm_pgoff)
+		return -ENXIO;
+	return remap_pfn_range(vma, vma->vm_start,
+			page_to_pfn(page) + vma->vm_pgoff,
+			vma_pages(vma) << PAGE_SHIFT, vma->vm_page_prot);
+}
+EXPORT_SYMBOL_GPL(dma_mmap_pages);
+
 int dma_supported(struct device *dev, u64 mask)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
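
For reference, a minimal usage sketch (not part of the patch): a
hypothetical driver that allocates a DMA buffer with dma_alloc_pages()
at probe time and exposes it to userspace from its ->mmap file
operation via the new helper. All my_* names are illustrative, not
existing kernel symbols.

#include <linux/dma-mapping.h>
#include <linux/fs.h>
#include <linux/mm.h>

struct my_device {
	struct device *dev;
	struct page *pages;	/* returned by dma_alloc_pages() */
	dma_addr_t dma_handle;
	size_t size;		/* size passed to dma_alloc_pages() */
};

static int my_alloc_buffer(struct my_device *md, size_t size)
{
	md->pages = dma_alloc_pages(md->dev, size, &md->dma_handle,
				    DMA_BIDIRECTIONAL, GFP_KERNEL);
	if (!md->pages)
		return -ENOMEM;
	md->size = size;
	return 0;
}

static int my_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct my_device *md = file->private_data;

	/*
	 * dev, size and page must match the dma_alloc_pages() call.
	 * The helper checks vma->vm_pgoff and the vma length against
	 * the allocation itself and returns -ENXIO on overflow, so no
	 * extra bounds checking is needed here.
	 */
	return dma_mmap_pages(md->dev, vma, md->size, md->pages);
}

Because the helper validates vm_pgoff against the allocation size,
userspace may also map a sub-range of the buffer by passing a non-zero
offset to mmap().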