From patchwork Mon Jul 26 15:38:01 2021
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 486338
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Roman Skakun,
 Andrii Anisov, Christoph Hellwig, Sasha Levin
Subject: [PATCH 5.13 089/223] dma-mapping: handle vmalloc addresses in
 dma_common_{mmap,get_sgtable}
Date: Mon, 26 Jul 2021 17:38:01 +0200
Message-Id: <20210726153849.164103359@linuxfoundation.org>
In-Reply-To: <20210726153846.245305071@linuxfoundation.org>
References: <20210726153846.245305071@linuxfoundation.org>

From: Roman Skakun

[ Upstream commit 40ac971eab89330d6153e7721e88acd2d98833f9 ]

xen-swiotlb can use vmalloc-backed addresses for DMA coherent allocations
and uses the common helpers.  Properly handle them to unbreak Xen on ARM
platforms.

Fixes: 1b65c4e5a9af ("swiotlb-xen: use xen_alloc/free_coherent_pages")
Signed-off-by: Roman Skakun
Reviewed-by: Andrii Anisov
[hch: split the patch, renamed the helpers]
Signed-off-by: Christoph Hellwig
Signed-off-by: Sasha Levin
---
 kernel/dma/ops_helpers.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)
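The crux of the fix: virt_to_page() translates a kernel virtual address by
pure offset arithmetic against the linear (direct) mapping, so handing it a
vmalloc address yields a bogus struct page pointer. vmalloc memory is only
reachable through the page tables, which is exactly the walk that
vmalloc_to_page() performs. Below the cut line, a minimal standalone
restatement of the helper the patch adds, with the reasoning spelled out in
editorial comments (kernel build context assumed; is_vmalloc_addr(),
vmalloc_to_page() and virt_to_page() are existing kernel APIs, the sketch
name is illustrative):

#include <linux/mm.h>		/* is_vmalloc_addr(), virt_to_page() */
#include <linux/vmalloc.h>	/* vmalloc_to_page() */

/* Resolve a kernel virtual address to the struct page backing it. */
static struct page *vaddr_to_page_sketch(void *cpu_addr)
{
	/*
	 * vmalloc range: pages are scattered, so the mapping must be
	 * looked up through the page tables.
	 */
	if (is_vmalloc_addr(cpu_addr))
		return vmalloc_to_page(cpu_addr);
	/* Linear map: simple direct-map arithmetic is valid here. */
	return virt_to_page(cpu_addr);
}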
diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
index 910ae69cae77..af4a6ef48ce0 100644
--- a/kernel/dma/ops_helpers.c
+++ b/kernel/dma/ops_helpers.c
@@ -5,6 +5,13 @@
  */
 #include <linux/dma-map-ops.h>
 
+static struct page *dma_common_vaddr_to_page(void *cpu_addr)
+{
+	if (is_vmalloc_addr(cpu_addr))
+		return vmalloc_to_page(cpu_addr);
+	return virt_to_page(cpu_addr);
+}
+
 /*
  * Create scatter-list for the already allocated DMA buffer.
  */
@@ -12,7 +19,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs)
 {
-	struct page *page = virt_to_page(cpu_addr);
+	struct page *page = dma_common_vaddr_to_page(cpu_addr);
 	int ret;
 
 	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
@@ -32,6 +39,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
 	unsigned long user_count = vma_pages(vma);
 	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
 	unsigned long off = vma->vm_pgoff;
+	struct page *page = dma_common_vaddr_to_page(cpu_addr);
 	int ret = -ENXIO;
 
 	vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs);
@@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
 		return -ENXIO;
 
 	return remap_pfn_range(vma, vma->vm_start,
-			page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff,
+			page_to_pfn(page) + vma->vm_pgoff,
 			user_count << PAGE_SHIFT, vma->vm_page_prot);
 #else
 	return -ENXIO;
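For context, the fixed dma_common_mmap() path is typically reached when a
driver exports a dma_alloc_coherent() buffer to userspace via
dma_mmap_coherent(), which for DMA ops built on the common helpers (such as
xen-swiotlb's mmap op) lands in dma_common_mmap() above. A hypothetical
driver handler as a sketch; struct mydrv and its fields (dev, cpu_addr,
dma_handle, size) are illustrative, while dma_mmap_coherent() is a real
kernel API:

/* Hypothetical example; struct mydrv and its fields are made up. */
static int mydrv_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct mydrv *drv = file->private_data;

	/*
	 * On xen-swiotlb/ARM the coherent buffer may be vmalloc-backed;
	 * before this patch dma_common_mmap() fed that address straight
	 * to virt_to_page() and remapped the wrong pfn to userspace.
	 */
	return dma_mmap_coherent(drv->dev, vma, drv->cpu_addr,
				 drv->dma_handle, drv->size);
}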