From patchwork Thu Nov 8 17:55:09 2018
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 150559
From: John Garry
Subject: [PATCH] iommu/dma: Use NUMA aware memory allocations in __iommu_dma_alloc_pages()
Date: Fri, 9 Nov 2018 01:55:09 +0800
Message-ID: <1541699709-25474-1-git-send-email-john.garry@huawei.com>
X-Mailer: git-send-email 2.8.1
X-Mailing-List: linux-kernel@vger.kernel.org

Change function __iommu_dma_alloc_pages() to allocate memory/pages for
DMA from the respective device's NUMA node.

Originally-from: Ganapatrao Kulkarni
Signed-off-by: John Garry
---
This patch was originally posted by Ganapatrao in [1] *. However, after
initial review, it was never reposted (due to lack of cycles, I think).
In addition, the functionality in its sibling patches was merged through
other patches, as mentioned in [2]; this also refers to a discussion on
device-local allocations vs CPU-local allocations for the DMA pool, and
which is better [3]. However, as mentioned in [3], dma_alloc_coherent()
uses the locality information from the device - as in direct DMA - so
this patch just applies that same policy.

[1] https://lore.kernel.org/patchwork/patch/833004/
[2] https://lkml.org/lkml/2018/8/22/391
[3] https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1692998.html

* Authorship on this updated patch may need to be fixed - I did not want
to add Ganapatrao's Signed-off-by without permission.
--
1.9.1

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index d1b0475..ada00bc 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -449,20 +449,17 @@ static void __iommu_dma_free_pages(struct page **pages, int count)
 	kvfree(pages);
 }
 
-static struct page **__iommu_dma_alloc_pages(unsigned int count,
-		unsigned long order_mask, gfp_t gfp)
+static struct page **__iommu_dma_alloc_pages(struct device *dev,
+		unsigned int count, unsigned long order_mask, gfp_t gfp)
 {
 	struct page **pages;
-	unsigned int i = 0, array_size = count * sizeof(*pages);
+	unsigned int i = 0, nid = dev_to_node(dev);
 
 	order_mask &= (2U << MAX_ORDER) - 1;
 	if (!order_mask)
 		return NULL;
 
-	if (array_size <= PAGE_SIZE)
-		pages = kzalloc(array_size, GFP_KERNEL);
-	else
-		pages = vzalloc(array_size);
+	pages = kvzalloc_node(count * sizeof(*pages), GFP_KERNEL, nid);
 	if (!pages)
 		return NULL;
 
@@ -483,8 +480,10 @@ static struct page **__iommu_dma_alloc_pages(unsigned int count,
 			unsigned int order = __fls(order_mask);
 
 			order_size = 1U << order;
-			page = alloc_pages((order_mask - order_size) ?
-					   gfp | __GFP_NORETRY : gfp, order);
+			page = alloc_pages_node(nid,
+					(order_mask - order_size) ?
+					gfp | __GFP_NORETRY : gfp,
+					order);
 			if (!page)
 				continue;
 			if (!order)
@@ -569,7 +568,8 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
 		alloc_sizes = min_size;
 
 	count = PAGE_ALIGN(size) >> PAGE_SHIFT;
-	pages = __iommu_dma_alloc_pages(count, alloc_sizes >> PAGE_SHIFT, gfp);
+	pages = __iommu_dma_alloc_pages(dev, count, alloc_sizes >> PAGE_SHIFT,
+			gfp);
 	if (!pages)
 		return NULL;