From patchwork Wed Nov 21 14:54:10 2018
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 151699
From: John Garry
To:
CC: John Garry
Subject: [PATCH v3] iommu/dma: Use NUMA aware memory allocations in __iommu_dma_alloc_pages()
Date: Wed, 21 Nov 2018 22:54:10 +0800
Message-ID: <1542812051-178935-1-git-send-email-john.garry@huawei.com>
X-Mailer: git-send-email 2.8.1
X-Mailing-List: linux-kernel@vger.kernel.org

From: Ganapatrao Kulkarni

Change function __iommu_dma_alloc_pages() to allocate pages for DMA from the NUMA node of the respective device. The ternary operator that selected the gfp flags for what is now the alloc_pages_node() call is tidied up as part of this. We also replace the kzalloc()/vzalloc() combination with kvzalloc().
Signed-off-by: Ganapatrao Kulkarni
[JPG: Added kvzalloc(), drop pages ** being device local, tidied ternary operator]
Signed-off-by: John Garry
---
1.9.1

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index d1b0475..4afb1a8 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -449,20 +449,17 @@ static void __iommu_dma_free_pages(struct page **pages, int count)
 	kvfree(pages);
 }
 
-static struct page **__iommu_dma_alloc_pages(unsigned int count,
-		unsigned long order_mask, gfp_t gfp)
+static struct page **__iommu_dma_alloc_pages(struct device *dev,
+		unsigned int count, unsigned long order_mask, gfp_t gfp)
 {
 	struct page **pages;
-	unsigned int i = 0, array_size = count * sizeof(*pages);
+	unsigned int i = 0, nid = dev_to_node(dev);
 
 	order_mask &= (2U << MAX_ORDER) - 1;
 	if (!order_mask)
 		return NULL;
 
-	if (array_size <= PAGE_SIZE)
-		pages = kzalloc(array_size, GFP_KERNEL);
-	else
-		pages = vzalloc(array_size);
+	pages = kvzalloc(count * sizeof(*pages), GFP_KERNEL);
 	if (!pages)
 		return NULL;
 
@@ -481,10 +478,12 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
 	for (order_mask &= (2U << __fls(count)) - 1; order_mask;
 			order_mask &= ~order_size) {
 		unsigned int order = __fls(order_mask);
+		gfp_t alloc_flags = gfp;
 
 		order_size = 1U << order;
-		page = alloc_pages((order_mask - order_size) ?
-				gfp | __GFP_NORETRY : gfp, order);
+		if (order_mask > order_size)
+			alloc_flags |= __GFP_NORETRY;
+		page = alloc_pages_node(nid, alloc_flags, order);
 		if (!page)
 			continue;
 		if (!order)
@@ -569,7 +568,8 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
 		alloc_sizes = min_size;
 
 	count = PAGE_ALIGN(size) >> PAGE_SHIFT;
-	pages = __iommu_dma_alloc_pages(count, alloc_sizes >> PAGE_SHIFT, gfp);
+	pages = __iommu_dma_alloc_pages(dev, count, alloc_sizes >> PAGE_SHIFT,
+			gfp);
 	if (!pages)
 		return NULL;
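The ternary tidy-up can be checked in isolation. In the loop, order_size is always the highest set bit of order_mask, so order_mask >= order_size holds and (order_mask - order_size) is non-zero exactly when order_mask > order_size. A minimal userspace sketch of the equivalence, where GFP_X and NORETRY are hypothetical stand-in bit values rather than the real kernel gfp_t definitions:

```c
typedef unsigned int gfp_t;
#define GFP_X   0x1u /* stand-in base flags, not a real kernel constant */
#define NORETRY 0x2u /* stand-in for __GFP_NORETRY */

/* Old style: ternary embedded in the alloc_pages() call site. */
static gfp_t old_flags(unsigned int order_mask, unsigned int order_size,
		       gfp_t gfp)
{
	return (order_mask - order_size) ? gfp | NORETRY : gfp;
}

/* New style: explicit flags variable, as the patch does. Under the
 * invariant order_size == highest set bit of order_mask, the two
 * selections are identical. */
static gfp_t new_flags(unsigned int order_mask, unsigned int order_size,
		       gfp_t gfp)
{
	gfp_t alloc_flags = gfp;

	if (order_mask > order_size)
		alloc_flags |= NORETRY;
	return alloc_flags;
}
```

With order_mask == order_size (only one order left to try), both return gfp unchanged; with lower bits still set, both add NORETRY so the higher-order attempt fails fast and the loop falls back to a smaller order.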
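The kvzalloc() change removes a hand-rolled size check: the old code chose kzalloc() for arrays up to PAGE_SIZE and vzalloc() otherwise, while kvzalloc() makes the kmalloc-vs-vmalloc decision internally and returns zeroed memory. A rough userspace analogue of that contract, with my_kvzalloc() as a hypothetical stand-in built on calloc() (it models only the one-call, zeroed-memory behaviour, not the kernel's allocator selection):

```c
#include <stdlib.h>

/* Hypothetical userspace stand-in for kvzalloc(count * sizeof(*pages),
 * GFP_KERNEL): a single call returning zeroed memory, hiding the
 * small-vs-large allocation choice the removed code made by hand. */
static void *my_kvzalloc(size_t n)
{
	return calloc(1, n);
}
```

The caller frees with kvfree() in the kernel (free() in this sketch), which is why __iommu_dma_free_pages() needed no change.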