From patchwork Sat Nov 21 04:49:51 2020
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 330165
From: John Stultz
To: lkml
Cc: John Stultz, Sumit Semwal, Liam Mark, Laura Abbott, Brian Starkey, Hridya Valsaraju, Suren Baghdasaryan, Sandeep Patil, Daniel Mentz, Chris Goldsworthy, Ørjan Eide, Robin Murphy, Ezequiel Garcia, Simon Ser, James Jones, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [PATCH v6 1/5] dma-buf: system_heap: Rework system heap to use sgtables instead of pagelists
Date: Sat, 21 Nov 2020 04:49:51 +0000
Message-Id: <20201121044955.58215-2-john.stultz@linaro.org>
In-Reply-To: <20201121044955.58215-1-john.stultz@linaro.org>
References: <20201121044955.58215-1-john.stultz@linaro.org>

In preparation for some patches to optimize the system heap code, rework the dmabuf exporter to use sgtables rather than page lists for tracking the associated pages. This will allow for large order page allocations, as well as more efficient page pooling.

In doing so, the system heap stops using the heap-helpers logic, which sadly is not quite as generic as I was hoping it to be, so this patch adds heap-specific implementations of the dma_buf_ops function handlers.

Cc: Sumit Semwal
Cc: Liam Mark
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Chris Goldsworthy
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Reviewed-by: Brian Starkey
Signed-off-by: John Stultz
---
v2:
* Fix locking issue and an unused return value, Reported-by: kernel test robot, Julia Lawall
* Make system_heap_buf_ops static, Reported-by: kernel test robot
v3:
* Use the new sgtable mapping functions, as Suggested-by: Daniel Mentz
v4:
* Make sys_heap static (indirectly), Reported-by: kernel test robot
* Spelling fix suggested by BrianS
v6:
* Fixups against drm-misc-next, from Sumit
---
 drivers/dma-buf/heaps/system_heap.c | 346 ++++++++++++++++++++++++----
 1 file changed, 300 insertions(+), 46 deletions(-)

diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c index 0bf688e3c023..e5f9f964b910 100644 --- a/drivers/dma-buf/heaps/system_heap.c +++ b/drivers/dma-buf/heaps/system_heap.c @@ -3,7 +3,11 @@ * DMABUF System heap exporter * * Copyright (C) 2011 Google, Inc. - * Copyright (C) 2019 Linaro Ltd. + * Copyright (C) 2019, 2020 Linaro Ltd. + * + * Portions based off of Andrew Davis' SRAM heap: + * Copyright (C) 2019 Texas Instruments Incorporated - http://www.ti.com/ + * Andrew F.
Davis */ #include @@ -15,72 +19,323 @@ #include #include #include -#include -#include +#include + +static struct dma_heap *sys_heap; -#include "heap-helpers.h" +struct system_heap_buffer { + struct dma_heap *heap; + struct list_head attachments; + struct mutex lock; + unsigned long len; + struct sg_table sg_table; + int vmap_cnt; + struct dma_buf_map *map; +}; -struct dma_heap *sys_heap; +struct dma_heap_attachment { + struct device *dev; + struct sg_table *table; + struct list_head list; +}; -static void system_heap_free(struct heap_helper_buffer *buffer) +static struct sg_table *dup_sg_table(struct sg_table *table) { - pgoff_t pg; + struct sg_table *new_table; + int ret, i; + struct scatterlist *sg, *new_sg; + + new_table = kzalloc(sizeof(*new_table), GFP_KERNEL); + if (!new_table) + return ERR_PTR(-ENOMEM); + + ret = sg_alloc_table(new_table, table->orig_nents, GFP_KERNEL); + if (ret) { + kfree(new_table); + return ERR_PTR(-ENOMEM); + } + + new_sg = new_table->sgl; + for_each_sgtable_sg(table, sg, i) { + sg_set_page(new_sg, sg_page(sg), sg->length, sg->offset); + new_sg = sg_next(new_sg); + } + + return new_table; +} + +static int system_heap_attach(struct dma_buf *dmabuf, + struct dma_buf_attachment *attachment) +{ + struct system_heap_buffer *buffer = dmabuf->priv; + struct dma_heap_attachment *a; + struct sg_table *table; + + a = kzalloc(sizeof(*a), GFP_KERNEL); + if (!a) + return -ENOMEM; + + table = dup_sg_table(&buffer->sg_table); + if (IS_ERR(table)) { + kfree(a); + return -ENOMEM; + } + + a->table = table; + a->dev = attachment->dev; + INIT_LIST_HEAD(&a->list); + + attachment->priv = a; + + mutex_lock(&buffer->lock); + list_add(&a->list, &buffer->attachments); + mutex_unlock(&buffer->lock); + + return 0; +} + +static void system_heap_detach(struct dma_buf *dmabuf, + struct dma_buf_attachment *attachment) +{ + struct system_heap_buffer *buffer = dmabuf->priv; + struct dma_heap_attachment *a = attachment->priv; + + mutex_lock(&buffer->lock); + list_del(&a->list); + mutex_unlock(&buffer->lock); + + sg_free_table(a->table); + kfree(a->table); + kfree(a); +} + +static struct sg_table *system_heap_map_dma_buf(struct dma_buf_attachment *attachment, + enum dma_data_direction direction) +{ + struct dma_heap_attachment *a = attachment->priv; + struct sg_table *table = a->table; + int ret; + + ret = dma_map_sgtable(attachment->dev, table, direction, 0); + if (ret) + return ERR_PTR(ret); + + return table; +} + +static void system_heap_unmap_dma_buf(struct dma_buf_attachment *attachment, + struct sg_table *table, + enum dma_data_direction direction) +{ + dma_unmap_sgtable(attachment->dev, table, direction, 0); +} + +static int system_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf, + enum dma_data_direction direction) +{ + struct system_heap_buffer *buffer = dmabuf->priv; + struct dma_heap_attachment *a; + + mutex_lock(&buffer->lock); + + if (buffer->vmap_cnt) + invalidate_kernel_vmap_range(buffer->map->vaddr, buffer->len); + + list_for_each_entry(a, &buffer->attachments, list) { + dma_sync_sgtable_for_cpu(a->dev, a->table, direction); + } + mutex_unlock(&buffer->lock); + + return 0; +} + +static int system_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf, + enum dma_data_direction direction) +{ + struct system_heap_buffer *buffer = dmabuf->priv; + struct dma_heap_attachment *a; + + mutex_lock(&buffer->lock); + + if (buffer->vmap_cnt) + flush_kernel_vmap_range(buffer->map->vaddr, buffer->len); - for (pg = 0; pg < buffer->pagecount; pg++) - __free_page(buffer->pages[pg]); - 
kfree(buffer->pages); + list_for_each_entry(a, &buffer->attachments, list) { + dma_sync_sgtable_for_device(a->dev, a->table, direction); + } + mutex_unlock(&buffer->lock); + + return 0; +} + +static int system_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma) +{ + struct system_heap_buffer *buffer = dmabuf->priv; + struct sg_table *table = &buffer->sg_table; + unsigned long addr = vma->vm_start; + struct sg_page_iter piter; + int ret; + + for_each_sgtable_page(table, &piter, vma->vm_pgoff) { + struct page *page = sg_page_iter_page(&piter); + + ret = remap_pfn_range(vma, addr, page_to_pfn(page), PAGE_SIZE, + vma->vm_page_prot); + if (ret) + return ret; + addr += PAGE_SIZE; + if (addr >= vma->vm_end) + return 0; + } + return 0; +} + +static int system_heap_do_vmap(struct system_heap_buffer *buffer, struct dma_buf_map *map) +{ + struct sg_table *table = &buffer->sg_table; + int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE; + struct page **pages = vmalloc(sizeof(struct page *) * npages); + struct page **tmp = pages; + struct sg_page_iter piter; + void *vaddr; + + if (!pages) + return -ENOMEM; + + for_each_sgtable_page(table, &piter, 0) { + WARN_ON(tmp - pages >= npages); + *tmp++ = sg_page_iter_page(&piter); + } + + vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL); + vfree(pages); + + if (!vaddr) + return -ENOMEM; + + dma_buf_map_set_vaddr(map, vaddr); + buffer->map = map; + + return 0; +} + +static int system_heap_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map) +{ + struct system_heap_buffer *buffer = dmabuf->priv; + int ret = 0; + + mutex_lock(&buffer->lock); + if (buffer->vmap_cnt) { + buffer->vmap_cnt++; + map = buffer->map; + goto out; + } + + ret = system_heap_do_vmap(buffer, map); + if (ret) + goto out; + + buffer->vmap_cnt++; +out: + mutex_unlock(&buffer->lock); + + return ret; +} + +static void system_heap_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map) +{ + struct system_heap_buffer *buffer = dmabuf->priv; + + mutex_lock(&buffer->lock); + if (!--buffer->vmap_cnt) { + vunmap(buffer->map->vaddr); + dma_buf_map_clear(buffer->map); + } + mutex_unlock(&buffer->lock); +} + +static void system_heap_dma_buf_release(struct dma_buf *dmabuf) +{ + struct system_heap_buffer *buffer = dmabuf->priv; + struct sg_table *table; + struct scatterlist *sg; + int i; + + table = &buffer->sg_table; + for_each_sgtable_sg(table, sg, i) + __free_page(sg_page(sg)); + sg_free_table(table); kfree(buffer); } +static const struct dma_buf_ops system_heap_buf_ops = { + .attach = system_heap_attach, + .detach = system_heap_detach, + .map_dma_buf = system_heap_map_dma_buf, + .unmap_dma_buf = system_heap_unmap_dma_buf, + .begin_cpu_access = system_heap_dma_buf_begin_cpu_access, + .end_cpu_access = system_heap_dma_buf_end_cpu_access, + .mmap = system_heap_mmap, + .vmap = system_heap_vmap, + .vunmap = system_heap_vunmap, + .release = system_heap_dma_buf_release, +}; + static int system_heap_allocate(struct dma_heap *heap, unsigned long len, unsigned long fd_flags, unsigned long heap_flags) { - struct heap_helper_buffer *helper_buffer; + struct system_heap_buffer *buffer; + DEFINE_DMA_BUF_EXPORT_INFO(exp_info); struct dma_buf *dmabuf; - int ret = -ENOMEM; + struct sg_table *table; + struct scatterlist *sg; + pgoff_t pagecount; pgoff_t pg; + int i, ret = -ENOMEM; - helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL); - if (!helper_buffer) + buffer = kzalloc(sizeof(*buffer), GFP_KERNEL); + if (!buffer) return -ENOMEM; - init_heap_helper_buffer(helper_buffer, system_heap_free); - 
helper_buffer->heap = heap; - helper_buffer->size = len; - - helper_buffer->pagecount = len / PAGE_SIZE; - helper_buffer->pages = kmalloc_array(helper_buffer->pagecount, - sizeof(*helper_buffer->pages), - GFP_KERNEL); - if (!helper_buffer->pages) { - ret = -ENOMEM; - goto err0; - } + INIT_LIST_HEAD(&buffer->attachments); + mutex_init(&buffer->lock); + buffer->heap = heap; + buffer->len = len; - for (pg = 0; pg < helper_buffer->pagecount; pg++) { + table = &buffer->sg_table; + pagecount = len / PAGE_SIZE; + if (sg_alloc_table(table, pagecount, GFP_KERNEL)) + goto free_buffer; + + sg = table->sgl; + for (pg = 0; pg < pagecount; pg++) { + struct page *page; /* * Avoid trying to allocate memory if the process - * has been killed by by SIGKILL + * has been killed by SIGKILL */ if (fatal_signal_pending(current)) - goto err1; - - helper_buffer->pages[pg] = alloc_page(GFP_KERNEL | __GFP_ZERO); - if (!helper_buffer->pages[pg]) - goto err1; + goto free_pages; + page = alloc_page(GFP_KERNEL | __GFP_ZERO); + if (!page) + goto free_pages; + sg_set_page(sg, page, page_size(page), 0); + sg = sg_next(sg); } /* create the dmabuf */ - dmabuf = heap_helper_export_dmabuf(helper_buffer, fd_flags); + exp_info.ops = &system_heap_buf_ops; + exp_info.size = buffer->len; + exp_info.flags = fd_flags; + exp_info.priv = buffer; + dmabuf = dma_buf_export(&exp_info); if (IS_ERR(dmabuf)) { ret = PTR_ERR(dmabuf); - goto err1; + goto free_pages; } - helper_buffer->dmabuf = dmabuf; - ret = dma_buf_fd(dmabuf, fd_flags); if (ret < 0) { dma_buf_put(dmabuf); @@ -90,12 +345,12 @@ static int system_heap_allocate(struct dma_heap *heap, return ret; -err1: - while (pg > 0) - __free_page(helper_buffer->pages[--pg]); - kfree(helper_buffer->pages); -err0: - kfree(helper_buffer); +free_pages: + for_each_sgtable_sg(table, sg, i) + __free_page(sg_page(sg)); + sg_free_table(table); +free_buffer: + kfree(buffer); return ret; } @@ -107,7 +362,6 @@ static const struct dma_heap_ops system_heap_ops = { static int system_heap_create(void) { struct dma_heap_export_info exp_info; - int ret = 0; exp_info.name = "system"; exp_info.ops = &system_heap_ops; @@ -115,9 +369,9 @@ static int system_heap_create(void) sys_heap = dma_heap_add(&exp_info); if (IS_ERR(sys_heap)) - ret = PTR_ERR(sys_heap); + return PTR_ERR(sys_heap); - return ret; + return 0; } module_init(system_heap_create); MODULE_LICENSE("GPL v2");
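For context, a buffer from this reworked heap is still obtained from user space through the dma-heap character device. A minimal sketch, assuming the standard <linux/dma-heap.h> uapi present since v5.6 (error handling abbreviated; not part of this patch):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dma-heap.h>

int main(void)
{
        struct dma_heap_allocation_data data;
        int heap_fd = open("/dev/dma_heap/system", O_RDWR);

        if (heap_fd < 0)
                return 1;

        memset(&data, 0, sizeof(data));
        data.len = 4 * 4096;                    /* request four pages */
        data.fd_flags = O_RDWR | O_CLOEXEC;

        if (ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data) < 0)
                return 1;

        printf("dma-buf fd: %u\n", data.fd);    /* fd now backed by an sg_table */
        close(data.fd);
        close(heap_fd);
        return 0;
}

The returned fd is the dma-buf whose attach/map/mmap/vmap paths are implemented by the ops added above.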
From patchwork Sat Nov 21 04:49:52 2020
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 330511
From: John Stultz
To: lkml
Cc: John Stultz, Sumit Semwal, Liam Mark, Laura Abbott, Brian Starkey, Hridya Valsaraju, Suren Baghdasaryan, Sandeep Patil, Daniel Mentz, Chris Goldsworthy, Ørjan Eide, Robin Murphy, Ezequiel Garcia, Simon Ser, James Jones, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [PATCH v6 2/5] dma-buf: heaps: Move heap-helper logic into the cma_heap implementation
Date: Sat, 21 Nov 2020 04:49:52 +0000
Message-Id: <20201121044955.58215-3-john.stultz@linaro.org>
In-Reply-To: <20201121044955.58215-1-john.stultz@linaro.org>

Since the heap-helpers logic ended up not being as generic as hoped, move the heap-helpers dma_buf_ops implementations into the cma_heap directly. This will allow us to remove the heap_helpers code in a following patch.
Cc: Sumit Semwal Cc: Liam Mark Cc: Laura Abbott Cc: Brian Starkey Cc: Hridya Valsaraju Cc: Suren Baghdasaryan Cc: Sandeep Patil Cc: Daniel Mentz Cc: Chris Goldsworthy Cc: Ørjan Eide Cc: Robin Murphy Cc: Ezequiel Garcia Cc: Simon Ser Cc: James Jones Cc: linux-media@vger.kernel.org Cc: dri-devel@lists.freedesktop.org Reviewed-by: Brian Starkey Signed-off-by: John Stultz --- v2: * Fix unused return value and locking issue Reported-by: kernel test robot Julia Lawall * Make cma_heap_buf_ops static suggested by kernel test robot * Fix uninitialized return in cma Reported-by: kernel test robot * Minor cleanups v3: * Use the new sgtable mapping functions, as Suggested-by: Daniel Mentz v4: * Spelling fix suggested by BrianS v6: * Fixups against drm-misc-next --- drivers/dma-buf/heaps/cma_heap.c | 315 ++++++++++++++++++++++++++----- 1 file changed, 266 insertions(+), 49 deletions(-) diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c index e55384dc115b..52459f3e60e2 100644 --- a/drivers/dma-buf/heaps/cma_heap.c +++ b/drivers/dma-buf/heaps/cma_heap.c @@ -2,76 +2,291 @@ /* * DMABUF CMA heap exporter * - * Copyright (C) 2012, 2019 Linaro Ltd. + * Copyright (C) 2012, 2019, 2020 Linaro Ltd. * Author: for ST-Ericsson. + * + * Also utilizing parts of Andrew Davis' SRAM heap: + * Copyright (C) 2019 Texas Instruments Incorporated - http://www.ti.com/ + * Andrew F. Davis */ - #include -#include #include #include #include #include -#include #include +#include +#include #include -#include #include -#include +#include -#include "heap-helpers.h" struct cma_heap { struct dma_heap *heap; struct cma *cma; }; -static void cma_heap_free(struct heap_helper_buffer *buffer) +struct cma_heap_buffer { + struct cma_heap *heap; + struct list_head attachments; + struct mutex lock; + unsigned long len; + struct page *cma_pages; + struct page **pages; + pgoff_t pagecount; + int vmap_cnt; + struct dma_buf_map *map; +}; + +struct dma_heap_attachment { + struct device *dev; + struct sg_table table; + struct list_head list; +}; + +static int cma_heap_attach(struct dma_buf *dmabuf, + struct dma_buf_attachment *attachment) { - struct cma_heap *cma_heap = dma_heap_get_drvdata(buffer->heap); - unsigned long nr_pages = buffer->pagecount; - struct page *cma_pages = buffer->priv_virt; + struct cma_heap_buffer *buffer = dmabuf->priv; + struct dma_heap_attachment *a; + int ret; - /* free page list */ - kfree(buffer->pages); - /* release memory */ - cma_release(cma_heap->cma, cma_pages, nr_pages); + a = kzalloc(sizeof(*a), GFP_KERNEL); + if (!a) + return -ENOMEM; + + ret = sg_alloc_table_from_pages(&a->table, buffer->pages, + buffer->pagecount, 0, + buffer->pagecount << PAGE_SHIFT, + GFP_KERNEL); + if (ret) { + kfree(a); + return ret; + } + + a->dev = attachment->dev; + INIT_LIST_HEAD(&a->list); + + attachment->priv = a; + + mutex_lock(&buffer->lock); + list_add(&a->list, &buffer->attachments); + mutex_unlock(&buffer->lock); + + return 0; +} + +static void cma_heap_detach(struct dma_buf *dmabuf, + struct dma_buf_attachment *attachment) +{ + struct cma_heap_buffer *buffer = dmabuf->priv; + struct dma_heap_attachment *a = attachment->priv; + + mutex_lock(&buffer->lock); + list_del(&a->list); + mutex_unlock(&buffer->lock); + + sg_free_table(&a->table); + kfree(a); +} + +static struct sg_table *cma_heap_map_dma_buf(struct dma_buf_attachment *attachment, + enum dma_data_direction direction) +{ + struct dma_heap_attachment *a = attachment->priv; + struct sg_table *table = &a->table; + int ret; + + ret = 
dma_map_sgtable(attachment->dev, table, direction, 0); + if (ret) + return ERR_PTR(-ENOMEM); + return table; +} + +static void cma_heap_unmap_dma_buf(struct dma_buf_attachment *attachment, + struct sg_table *table, + enum dma_data_direction direction) +{ + dma_unmap_sgtable(attachment->dev, table, direction, 0); +} + +static int cma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf, + enum dma_data_direction direction) +{ + struct cma_heap_buffer *buffer = dmabuf->priv; + struct dma_heap_attachment *a; + + if (buffer->vmap_cnt) + invalidate_kernel_vmap_range(buffer->map->vaddr, buffer->len); + + mutex_lock(&buffer->lock); + list_for_each_entry(a, &buffer->attachments, list) { + dma_sync_sgtable_for_cpu(a->dev, &a->table, direction); + } + mutex_unlock(&buffer->lock); + + return 0; +} + +static int cma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf, + enum dma_data_direction direction) +{ + struct cma_heap_buffer *buffer = dmabuf->priv; + struct dma_heap_attachment *a; + + if (buffer->vmap_cnt) + flush_kernel_vmap_range(buffer->map->vaddr, buffer->len); + + mutex_lock(&buffer->lock); + list_for_each_entry(a, &buffer->attachments, list) { + dma_sync_sgtable_for_device(a->dev, &a->table, direction); + } + mutex_unlock(&buffer->lock); + + return 0; +} + +static vm_fault_t cma_heap_vm_fault(struct vm_fault *vmf) +{ + struct vm_area_struct *vma = vmf->vma; + struct cma_heap_buffer *buffer = vma->vm_private_data; + + if (vmf->pgoff > buffer->pagecount) + return VM_FAULT_SIGBUS; + + vmf->page = buffer->pages[vmf->pgoff]; + get_page(vmf->page); + + return 0; +} + +static const struct vm_operations_struct dma_heap_vm_ops = { + .fault = cma_heap_vm_fault, +}; + +static int cma_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma) +{ + struct cma_heap_buffer *buffer = dmabuf->priv; + + if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0) + return -EINVAL; + + vma->vm_ops = &dma_heap_vm_ops; + vma->vm_private_data = buffer; + + return 0; +} + +static int cma_heap_do_vmap(struct cma_heap_buffer *buffer, struct dma_buf_map *map) +{ + void *vaddr; + + vaddr = vmap(buffer->pages, buffer->pagecount, VM_MAP, PAGE_KERNEL); + if (!vaddr) + return -ENOMEM; + + dma_buf_map_set_vaddr(map, vaddr); + buffer->map = map; + + return 0; +} + +static int cma_heap_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map) +{ + struct cma_heap_buffer *buffer = dmabuf->priv; + int ret = 0; + + mutex_lock(&buffer->lock); + if (buffer->vmap_cnt) { + buffer->vmap_cnt++; + goto out; + } + + ret = cma_heap_do_vmap(buffer, map); + if (ret) + goto out; + + buffer->vmap_cnt++; +out: + mutex_unlock(&buffer->lock); + + return ret; +} + +static void cma_heap_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map) +{ + struct cma_heap_buffer *buffer = dmabuf->priv; + + mutex_lock(&buffer->lock); + if (!--buffer->vmap_cnt) { + vunmap(buffer->map->vaddr); + dma_buf_map_clear(buffer->map); + } + mutex_unlock(&buffer->lock); +} + +static void cma_heap_dma_buf_release(struct dma_buf *dmabuf) +{ + struct cma_heap_buffer *buffer = dmabuf->priv; + struct cma_heap *cma_heap = buffer->heap; + + if (buffer->vmap_cnt > 0) { + WARN(1, "%s: buffer still mapped in the kernel\n", __func__); + vunmap(buffer->map->vaddr); + } + + cma_release(cma_heap->cma, buffer->cma_pages, buffer->pagecount); kfree(buffer); } -/* dmabuf heap CMA operations functions */ +static const struct dma_buf_ops cma_heap_buf_ops = { + .attach = cma_heap_attach, + .detach = cma_heap_detach, + .map_dma_buf = cma_heap_map_dma_buf, + .unmap_dma_buf = 
cma_heap_unmap_dma_buf, + .begin_cpu_access = cma_heap_dma_buf_begin_cpu_access, + .end_cpu_access = cma_heap_dma_buf_end_cpu_access, + .mmap = cma_heap_mmap, + .vmap = cma_heap_vmap, + .vunmap = cma_heap_vunmap, + .release = cma_heap_dma_buf_release, +}; + static int cma_heap_allocate(struct dma_heap *heap, - unsigned long len, - unsigned long fd_flags, - unsigned long heap_flags) + unsigned long len, + unsigned long fd_flags, + unsigned long heap_flags) { struct cma_heap *cma_heap = dma_heap_get_drvdata(heap); - struct heap_helper_buffer *helper_buffer; - struct page *cma_pages; + struct cma_heap_buffer *buffer; + DEFINE_DMA_BUF_EXPORT_INFO(exp_info); size_t size = PAGE_ALIGN(len); - unsigned long nr_pages = size >> PAGE_SHIFT; + pgoff_t pagecount = size >> PAGE_SHIFT; unsigned long align = get_order(size); + struct page *cma_pages; struct dma_buf *dmabuf; int ret = -ENOMEM; pgoff_t pg; - if (align > CONFIG_CMA_ALIGNMENT) - align = CONFIG_CMA_ALIGNMENT; - - helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL); - if (!helper_buffer) + buffer = kzalloc(sizeof(*buffer), GFP_KERNEL); + if (!buffer) return -ENOMEM; - init_heap_helper_buffer(helper_buffer, cma_heap_free); - helper_buffer->heap = heap; - helper_buffer->size = len; + INIT_LIST_HEAD(&buffer->attachments); + mutex_init(&buffer->lock); + buffer->len = size; + + if (align > CONFIG_CMA_ALIGNMENT) + align = CONFIG_CMA_ALIGNMENT; - cma_pages = cma_alloc(cma_heap->cma, nr_pages, align, false); + cma_pages = cma_alloc(cma_heap->cma, pagecount, align, false); if (!cma_pages) - goto free_buf; + goto free_buffer; + /* Clear the cma pages */ if (PageHighMem(cma_pages)) { - unsigned long nr_clear_pages = nr_pages; + unsigned long nr_clear_pages = pagecount; struct page *page = cma_pages; while (nr_clear_pages > 0) { @@ -85,7 +300,6 @@ static int cma_heap_allocate(struct dma_heap *heap, */ if (fatal_signal_pending(current)) goto free_cma; - page++; nr_clear_pages--; } @@ -93,28 +307,30 @@ static int cma_heap_allocate(struct dma_heap *heap, memset(page_address(cma_pages), 0, size); } - helper_buffer->pagecount = nr_pages; - helper_buffer->pages = kmalloc_array(helper_buffer->pagecount, - sizeof(*helper_buffer->pages), - GFP_KERNEL); - if (!helper_buffer->pages) { + buffer->pages = kmalloc_array(pagecount, sizeof(*buffer->pages), GFP_KERNEL); + if (!buffer->pages) { ret = -ENOMEM; goto free_cma; } - for (pg = 0; pg < helper_buffer->pagecount; pg++) - helper_buffer->pages[pg] = &cma_pages[pg]; + for (pg = 0; pg < pagecount; pg++) + buffer->pages[pg] = &cma_pages[pg]; + + buffer->cma_pages = cma_pages; + buffer->heap = cma_heap; + buffer->pagecount = pagecount; /* create the dmabuf */ - dmabuf = heap_helper_export_dmabuf(helper_buffer, fd_flags); + exp_info.ops = &cma_heap_buf_ops; + exp_info.size = buffer->len; + exp_info.flags = fd_flags; + exp_info.priv = buffer; + dmabuf = dma_buf_export(&exp_info); if (IS_ERR(dmabuf)) { ret = PTR_ERR(dmabuf); goto free_pages; } - helper_buffer->dmabuf = dmabuf; - helper_buffer->priv_virt = cma_pages; - ret = dma_buf_fd(dmabuf, fd_flags); if (ret < 0) { dma_buf_put(dmabuf); @@ -125,11 +341,12 @@ static int cma_heap_allocate(struct dma_heap *heap, return ret; free_pages: - kfree(helper_buffer->pages); + kfree(buffer->pages); free_cma: - cma_release(cma_heap->cma, cma_pages, nr_pages); -free_buf: - kfree(helper_buffer); + cma_release(cma_heap->cma, cma_pages, pagecount); +free_buffer: + kfree(buffer); + return ret; }
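Once exported, the fault-based mmap path above is exercised from user space simply by mapping the dma-buf fd. A minimal sketch (fill_dmabuf is a hypothetical helper, not part of this series); cma_heap_mmap() rejects private mappings, so MAP_SHARED is required, and the first touch of each page lands in cma_heap_vm_fault():

#include <string.h>
#include <sys/mman.h>

int fill_dmabuf(int buf_fd, size_t len)
{
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_SHARED, buf_fd, 0);

        if (p == MAP_FAILED)
                return -1;

        memset(p, 0xab, len);   /* touching each page triggers one fault */
        return munmap(p, len);
}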
From patchwork Sat Nov 21 04:49:53 2020
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 330163
From: John Stultz
To: lkml
Cc: John Stultz, Sumit Semwal, Liam Mark, Laura Abbott, Brian Starkey, Hridya Valsaraju
, Suren Baghdasaryan , Sandeep Patil , Daniel Mentz , Chris Goldsworthy , =?utf-8?q?=C3=98rjan_Eide?= , Robin Murphy , Ezequiel Garcia , Simon Ser , James Jones , linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org Subject: [PATCH v6 3/5] dma-buf: heaps: Remove heap-helpers code Date: Sat, 21 Nov 2020 04:49:53 +0000 Message-Id: <20201121044955.58215-4-john.stultz@linaro.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20201121044955.58215-1-john.stultz@linaro.org> References: <20201121044955.58215-1-john.stultz@linaro.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org The heap-helpers code was not as generic as initially hoped and it is now not being used, so remove it from the tree. Cc: Sumit Semwal Cc: Liam Mark Cc: Laura Abbott Cc: Brian Starkey Cc: Hridya Valsaraju Cc: Suren Baghdasaryan Cc: Sandeep Patil Cc: Daniel Mentz Cc: Chris Goldsworthy Cc: Ørjan Eide Cc: Robin Murphy Cc: Ezequiel Garcia Cc: Simon Ser Cc: James Jones Cc: linux-media@vger.kernel.org Cc: dri-devel@lists.freedesktop.org Reviewed-by: Brian Starkey Signed-off-by: John Stultz --- v6: Rebased onto drm-misc-next --- drivers/dma-buf/heaps/Makefile | 1 - drivers/dma-buf/heaps/heap-helpers.c | 274 --------------------------- drivers/dma-buf/heaps/heap-helpers.h | 53 ------ 3 files changed, 328 deletions(-) delete mode 100644 drivers/dma-buf/heaps/heap-helpers.c delete mode 100644 drivers/dma-buf/heaps/heap-helpers.h diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile index 6e54cdec3da0..974467791032 100644 --- a/drivers/dma-buf/heaps/Makefile +++ b/drivers/dma-buf/heaps/Makefile @@ -1,4 +1,3 @@ # SPDX-License-Identifier: GPL-2.0 -obj-y += heap-helpers.o obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o diff --git a/drivers/dma-buf/heaps/heap-helpers.c b/drivers/dma-buf/heaps/heap-helpers.c deleted file mode 100644 index fcf4ce3e2cbb..000000000000 --- a/drivers/dma-buf/heaps/heap-helpers.c +++ /dev/null @@ -1,274 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include "heap-helpers.h" - -void init_heap_helper_buffer(struct heap_helper_buffer *buffer, - void (*free)(struct heap_helper_buffer *)) -{ - buffer->priv_virt = NULL; - mutex_init(&buffer->lock); - buffer->vmap_cnt = 0; - buffer->vaddr = NULL; - buffer->pagecount = 0; - buffer->pages = NULL; - INIT_LIST_HEAD(&buffer->attachments); - buffer->free = free; -} - -struct dma_buf *heap_helper_export_dmabuf(struct heap_helper_buffer *buffer, - int fd_flags) -{ - DEFINE_DMA_BUF_EXPORT_INFO(exp_info); - - exp_info.ops = &heap_helper_ops; - exp_info.size = buffer->size; - exp_info.flags = fd_flags; - exp_info.priv = buffer; - - return dma_buf_export(&exp_info); -} - -static void *dma_heap_map_kernel(struct heap_helper_buffer *buffer) -{ - void *vaddr; - - vaddr = vmap(buffer->pages, buffer->pagecount, VM_MAP, PAGE_KERNEL); - if (!vaddr) - return ERR_PTR(-ENOMEM); - - return vaddr; -} - -static void dma_heap_buffer_destroy(struct heap_helper_buffer *buffer) -{ - if (buffer->vmap_cnt > 0) { - WARN(1, "%s: buffer still mapped in the kernel\n", __func__); - vunmap(buffer->vaddr); - } - - buffer->free(buffer); -} - -static void *dma_heap_buffer_vmap_get(struct heap_helper_buffer *buffer) -{ - void *vaddr; - - if (buffer->vmap_cnt) { - buffer->vmap_cnt++; - return buffer->vaddr; - } - vaddr = dma_heap_map_kernel(buffer); - if (IS_ERR(vaddr)) - return vaddr; - 
buffer->vaddr = vaddr; - buffer->vmap_cnt++; - return vaddr; -} - -static void dma_heap_buffer_vmap_put(struct heap_helper_buffer *buffer) -{ - if (!--buffer->vmap_cnt) { - vunmap(buffer->vaddr); - buffer->vaddr = NULL; - } -} - -struct dma_heaps_attachment { - struct device *dev; - struct sg_table table; - struct list_head list; -}; - -static int dma_heap_attach(struct dma_buf *dmabuf, - struct dma_buf_attachment *attachment) -{ - struct dma_heaps_attachment *a; - struct heap_helper_buffer *buffer = dmabuf->priv; - int ret; - - a = kzalloc(sizeof(*a), GFP_KERNEL); - if (!a) - return -ENOMEM; - - ret = sg_alloc_table_from_pages(&a->table, buffer->pages, - buffer->pagecount, 0, - buffer->pagecount << PAGE_SHIFT, - GFP_KERNEL); - if (ret) { - kfree(a); - return ret; - } - - a->dev = attachment->dev; - INIT_LIST_HEAD(&a->list); - - attachment->priv = a; - - mutex_lock(&buffer->lock); - list_add(&a->list, &buffer->attachments); - mutex_unlock(&buffer->lock); - - return 0; -} - -static void dma_heap_detach(struct dma_buf *dmabuf, - struct dma_buf_attachment *attachment) -{ - struct dma_heaps_attachment *a = attachment->priv; - struct heap_helper_buffer *buffer = dmabuf->priv; - - mutex_lock(&buffer->lock); - list_del(&a->list); - mutex_unlock(&buffer->lock); - - sg_free_table(&a->table); - kfree(a); -} - -static -struct sg_table *dma_heap_map_dma_buf(struct dma_buf_attachment *attachment, - enum dma_data_direction direction) -{ - struct dma_heaps_attachment *a = attachment->priv; - struct sg_table *table = &a->table; - int ret; - - ret = dma_map_sgtable(attachment->dev, table, direction, 0); - if (ret) - table = ERR_PTR(ret); - return table; -} - -static void dma_heap_unmap_dma_buf(struct dma_buf_attachment *attachment, - struct sg_table *table, - enum dma_data_direction direction) -{ - dma_unmap_sgtable(attachment->dev, table, direction, 0); -} - -static vm_fault_t dma_heap_vm_fault(struct vm_fault *vmf) -{ - struct vm_area_struct *vma = vmf->vma; - struct heap_helper_buffer *buffer = vma->vm_private_data; - - if (vmf->pgoff > buffer->pagecount) - return VM_FAULT_SIGBUS; - - vmf->page = buffer->pages[vmf->pgoff]; - get_page(vmf->page); - - return 0; -} - -static const struct vm_operations_struct dma_heap_vm_ops = { - .fault = dma_heap_vm_fault, -}; - -static int dma_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma) -{ - struct heap_helper_buffer *buffer = dmabuf->priv; - - if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0) - return -EINVAL; - - vma->vm_ops = &dma_heap_vm_ops; - vma->vm_private_data = buffer; - - return 0; -} - -static void dma_heap_dma_buf_release(struct dma_buf *dmabuf) -{ - struct heap_helper_buffer *buffer = dmabuf->priv; - - dma_heap_buffer_destroy(buffer); -} - -static int dma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf, - enum dma_data_direction direction) -{ - struct heap_helper_buffer *buffer = dmabuf->priv; - struct dma_heaps_attachment *a; - int ret = 0; - - mutex_lock(&buffer->lock); - - if (buffer->vmap_cnt) - invalidate_kernel_vmap_range(buffer->vaddr, buffer->size); - - list_for_each_entry(a, &buffer->attachments, list) { - dma_sync_sg_for_cpu(a->dev, a->table.sgl, a->table.nents, - direction); - } - mutex_unlock(&buffer->lock); - - return ret; -} - -static int dma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf, - enum dma_data_direction direction) -{ - struct heap_helper_buffer *buffer = dmabuf->priv; - struct dma_heaps_attachment *a; - - mutex_lock(&buffer->lock); - - if (buffer->vmap_cnt) - flush_kernel_vmap_range(buffer->vaddr, 
buffer->size); - - list_for_each_entry(a, &buffer->attachments, list) { - dma_sync_sg_for_device(a->dev, a->table.sgl, a->table.nents, - direction); - } - mutex_unlock(&buffer->lock); - - return 0; -} - -static int dma_heap_dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map) -{ - struct heap_helper_buffer *buffer = dmabuf->priv; - void *vaddr; - - mutex_lock(&buffer->lock); - vaddr = dma_heap_buffer_vmap_get(buffer); - mutex_unlock(&buffer->lock); - - if (!vaddr) - return -ENOMEM; - dma_buf_map_set_vaddr(map, vaddr); - - return 0; -} - -static void dma_heap_dma_buf_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map) -{ - struct heap_helper_buffer *buffer = dmabuf->priv; - - mutex_lock(&buffer->lock); - dma_heap_buffer_vmap_put(buffer); - mutex_unlock(&buffer->lock); -} - -const struct dma_buf_ops heap_helper_ops = { - .map_dma_buf = dma_heap_map_dma_buf, - .unmap_dma_buf = dma_heap_unmap_dma_buf, - .mmap = dma_heap_mmap, - .release = dma_heap_dma_buf_release, - .attach = dma_heap_attach, - .detach = dma_heap_detach, - .begin_cpu_access = dma_heap_dma_buf_begin_cpu_access, - .end_cpu_access = dma_heap_dma_buf_end_cpu_access, - .vmap = dma_heap_dma_buf_vmap, - .vunmap = dma_heap_dma_buf_vunmap, -}; diff --git a/drivers/dma-buf/heaps/heap-helpers.h b/drivers/dma-buf/heaps/heap-helpers.h deleted file mode 100644 index 805d2df88024..000000000000 --- a/drivers/dma-buf/heaps/heap-helpers.h +++ /dev/null @@ -1,53 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* - * DMABUF Heaps helper code - * - * Copyright (C) 2011 Google, Inc. - * Copyright (C) 2019 Linaro Ltd. - */ - -#ifndef _HEAP_HELPERS_H -#define _HEAP_HELPERS_H - -#include -#include - -/** - * struct heap_helper_buffer - helper buffer metadata - * @heap: back pointer to the heap the buffer came from - * @dmabuf: backing dma-buf for this buffer - * @size: size of the buffer - * @priv_virt pointer to heap specific private value - * @lock mutext to protect the data in this structure - * @vmap_cnt count of vmap references on the buffer - * @vaddr vmap'ed virtual address - * @pagecount number of pages in the buffer - * @pages list of page pointers - * @attachments list of device attachments - * - * @free heap callback to free the buffer - */ -struct heap_helper_buffer { - struct dma_heap *heap; - struct dma_buf *dmabuf; - size_t size; - - void *priv_virt; - struct mutex lock; - int vmap_cnt; - void *vaddr; - pgoff_t pagecount; - struct page **pages; - struct list_head attachments; - - void (*free)(struct heap_helper_buffer *buffer); -}; - -void init_heap_helper_buffer(struct heap_helper_buffer *buffer, - void (*free)(struct heap_helper_buffer *)); - -struct dma_buf *heap_helper_export_dmabuf(struct heap_helper_buffer *buffer, - int fd_flags); - -extern const struct dma_buf_ops heap_helper_ops; -#endif /* _HEAP_HELPERS_H */
From patchwork Sat Nov 21 04:49:54 2020
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 330164
From: John Stultz
To: lkml
Cc: John Stultz, Sumit Semwal, Liam Mark, Laura Abbott, Brian Starkey, Hridya Valsaraju, Suren Baghdasaryan, Sandeep Patil, Daniel Mentz, Chris Goldsworthy, Ørjan Eide, Robin Murphy, Ezequiel Garcia, Simon Ser, James Jones, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [PATCH v6 4/5] dma-buf: heaps: Skip sync if not mapped
Date: Sat, 21 Nov 2020 04:49:54 +0000
Message-Id: <20201121044955.58215-5-john.stultz@linaro.org>
In-Reply-To: <20201121044955.58215-1-john.stultz@linaro.org>
This patch is basically a port of Ørjan Eide's similar patch for ION: https://lore.kernel.org/lkml/20200414134629.54567-1-orjan.eide@arm.com/

Only sync the sg-list of a dma-buf heap attachment when the attachment is actually mapped on the device.

dma-bufs may be synced at any time. A sync can be triggered from user space via DMA_BUF_IOCTL_SYNC, so there are no guarantees from callers on when syncs may be attempted, and dma_buf_end_cpu_access() and dma_buf_begin_cpu_access() may not be paired.

Since the sg_list's dma_address isn't set up until the buffer is used on the device and dma_map_sg() is called on it, the dma_address will be NULL if sync is attempted on the dma-buf before it's mapped on a device.

Before v5.0 (commit 55897af63091 ("dma-direct: merge swiotlb_dma_ops into the dma_direct code")) this was a problem, as the dma-api (at least the swiotlb_dma_ops on arm64) would use the potentially invalid dma_address. How that failed depended on how the device handled physical address 0: if 0 was a valid address to physical ram, that page would get flushed a lot while the actual pages in the buffer would not get synced correctly, whereas if 0 was an invalid physical address it could cause a fault and trigger a crash.

In v5.0 this was incidentally fixed by commit 55897af63091 ("dma-direct: merge swiotlb_dma_ops into the dma_direct code"), as this moved the dma-api to use the page pointer in the sg_list, and (for Ion buffers at least) this will always be valid if the sg_list exists at all. But the issue was re-introduced in v5.3 by commit 449fa54d6815 ("dma-direct: correct the physical addr in dma_direct_sync_sg_for_cpu/device"), which moved the dma-api back to the old behaviour of picking up the dma_address, which may be invalid.

The dma-buf core doesn't ensure that the buffer is mapped on the device, and thus has a valid sg_list, before calling the exporter's begin_cpu_access.
Logic and commit message originally by: Ørjan Eide Cc: Sumit Semwal Cc: Liam Mark Cc: Laura Abbott Cc: Brian Starkey Cc: Hridya Valsaraju Cc: Suren Baghdasaryan Cc: Sandeep Patil Cc: Daniel Mentz Cc: Chris Goldsworthy Cc: Ørjan Eide Cc: Robin Murphy Cc: Ezequiel Garcia Cc: Simon Ser Cc: James Jones Cc: linux-media@vger.kernel.org Cc: dri-devel@lists.freedesktop.org Reviewed-by: Brian Starkey Signed-off-by: John Stultz --- drivers/dma-buf/heaps/cma_heap.c | 10 ++++++++++ drivers/dma-buf/heaps/system_heap.c | 10 ++++++++++ 2 files changed, 20 insertions(+) diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c index 52459f3e60e2..2b468767a2af 100644 --- a/drivers/dma-buf/heaps/cma_heap.c +++ b/drivers/dma-buf/heaps/cma_heap.c @@ -43,6 +43,7 @@ struct dma_heap_attachment { struct device *dev; struct sg_table table; struct list_head list; + bool mapped; }; static int cma_heap_attach(struct dma_buf *dmabuf, @@ -67,6 +68,7 @@ static int cma_heap_attach(struct dma_buf *dmabuf, a->dev = attachment->dev; INIT_LIST_HEAD(&a->list); + a->mapped = false; attachment->priv = a; @@ -101,6 +103,7 @@ static struct sg_table *cma_heap_map_dma_buf(struct dma_buf_attachment *attachme ret = dma_map_sgtable(attachment->dev, table, direction, 0); if (ret) return ERR_PTR(-ENOMEM); + a->mapped = true; return table; } @@ -108,6 +111,9 @@ static void cma_heap_unmap_dma_buf(struct dma_buf_attachment *attachment, struct sg_table *table, enum dma_data_direction direction) { + struct dma_heap_attachment *a = attachment->priv; + + a->mapped = false; dma_unmap_sgtable(attachment->dev, table, direction, 0); } @@ -122,6 +128,8 @@ static int cma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf, mutex_lock(&buffer->lock); list_for_each_entry(a, &buffer->attachments, list) { + if (!a->mapped) + continue; dma_sync_sgtable_for_cpu(a->dev, &a->table, direction); } mutex_unlock(&buffer->lock); @@ -140,6 +148,8 @@ static int cma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf, mutex_lock(&buffer->lock); list_for_each_entry(a, &buffer->attachments, list) { + if (!a->mapped) + continue; dma_sync_sgtable_for_device(a->dev, &a->table, direction); } mutex_unlock(&buffer->lock); diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c index e5f9f964b910..b1a7b355132f 100644 --- a/drivers/dma-buf/heaps/system_heap.c +++ b/drivers/dma-buf/heaps/system_heap.c @@ -37,6 +37,7 @@ struct dma_heap_attachment { struct device *dev; struct sg_table *table; struct list_head list; + bool mapped; }; static struct sg_table *dup_sg_table(struct sg_table *table) @@ -84,6 +85,7 @@ static int system_heap_attach(struct dma_buf *dmabuf, a->table = table; a->dev = attachment->dev; INIT_LIST_HEAD(&a->list); + a->mapped = false; attachment->priv = a; @@ -120,6 +122,7 @@ static struct sg_table *system_heap_map_dma_buf(struct dma_buf_attachment *attac if (ret) return ERR_PTR(ret); + a->mapped = true; return table; } @@ -127,6 +130,9 @@ static void system_heap_unmap_dma_buf(struct dma_buf_attachment *attachment, struct sg_table *table, enum dma_data_direction direction) { + struct dma_heap_attachment *a = attachment->priv; + + a->mapped = false; dma_unmap_sgtable(attachment->dev, table, direction, 0); } @@ -142,6 +148,8 @@ static int system_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf, invalidate_kernel_vmap_range(buffer->map->vaddr, buffer->len); list_for_each_entry(a, &buffer->attachments, list) { + if (!a->mapped) + continue; dma_sync_sgtable_for_cpu(a->dev, a->table, direction); } 
mutex_unlock(&buffer->lock); @@ -161,6 +169,8 @@ static int system_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf, flush_kernel_vmap_range(buffer->map->vaddr, buffer->len); list_for_each_entry(a, &buffer->attachments, list) { + if (!a->mapped) + continue; dma_sync_sgtable_for_device(a->dev, a->table, direction); } mutex_unlock(&buffer->lock);
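For reference, the unsolicited user-space syncs that this patch has to tolerate are issued through the dma-buf sync ioctl. A minimal sketch of the begin/end bracket, assuming the standard <linux/dma-buf.h> uapi (begin_cpu/end_cpu are hypothetical helper names, not part of this series):

#include <sys/ioctl.h>
#include <linux/dma-buf.h>

int begin_cpu(int buf_fd)
{
        /* may arrive before the buffer is ever mapped on a device */
        struct dma_buf_sync sync = { .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW };

        return ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync);
}

int end_cpu(int buf_fd)
{
        struct dma_buf_sync sync = { .flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW };

        return ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync);
}

With the "mapped" check in place, these calls only sync attachments whose sg-lists are currently mapped on a device, so a sync ahead of dma_buf_map_attachment() is safely skipped.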
From patchwork Sat Nov 21 04:49:55 2020
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 330510
From: John Stultz
To: lkml
Cc: John Stultz, Sumit Semwal, Liam Mark, Laura Abbott, Brian Starkey, Hridya Valsaraju, Suren Baghdasaryan, Sandeep Patil, Daniel Mentz, Chris Goldsworthy, Ørjan Eide, Robin Murphy, Ezequiel Garcia, Simon Ser, James Jones, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [PATCH v6 5/5] dma-buf: system_heap: Allocate higher order pages if available
Date: Sat, 21 Nov 2020 04:49:55 +0000
Message-Id: <20201121044955.58215-6-john.stultz@linaro.org>
In-Reply-To: <20201121044955.58215-1-john.stultz@linaro.org>

While the system heap can return non-contiguous pages, try to allocate larger order pages if possible. This will allow slight performance gains and make implementing page pooling easier.

Cc: Sumit Semwal
Cc: Liam Mark
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Chris Goldsworthy
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Reviewed-by: Brian Starkey
Signed-off-by: John Stultz
---
v3:
* Use page_size() rather than open-coding it
v5:
* Add comment explaining order size rationale
---
 drivers/dma-buf/heaps/system_heap.c | 89 +++++++++++++++++++++++------
 1 file changed, 71 insertions(+), 18 deletions(-)

diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c index b1a7b355132f..de275b7ff1ed 100644 --- a/drivers/dma-buf/heaps/system_heap.c +++ b/drivers/dma-buf/heaps/system_heap.c @@ -40,6 +40,20 @@ struct dma_heap_attachment { bool mapped; }; +#define HIGH_ORDER_GFP (((GFP_HIGHUSER | __GFP_ZERO | __GFP_NOWARN \ + | __GFP_NORETRY) & ~__GFP_RECLAIM) \ + | __GFP_COMP) +#define LOW_ORDER_GFP (GFP_HIGHUSER | __GFP_ZERO | __GFP_COMP) +static gfp_t order_flags[] = {HIGH_ORDER_GFP, LOW_ORDER_GFP, LOW_ORDER_GFP}; +/* + * The selection of the orders used for allocation (1MB, 64K, 4K) is designed + * to match with the sizes often found in IOMMUs. Using order 4 pages instead + * of order 0 pages can significantly improve the performance of many IOMMUs + * by reducing TLB pressure and time spent updating page tables.
+ */ +static const unsigned int orders[] = {8, 4, 0}; +#define NUM_ORDERS ARRAY_SIZE(orders) + static struct sg_table *dup_sg_table(struct sg_table *table) { struct sg_table *new_table; @@ -272,8 +286,11 @@ static void system_heap_dma_buf_release(struct dma_buf *dmabuf) int i; table = &buffer->sg_table; - for_each_sgtable_sg(table, sg, i) - __free_page(sg_page(sg)); + for_each_sg(table->sgl, sg, table->nents, i) { + struct page *page = sg_page(sg); + + __free_pages(page, compound_order(page)); + } sg_free_table(table); kfree(buffer); } @@ -291,6 +308,26 @@ static const struct dma_buf_ops system_heap_buf_ops = { .release = system_heap_dma_buf_release, }; +static struct page *alloc_largest_available(unsigned long size, + unsigned int max_order) +{ + struct page *page; + int i; + + for (i = 0; i < NUM_ORDERS; i++) { + if (size < (PAGE_SIZE << orders[i])) + continue; + if (max_order < orders[i]) + continue; + + page = alloc_pages(order_flags[i], orders[i]); + if (!page) + continue; + return page; + } + return NULL; +} + static int system_heap_allocate(struct dma_heap *heap, unsigned long len, unsigned long fd_flags, @@ -298,11 +335,13 @@ static int system_heap_allocate(struct dma_heap *heap, { struct system_heap_buffer *buffer; DEFINE_DMA_BUF_EXPORT_INFO(exp_info); + unsigned long size_remaining = len; + unsigned int max_order = orders[0]; struct dma_buf *dmabuf; struct sg_table *table; struct scatterlist *sg; - pgoff_t pagecount; - pgoff_t pg; + struct list_head pages; + struct page *page, *tmp_page; int i, ret = -ENOMEM; buffer = kzalloc(sizeof(*buffer), GFP_KERNEL); @@ -314,25 +353,35 @@ static int system_heap_allocate(struct dma_heap *heap, buffer->heap = heap; buffer->len = len; - table = &buffer->sg_table; - pagecount = len / PAGE_SIZE; - if (sg_alloc_table(table, pagecount, GFP_KERNEL)) - goto free_buffer; - - sg = table->sgl; - for (pg = 0; pg < pagecount; pg++) { - struct page *page; + INIT_LIST_HEAD(&pages); + i = 0; + while (size_remaining > 0) { /* * Avoid trying to allocate memory if the process * has been killed by SIGKILL */ if (fatal_signal_pending(current)) - goto free_pages; - page = alloc_page(GFP_KERNEL | __GFP_ZERO); + goto free_buffer; + + page = alloc_largest_available(size_remaining, max_order); if (!page) - goto free_pages; + goto free_buffer; + + list_add_tail(&page->lru, &pages); + size_remaining -= page_size(page); + max_order = compound_order(page); + i++; + } + + table = &buffer->sg_table; + if (sg_alloc_table(table, i, GFP_KERNEL)) + goto free_buffer; + + sg = table->sgl; + list_for_each_entry_safe(page, tmp_page, &pages, lru) { sg_set_page(sg, page, page_size(page), 0); sg = sg_next(sg); + list_del(&page->lru); } /* create the dmabuf */ @@ -352,14 +401,18 @@ static int system_heap_allocate(struct dma_heap *heap, /* just return, as put will call release and that will free */ return ret; } - return ret; free_pages: - for_each_sgtable_sg(table, sg, i) - __free_page(sg_page(sg)); + for_each_sgtable_sg(table, sg, i) { + struct page *p = sg_page(sg); + + __free_pages(p, compound_order(p)); + } sg_free_table(table); free_buffer: + list_for_each_entry_safe(page, tmp_page, &pages, lru) + __free_pages(page, compound_order(page)); kfree(buffer); return ret;
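To make the order walk concrete: with 4KB pages, orders {8, 4, 0} correspond to 1MB, 64KB and 4KB chunks. An illustrative user-space-only sketch of how alloc_largest_available() would decompose a request if every allocation succeeded (it mirrors the loop above but is not kernel code):

#include <stdio.h>

#define PAGE_SZ 4096UL

int main(void)
{
        static const unsigned int orders[] = {8, 4, 0};
        unsigned long remaining = (3UL << 20) + (144UL << 10);  /* 3MB + 144KB */

        while (remaining > 0) {
                for (int i = 0; i < 3; i++) {
                        /* greedily take the largest order that still fits */
                        unsigned long chunk = PAGE_SZ << orders[i];

                        if (remaining < chunk)
                                continue;
                        printf("order %u chunk (%lu KB)\n", orders[i], chunk >> 10);
                        remaining -= chunk;
                        break;
                }
        }
        return 0;
}

This yields three order-8 chunks (1MB each), two order-4 chunks (64KB each) and four order-0 pages, so a 3MB + 144KB request needs only nine scatterlist entries instead of 804 single pages.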