From patchwork Wed Jun 12 19:58:26 2013
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 17861
From: John Stultz <john.stultz@linaro.org>
To: Rebecca Schultz Zavin
Cc: John Stultz, Serban Constantinescu, Arnd Bergmann, Jesse Barker
Subject: [PATCH 4/4] ion: Add chunk heap
Date: Wed, 12 Jun 2013 12:58:26 -0700
Message-Id: <1371067106-19129-5-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.1.2
In-Reply-To: <1371067106-19129-1-git-send-email-john.stultz@linaro.org>
References: <1371067106-19129-1-git-send-email-john.stultz@linaro.org>

This patch adds support for a chunk heap, which allows buffers to be
made up of a list of fixed-size chunks taken from a carveout. The chunk
size is configured when the heap is created, by passing it in the priv
field of the heap platform data.

XXX: This needs much more rationale and justification.

This work was originally by Rebecca Schultz Zavin and contains fixes
and improvements by:
  Arve Hjønnevåg
  Benjamin Gaignard
  Colin Cross
  Dima Zavin
  Greg Hackmann
  Jesse Barker
  Johan Mossberg
  JP Abgrall
  KyongHo Cho
  Laura Abbott
  Olav Haugan

Cc: Rebecca Schultz Zavin
Cc: Serban Constantinescu
Cc: Arnd Bergmann
Cc: Jesse Barker
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 drivers/gpu/ion/Makefile         |   2 +-
 drivers/gpu/ion/ion_chunk_heap.c | 207 +++++++++++++++++++++++++++++++++++++++
 drivers/gpu/ion/ion_heap.c       |   6 ++
 include/linux/ion.h              |   1 +
 4 files changed, 215 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/ion/ion_chunk_heap.c
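As a usage sketch (illustrative, not part of this patch): a board file
would describe a chunk heap along the lines below, passing the chunk
size through the priv field. The heap id, name, base, and size are
placeholder values, and the field names assume the ion_platform_heap
layout this series builds on.

	static struct ion_platform_heap example_chunk_heap = {
		.type = ION_HEAP_TYPE_CHUNK,
		.id   = 1,               /* placeholder heap id */
		.name = "chunk",
		.base = 0x40000000,      /* placeholder carveout base */
		.size = SZ_16M,          /* placeholder carveout size */
		.priv = (void *)SZ_64K,  /* chunk size for this heap */
	};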
diff --git a/drivers/gpu/ion/Makefile b/drivers/gpu/ion/Makefile
index 988f8c6..62f5a1d 100644
--- a/drivers/gpu/ion/Makefile
+++ b/drivers/gpu/ion/Makefile
@@ -1,3 +1,3 @@
 obj-$(CONFIG_ION) +=	ion.o ion_heap.o ion_page_pool.o ion_system_heap.o \
-			ion_carveout_heap.o
+			ion_carveout_heap.o ion_chunk_heap.o
diff --git a/drivers/gpu/ion/ion_chunk_heap.c b/drivers/gpu/ion/ion_chunk_heap.c
new file mode 100644
index 0000000..22313c0
--- /dev/null
+++ b/drivers/gpu/ion/ion_chunk_heap.c
@@ -0,0 +1,207 @@
+/*
+ * drivers/gpu/ion/ion_chunk_heap.c
+ *
+ * Copyright (C) 2012 Google, Inc.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ */
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/genalloc.h>
+#include <linux/io.h>
+#include <linux/ion.h>
+#include <linux/mm.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include "ion_priv.h"
+
+/* #include */
+
+struct ion_chunk_heap {
+	struct ion_heap heap;
+	struct gen_pool *pool;
+	ion_phys_addr_t base;
+	unsigned long chunk_size;
+	unsigned long size;
+	unsigned long allocated;
+};
+
+static int ion_chunk_heap_allocate(struct ion_heap *heap,
+				   struct ion_buffer *buffer,
+				   unsigned long size, unsigned long align,
+				   unsigned long flags)
+{
+	struct ion_chunk_heap *chunk_heap =
+		container_of(heap, struct ion_chunk_heap, heap);
+	struct sg_table *table;
+	struct scatterlist *sg;
+	int ret, i;
+	unsigned long num_chunks;
+
+	if (ion_buffer_fault_user_mappings(buffer))
+		return -ENOMEM;
+
+	num_chunks = ALIGN(size, chunk_heap->chunk_size) /
+		chunk_heap->chunk_size;
+	buffer->size = num_chunks * chunk_heap->chunk_size;
+
+	if (buffer->size > chunk_heap->size - chunk_heap->allocated)
+		return -ENOMEM;
+
+	table = kzalloc(sizeof(struct sg_table), GFP_KERNEL);
+	if (!table)
+		return -ENOMEM;
+	ret = sg_alloc_table(table, num_chunks, GFP_KERNEL);
+	if (ret) {
+		kfree(table);
+		return ret;
+	}
+
+	sg = table->sgl;
+	for (i = 0; i < num_chunks; i++) {
+		unsigned long paddr = gen_pool_alloc(chunk_heap->pool,
+						     chunk_heap->chunk_size);
+		if (!paddr)
+			goto err;
+		sg_set_page(sg, phys_to_page(paddr), chunk_heap->chunk_size, 0);
+		sg = sg_next(sg);
+	}
+
+	buffer->priv_virt = table;
+	chunk_heap->allocated += buffer->size;
+	return 0;
+err:
+	sg = table->sgl;
+	for (i -= 1; i >= 0; i--) {
+		gen_pool_free(chunk_heap->pool, page_to_phys(sg_page(sg)),
+			      sg_dma_len(sg));
+		sg = sg_next(sg);
+	}
+	sg_free_table(table);
+	kfree(table);
+	return -ENOMEM;
+}
+
+static void ion_chunk_heap_free(struct ion_buffer *buffer)
+{
+	struct ion_heap *heap = buffer->heap;
+	struct ion_chunk_heap *chunk_heap =
+		container_of(heap, struct ion_chunk_heap, heap);
+	struct sg_table *table = buffer->priv_virt;
+	struct scatterlist *sg;
+	int i;
+
+	ion_heap_buffer_zero(buffer);
+
+	for_each_sg(table->sgl, sg, table->nents, i) {
+		if (ion_buffer_cached(buffer))
+			dma_sync_single_for_device(NULL,
+				pfn_to_dma(NULL, page_to_pfn(sg_page(sg))),
+				sg_dma_len(sg), DMA_BIDIRECTIONAL);
+		gen_pool_free(chunk_heap->pool, page_to_phys(sg_page(sg)),
+			      sg_dma_len(sg));
+	}
+	chunk_heap->allocated -= buffer->size;
+	sg_free_table(table);
+	kfree(table);
+}
+
+struct sg_table *ion_chunk_heap_map_dma(struct ion_heap *heap,
+					struct ion_buffer *buffer)
+{
+	return buffer->priv_virt;
+}
+
+void ion_chunk_heap_unmap_dma(struct ion_heap *heap,
+			      struct ion_buffer *buffer)
+{
+	return;
+}
+
+static struct ion_heap_ops chunk_heap_ops = {
+	.allocate = ion_chunk_heap_allocate,
+	.free = ion_chunk_heap_free,
+	.map_dma = ion_chunk_heap_map_dma,
+	.unmap_dma = ion_chunk_heap_unmap_dma,
+	.map_user = ion_heap_map_user,
+};
+
+struct ion_heap *ion_chunk_heap_create(struct ion_platform_heap *heap_data)
+{
+	struct ion_chunk_heap *chunk_heap;
+	struct vm_struct *vm_struct;
+	pgprot_t pgprot = pgprot_writecombine(PAGE_KERNEL);
+	int i, ret;
+
+	chunk_heap = kzalloc(sizeof(struct ion_chunk_heap), GFP_KERNEL);
+	if (!chunk_heap)
+		return ERR_PTR(-ENOMEM);
+
+	chunk_heap->chunk_size = (unsigned long)heap_data->priv;
+	chunk_heap->pool = gen_pool_create(get_order(chunk_heap->chunk_size) +
+					   PAGE_SHIFT, -1);
+	if (!chunk_heap->pool) {
+		ret = -ENOMEM;
+		goto error_gen_pool_create;
+	}
+	chunk_heap->base = heap_data->base;
+	chunk_heap->size = heap_data->size;
+	chunk_heap->allocated = 0;
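+
+	/*
+	 * Zero the carveout before exposing it through the pool. The
+	 * region may have no kernel mapping of its own, so borrow a
+	 * single-page vm area: map each page in turn, memset it, unmap.
+	 */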
+	vm_struct = get_vm_area(PAGE_SIZE, VM_ALLOC);
+	if (!vm_struct) {
+		ret = -ENOMEM;
+		goto error;
+	}
+	for (i = 0; i < chunk_heap->size; i += PAGE_SIZE) {
+		struct page *page = phys_to_page(chunk_heap->base + i);
+		struct page **pages = &page;
+
+		ret = map_vm_area(vm_struct, pgprot, &pages);
+		if (ret)
+			goto error_map_vm_area;
+		memset(vm_struct->addr, 0, PAGE_SIZE);
+		unmap_kernel_range((unsigned long)vm_struct->addr, PAGE_SIZE);
+	}
+	free_vm_area(vm_struct);
+
+	dma_sync_single_for_device(NULL,
+		pfn_to_dma(NULL, page_to_pfn(phys_to_page(heap_data->base))),
+		heap_data->size, DMA_BIDIRECTIONAL);
+	gen_pool_add(chunk_heap->pool, chunk_heap->base, heap_data->size, -1);
+	chunk_heap->heap.ops = &chunk_heap_ops;
+	chunk_heap->heap.type = ION_HEAP_TYPE_CHUNK;
+	chunk_heap->heap.flags = ION_HEAP_FLAG_DEFER_FREE;
+	pr_info("%s: base %lu size %lu align %ld\n", __func__, chunk_heap->base,
+		heap_data->size, heap_data->align);
+
+	return &chunk_heap->heap;
+
+error_map_vm_area:
+	free_vm_area(vm_struct);
+error:
+	gen_pool_destroy(chunk_heap->pool);
+error_gen_pool_create:
+	kfree(chunk_heap);
+	return ERR_PTR(ret);
+}
+
+void ion_chunk_heap_destroy(struct ion_heap *heap)
+{
+	struct ion_chunk_heap *chunk_heap =
+	     container_of(heap, struct ion_chunk_heap, heap);
+
+	gen_pool_destroy(chunk_heap->pool);
+	kfree(chunk_heap);
+	chunk_heap = NULL;
+}
diff --git a/drivers/gpu/ion/ion_heap.c b/drivers/gpu/ion/ion_heap.c
index 3a565fc..4692a46 100644
--- a/drivers/gpu/ion/ion_heap.c
+++ b/drivers/gpu/ion/ion_heap.c
@@ -104,6 +104,9 @@ struct ion_heap *ion_heap_create(struct ion_platform_heap *heap_data)
 	case ION_HEAP_TYPE_CARVEOUT:
 		heap = ion_carveout_heap_create(heap_data);
 		break;
+	case ION_HEAP_TYPE_CHUNK:
+		heap = ion_chunk_heap_create(heap_data);
+		break;
 	default:
 		pr_err("%s: Invalid heap type %d\n", __func__,
 		       heap_data->type);
@@ -137,6 +140,9 @@ void ion_heap_destroy(struct ion_heap *heap)
 	case ION_HEAP_TYPE_CARVEOUT:
 		ion_carveout_heap_destroy(heap);
 		break;
+	case ION_HEAP_TYPE_CHUNK:
+		ion_chunk_heap_destroy(heap);
+		break;
 	default:
 		pr_err("%s: Invalid heap type %d\n", __func__,
 		       heap->type);
diff --git a/include/linux/ion.h b/include/linux/ion.h
index 3f5858b..369483d 100644
--- a/include/linux/ion.h
+++ b/include/linux/ion.h
@@ -35,6 +35,7 @@ enum ion_heap_type {
 	ION_HEAP_TYPE_SYSTEM,
 	ION_HEAP_TYPE_SYSTEM_CONTIG,
 	ION_HEAP_TYPE_CARVEOUT,
+	ION_HEAP_TYPE_CHUNK,
 	ION_NUM_HEAPS = 16,
 };
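For reference, the sizing arithmetic in ion_chunk_heap_allocate() and
ion_chunk_heap_create() works out as in this sketch (illustrative
values only, assuming 4K pages and a 64K chunk size passed in priv):

	/* pool granularity is one chunk:
	 * get_order(SZ_64K) = 4, plus PAGE_SHIFT (12) = 16 -> 64K units */
	min_order = get_order(SZ_64K) + PAGE_SHIFT;
	/* a 100K allocation rounds up to two 64K chunks... */
	num_chunks = ALIGN(100 * SZ_1K, SZ_64K) / SZ_64K;  /* == 2 */
	/* ...so buffer->size becomes 128K, one sg entry per chunk */
	buffer_size = num_chunks * SZ_64K;                 /* == 131072 */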