From patchwork Sat May 25 00:19:45 2013
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 17210
From: John Stultz <john.stultz@linaro.org>
To: Jesse Barker
Cc: John Stultz
Subject: [PATCH 4/4] ion: Add chunk heap
Date: Fri, 24 May 2013 17:19:45 -0700
Message-Id: <1369441185-3434-5-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.1.2
In-Reply-To: <1369441185-3434-1-git-send-email-john.stultz@linaro.org>
References: <1369441185-3434-1-git-send-email-john.stultz@linaro.org>

This patch adds support for a chunk heap that allows for buffers that
are made up of a list of fixed size chunks taken from a carveout.
Chunk sizes are configured when the heaps are created by passing the
chunk size in the priv field of the heap platform data.

XXX: This needs much more rationale and justification.

This work was originally by Rebecca Schultz Zavin
And contains fixes and improvements by:
	Arve Hjønnevåg
	Benjamin Gaignard
	Colin Cross
	Dima Zavin
	Greg Hackmann
	Jesse Barker
	Johan Mossberg
	JP Abgrall
	KyongHo Cho
	Laura Abbott
	Olav Haugan

Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 drivers/gpu/ion/Makefile         |   2 +-
 drivers/gpu/ion/ion_chunk_heap.c | 207 +++++++++++++++++++++++++++++++++++++++
 drivers/gpu/ion/ion_heap.c       |   6 ++
 include/linux/ion.h              |   1 +
 4 files changed, 215 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/ion/ion_chunk_heap.c

diff --git a/drivers/gpu/ion/Makefile b/drivers/gpu/ion/Makefile
index dc788e1..5634084 100644
--- a/drivers/gpu/ion/Makefile
+++ b/drivers/gpu/ion/Makefile
@@ -1,2 +1,2 @@
 obj-$(CONFIG_ION) +=	ion.o ion_heap.o ion_page_pool.o ion_system_heap.o \
-			ion_carveout_heap.o
+			ion_carveout_heap.o ion_chunk_heap.o
diff --git a/drivers/gpu/ion/ion_chunk_heap.c b/drivers/gpu/ion/ion_chunk_heap.c
new file mode 100644
index 0000000..22313c0
--- /dev/null
+++ b/drivers/gpu/ion/ion_chunk_heap.c
@@ -0,0 +1,207 @@
+/*
+ * drivers/gpu/ion/ion_chunk_heap.c
+ *
+ * Copyright (C) 2012 Google, Inc.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ */
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/genalloc.h>
+#include <linux/io.h>
+#include <linux/ion.h>
+#include <linux/mm.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include "ion_priv.h"
+
+/* #include */
+
+struct ion_chunk_heap {
+	struct ion_heap heap;
+	struct gen_pool *pool;
+	ion_phys_addr_t base;
+	unsigned long chunk_size;
+	unsigned long size;
+	unsigned long allocated;
+};
+
+static int ion_chunk_heap_allocate(struct ion_heap *heap,
+				   struct ion_buffer *buffer,
+				   unsigned long size, unsigned long align,
+				   unsigned long flags)
+{
+	struct ion_chunk_heap *chunk_heap =
+		container_of(heap, struct ion_chunk_heap, heap);
+	struct sg_table *table;
+	struct scatterlist *sg;
+	int ret, i;
+	unsigned long num_chunks;
+
+	if (ion_buffer_fault_user_mappings(buffer))
+		return -ENOMEM;
+
+	num_chunks = ALIGN(size, chunk_heap->chunk_size) /
+		chunk_heap->chunk_size;
+	buffer->size = num_chunks * chunk_heap->chunk_size;
+
+	if (buffer->size > chunk_heap->size - chunk_heap->allocated)
+		return -ENOMEM;
+
+	table = kzalloc(sizeof(struct sg_table), GFP_KERNEL);
+	if (!table)
+		return -ENOMEM;
+	ret = sg_alloc_table(table, num_chunks, GFP_KERNEL);
+	if (ret) {
+		kfree(table);
+		return ret;
+	}
+
+	sg = table->sgl;
+	for (i = 0; i < num_chunks; i++) {
+		unsigned long paddr = gen_pool_alloc(chunk_heap->pool,
+						     chunk_heap->chunk_size);
+		if (!paddr)
+			goto err;
+		sg_set_page(sg, phys_to_page(paddr), chunk_heap->chunk_size, 0);
+		sg = sg_next(sg);
+	}
+
+	buffer->priv_virt = table;
+	chunk_heap->allocated += buffer->size;
+	return 0;
+err:
+	sg = table->sgl;
+	for (i -= 1; i >= 0; i--) {
+		gen_pool_free(chunk_heap->pool, page_to_phys(sg_page(sg)),
+			      sg_dma_len(sg));
+		sg = sg_next(sg);
+	}
+	sg_free_table(table);
+	kfree(table);
+	return -ENOMEM;
+}
+
+static void ion_chunk_heap_free(struct ion_buffer *buffer)
+{
+	struct ion_heap *heap = buffer->heap;
+	struct ion_chunk_heap *chunk_heap =
+		container_of(heap, struct ion_chunk_heap, heap);
+	struct sg_table *table = buffer->priv_virt;
+	struct scatterlist *sg;
+	int i;
+
+	ion_heap_buffer_zero(buffer);
+
+	for_each_sg(table->sgl, sg, table->nents, i) {
+		if (ion_buffer_cached(buffer))
+			dma_sync_single_for_device(NULL,
+				pfn_to_dma(NULL, page_to_pfn(sg_page(sg))),
+				sg_dma_len(sg), DMA_BIDIRECTIONAL);
+		gen_pool_free(chunk_heap->pool, page_to_phys(sg_page(sg)),
+			      sg_dma_len(sg));
+	}
+	chunk_heap->allocated -= buffer->size;
+	sg_free_table(table);
+	kfree(table);
+}
+
+struct sg_table *ion_chunk_heap_map_dma(struct ion_heap *heap,
+					struct ion_buffer *buffer)
+{
+	return buffer->priv_virt;
+}
+
+void ion_chunk_heap_unmap_dma(struct ion_heap *heap,
+			      struct ion_buffer *buffer)
+{
+	return;
+}
+
+static struct ion_heap_ops chunk_heap_ops = {
+	.allocate = ion_chunk_heap_allocate,
+	.free = ion_chunk_heap_free,
+	.map_dma = ion_chunk_heap_map_dma,
+	.unmap_dma = ion_chunk_heap_unmap_dma,
+	.map_user = ion_heap_map_user,
+};
+
+struct ion_heap *ion_chunk_heap_create(struct ion_platform_heap *heap_data)
+{
+	struct ion_chunk_heap *chunk_heap;
+	struct vm_struct *vm_struct;
+	pgprot_t pgprot = pgprot_writecombine(PAGE_KERNEL);
+	int i, ret;
+
+	chunk_heap = kzalloc(sizeof(struct ion_chunk_heap), GFP_KERNEL);
+	if (!chunk_heap)
+		return ERR_PTR(-ENOMEM);
+
+	chunk_heap->chunk_size = (unsigned long)heap_data->priv;
+	chunk_heap->pool = gen_pool_create(get_order(chunk_heap->chunk_size) +
+					   PAGE_SHIFT, -1);
+	if (!chunk_heap->pool) {
+		ret = -ENOMEM;
+		goto error_gen_pool_create;
+	}
+	chunk_heap->base = heap_data->base;
+	chunk_heap->size = heap_data->size;
+	chunk_heap->allocated = 0;
+
+	vm_struct = get_vm_area(PAGE_SIZE, VM_ALLOC);
+	if (!vm_struct) {
+		ret = -ENOMEM;
+		goto error;
+	}
+	for (i = 0; i < chunk_heap->size; i += PAGE_SIZE) {
+		struct page *page = phys_to_page(chunk_heap->base + i);
+		struct page **pages = &page;
+
+		ret = map_vm_area(vm_struct, pgprot, &pages);
+		if (ret)
+			goto error_map_vm_area;
+		memset(vm_struct->addr, 0, PAGE_SIZE);
+		unmap_kernel_range((unsigned long)vm_struct->addr, PAGE_SIZE);
+	}
+	free_vm_area(vm_struct);
+
+	dma_sync_single_for_device(NULL,
+		pfn_to_dma(NULL, page_to_pfn(phys_to_page(heap_data->base))),
+		heap_data->size, DMA_BIDIRECTIONAL);
+	gen_pool_add(chunk_heap->pool, chunk_heap->base, heap_data->size, -1);
+	chunk_heap->heap.ops = &chunk_heap_ops;
+	chunk_heap->heap.type = ION_HEAP_TYPE_CHUNK;
+	chunk_heap->heap.flags = ION_HEAP_FLAG_DEFER_FREE;
+	pr_info("%s: base %lu size %lu align %ld\n", __func__, chunk_heap->base,
+		heap_data->size, heap_data->align);
+
+	return &chunk_heap->heap;
+
+error_map_vm_area:
+	free_vm_area(vm_struct);
+error:
+	gen_pool_destroy(chunk_heap->pool);
+error_gen_pool_create:
+	kfree(chunk_heap);
+	return ERR_PTR(ret);
+}
+
+void ion_chunk_heap_destroy(struct ion_heap *heap)
+{
+	struct ion_chunk_heap *chunk_heap =
+		container_of(heap, struct ion_chunk_heap, heap);
+
+	gen_pool_destroy(chunk_heap->pool);
+	kfree(chunk_heap);
+	chunk_heap = NULL;
+}
diff --git a/drivers/gpu/ion/ion_heap.c b/drivers/gpu/ion/ion_heap.c
index 3a565fc..4692a46 100644
--- a/drivers/gpu/ion/ion_heap.c
+++ b/drivers/gpu/ion/ion_heap.c
@@ -104,6 +104,9 @@ struct ion_heap *ion_heap_create(struct ion_platform_heap *heap_data)
 	case ION_HEAP_TYPE_CARVEOUT:
 		heap = ion_carveout_heap_create(heap_data);
 		break;
+	case ION_HEAP_TYPE_CHUNK:
+		heap = ion_chunk_heap_create(heap_data);
+		break;
 	default:
 		pr_err("%s: Invalid heap type %d\n", __func__,
 		       heap_data->type);
@@ -137,6 +140,9 @@ void ion_heap_destroy(struct ion_heap *heap)
 	case ION_HEAP_TYPE_CARVEOUT:
 		ion_carveout_heap_destroy(heap);
 		break;
+	case ION_HEAP_TYPE_CHUNK:
+		ion_chunk_heap_destroy(heap);
+		break;
 	default:
 		pr_err("%s: Invalid heap type %d\n", __func__,
 		       heap->type);
diff --git a/include/linux/ion.h b/include/linux/ion.h
index 1ef8a0d..d60745e 100644
--- a/include/linux/ion.h
+++ b/include/linux/ion.h
@@ -35,6 +35,7 @@ enum ion_heap_type {
 	ION_HEAP_TYPE_SYSTEM,
 	ION_HEAP_TYPE_SYSTEM_CONTIG,
 	ION_HEAP_TYPE_CARVEOUT,
+	ION_HEAP_TYPE_CHUNK,
 	ION_NUM_HEAPS = 16,
 };
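
[Editor's sketch, not part of the patch] As context for how this heap is meant to
be wired up, the snippet below shows one way a board file might describe a chunk
heap, passing the chunk size through the priv field as the commit message notes.
The heap id, name, base address and sizes are made-up example values; it assumes
the struct ion_platform_heap / struct ion_platform_data layout used by this series
(<linux/ion.h>) and the size macros from <linux/sizes.h>.

	/* Example only: a 16MB carveout served out in 64KB chunks. */
	static struct ion_platform_heap example_heaps[] = {
		{
			.type = ION_HEAP_TYPE_CHUNK,
			.id   = 1,			/* platform-chosen heap id (example) */
			.name = "chunk",
			.base = 0x90000000,		/* carveout base (example) */
			.size = SZ_16M,			/* carveout size (example) */
			.priv = (void *)SZ_64K,		/* chunk size, read by ion_chunk_heap_create() */
		},
	};

	static struct ion_platform_data example_ion_pdata = {
		.nr    = 1,
		.heaps = example_heaps,
	};

With such platform data, heap creation would go through ion_heap_create() (see the
ion_heap.c hunk above), and allocations from this heap would come back as
scatterlists of 64KB chunks carved out of the 16MB region.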