From patchwork Fri Dec 13 22:24:35 2013
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 22426
From: John Stultz
To: LKML
Cc: Greg KH, Android Kernel Team, Sumit Semwal, Jesse Barker, Colin Cross,
 Rebecca Schultz Zavin, John Stultz
Subject: [PATCH 061/115] gpu: ion: Make ion_free asynchronous
Date: Fri, 13 Dec 2013 14:24:35 -0800
Message-Id: <1386973529-4884-62-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1386973529-4884-1-git-send-email-john.stultz@linaro.org>
References: <1386973529-4884-1-git-send-email-john.stultz@linaro.org>

From: Rebecca Schultz Zavin

Add the ability for a heap to free buffers asynchronously.  Freed buffers
are placed on a free list and freed from a low priority background thread.
If allocations from a particular heap fail, the free list is drained.  This
patch also enables asynchronous frees from the chunk heap.

Signed-off-by: Rebecca Schultz Zavin
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz
---
 drivers/staging/android/ion/ion.c             | 112 +++++++++++++++++++++++---
 drivers/staging/android/ion/ion.h             |   2 +-
 drivers/staging/android/ion/ion_chunk_heap.c  |   3 +-
 drivers/staging/android/ion/ion_priv.h        |  20 ++++-
 drivers/staging/android/ion/ion_system_heap.c |   1 +
 5 files changed, 126 insertions(+), 12 deletions(-)

diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index ba65bef..b965f15 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -17,8 +17,10 @@
 #include
 #include
+#include
 #include
 #include
+#include
 #include
 #include
 #include
 #include
@@ -26,6 +28,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -140,6 +143,7 @@ static void ion_buffer_add(struct ion_device *dev,
 
 static int ion_buffer_alloc_dirty(struct ion_buffer *buffer);
 
+static bool ion_heap_drain_freelist(struct ion_heap *heap);
 /* this function should only be called while dev->lock is held */
 static struct ion_buffer *ion_buffer_create(struct ion_heap *heap,
                                      struct ion_device *dev,
@@ -161,9 +165,16 @@ static struct ion_buffer *ion_buffer_create(struct ion_heap *heap,
        kref_init(&buffer->ref);
 
        ret = heap->ops->allocate(heap, buffer, len, align, flags);
+
        if (ret) {
-               kfree(buffer);
-               return ERR_PTR(ret);
+               if (!(heap->flags & ION_HEAP_FLAG_DEFER_FREE))
+                       goto err2;
+
+               ion_heap_drain_freelist(heap);
+               ret = heap->ops->allocate(heap, buffer, len, align,
+                                         flags);
+               if (ret)
+                       goto err2;
        }
 
        buffer->dev = dev;
@@ -214,27 +225,42 @@ static struct ion_buffer *ion_buffer_create(struct ion_heap *heap,
 err:
        heap->ops->unmap_dma(heap, buffer);
        heap->ops->free(buffer);
+err2:
        kfree(buffer);
        return ERR_PTR(ret);
 }
 
-static void ion_buffer_destroy(struct kref *kref)
+static void _ion_buffer_destroy(struct ion_buffer *buffer)
 {
-       struct ion_buffer *buffer = container_of(kref, struct ion_buffer, ref);
-       struct ion_device *dev = buffer->dev;
-
        if (WARN_ON(buffer->kmap_cnt > 0))
                buffer->heap->ops->unmap_kernel(buffer->heap, buffer);
        buffer->heap->ops->unmap_dma(buffer->heap, buffer);
        buffer->heap->ops->free(buffer);
-       mutex_lock(&dev->buffer_lock);
-       rb_erase(&buffer->node, &dev->buffers);
-       mutex_unlock(&dev->buffer_lock);
        if (buffer->flags & ION_FLAG_CACHED)
                kfree(buffer->dirty);
        kfree(buffer);
 }
 
+static void ion_buffer_destroy(struct kref *kref)
+{
+       struct ion_buffer *buffer = container_of(kref, struct ion_buffer, ref);
+       struct ion_heap *heap = buffer->heap;
+       struct ion_device *dev = buffer->dev;
+
+       mutex_lock(&dev->buffer_lock);
+       rb_erase(&buffer->node, &dev->buffers);
+       mutex_unlock(&dev->buffer_lock);
+
+       if (heap->flags & ION_HEAP_FLAG_DEFER_FREE) {
+               rt_mutex_lock(&heap->lock);
+               list_add(&buffer->list, &heap->free_list);
+               rt_mutex_unlock(&heap->lock);
+               wake_up(&heap->waitqueue);
+               return;
+       }
+       _ion_buffer_destroy(buffer);
+}
+
 static void ion_buffer_get(struct ion_buffer *buffer)
 {
        kref_get(&buffer->ref);
@@ -1272,13 +1298,81 @@ static const struct file_operations debug_heap_fops = {
        .release = single_release,
 };
 
+static size_t ion_heap_free_list_is_empty(struct ion_heap *heap)
+{
+       bool is_empty;
+
+       rt_mutex_lock(&heap->lock);
+       is_empty = list_empty(&heap->free_list);
+       rt_mutex_unlock(&heap->lock);
+
+       return is_empty;
+}
+
+static int ion_heap_deferred_free(void *data)
+{
+       struct ion_heap *heap = data;
+
+       while (true) {
+               struct ion_buffer *buffer;
+
+               wait_event_freezable(heap->waitqueue,
+                                    !ion_heap_free_list_is_empty(heap));
+
+               rt_mutex_lock(&heap->lock);
+               if (list_empty(&heap->free_list)) {
+                       rt_mutex_unlock(&heap->lock);
+                       continue;
+               }
+               buffer = list_first_entry(&heap->free_list, struct ion_buffer,
+                                         list);
+               list_del(&buffer->list);
+               rt_mutex_unlock(&heap->lock);
+               _ion_buffer_destroy(buffer);
+       }
+
+       return 0;
+}
+
+static bool ion_heap_drain_freelist(struct ion_heap *heap)
+{
+       struct ion_buffer *buffer, *tmp;
+
+       if (ion_heap_free_list_is_empty(heap))
+               return false;
+       rt_mutex_lock(&heap->lock);
+       list_for_each_entry_safe(buffer, tmp, &heap->free_list, list) {
+               _ion_buffer_destroy(buffer);
+               list_del(&buffer->list);
+       }
+       BUG_ON(!list_empty(&heap->free_list));
+       rt_mutex_unlock(&heap->lock);
+
+
+       return true;
+}
+
 void ion_device_add_heap(struct ion_device *dev, struct ion_heap *heap)
 {
+       struct sched_param param = { .sched_priority = 0 };
+
        if (!heap->ops->allocate || !heap->ops->free || !heap->ops->map_dma ||
            !heap->ops->unmap_dma)
                pr_err("%s: can not add heap with invalid ops struct.\n",
                       __func__);
 
+       if (heap->flags & ION_HEAP_FLAG_DEFER_FREE) {
+               INIT_LIST_HEAD(&heap->free_list);
+               rt_mutex_init(&heap->lock);
+               init_waitqueue_head(&heap->waitqueue);
+               heap->task = kthread_run(ion_heap_deferred_free, heap,
+                                        "%s", heap->name);
+               sched_setscheduler(heap->task, SCHED_IDLE, &param);
+               if (IS_ERR(heap->task))
+                       pr_err("%s: creating thread for deferred free failed\n",
+                              __func__);
+       }
+
        heap->dev = dev;
        down_write(&dev->lock);
        /* use negative heap->id to reverse the priority -- when traversing
diff --git a/drivers/staging/android/ion/ion.h b/drivers/staging/android/ion/ion.h
index 976123b..679031c 100644
--- a/drivers/staging/android/ion/ion.h
+++ b/drivers/staging/android/ion/ion.h
@@ -46,7 +46,7 @@ enum ion_heap_type {
 #define ION_NUM_HEAP_IDS               sizeof(unsigned int) * 8
 
 /**
- * heap flags - the lower 16 bits are used by core ion, the upper 16
+ * allocation flags - the lower 16 bits are used by core ion, the upper 16
  * bits are reserved for use by the heaps themselves.
  */
 #define ION_FLAG_CACHED 1              /* mappings of this buffer should be
diff --git a/drivers/staging/android/ion/ion_chunk_heap.c b/drivers/staging/android/ion/ion_chunk_heap.c
index 60cd91c..ac7cf13 100644
--- a/drivers/staging/android/ion/ion_chunk_heap.c
+++ b/drivers/staging/android/ion/ion_chunk_heap.c
@@ -160,7 +160,8 @@ struct ion_heap *ion_chunk_heap_create(struct ion_platform_heap *heap_data)
        gen_pool_add(chunk_heap->pool, chunk_heap->base, heap_data->size, -1);
        chunk_heap->heap.ops = &chunk_heap_ops;
        chunk_heap->heap.type = ION_HEAP_TYPE_CHUNK;
-       pr_info("%s: base %lu size %ld align %ld\n", __func__, chunk_heap->base,
+       chunk_heap->heap.flags = ION_HEAP_FLAG_DEFER_FREE;
+       pr_info("%s: base %lu size %u align %ld\n", __func__, chunk_heap->base,
                heap_data->size, heap_data->align);
 
        return &chunk_heap->heap;
diff --git a/drivers/staging/android/ion/ion_priv.h b/drivers/staging/android/ion/ion_priv.h
index cfb4264..ab1a8d9 100644
--- a/drivers/staging/android/ion/ion_priv.h
+++ b/drivers/staging/android/ion/ion_priv.h
@@ -58,7 +58,10 @@ struct ion_buffer *ion_handle_buffer(struct ion_handle *handle);
  */
 struct ion_buffer {
        struct kref ref;
-       struct rb_node node;
+       union {
+               struct rb_node node;
+               struct list_head list;
+       };
        struct ion_device *dev;
        struct ion_heap *heap;
        unsigned long flags;
@@ -109,15 +112,25 @@ struct ion_heap_ops {
 };
 
 /**
+ * heap flags - flags between the heaps and core ion code
+ */
+#define ION_HEAP_FLAG_DEFER_FREE (1 << 0)
+
+/**
  * struct ion_heap - represents a heap in the system
  * @node:              rb node to put the heap on the device's tree of heaps
  * @dev:               back pointer to the ion_device
  * @type:              type of heap
  * @ops:               ops struct as above
+ * @flags:             flags
  * @id:                id of heap, also indicates priority of this heap when
  *                     allocating.  These are specified by platform data and
  *                     MUST be unique
  * @name:              used for debugging
+ * @free_list:         free list head if deferred free is used
+ * @lock:              protects the free list
+ * @waitqueue:         queue to wait on from deferred free thread
+ * @task:              task struct of deferred free thread
  * @debug_show:        called when heap debug file is read to add any
  *                     heap specific debug info to output
  *
@@ -131,8 +144,13 @@ struct ion_heap {
        struct ion_device *dev;
        enum ion_heap_type type;
        struct ion_heap_ops *ops;
+       unsigned long flags;
        unsigned int id;
        const char *name;
+       struct list_head free_list;
+       struct rt_mutex lock;
+       wait_queue_head_t waitqueue;
+       struct task_struct *task;
        int (*debug_show)(struct ion_heap *heap, struct seq_file *, void *);
 };
 
diff --git a/drivers/staging/android/ion/ion_system_heap.c b/drivers/staging/android/ion/ion_system_heap.c
index 3ca704e..6665797 100644
--- a/drivers/staging/android/ion/ion_system_heap.c
+++ b/drivers/staging/android/ion/ion_system_heap.c
@@ -283,6 +283,7 @@ struct ion_heap *ion_system_heap_create(struct ion_platform_heap *unused)
                return ERR_PTR(-ENOMEM);
        heap->heap.ops = &system_heap_ops;
        heap->heap.type = ION_HEAP_TYPE_SYSTEM;
+       heap->heap.flags = ION_HEAP_FLAG_DEFER_FREE;
        heap->pools = kzalloc(sizeof(struct ion_page_pool *) * num_orders,
                              GFP_KERNEL);
        if (!heap->pools)
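
For readers who want to see the shape of the mechanism outside the kernel tree, below is a
minimal user-space sketch of the same deferred-free pattern: freed buffers go on a list, a
background worker drains the list, and a failed allocation drains it synchronously before
retrying. This is illustrative only and is not part of the patch; every name in it (struct
buf, buf_alloc, drain_free_list, deferred_free_worker) is hypothetical, and plain pthreads
stand in for the kernel's kthread, rt_mutex and waitqueue primitives. The patch additionally
runs its worker at SCHED_IDLE so deferred frees never compete with foreground work.

/*
 * Sketch only -- not part of the patch above. The free path queues the
 * buffer and wakes a worker; the allocation path drains the queue and
 * retries when it runs out of memory. All names are hypothetical.
 */
#include <pthread.h>
#include <stdlib.h>

struct buf {
        struct buf *next;
        size_t len;
        void *data;
};

static struct buf *free_list;
static pthread_mutex_t free_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t free_cond = PTHREAD_COND_INITIALIZER;

/* Really release a buffer's backing storage. */
static void buf_destroy(struct buf *b)
{
        free(b->data);
        free(b);
}

/* "Free" a buffer: put it on the free list and wake the worker. */
static void buf_free_deferred(struct buf *b)
{
        pthread_mutex_lock(&free_lock);
        b->next = free_list;
        free_list = b;
        pthread_mutex_unlock(&free_lock);
        pthread_cond_signal(&free_cond);
}

/* Destroy everything on the free list; returns nonzero if work was done. */
static int drain_free_list(void)
{
        struct buf *b, *list;

        pthread_mutex_lock(&free_lock);
        list = free_list;
        free_list = NULL;
        pthread_mutex_unlock(&free_lock);

        if (!list)
                return 0;
        while (list) {
                b = list;
                list = list->next;
                buf_destroy(b);
        }
        return 1;
}

/* Background analogue of the deferred-free thread. */
static void *deferred_free_worker(void *unused)
{
        (void)unused;
        for (;;) {
                pthread_mutex_lock(&free_lock);
                while (!free_list)
                        pthread_cond_wait(&free_cond, &free_lock);
                pthread_mutex_unlock(&free_lock);
                drain_free_list();
        }
        return NULL;
}

/* Allocation path: if the first attempt fails, drain deferred frees, retry. */
static struct buf *buf_alloc(size_t len)
{
        struct buf *b = malloc(sizeof(*b));

        if (!b)
                return NULL;
        b->len = len;
        b->data = malloc(len);
        if (!b->data && drain_free_list())
                b->data = malloc(len);
        if (!b->data) {
                free(b);
                return NULL;
        }
        return b;
}

int main(void)
{
        pthread_t worker;
        struct buf *b;

        pthread_create(&worker, NULL, deferred_free_worker, NULL);
        b = buf_alloc(4096);
        if (b)
                buf_free_deferred(b);
        drain_free_list();      /* deterministic cleanup before exit */
        return 0;
}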