From patchwork Mon Feb 17 21:58:39 2014
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 24804
From: John Stultz <john.stultz@linaro.org>
To: LKML
Cc: Mitchel Humpherys, Greg KH, Colin Cross, Android Kernel Team, John Stultz
Subject: [PATCH 11/14] staging: ion: Add private buffer flag to skip page pooling on free
Date: Mon, 17 Feb 2014 13:58:39 -0800
Message-Id: <1392674322-9036-12-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1392674322-9036-1-git-send-email-john.stultz@linaro.org>
References: <1392674322-9036-1-git-send-email-john.stultz@linaro.org>

From: Mitchel Humpherys

Currently, when we free a buffer it might actually just go back into a
heap-specific page pool rather than going back to the system. This
poses a problem because sometimes (like when we're running a shrinker
in low memory conditions) we need to force the memory associated with
the buffer to truly be relinquished to the system rather than just
going back into a page pool.

There isn't a use case for this flag by Ion clients, so make it a
private flag. The main use case right now is to provide a mechanism
for the deferred free code to force stale buffers to bypass page
pooling.
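For a heap that implements its own free op, honoring the new flag
means checking buffer->private_flags and bypassing any pool when
ION_PRIV_FLAG_SHRINKER_FREE is set. As a rough sketch only (not part
of this patch; my_heap, my_pool_recycle() and my_free_pages_to_system()
are made-up names):

	static void my_heap_free(struct ion_buffer *buffer)
	{
		struct my_heap *heap =
			container_of(buffer->heap, struct my_heap, heap);

		/* Freed via ion_heap_freelist_shrink(): hand the pages
		 * straight back to the system instead of recycling them
		 * into a heap-local pool (hypothetical helpers). */
		if (buffer->private_flags & ION_PRIV_FLAG_SHRINKER_FREE)
			my_free_pages_to_system(heap, buffer);
		else
			my_pool_recycle(heap, buffer);
	}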
Cc: Greg KH
Cc: Colin Cross
Cc: Android Kernel Team
Signed-off-by: Mitchel Humpherys
[jstultz: Minor commit subject tweak]
Signed-off-by: John Stultz
---
 drivers/staging/android/ion/ion_heap.c        | 19 ++++++++++--
 drivers/staging/android/ion/ion_priv.h        | 42 ++++++++++++++++++++++++++-
 drivers/staging/android/ion/ion_system_heap.c |  4 +--
 3 files changed, 59 insertions(+), 6 deletions(-)

diff --git a/drivers/staging/android/ion/ion_heap.c b/drivers/staging/android/ion/ion_heap.c
index 49ace13..bdc6a28 100644
--- a/drivers/staging/android/ion/ion_heap.c
+++ b/drivers/staging/android/ion/ion_heap.c
@@ -178,7 +178,8 @@ size_t ion_heap_freelist_size(struct ion_heap *heap)
 	return size;
 }
 
-size_t ion_heap_freelist_drain(struct ion_heap *heap, size_t size)
+static size_t _ion_heap_freelist_drain(struct ion_heap *heap, size_t size,
+				bool skip_pools)
 {
 	struct ion_buffer *buffer;
 	size_t total_drained = 0;
@@ -197,6 +198,8 @@ size_t ion_heap_freelist_drain(struct ion_heap *heap, size_t size)
 				list);
 		list_del(&buffer->list);
 		heap->free_list_size -= buffer->size;
+		if (skip_pools)
+			buffer->private_flags |= ION_PRIV_FLAG_SHRINKER_FREE;
 		total_drained += buffer->size;
 		spin_unlock(&heap->free_lock);
 		ion_buffer_destroy(buffer);
@@ -207,6 +210,16 @@ size_t ion_heap_freelist_drain(struct ion_heap *heap, size_t size)
 	return total_drained;
 }
 
+size_t ion_heap_freelist_drain(struct ion_heap *heap, size_t size)
+{
+	return _ion_heap_freelist_drain(heap, size, false);
+}
+
+size_t ion_heap_freelist_shrink(struct ion_heap *heap, size_t size)
+{
+	return _ion_heap_freelist_drain(heap, size, true);
+}
+
 static int ion_heap_deferred_free(void *data)
 {
 	struct ion_heap *heap = data;
@@ -278,10 +291,10 @@ static unsigned long ion_heap_shrink_scan(struct shrinker *shrinker,
 
 	/*
 	 * shrink the free list first, no point in zeroing the memory if we're
-	 * just going to reclaim it
+	 * just going to reclaim it. Also, skip any possible page pooling.
 	 */
 	if (heap->flags & ION_HEAP_FLAG_DEFER_FREE)
-		freed = ion_heap_freelist_drain(heap, to_scan * PAGE_SIZE) /
+		freed = ion_heap_freelist_shrink(heap, to_scan * PAGE_SIZE) /
 				PAGE_SIZE;
 
 	to_scan -= freed;
diff --git a/drivers/staging/android/ion/ion_priv.h b/drivers/staging/android/ion/ion_priv.h
index bcf9d19..1eba3f2 100644
--- a/drivers/staging/android/ion/ion_priv.h
+++ b/drivers/staging/android/ion/ion_priv.h
@@ -38,6 +38,7 @@ struct ion_buffer *ion_handle_buffer(struct ion_handle *handle);
  * @dev:		back pointer to the ion_device
  * @heap:		back pointer to the heap the buffer came from
  * @flags:		buffer specific flags
+ * @private_flags:	internal buffer specific flags
  * @size:		size of the buffer
  * @priv_virt:		private data to the buffer representable as
  *			a void *
@@ -66,6 +67,7 @@ struct ion_buffer {
 	struct ion_device *dev;
 	struct ion_heap *heap;
 	unsigned long flags;
+	unsigned long private_flags;
 	size_t size;
 	union {
 		void *priv_virt;
@@ -98,7 +100,11 @@ void ion_buffer_destroy(struct ion_buffer *buffer);
  * @map_user		map memory to userspace
  *
  * allocate, phys, and map_user return 0 on success, -errno on error.
- * map_dma and map_kernel return pointer on success, ERR_PTR on error.
+ * map_dma and map_kernel return pointer on success, ERR_PTR on
+ * error. @free will be called with ION_PRIV_FLAG_SHRINKER_FREE set in
+ * the buffer's private_flags when called from a shrinker. In that
+ * case, the pages being free'd must be truly free'd back to the
+ * system, not put in a page pool or otherwise cached.
  */
 struct ion_heap_ops {
 	int (*allocate)(struct ion_heap *heap,
@@ -123,6 +129,17 @@ struct ion_heap_ops {
 #define ION_HEAP_FLAG_DEFER_FREE (1 << 0)
 
 /**
+ * private flags - flags internal to ion
+ */
+/*
+ * Buffer is being freed from a shrinker function. Skip any possible
+ * heap-specific caching mechanism (e.g. page pools). Guarantees that
+ * any buffer storage that came from the system allocator will be
+ * returned to the system allocator.
+ */
+#define ION_PRIV_FLAG_SHRINKER_FREE (1 << 0)
+
+/**
  * struct ion_heap - represents a heap in the system
  * @node:		rb node to put the heap on the device's tree of heaps
  * @dev:		back pointer to the ion_device
@@ -258,6 +275,29 @@ void ion_heap_freelist_add(struct ion_heap *heap, struct ion_buffer *buffer);
 size_t ion_heap_freelist_drain(struct ion_heap *heap, size_t size);
 
 /**
+ * ion_heap_freelist_shrink - drain the deferred free
+ *				list, skipping any heap-specific
+ *				pooling or caching mechanisms
+ *
+ * @heap:		the heap
+ * @size:		amount of memory to drain in bytes
+ *
+ * Drains the indicated amount of memory from the deferred freelist immediately.
+ * Returns the total amount freed. The total freed may be higher depending
+ * on the size of the items in the list, or lower if there is insufficient
+ * total memory on the freelist.
+ *
+ * Unlike with @ion_heap_freelist_drain, don't put any pages back into
+ * page pools or otherwise cache the pages. Everything must be
+ * genuinely free'd back to the system. If you're free'ing from a
+ * shrinker you probably want to use this. Note that this relies on
+ * the heap.ops.free callback honoring the ION_PRIV_FLAG_SHRINKER_FREE
+ * flag.
+ */
+size_t ion_heap_freelist_shrink(struct ion_heap *heap,
+					size_t size);
+
+/**
  * ion_heap_freelist_size - returns the size of the freelist in bytes
  * @heap:		the heap
  */
diff --git a/drivers/staging/android/ion/ion_system_heap.c b/drivers/staging/android/ion/ion_system_heap.c
index f453d97..c923633 100644
--- a/drivers/staging/android/ion/ion_system_heap.c
+++ b/drivers/staging/android/ion/ion_system_heap.c
@@ -90,7 +90,7 @@ static void free_buffer_page(struct ion_system_heap *heap,
 {
 	bool cached = ion_buffer_cached(buffer);
 
-	if (!cached) {
+	if (!cached && !(buffer->private_flags & ION_PRIV_FLAG_SHRINKER_FREE)) {
 		struct ion_page_pool *pool = heap->pools[order_to_index(order)];
 		ion_page_pool_free(pool, page);
 	} else {
@@ -209,7 +209,7 @@ static void ion_system_heap_free(struct ion_buffer *buffer)
 
 	/* uncached pages come from the page pools, zero them before returning
 	   for security purposes (other allocations are zerod at alloc time */
-	if (!cached)
+	if (!cached && !(buffer->private_flags & ION_PRIV_FLAG_SHRINKER_FREE))
 		ion_heap_buffer_zero(buffer);
 
 	for_each_sg(table->sgl, sg, table->nents, i)