From patchwork Mon Feb  3 18:16:28 2014
X-Patchwork-Submitter: John Stultz <john.stultz@linaro.org>
X-Patchwork-Id: 24057
From: John Stultz <john.stultz@linaro.org>
To: LKML
Cc: Mitchel Humpherys, Greg KH, Colin Cross, Android Kernel Team,
	John Stultz
Subject: [PATCH 16/16] staging: ion: Add private buffer flag to skip page
	pooling on free
Date: Mon,  3 Feb 2014 10:16:28 -0800
Message-Id: <1391451388-23906-17-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1391451388-23906-1-git-send-email-john.stultz@linaro.org>
References: <1391451388-23906-1-git-send-email-john.stultz@linaro.org>

From: Mitchel Humpherys

Currently, when we free a buffer it might actually just go back into a
heap-specific page pool rather than going back to the system. This
poses a problem because sometimes (like when we're running a shrinker
in low memory conditions) we need to force the memory associated with
the buffer to truly be relinquished to the system rather than just
going back into a page pool.

There isn't a use case for this flag by Ion clients, so make it a
private flag. The main use case right now is to provide a mechanism for
the deferred free code to force stale buffers to bypass page pooling.
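[Editorial note, for illustration only and not part of this patch: a
minimal sketch of how a heap's ->free() op is expected to honor the new
private flag. The heap type "my_heap", its single pool, the function
name my_heap_free(), and stashing the backing page in priv_virt are
assumptions invented for this example; it presumes the definitions in
ion_priv.h.]

	/* Hypothetical heap type, made up for this sketch only. */
	struct my_heap {
		struct ion_heap heap;
		struct ion_page_pool *pool;
	};

	static void my_heap_free(struct ion_buffer *buffer)
	{
		struct my_heap *myheap = container_of(buffer->heap,
						      struct my_heap, heap);
		/* Assumption: this heap stashed its backing page here. */
		struct page *page = buffer->priv_virt;

		if (buffer->private_flags & ION_PRIV_FLAG_SHRINKER_FREE)
			/*
			 * Shrinker-initiated free: bypass the pool so the
			 * pages genuinely go back to the system.
			 */
			__free_pages(page, get_order(buffer->size));
		else
			/* Normal free: recycle the page via the pool. */
			ion_page_pool_free(myheap->pool, page);
	}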
Cc: Greg KH
Cc: Colin Cross
Cc: Android Kernel Team
Signed-off-by: Mitchel Humpherys
[jstultz: Minor commit subject tweak]
Signed-off-by: John Stultz
---
 drivers/staging/android/ion/ion_heap.c        | 19 ++++++++++--
 drivers/staging/android/ion/ion_priv.h        | 42 ++++++++++++++++++++++++++-
 drivers/staging/android/ion/ion_system_heap.c |  4 +--
 3 files changed, 59 insertions(+), 6 deletions(-)

diff --git a/drivers/staging/android/ion/ion_heap.c b/drivers/staging/android/ion/ion_heap.c
index f49bdc0..6ab8f13 100644
--- a/drivers/staging/android/ion/ion_heap.c
+++ b/drivers/staging/android/ion/ion_heap.c
@@ -178,7 +178,8 @@ size_t ion_heap_freelist_size(struct ion_heap *heap)
 	return size;
 }
 
-size_t ion_heap_freelist_drain(struct ion_heap *heap, size_t size)
+static size_t _ion_heap_freelist_drain(struct ion_heap *heap, size_t size,
+				       bool skip_pools)
 {
 	struct ion_buffer *buffer;
 	size_t total_drained = 0;
@@ -197,6 +198,8 @@ size_t ion_heap_freelist_drain(struct ion_heap *heap, size_t size)
 					  list);
 		list_del(&buffer->list);
 		heap->free_list_size -= buffer->size;
+		if (skip_pools)
+			buffer->private_flags |= ION_PRIV_FLAG_SHRINKER_FREE;
 		total_drained += buffer->size;
 		spin_unlock(&heap->free_lock);
 		ion_buffer_destroy(buffer);
@@ -207,6 +210,16 @@ size_t ion_heap_freelist_drain(struct ion_heap *heap, size_t size)
 	return total_drained;
 }
 
+size_t ion_heap_freelist_drain(struct ion_heap *heap, size_t size)
+{
+	return _ion_heap_freelist_drain(heap, size, false);
+}
+
+size_t ion_heap_freelist_shrink(struct ion_heap *heap, size_t size)
+{
+	return _ion_heap_freelist_drain(heap, size, true);
+}
+
 static int ion_heap_deferred_free(void *data)
 {
 	struct ion_heap *heap = data;
@@ -278,10 +291,10 @@ static unsigned long ion_heap_shrink_scan(struct shrinker *shrinker,
 
 	/*
 	 * shrink the free list first, no point in zeroing the memory if we're
-	 * just going to reclaim it
+	 * just going to reclaim it. Also, skip any possible page pooling.
 	 */
 	if (heap->flags & ION_HEAP_FLAG_DEFER_FREE)
-		freed = ion_heap_freelist_drain(heap, to_scan * PAGE_SIZE) /
+		freed = ion_heap_freelist_shrink(heap, to_scan * PAGE_SIZE) /
 				PAGE_SIZE;
 
 	to_scan -= freed;
diff --git a/drivers/staging/android/ion/ion_priv.h b/drivers/staging/android/ion/ion_priv.h
index a8d0ed7..fef0b6f 100644
--- a/drivers/staging/android/ion/ion_priv.h
+++ b/drivers/staging/android/ion/ion_priv.h
@@ -37,6 +37,7 @@ struct ion_buffer *ion_handle_buffer(struct ion_handle *handle);
  * @dev:		back pointer to the ion_device
  * @heap:		back pointer to the heap the buffer came from
  * @flags:		buffer specific flags
+ * @private_flags:	internal buffer specific flags
  * @size:		size of the buffer
  * @priv_virt:		private data to the buffer representable as
  *			a void *
@@ -65,6 +66,7 @@ struct ion_buffer {
 	struct ion_device *dev;
 	struct ion_heap *heap;
 	unsigned long flags;
+	unsigned long private_flags;
 	size_t size;
 	union {
 		void *priv_virt;
@@ -97,7 +99,11 @@ void ion_buffer_destroy(struct ion_buffer *buffer);
  * @map_user		map memory to userspace
  *
  * allocate, phys, and map_user return 0 on success, -errno on error.
- * map_dma and map_kernel return pointer on success, ERR_PTR on error.
+ * map_dma and map_kernel return pointer on success, ERR_PTR on
+ * error. @free will be called with ION_PRIV_FLAG_SHRINKER_FREE set in
+ * the buffer's private_flags when called from a shrinker. In that
+ * case, the pages being free'd must be truly free'd back to the
+ * system, not put in a page pool or otherwise cached.
  */
 struct ion_heap_ops {
 	int (*allocate) (struct ion_heap *heap,
@@ -122,6 +128,17 @@ struct ion_heap_ops {
 #define ION_HEAP_FLAG_DEFER_FREE (1 << 0)
 
 /**
+ * private flags - flags internal to ion
+ */
+/*
+ * Buffer is being freed from a shrinker function. Skip any possible
+ * heap-specific caching mechanism (e.g. page pools). Guarantees that
+ * any buffer storage that came from the system allocator will be
+ * returned to the system allocator.
+ */
+#define ION_PRIV_FLAG_SHRINKER_FREE (1 << 0)
+
+/**
  * struct ion_heap - represents a heap in the system
  * @node:		rb node to put the heap on the device's tree of heaps
  * @dev:		back pointer to the ion_device
@@ -257,6 +274,29 @@ void ion_heap_freelist_add(struct ion_heap *heap, struct ion_buffer *buffer);
 size_t ion_heap_freelist_drain(struct ion_heap *heap, size_t size);
 
 /**
+ * ion_heap_freelist_shrink - drain the deferred free
+ *				list, skipping any heap-specific
+ *				pooling or caching mechanisms
+ *
+ * @heap:		the heap
+ * @size:		amount of memory to drain in bytes
+ *
+ * Drains the indicated amount of memory from the deferred freelist
+ * immediately. Returns the total amount freed. The total freed may be
+ * higher depending on the size of the items in the list, or lower if
+ * there is insufficient total memory on the freelist.
+ *
+ * Unlike with @ion_heap_freelist_drain, don't put any pages back into
+ * page pools or otherwise cache the pages. Everything must be
+ * genuinely free'd back to the system. If you're free'ing from a
+ * shrinker you probably want to use this. Note that this relies on
+ * the heap.ops.free callback honoring the ION_PRIV_FLAG_SHRINKER_FREE
+ * flag.
+ */
+size_t ion_heap_freelist_shrink(struct ion_heap *heap,
+				size_t size);
+
+/**
  * ion_heap_freelist_size - returns the size of the freelist in bytes
  * @heap:		the heap
  */
diff --git a/drivers/staging/android/ion/ion_system_heap.c b/drivers/staging/android/ion/ion_system_heap.c
index f453d97..c923633 100644
--- a/drivers/staging/android/ion/ion_system_heap.c
+++ b/drivers/staging/android/ion/ion_system_heap.c
@@ -90,7 +90,7 @@ static void free_buffer_page(struct ion_system_heap *heap,
 {
 	bool cached = ion_buffer_cached(buffer);
 
-	if (!cached) {
+	if (!cached && !(buffer->private_flags & ION_PRIV_FLAG_SHRINKER_FREE)) {
 		struct ion_page_pool *pool = heap->pools[order_to_index(order)];
 		ion_page_pool_free(pool, page);
 	} else {
@@ -209,7 +209,7 @@ static void ion_system_heap_free(struct ion_buffer *buffer)
 
 	/* uncached pages come from the page pools, zero them before returning
 	   for security purposes (other allocations are zerod at alloc time */
-	if (!cached)
+	if (!cached && !(buffer->private_flags & ION_PRIV_FLAG_SHRINKER_FREE))
 		ion_heap_buffer_zero(buffer);
 
 	for_each_sg(table->sgl, sg, table->nents, i)
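
[Editorial note, for illustration only and not part of this patch: a
minimal usage sketch of the new ion_heap_freelist_shrink() from a
shrinker's scan callback, loosely mirroring the ion_heap_shrink_scan()
hunk above. The function name example_shrink_scan is invented; the
sketch assumes a heap registered with ION_HEAP_FLAG_DEFER_FREE whose
shrinker is embedded in struct ion_heap.]

	static unsigned long example_shrink_scan(struct shrinker *shrinker,
						 struct shrink_control *sc)
	{
		struct ion_heap *heap = container_of(shrinker,
						     struct ion_heap, shrinker);
		unsigned long freed = 0;

		if (heap->flags & ION_HEAP_FLAG_DEFER_FREE)
			/*
			 * Under memory pressure, drain the deferred
			 * freelist while skipping the page pools, so
			 * pages go straight back to the system instead
			 * of being cached for reuse.
			 */
			freed = ion_heap_freelist_shrink(heap,
					sc->nr_to_scan * PAGE_SIZE) / PAGE_SIZE;

		return freed;
	}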