From patchwork Fri Dec 13 22:24:32 2013
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 22423
From: John Stultz
To: LKML
Cc: Greg KH, Android Kernel Team, Sumit Semwal, Jesse Barker,
    Colin Cross, Rebecca Schultz Zavin, John Stultz
Subject: [PATCH 058/115] gpu: ion: Refactor the code to zero buffers
Date: Fri, 13 Dec 2013 14:24:32 -0800
Message-Id: <1386973529-4884-59-git-send-email-john.stultz@linaro.org>
In-Reply-To: <1386973529-4884-1-git-send-email-john.stultz@linaro.org>
References: <1386973529-4884-1-git-send-email-john.stultz@linaro.org>

From: Rebecca Schultz Zavin

Refactor the code in the system heap used to map and zero the buffers
into a separate utility so it can be called from other heaps. Use it
from the chunk heap.
Signed-off-by: Rebecca Schultz Zavin
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz
---
 drivers/staging/android/ion/ion_chunk_heap.c  |  2 ++
 drivers/staging/android/ion/ion_heap.c        | 37 +++++++++++++++++++++++++++
 drivers/staging/android/ion/ion_priv.h        |  1 +
 drivers/staging/android/ion/ion_system_heap.c | 34 ++++++------------------
 4 files changed, 48 insertions(+), 26 deletions(-)

diff --git a/drivers/staging/android/ion/ion_chunk_heap.c b/drivers/staging/android/ion/ion_chunk_heap.c
index 1d47409..5582909 100644
--- a/drivers/staging/android/ion/ion_chunk_heap.c
+++ b/drivers/staging/android/ion/ion_chunk_heap.c
@@ -101,6 +101,8 @@ static void ion_chunk_heap_free(struct ion_buffer *buffer)
 	struct scatterlist *sg;
 	int i;
 
+	ion_heap_buffer_zero(buffer);
+
 	for_each_sg(table->sgl, sg, table->nents, i) {
 		__dma_page_cpu_to_dev(sg_page(sg), 0, sg_dma_len(sg),
 				      DMA_BIDIRECTIONAL);
diff --git a/drivers/staging/android/ion/ion_heap.c b/drivers/staging/android/ion/ion_heap.c
index a9bf52e..11066a2 100644
--- a/drivers/staging/android/ion/ion_heap.c
+++ b/drivers/staging/android/ion/ion_heap.c
@@ -93,6 +93,43 @@ int ion_heap_map_user(struct ion_heap *heap, struct ion_buffer *buffer,
 	return 0;
 }
 
+int ion_heap_buffer_zero(struct ion_buffer *buffer)
+{
+	struct sg_table *table = buffer->sg_table;
+	pgprot_t pgprot;
+	struct scatterlist *sg;
+	struct vm_struct *vm_struct;
+	int i, j, ret = 0;
+
+	if (buffer->flags & ION_FLAG_CACHED)
+		pgprot = PAGE_KERNEL;
+	else
+		pgprot = pgprot_writecombine(PAGE_KERNEL);
+
+	vm_struct = get_vm_area(PAGE_SIZE, VM_ALLOC);
+	if (!vm_struct)
+		return -ENOMEM;
+
+	for_each_sg(table->sgl, sg, table->nents, i) {
+		struct page *page = sg_page(sg);
+		unsigned long len = sg_dma_len(sg);
+
+		for (j = 0; j < len / PAGE_SIZE; j++) {
+			struct page *sub_page = page + j;
+			struct page **pages = &sub_page;
+			ret = map_vm_area(vm_struct, pgprot, &pages);
+			if (ret)
+				goto end;
+			memset(vm_struct->addr, 0, PAGE_SIZE);
+			unmap_kernel_range((unsigned long)vm_struct->addr,
+					   PAGE_SIZE);
+		}
+	}
+end:
+	free_vm_area(vm_struct);
+	return ret;
+}
+
 struct ion_heap *ion_heap_create(struct ion_platform_heap *heap_data)
 {
 	struct ion_heap *heap = NULL;
diff --git a/drivers/staging/android/ion/ion_priv.h b/drivers/staging/android/ion/ion_priv.h
index eef4de8..cfb4264 100644
--- a/drivers/staging/android/ion/ion_priv.h
+++ b/drivers/staging/android/ion/ion_priv.h
@@ -185,6 +185,7 @@ void *ion_heap_map_kernel(struct ion_heap *, struct ion_buffer *);
 void ion_heap_unmap_kernel(struct ion_heap *, struct ion_buffer *);
 int ion_heap_map_user(struct ion_heap *, struct ion_buffer *,
 		      struct vm_area_struct *);
+int ion_heap_buffer_zero(struct ion_buffer *buffer);
 
 /**
diff --git a/drivers/staging/android/ion/ion_system_heap.c b/drivers/staging/android/ion/ion_system_heap.c
index e54307f..3ca704e 100644
--- a/drivers/staging/android/ion/ion_system_heap.c
+++ b/drivers/staging/android/ion/ion_system_heap.c
@@ -91,7 +91,7 @@ static struct page *alloc_buffer_page(struct ion_system_heap *heap,
 static void free_buffer_page(struct ion_system_heap *heap,
 			     struct ion_buffer *buffer, struct page *page,
-			     unsigned int order, struct vm_struct *vm_struct)
+			     unsigned int order)
 {
 	bool cached = ion_buffer_cached(buffer);
 	bool split_pages = ion_buffer_fault_user_mappings(buffer);
@@ -99,20 +99,6 @@ static void free_buffer_page(struct ion_system_heap *heap,
 	if (!cached) {
 		struct ion_page_pool *pool = heap->pools[order_to_index(order)];
 
-		/* zero the pages before returning them to the pool for
-		   security.  This uses vmap as we want to set the pgprot so
-		   the writes to occur to noncached mappings, as the pool's
-		   purpose is to keep the pages out of the cache */
-		for (i = 0; i < (1 << order); i++) {
-			struct page *sub_page = page + i;
-			struct page **pages = &sub_page;
-			map_vm_area(vm_struct,
-				    pgprot_writecombine(PAGE_KERNEL),
-				    &pages);
-			memset(vm_struct->addr, 0, PAGE_SIZE);
-			unmap_kernel_range((unsigned long)vm_struct->addr,
-					   PAGE_SIZE);
-		}
 		ion_page_pool_free(pool, page);
 	} else if (split_pages) {
 		for (i = 0; i < (1 << order); i++)
@@ -167,8 +153,6 @@ static int ion_system_heap_allocate(struct ion_heap *heap,
 	long size_remaining = PAGE_ALIGN(size);
 	unsigned int max_order = orders[0];
 	bool split_pages = ion_buffer_fault_user_mappings(buffer);
-	struct vm_struct *vm_struct;
-	pte_t *ptes;
 
 	INIT_LIST_HEAD(&pages);
 	while (size_remaining > 0) {
@@ -216,13 +200,10 @@ static int ion_system_heap_allocate(struct ion_heap *heap,
 err1:
 	kfree(table);
 err:
-	vm_struct = get_vm_area(PAGE_SIZE, &ptes);
 	list_for_each_entry(info, &pages, list) {
-		free_buffer_page(sys_heap, buffer, info->page, info->order,
-				 vm_struct);
+		free_buffer_page(sys_heap, buffer, info->page, info->order);
 		kfree(info);
 	}
-	free_vm_area(vm_struct);
 	return -ENOMEM;
 }
 
@@ -233,18 +214,19 @@ void ion_system_heap_free(struct ion_buffer *buffer)
 							struct ion_system_heap,
 							heap);
 	struct sg_table *table = buffer->sg_table;
+	bool cached = ion_buffer_cached(buffer);
 	struct scatterlist *sg;
 	LIST_HEAD(pages);
-	struct vm_struct *vm_struct;
-	pte_t *ptes;
 	int i;
 
-	vm_struct = get_vm_area(PAGE_SIZE, &ptes);
+	/* uncached pages come from the page pools, zero them before returning
+	   for security purposes (other allocations are zerod at alloc time */
+	if (!cached)
+		ion_heap_buffer_zero(buffer);
 
 	for_each_sg(table->sgl, sg, table->nents, i)
 		free_buffer_page(sys_heap, buffer, sg_page(sg),
-				 get_order(sg_dma_len(sg)), vm_struct);
-	free_vm_area(vm_struct);
+				 get_order(sg_dma_len(sg)));
 	sg_free_table(table);
 	kfree(table);
 }