From patchwork Sat Dec 14 03:26:21 2013
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 22471
From: John Stultz
To: LKML
Cc: Greg KH, Android Kernel Team, Sumit Semwal, Jesse Barker, Colin Cross,
 John Stultz
Subject: [PATCH 101/115] ion: optimize ion_heap_buffer_zero
Date: Fri, 13 Dec 2013 19:26:21 -0800
Message-Id: <1386991595-6251-9-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1386991595-6251-1-git-send-email-john.stultz@linaro.org>
References: <1386973529-4884-1-git-send-email-john.stultz@linaro.org>
 <1386991595-6251-1-git-send-email-john.stultz@linaro.org>

From: Colin Cross

ion_heap_buffer_zero can spend a long time in unmap_kernel_range if it has
to broadcast a TLB flush to every CPU for every page. Modify it to batch
pages into a larger region to clear using a single mapping. This may cause
the mapping size to change if the buffer size is not a multiple of the
mapping size, so switch to allocating the address space for each chunk.
This allows us to use vm_map_ram to handle the allocation and mapping
together. The number of pages to zero using a single mapping is set to 32
to hit the fastpath in vm_map_ram.
Signed-off-by: Colin Cross
Signed-off-by: John Stultz
---
 drivers/staging/android/ion/ion_heap.c | 36 +++++++++++++++++++++-------------
 1 file changed, 22 insertions(+), 14 deletions(-)

diff --git a/drivers/staging/android/ion/ion_heap.c b/drivers/staging/android/ion/ion_heap.c
index 0a5cea0..ce31561 100644
--- a/drivers/staging/android/ion/ion_heap.c
+++ b/drivers/staging/android/ion/ion_heap.c
@@ -100,40 +100,48 @@ int ion_heap_map_user(struct ion_heap *heap, struct ion_buffer *buffer,
 	return 0;
 }
 
+static int ion_heap_clear_pages(struct page **pages, int num, pgprot_t pgprot)
+{
+	void *addr = vm_map_ram(pages, num, -1, pgprot);
+	if (!addr)
+		return -ENOMEM;
+	memset(addr, 0, PAGE_SIZE * num);
+	vm_unmap_ram(addr, num);
+
+	return 0;
+}
+
 int ion_heap_buffer_zero(struct ion_buffer *buffer)
 {
 	struct sg_table *table = buffer->sg_table;
 	pgprot_t pgprot;
 	struct scatterlist *sg;
-	struct vm_struct *vm_struct;
 	int i, j, ret = 0;
+	struct page *pages[32];
+	int k = 0;
 
 	if (buffer->flags & ION_FLAG_CACHED)
 		pgprot = PAGE_KERNEL;
 	else
 		pgprot = pgprot_writecombine(PAGE_KERNEL);
 
-	vm_struct = get_vm_area(PAGE_SIZE, VM_ALLOC);
-	if (!vm_struct)
-		return -ENOMEM;
-
 	for_each_sg(table->sgl, sg, table->nents, i) {
 		struct page *page = sg_page(sg);
 		unsigned long len = sg->length;
 
 		for (j = 0; j < len / PAGE_SIZE; j++) {
-			struct page *sub_page = page + j;
-			struct page **pages = &sub_page;
-			ret = map_vm_area(vm_struct, pgprot, &pages);
-			if (ret)
-				goto end;
-			memset(vm_struct->addr, 0, PAGE_SIZE);
-			unmap_kernel_range((unsigned long)vm_struct->addr,
-					   PAGE_SIZE);
+			pages[k++] = page + j;
+			if (k == ARRAY_SIZE(pages)) {
+				ret = ion_heap_clear_pages(pages, k, pgprot);
+				if (ret)
+					goto end;
+				k = 0;
+			}
 		}
+		if (k)
+			ret = ion_heap_clear_pages(pages, k, pgprot);
 	}
 end:
-	free_vm_area(vm_struct);
 	return ret;
 }
 
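
A note on the chunk size of 32: if memory serves, vm_map_ram takes its
per-CPU vmap-block fastpath (vb_alloc) for requests of at most
VMAP_MAX_ALLOC pages, defined as BITS_PER_LONG in mm/vmalloc.c, so 32
stays on the fastpath on both 32-bit and 64-bit kernels. For illustration
only, below is a minimal sketch of the same batch-and-zero pattern outside
the ion code; zero_pages_batched is a hypothetical name and not part of
this patch.

/*
 * Illustrative sketch only (not from the patch): zero an arbitrary array
 * of pages in chunks of up to 32, so each vm_map_ram()/vm_unmap_ram()
 * pair covers many pages instead of mapping and flushing one page at a
 * time.
 */
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/string.h>
#include <linux/vmalloc.h>

static int zero_pages_batched(struct page **pages, int npages,
			      pgprot_t pgprot)
{
	int done = 0;

	while (done < npages) {
		/* 32 keeps vm_map_ram() on its per-CPU fastpath */
		int chunk = min(npages - done, 32);
		void *addr = vm_map_ram(pages + done, chunk, -1, pgprot);

		if (!addr)
			return -ENOMEM;
		memset(addr, 0, PAGE_SIZE * chunk);
		vm_unmap_ram(addr, chunk);
		done += chunk;
	}
	return 0;
}

The win over the removed code is that the old loop performed a
get_vm_area/map_vm_area/unmap_kernel_range cycle, and therefore a
broadcast TLB flush, for every single page; with the chunked vm_map_ram
approach each mapping and unmapping covers up to 32 pages.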