From patchwork Fri Dec 13 22:24:06 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: John Stultz <john.stultz@linaro.org>
X-Patchwork-Id: 22397
From: John Stultz <john.stultz@linaro.org>
To: LKML
Cc: Greg KH, Android Kernel Team, Sumit Semwal, Jesse Barker,
	Colin Cross, Rebecca Schultz Zavin, John Stultz
Subject: [PATCH 032/115] gpu: ion: optimize system heap for non fault buffers
Date: Fri, 13 Dec 2013 14:24:06 -0800
Message-Id: <1386973529-4884-33-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1386973529-4884-1-git-send-email-john.stultz@linaro.org>
References: <1386973529-4884-1-git-send-email-john.stultz@linaro.org>

From: Rebecca Schultz Zavin

If a buffer's user mappings are not going to be faulted in, it need not
be allocated page-wise. We can optimize this common case by allocating
an sglist of larger chunks rather than creating an entry for each page
in the allocation.

Signed-off-by: Rebecca Schultz Zavin
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz
---
 drivers/staging/android/ion/ion.c             | 21 ++++++++------
 drivers/staging/android/ion/ion_priv.h        |  9 ++++++
 drivers/staging/android/ion/ion_system_heap.c | 40 ++++++++++++++++++++-------
 3 files changed, 51 insertions(+), 19 deletions(-)

diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index d1c7b84..6c589cb 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -100,6 +100,12 @@ struct ion_handle {
 	unsigned int kmap_cnt;
 };
 
+bool ion_buffer_fault_user_mappings(struct ion_buffer *buffer)
+{
+	return ((buffer->flags & ION_FLAG_CACHED) &&
+		!(buffer->flags & ION_FLAG_CACHED_NEEDS_SYNC));
+}
+
 /* this function should only be called while dev->lock is held */
 static void ion_buffer_add(struct ion_device *dev,
 			   struct ion_buffer *buffer)
@@ -145,6 +151,7 @@ static struct ion_buffer *ion_buffer_create(struct ion_heap *heap,
 		return ERR_PTR(-ENOMEM);
 
 	buffer->heap = heap;
+	buffer->flags = flags;
 	kref_init(&buffer->ref);
 
 	ret = heap->ops->allocate(heap, buffer, len, align, flags);
@@ -155,7 +162,6 @@ static struct ion_buffer *ion_buffer_create(struct ion_heap *heap,
 
 	buffer->dev = dev;
 	buffer->size = len;
-	buffer->flags = flags;
 
 	table = heap->ops->map_dma(heap, buffer);
 	if (IS_ERR_OR_NULL(table)) {
@@ -164,14 +170,13 @@ static struct ion_buffer *ion_buffer_create(struct ion_heap *heap,
 		return ERR_PTR(PTR_ERR(table));
 	}
 	buffer->sg_table = table;
-	if (buffer->flags & ION_FLAG_CACHED &&
-	    !(buffer->flags & ION_FLAG_CACHED_NEEDS_SYNC)) {
+	if (ion_buffer_fault_user_mappings(buffer)) {
 		for_each_sg(buffer->sg_table->sgl, sg, buffer->sg_table->nents,
 			    i) {
 			if (sg_dma_len(sg) == PAGE_SIZE)
 				continue;
-			pr_err("%s: cached mappings must have pagewise "
-			       "sg_lists\n", __func__);
+			pr_err("%s: cached mappings that will be faulted in "
+			       "must have pagewise sg_lists\n", __func__);
 			ret = -EINVAL;
 			goto err;
 		}
@@ -764,8 +769,7 @@ static void ion_buffer_sync_for_device(struct ion_buffer *buffer,
 	pr_debug("%s: syncing for device %s\n", __func__,
 		 dev ? dev_name(dev) : "null");
 
-	if (!(buffer->flags & ION_FLAG_CACHED) ||
-	    (buffer->flags & ION_FLAG_CACHED_NEEDS_SYNC))
+	if (!ion_buffer_fault_user_mappings(buffer))
 		return;
 
 	mutex_lock(&buffer->lock);
@@ -855,8 +859,7 @@ static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
 		return -EINVAL;
 	}
 
-	if (buffer->flags & ION_FLAG_CACHED &&
-	    !(buffer->flags & ION_FLAG_CACHED_NEEDS_SYNC)) {
+	if (ion_buffer_fault_user_mappings(buffer)) {
 		vma->vm_private_data = buffer;
 		vma->vm_ops = &ion_vma_ops;
 		ion_vm_open(vma);
diff --git a/drivers/staging/android/ion/ion_priv.h b/drivers/staging/android/ion/ion_priv.h
index dabe1e8..1027ef4 100644
--- a/drivers/staging/android/ion/ion_priv.h
+++ b/drivers/staging/android/ion/ion_priv.h
@@ -132,6 +132,15 @@ struct ion_heap {
 };
 
 /**
+ * ion_buffer_fault_user_mappings - fault in user mappings of this buffer
+ * @buffer:		buffer
+ *
+ * indicates whether userspace mappings of this buffer will be faulted
+ * in, this can affect how buffers are allocated from the heap.
+ */
+bool ion_buffer_fault_user_mappings(struct ion_buffer *buffer);
+
+/**
  * ion_device_create - allocates and returns an ion device
  * @custom_ioctl:	arch specific ioctl function if applicable
  *
diff --git a/drivers/staging/android/ion/ion_system_heap.c b/drivers/staging/android/ion/ion_system_heap.c
index 2fb9a64..ef8afc7 100644
--- a/drivers/staging/android/ion/ion_system_heap.c
+++ b/drivers/staging/android/ion/ion_system_heap.c
@@ -31,7 +31,8 @@ struct page_info {
 	struct list_head list;
 };
 
-static struct page_info *alloc_largest_available(unsigned long size)
+static struct page_info *alloc_largest_available(unsigned long size,
+						 bool split_pages)
 {
 	static unsigned int orders[] = {8, 4, 0};
 	struct page *page;
@@ -45,7 +46,8 @@ static struct page_info *alloc_largest_available(unsigned long size)
 				     __GFP_NOWARN | __GFP_NORETRY, orders[i]);
 		if (!page)
 			continue;
-		split_page(page, orders[i]);
+		if (split_pages)
+			split_page(page, orders[i]);
 		info = kmalloc(sizeof(struct page_info *), GFP_KERNEL);
 		info->page = page;
 		info->order = orders[i];
@@ -64,35 +66,49 @@ static int ion_system_heap_allocate(struct ion_heap *heap,
 	int ret;
 	struct list_head pages;
 	struct page_info *info, *tmp_info;
-	int i;
+	int i = 0;
 	long size_remaining = PAGE_ALIGN(size);
+	bool split_pages = ion_buffer_fault_user_mappings(buffer);
+
 
 	INIT_LIST_HEAD(&pages);
 	while (size_remaining > 0) {
-		info = alloc_largest_available(size_remaining);
+		info = alloc_largest_available(size_remaining, split_pages);
 		if (!info)
 			goto err;
 		list_add_tail(&info->list, &pages);
 		size_remaining -= (1 << info->order) * PAGE_SIZE;
+		i++;
 	}
 
 	table = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
 	if (!table)
 		goto err;
 
-	ret = sg_alloc_table(table, PAGE_ALIGN(size) / PAGE_SIZE, GFP_KERNEL);
+	if (split_pages)
+		ret = sg_alloc_table(table, PAGE_ALIGN(size) / PAGE_SIZE,
+				     GFP_KERNEL);
+	else
+		ret = sg_alloc_table(table, i, GFP_KERNEL);
+
 	if (ret)
 		goto err1;
 
 	sg = table->sgl;
 	list_for_each_entry_safe(info, tmp_info, &pages, list) {
 		struct page *page = info->page;
-		for (i = 0; i < (1 << info->order); i++) {
-			sg_set_page(sg, page + i, PAGE_SIZE, 0);
+
+		if (split_pages) {
+			for (i = 0; i < (1 << info->order); i++) {
+				sg_set_page(sg, page + i, PAGE_SIZE, 0);
+				sg = sg_next(sg);
+			}
+		} else {
+			sg_set_page(sg, page, (1 << info->order) * PAGE_SIZE,
+				    0);
 			sg = sg_next(sg);
 		}
 		list_del(&info->list);
-		memset(info, 0, sizeof(struct page_info));
 		kfree(info);
 	}
 
@@ -105,8 +121,12 @@ err1:
 	kfree(table);
err:
 	list_for_each_entry(info, &pages, list) {
-		for (i = 0; i < (1 << info->order); i++)
-			__free_page(info->page + i);
+		if (split_pages)
+			for (i = 0; i < (1 << info->order); i++)
+				__free_page(info->page + i);
+		else
+			__free_pages(info->page, info->order);
+
 		kfree(info);
 	}
 	return -ENOMEM;
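
---
Postscript (not part of the patch): a minimal self-contained sketch of the
optimization the commit message describes, for readers who want the idea
without the surrounding ion plumbing. struct chunk and build_sg_table() are
invented stand-ins for ion's struct page_info and the allocation loop in
ion_system_heap_allocate(); the scatterlist calls themselves
(sg_alloc_table(), sg_set_page(), sg_next()) are the real kernel APIs the
patch uses.

/* Illustrative sketch only -- mirrors the two sg_table layouts above. */
#include <linux/gfp.h>
#include <linux/scatterlist.h>

struct chunk {			/* stands in for struct page_info */
	struct page *page;	/* first page of a high-order allocation */
	unsigned int order;	/* allocation covers 1 << order pages */
};

static int build_sg_table(struct sg_table *table, struct chunk *chunks,
			  int nchunks, bool split_pages)
{
	struct scatterlist *sg;
	int i, j, nents = 0, ret;

	if (split_pages) {
		/* faulted-in buffers: one entry per PAGE_SIZE page */
		for (i = 0; i < nchunks; i++)
			nents += 1 << chunks[i].order;
	} else {
		/* non-fault buffers: one entry per high-order chunk */
		nents = nchunks;
	}

	ret = sg_alloc_table(table, nents, GFP_KERNEL);
	if (ret)
		return ret;

	sg = table->sgl;
	for (i = 0; i < nchunks; i++) {
		if (split_pages) {
			for (j = 0; j < (1 << chunks[i].order); j++) {
				sg_set_page(sg, chunks[i].page + j,
					    PAGE_SIZE, 0);
				sg = sg_next(sg);
			}
		} else {
			sg_set_page(sg, chunks[i].page,
				    (1 << chunks[i].order) * PAGE_SIZE, 0);
			sg = sg_next(sg);
		}
	}
	return 0;
}

With orders {8, 4, 0}, a 1 MB buffer satisfied by a single order-8
allocation needs one scatterlist entry on the non-fault path instead of 256
page-sized entries, shrinking both the sg_table allocation and every later
walk of the list.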