From patchwork Fri Dec 13 22:23:51 2013
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 22382
From: John Stultz <john.stultz@linaro.org>
To: LKML
Cc: Greg KH, Android Kernel Team, Sumit Semwal, Jesse Barker,
 Colin Cross, Rebecca Schultz Zavin, John Stultz
Subject: [PATCH 017/115] gpu: ion: Modify the system heap to try to allocate large/huge pages
Date: Fri, 13 Dec 2013 14:23:51 -0800
Message-Id: <1386973529-4884-18-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1386973529-4884-1-git-send-email-john.stultz@linaro.org>
References: <1386973529-4884-1-git-send-email-john.stultz@linaro.org>

From: Rebecca Schultz Zavin

On some systems there is a performance benefit to reducing TLB pressure
by minimizing the number of physically contiguous chunks in an
allocation. Try to allocate large (order 8) and medium (order 4) pages
first and fall back to single pages only when those allocations fail,
so a buffer ends up described by as few chunks as possible.

Signed-off-by: Rebecca Schultz Zavin
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz
---
 drivers/staging/android/ion/ion_system_heap.c | 116 ++++++++++++++++++++------
 1 file changed, 90 insertions(+), 26 deletions(-)

diff --git a/drivers/staging/android/ion/ion_system_heap.c b/drivers/staging/android/ion/ion_system_heap.c
index dceed5b..98711ce 100644
--- a/drivers/staging/android/ion/ion_system_heap.c
+++ b/drivers/staging/android/ion/ion_system_heap.c
@@ -14,7 +14,10 @@
  *
  */
 
+#include
+#include
 #include
+#include
 #include
 #include
 #include
@@ -22,6 +25,35 @@
 #include "ion.h"
 #include "ion_priv.h"
 
+struct page_info {
+	struct page *page;
+	unsigned long order;
+	struct list_head list;
+};
+
+static struct page_info *alloc_largest_available(unsigned long size)
+{
+	static unsigned int orders[] = {8, 4, 0};
+	struct page *page;
+	struct page_info *info;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(orders); i++) {
+		if (size < (1 << orders[i]) * PAGE_SIZE)
+			continue;
+		page = alloc_pages(GFP_HIGHUSER | __GFP_ZERO | __GFP_COMP |
+				   __GFP_NOWARN | __GFP_NORETRY, orders[i]);
+		if (!page)
+			continue;
+		split_page(page, orders[i]);
+		info = kmap(page);
+		info->page = page;
+		info->order = orders[i];
+		return info;
+	}
+	return NULL;
+}
+
 static int ion_system_heap_allocate(struct ion_heap *heap,
 				     struct ion_buffer *buffer,
 				     unsigned long size, unsigned long align,
@@ -29,30 +61,54 @@ static int ion_system_heap_allocate(struct ion_heap *heap,
 {
 	struct sg_table *table;
 	struct scatterlist *sg;
-	int i, j;
-	int npages = PAGE_ALIGN(size) / PAGE_SIZE;
+	int ret;
+	struct list_head pages;
+	struct page_info *info, *tmp_info;
+	int i;
+	long size_remaining = PAGE_ALIGN(size);
+
+	INIT_LIST_HEAD(&pages);
+	while (size_remaining > 0) {
+		info = alloc_largest_available(size_remaining);
+		if (!info)
+			goto err;
+		list_add_tail(&info->list, &pages);
+		size_remaining -= (1 << info->order) * PAGE_SIZE;
+	}
 
 	table = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
 	if (!table)
-		return -ENOMEM;
-	i = sg_alloc_table(table, npages, GFP_KERNEL);
-	if (i)
-		goto err0;
-	for_each_sg(table->sgl, sg, table->nents, i) {
-		struct page *page;
-		page = alloc_page(GFP_HIGHUSER | __GFP_ZERO);
-		if (!page)
-			goto err1;
-		sg_set_page(sg, page, PAGE_SIZE, 0);
+		goto err;
+
+	ret = sg_alloc_table(table, PAGE_ALIGN(size) / PAGE_SIZE, GFP_KERNEL);
+	if (ret)
+		goto err1;
+
+	sg = table->sgl;
+	list_for_each_entry_safe(info, tmp_info, &pages, list) {
+		struct page *page = info->page;
+		for (i = 0; i < (1 << info->order); i++) {
+			sg_set_page(sg, page + i, PAGE_SIZE, 0);
+			sg = sg_next(sg);
+		}
+		list_del(&info->list);
+		memset(info, 0, sizeof(struct page_info));
+		kunmap(page);
 	}
+
+	dma_sync_sg_for_device(NULL, table->sgl, table->nents,
+			       DMA_BIDIRECTIONAL);
+
 	buffer->priv_virt = table;
 	return 0;
 err1:
-	for_each_sg(table->sgl, sg, i, j)
-		__free_page(sg_page(sg));
-	sg_free_table(table);
-err0:
 	kfree(table);
+err:
+	list_for_each_entry(info, &pages, list) {
+		for (i = 0; i < (1 << info->order); i++)
+			__free_page(info->page + i);
+		kunmap(info->page);
+	}
 	return -ENOMEM;
 }
 
@@ -63,7 +119,7 @@ void ion_system_heap_free(struct ion_buffer *buffer)
 	struct sg_table *table = buffer->priv_virt;
 
 	for_each_sg(table->sgl, sg, table->nents, i)
-		__free_page(sg_page(sg));
+		__free_pages(sg_page(sg), get_order(sg_dma_len(sg)));
 	if (buffer->sg_table)
 		sg_free_table(buffer->sg_table);
 	kfree(buffer->sg_table);
@@ -85,22 +141,29 @@ void *ion_system_heap_map_kernel(struct ion_heap *heap,
 				 struct ion_buffer *buffer)
 {
 	struct scatterlist *sg;
-	int i;
+	int i, j;
 	void *vaddr;
 	pgprot_t pgprot;
 	struct sg_table *table = buffer->priv_virt;
-	struct page **pages = kmalloc(sizeof(struct page *) * table->nents,
-				      GFP_KERNEL);
-
-	for_each_sg(table->sgl, sg, table->nents, i)
-		pages[i] = sg_page(sg);
+	int npages = PAGE_ALIGN(buffer->size) / PAGE_SIZE;
+	struct page **pages = kzalloc(sizeof(struct page *) * npages,
+				      GFP_KERNEL);
+	struct page **tmp = pages;
 
 	if (buffer->flags & ION_FLAG_CACHED)
 		pgprot = PAGE_KERNEL;
 	else
 		pgprot = pgprot_writecombine(PAGE_KERNEL);
 
-	vaddr = vmap(pages, table->nents, VM_MAP, pgprot);
+	for_each_sg(table->sgl, sg, table->nents, i) {
+		int npages_this_entry = PAGE_ALIGN(sg_dma_len(sg)) / PAGE_SIZE;
+		struct page *page = sg_page(sg);
+		BUG_ON(i >= npages);
+		for (j = 0; j < npages_this_entry; j++) {
+			*(tmp++) = page++;
+		}
+	}
+	vaddr = vmap(pages, npages, VM_MAP, pgprot);
 	kfree(pages);
 
 	return vaddr;
@@ -126,8 +189,9 @@ int ion_system_heap_map_user(struct ion_heap *heap, struct ion_buffer *buffer,
 			offset--;
 			continue;
 		}
-		vm_insert_page(vma, addr, sg_page(sg));
-		addr += PAGE_SIZE;
+		remap_pfn_range(vma, addr, page_to_pfn(sg_page(sg)),
+				sg_dma_len(sg), vma->vm_page_prot);
+		addr += sg_dma_len(sg);
 	}
 	return 0;
 }
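
The chunk-count savings are easy to see outside the kernel. Below is a
minimal user-space C sketch of the same greedy, largest-order-first
strategy as alloc_largest_available() above, assuming 4 KiB pages and
the patch's {8, 4, 0} order list; the buffer size and the helper name
largest_fitting_order() are made up for illustration:

#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Same order list the patch uses: 1 MiB, 64 KiB, then 4 KiB chunks. */
static const unsigned int orders[] = {8, 4, 0};

/* Largest order whose chunk still fits in the remaining size, or -1. */
static int largest_fitting_order(unsigned long remaining)
{
	unsigned int i;

	for (i = 0; i < sizeof(orders) / sizeof(orders[0]); i++)
		if (remaining >= (PAGE_SIZE << orders[i]))
			return (int)orders[i];
	return -1;
}

int main(void)
{
	unsigned long size = 5 * 1024 * 1024 + 48 * 1024;	/* hypothetical */
	unsigned long remaining = (size + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
	unsigned long chunks = 0;

	/* Greedy loop mirroring the while () in ion_system_heap_allocate(). */
	while (remaining > 0) {
		int order = largest_fitting_order(remaining);

		if (order < 0)
			break;	/* unreachable once remaining is page-aligned */
		remaining -= PAGE_SIZE << order;
		chunks++;
	}

	printf("%lu bytes -> %lu chunks (vs. %lu order-0 pages)\n",
	       size, chunks, (size + PAGE_SIZE - 1) / PAGE_SIZE);
	return 0;
}

For 5 MiB + 48 KiB this prints 17 chunks (5 order-8 plus 12 order-0)
instead of 1292 single pages. The kernel side pairs the high orders
with __GFP_NORETRY | __GFP_NOWARN so a failed large allocation falls
through to the next order quickly instead of retrying aggressively.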