From patchwork Thu Aug 30 02:39:27 2012
X-Patchwork-Submitter: Zhangfei Gao
X-Patchwork-Id: 11034
From: Zhangfei Gao <zhangfei.gao@marvell.com>
To: Rebecca Schultz Zavin, "linaro-mm-sig@lists.linaro.org", Haojian Zhuang
Date: Thu, 30 Aug 2012 10:39:27 +0800
Message-Id: <1346294367-23519-1-git-send-email-zhangfei.gao@marvell.com>
X-Mailer: git-send-email 1.7.1
Cc: Zhangfei Gao
Subject: [Linaro-mm-sig] [PATCH 2/3] gpu: ion: carveout_heap page-wise cache flush

Extend the dirty bit to per-PAGE_SIZE granularity.

Page-wise cache flush is supported and only takes effect on dirty buffers.

Signed-off-by: Zhangfei Gao <zhangfei.gao@marvell.com>
---
 drivers/gpu/ion/ion_carveout_heap.c |   23 +++++++++++++++++------
 1 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/ion/ion_carveout_heap.c b/drivers/gpu/ion/ion_carveout_heap.c
index 13f6e8d..60e97e5 100644
--- a/drivers/gpu/ion/ion_carveout_heap.c
+++ b/drivers/gpu/ion/ion_carveout_heap.c
@@ -88,25 +88,36 @@ struct sg_table *ion_carveout_heap_map_dma(struct ion_heap *heap,
 					      struct ion_buffer *buffer)
 {
 	struct sg_table *table;
-	int ret;
+	struct scatterlist *sg;
+	int ret, i;
+	int nents = PAGE_ALIGN(buffer->size) / PAGE_SIZE;
+	struct page *page = phys_to_page(buffer->priv_phys);
 
 	table = kzalloc(sizeof(struct sg_table), GFP_KERNEL);
 	if (!table)
 		return ERR_PTR(-ENOMEM);
-	ret = sg_alloc_table(table, 1, GFP_KERNEL);
+
+	ret = sg_alloc_table(table, nents, GFP_KERNEL);
 	if (ret) {
 		kfree(table);
 		return ERR_PTR(ret);
 	}
-	sg_set_page(table->sgl, phys_to_page(buffer->priv_phys), buffer->size,
-		    0);
+
+	sg = table->sgl;
+	for (i = 0; i < nents; i++) {
+		sg_set_page(sg, page + i, PAGE_SIZE, 0);
+		sg = sg_next(sg);
+	}
+
 	return table;
 }
 
 void ion_carveout_heap_unmap_dma(struct ion_heap *heap,
 				 struct ion_buffer *buffer)
 {
-	sg_free_table(buffer->sg_table);
+	if (buffer->sg_table)
+		sg_free_table(buffer->sg_table);
+	kfree(buffer->sg_table);
 }
 
 void *ion_carveout_heap_map_kernel(struct ion_heap *heap,
@@ -157,7 +168,7 @@ struct ion_heap *ion_carveout_heap_create(struct ion_platform_heap *heap_data)
 	if (!carveout_heap)
 		return ERR_PTR(-ENOMEM);
 
-	carveout_heap->pool = gen_pool_create(12, -1);
+	carveout_heap->pool = gen_pool_create(PAGE_SHIFT, -1);
 	if (!carveout_heap->pool) {
 		kfree(carveout_heap);
 		return ERR_PTR(-ENOMEM);
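
For context, a minimal sketch (not part of this patch) of how the ion core
side could consume a per-page sg_table and flush only dirty pages before
DMA. The helper name ion_buffer_sync_for_device and the buffer->dirty
bitmap and buffer->lock fields are assumptions about the companion patches
in this series, not code quoted from them:

/*
 * Illustrative sketch only; assumes <linux/scatterlist.h>,
 * <linux/dma-mapping.h> and an ion_buffer carrying a per-page
 * dirty bitmap (one bit per sg entry, i.e. per PAGE_SIZE).
 */
static void ion_buffer_sync_for_device(struct ion_buffer *buffer,
				       struct device *dev,
				       enum dma_data_direction dir)
{
	struct scatterlist *sg;
	int i;

	mutex_lock(&buffer->lock);
	for_each_sg(buffer->sg_table->sgl, sg, buffer->sg_table->nents, i) {
		/* each entry covers exactly one page, so the sg index
		 * doubles as the dirty-bit index */
		if (!test_bit(i, buffer->dirty))
			continue;
		dma_sync_sg_for_device(dev, sg, 1, dir);
		clear_bit(i, buffer->dirty);
	}
	mutex_unlock(&buffer->lock);
}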