From patchwork Wed Dec 13 21:26:04 2017
X-Patchwork-Submitter: John Stultz <john.stultz@linaro.org>
X-Patchwork-Id: 121863
Delivered-To: patches@linaro.org
From: John Stultz <john.stultz@linaro.org>
To: lkml
Cc: John Stultz, Laura Abbott, Sumit Semwal, Benjamin Gaignard,
	Archit Taneja, Greg KH, Daniel Vetter, Dmitry Shmidt, Todd Kjos,
	Amit Pundir
Subject: [PATCH v2] staging: ion: Fix ion_cma_heap allocations
Date: Wed, 13 Dec 2017 13:26:04 -0800
Message-Id: <1513200364-19523-1-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 2.7.4

In trying to add support for drm_hwcomposer to HiKey, I've needed to
utilize the ION CMA heap, and I've noticed problems with allocations on
newer kernels failing.

It seems back with 204f672255c2 ("ion: Use CMA APIs directly"), the
ion_cma_heap code was modified to use the CMA API, but kept the
arguments as buffer lengths rather than the number of pages. This
results in errors, as we don't have enough pages in CMA to satisfy the
exaggerated requests.

This patch converts the ion_cma_heap CMA API usage to properly request
pages. It also fixes a minor issue in the allocation error path, where
cma_release was called with the buffer->size value, which had not yet
been set.
Cc: Laura Abbott
Cc: Sumit Semwal
Cc: Benjamin Gaignard
Cc: Archit Taneja
Cc: Greg KH
Cc: Daniel Vetter
Cc: Dmitry Shmidt
Cc: Todd Kjos
Cc: Amit Pundir
Fixes: 204f672255c2 ("staging: android: ion: Use CMA APIs directly")
Acked-by: Laura Abbott
Signed-off-by: John Stultz
---
v2: Fix build errors when CONFIG_CMA_ALIGNMENT isn't defined

 drivers/staging/android/ion/ion_cma_heap.c | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

-- 
2.7.4

diff --git a/drivers/staging/android/ion/ion_cma_heap.c b/drivers/staging/android/ion/ion_cma_heap.c
index dd5545d..ff405c7 100644
--- a/drivers/staging/android/ion/ion_cma_heap.c
+++ b/drivers/staging/android/ion/ion_cma_heap.c
@@ -31,6 +31,12 @@ struct ion_cma_heap {
 
 #define to_cma_heap(x) container_of(x, struct ion_cma_heap, heap)
 
+#ifdef CONFIG_CMA_ALIGNMENT
+#define CMA_ALIGNMENT CONFIG_CMA_ALIGNMENT
+#else
+#define CMA_ALIGNMENT 8
+#endif
+
 /* ION CMA heap operations functions */
 static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
 			    unsigned long len,
@@ -39,9 +45,15 @@ static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
 	struct ion_cma_heap *cma_heap = to_cma_heap(heap);
 	struct sg_table *table;
 	struct page *pages;
+	unsigned long size = PAGE_ALIGN(len);
+	unsigned long nr_pages = size >> PAGE_SHIFT;
+	unsigned long align = get_order(size);
 	int ret;
 
-	pages = cma_alloc(cma_heap->cma, len, 0, GFP_KERNEL);
+	if (align > CMA_ALIGNMENT)
+		align = CMA_ALIGNMENT;
+
+	pages = cma_alloc(cma_heap->cma, nr_pages, align, GFP_KERNEL);
 	if (!pages)
 		return -ENOMEM;
 
@@ -53,7 +65,7 @@ static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
 	if (ret)
 		goto free_mem;
 
-	sg_set_page(table->sgl, pages, len, 0);
+	sg_set_page(table->sgl, pages, size, 0);
 
 	buffer->priv_virt = pages;
 	buffer->sg_table = table;
@@ -62,7 +74,7 @@ static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
 free_mem:
 	kfree(table);
 err:
-	cma_release(cma_heap->cma, pages, buffer->size);
+	cma_release(cma_heap->cma, pages, nr_pages);
 	return -ENOMEM;
 }
 
@@ -70,9 +82,10 @@ static void ion_cma_free(struct ion_buffer *buffer)
 {
 	struct ion_cma_heap *cma_heap = to_cma_heap(buffer->heap);
 	struct page *pages = buffer->priv_virt;
+	unsigned long nr_pages = PAGE_ALIGN(buffer->size) >> PAGE_SHIFT;
 
 	/* release memory */
-	cma_release(cma_heap->cma, pages, buffer->size);
+	cma_release(cma_heap->cma, pages, nr_pages);
 	/* release sg table */
 	sg_free_table(buffer->sg_table);
 	kfree(buffer->sg_table);