From patchwork Mon Sep 26 12:32:39 2016
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 77033
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-kernel@vger.kernel.org, bskeggs@redhat.com, airlied@linux.ie,
	dri-devel@lists.freedesktop.org, nouveau@lists.freedesktop.org
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH v4 2/3] drm/nouveau/fb/gf100: defer DMA mapping of scratch page to init() hook
Date: Mon, 26 Sep 2016 05:32:39 -0700
Message-Id: <1474893160-12321-3-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1474893160-12321-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1474893160-12321-1-git-send-email-ard.biesheuvel@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org
The 100c10 scratch page is mapped using dma_map_page() before the TTM
layer has had a chance to set the DMA mask. This means we are still
running with the default mask of 32 bits when this code executes, which
causes problems for platforms with no memory below 4 GB (such as AMD
Seattle).

So move the dma_map_page() to the .init hook, which executes after the
DMA mask has been set.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 drivers/gpu/drm/nouveau/nvkm/subdev/fb/gf100.c | 26 ++++++++++++++------
 1 file changed, 18 insertions(+), 8 deletions(-)

-- 
2.7.4

diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gf100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gf100.c
index 76433cc66fff..5c8132873e60 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gf100.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gf100.c
@@ -93,7 +93,18 @@ gf100_fb_init(struct nvkm_fb *base)
 	struct gf100_fb *fb = gf100_fb(base);
 	struct nvkm_device *device = fb->base.subdev.device;
 
-	if (fb->r100c10_page)
+	if (!fb->r100c10) {
+		dma_addr_t addr = dma_map_page(device->dev, fb->r100c10_page, 0,
+					       PAGE_SIZE, DMA_BIDIRECTIONAL);
+		if (!dma_mapping_error(device->dev, addr)) {
+			fb->r100c10 = addr;
+		} else {
+			nvkm_warn(&fb->base.subdev,
+				  "dma_map_page() failed on 100c10 page\n");
+		}
+	}
+
+	if (fb->r100c10)
 		nvkm_wr32(device, 0x100c10, fb->r100c10 >> 8);
 }
 
@@ -103,12 +114,13 @@ gf100_fb_dtor(struct nvkm_fb *base)
 	struct gf100_fb *fb = gf100_fb(base);
 	struct nvkm_device *device = fb->base.subdev.device;
 
-	if (fb->r100c10_page) {
+	if (fb->r100c10) {
 		dma_unmap_page(device->dev, fb->r100c10, PAGE_SIZE,
 			       DMA_BIDIRECTIONAL);
-		__free_page(fb->r100c10_page);
 	}
 
+	__free_page(fb->r100c10_page);
+
 	return fb;
 }
 
@@ -124,11 +136,9 @@ gf100_fb_new_(const struct nvkm_fb_func *func, struct nvkm_device *device,
 	*pfb = &fb->base;
 
 	fb->r100c10_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
-	if (fb->r100c10_page) {
-		fb->r100c10 = dma_map_page(device->dev, fb->r100c10_page, 0,
-					   PAGE_SIZE, DMA_BIDIRECTIONAL);
-		if (dma_mapping_error(device->dev, fb->r100c10))
-			return -EFAULT;
+	if (!fb->r100c10_page) {
+		nvkm_error(&fb->base.subdev, "failed 100c10 page alloc\n");
+		return -ENOMEM;
 	}
 
 	return 0;
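
For context, the sketch below (not part of the patch) illustrates the
allocate-early/map-late pattern the commit message describes, using only
the generic DMA API. my_fb_ctor(), my_fb_init() and struct my_fb_state
are hypothetical names; the DMA API calls themselves are the real kernel
ones used by the patch.

/*
 * Illustrative sketch only -- not taken from the patch.
 */
#include <linux/dma-mapping.h>
#include <linux/gfp.h>

struct my_fb_state {
	struct page *scratch_page;	/* allocated in the constructor */
	dma_addr_t   scratch_dma;	/* mapped lazily in .init */
};

/* Constructor path: the DMA mask may still be the 32-bit default here,
 * so only allocate the page and defer the mapping. */
static int my_fb_ctor(struct device *dev, struct my_fb_state *st)
{
	st->scratch_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
	if (!st->scratch_page)
		return -ENOMEM;
	return 0;
}

/* .init path: runs after the DMA mask has been raised (by the TTM layer
 * in the nouveau case), so the mapping may legitimately land above 4 GB
 * on platforms with no memory below it. */
static int my_fb_init(struct device *dev, struct my_fb_state *st)
{
	if (!st->scratch_dma) {
		dma_addr_t addr = dma_map_page(dev, st->scratch_page, 0,
					       PAGE_SIZE, DMA_BIDIRECTIONAL);
		if (dma_mapping_error(dev, addr))
			return -EFAULT;
		st->scratch_dma = addr;
	}
	return 0;
}

One difference worth noting: the sketch fails init when the mapping
cannot be created, whereas the patch chooses to warn via nvkm_warn() and
continue, simply skipping the 0x100c10 register write when no mapping is
available.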