From patchwork Tue Jun 21 12:50:35 2016
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: nouveau@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
 bskeggs@redhat.com
Cc: airlied@linux.ie, linux-kernel@vger.kernel.org, Ard Biesheuvel
Subject: [RFC PATCH v2] drm/nouveau/fb/nv50: set DMA mask before mapping
 scratch page
Date: Tue, 21 Jun 2016 14:50:35 +0200
Message-Id: <1466513435-11599-1-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.7.4
X-Mailing-List: linux-kernel@vger.kernel.org

The 100c08 scratch page is mapped using dma_map_page() before the TTM
layer has had a chance to set the DMA mask.
This means we are still running with the default mask of 32 bits when
this code executes, which causes problems on platforms with no memory
below 4 GB (such as AMD Seattle). So move the dma_map_page() call to the
.init hook, and set the streaming DMA mask based on the MMU subdev
parameters before performing the call.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
I am sure there is a much better way to address this, but this fixes the
problem I get on AMD Seattle with a GeForce 210 PCIe card:

  nouveau 0000:02:00.0: enabling device (0000 -> 0003)
  nouveau 0000:02:00.0: NVIDIA GT218 (0a8280b1)
  nouveau 0000:02:00.0: bios: version 70.18.a6.00.00
  nouveau 0000:02:00.0: fb ctor failed, -14
  nouveau: probe of 0000:02:00.0 failed with error -14

v2: replace incorrect comparison of dma_addr_t type var against NULL

 drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv50.c | 37 ++++++++++++++------
 1 file changed, 26 insertions(+), 11 deletions(-)

-- 
2.7.4

diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv50.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv50.c
index 1b5fb02eab2a..033ca0effb7e 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv50.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv50.c
@@ -216,11 +216,30 @@ nv50_fb_init(struct nvkm_fb *base)
 	struct nv50_fb *fb = nv50_fb(base);
 	struct nvkm_device *device = fb->base.subdev.device;
 
+	if (!fb->r100c08) {
+		/*
+		 * We are calling the DMA api way before the TTM layer sets the
+		 * DMA mask based on the MMU subdev parameters. This means we
+		 * are using the default DMA mask of 32, which may cause
+		 * problems on systems with no RAM below the 4 GB mark. So set
+		 * the streaming DMA mask here as well.
+		 */
+		dma_set_mask(device->dev, DMA_BIT_MASK(device->mmu->dma_bits));
+
+		fb->r100c08 = dma_map_page(device->dev, fb->r100c08_page, 0,
+					   PAGE_SIZE, DMA_BIDIRECTIONAL);
+		if (dma_mapping_error(device->dev, fb->r100c08)) {
+			nvkm_warn(&fb->base.subdev,
+				  "dma_map_page() failed on 100c08 page\n");
+		}
+	}
+
 	/* Not a clue what this is exactly. Without pointing it at a
 	 * scratch page, VRAM->GART blits with M2MF (as in DDX DFS)
 	 * cause IOMMU "read from address 0" errors (rh#561267)
 	 */
-	nvkm_wr32(device, 0x100c08, fb->r100c08 >> 8);
+	if (fb->r100c08 != DMA_ERROR_CODE)
+		nvkm_wr32(device, 0x100c08, fb->r100c08 >> 8);
 
 	/* This is needed to get meaningful information from 100c90
 	 * on traps. No idea what these values mean exactly. */
@@ -233,11 +252,11 @@ nv50_fb_dtor(struct nvkm_fb *base)
 	struct nv50_fb *fb = nv50_fb(base);
 	struct nvkm_device *device = fb->base.subdev.device;
 
-	if (fb->r100c08_page) {
+	if (fb->r100c08 && fb->r100c08 != DMA_ERROR_CODE)
 		dma_unmap_page(device->dev, fb->r100c08, PAGE_SIZE,
 			       DMA_BIDIRECTIONAL);
-		__free_page(fb->r100c08_page);
-	}
+
+	__free_page(fb->r100c08_page);
 
 	return fb;
 }
@@ -264,13 +283,9 @@ nv50_fb_new_(const struct nv50_fb_func *func, struct nvkm_device *device,
 	*pfb = &fb->base;
 
 	fb->r100c08_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
-	if (fb->r100c08_page) {
-		fb->r100c08 = dma_map_page(device->dev, fb->r100c08_page, 0,
-					   PAGE_SIZE, DMA_BIDIRECTIONAL);
-		if (dma_mapping_error(device->dev, fb->r100c08))
-			return -EFAULT;
-	} else {
-		nvkm_warn(&fb->base.subdev, "failed 100c08 page alloc\n");
+	if (!fb->r100c08_page) {
+		nvkm_error(&fb->base.subdev, "failed 100c08 page alloc\n");
+		return -ENOMEM;
 	}
 
 	return 0;
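
For readers less familiar with the streaming DMA API, here is a minimal,
self-contained sketch of the pattern the patch applies: widen the DMA mask
before mapping a scratch page, then let the caller validate the result with
dma_mapping_error(). It is not part of the patch; map_scratch_page() is a
hypothetical helper, and the 40-bit mask is a stand-in for whatever
device->mmu->dma_bits reports on real hardware.

  #include <linux/dma-mapping.h>
  #include <linux/mm_types.h>

  static dma_addr_t map_scratch_page(struct device *dev, struct page *page)
  {
  	/*
  	 * Widen the streaming DMA mask before mapping: with the default
  	 * 32-bit mask, a platform with no RAM below 4 GB cannot hand out
  	 * a DMA address for the page at all.
  	 */
  	if (dma_set_mask(dev, DMA_BIT_MASK(40)))
  		return DMA_ERROR_CODE;	/* mask rejected by the platform */

  	/* Caller must check the returned address with dma_mapping_error(). */
  	return dma_map_page(dev, page, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
  }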