From patchwork Thu Oct 6 15:49:28 2016
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 77308
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-kernel@vger.kernel.org, nouveau@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org
Cc: airlied@linux.ie, bskeggs@redhat.com, gnurou@gmail.com,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH v5 1/3] drm/nouveau: set streaming DMA mask early
Date: Thu, 6 Oct 2016 16:49:28 +0100
Message-Id: <1475768970-32512-2-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1475768970-32512-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1475768970-32512-1-git-send-email-ard.biesheuvel@linaro.org>
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

Some subdevices (i.e., fb/nv50.c and fb/gf100.c) map a scratch page using
dma_map_page() way before the TTM layer has had a chance to set the DMA
mask. This may prevent the driver from loading at all on platforms whose
system memory is not covered by the default 32-bit DMA mask (i.e., when
all RAM is above 4 GB).

So set a preliminary DMA mask right after constructing the PCI device,
and base it on the .dma_bits member of the MMU subdevice, which is what
the TTM layer will base the DMA mask on as well.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 drivers/gpu/drm/nouveau/nvkm/engine/device/pci.c | 37 ++++++++++++++------
 1 file changed, 27 insertions(+), 10 deletions(-)

-- 
2.7.4

diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/pci.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/pci.c
index 62ad0300cfa5..0030cd9543b2 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/pci.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/pci.c
@@ -1665,14 +1665,31 @@ nvkm_device_pci_new(struct pci_dev *pci_dev, const char *cfg, const char *dbg,
 	*pdevice = &pdev->device;
 	pdev->pdev = pci_dev;
 
-	return nvkm_device_ctor(&nvkm_device_pci_func, quirk, &pci_dev->dev,
-				pci_is_pcie(pci_dev) ? NVKM_DEVICE_PCIE :
-				pci_find_capability(pci_dev, PCI_CAP_ID_AGP) ?
-				NVKM_DEVICE_AGP : NVKM_DEVICE_PCI,
-				(u64)pci_domain_nr(pci_dev->bus) << 32 |
-				     pci_dev->bus->number << 16 |
-				     PCI_SLOT(pci_dev->devfn) << 8 |
-				     PCI_FUNC(pci_dev->devfn), name,
-				cfg, dbg, detect, mmio, subdev_mask,
-				&pdev->device);
+	ret = nvkm_device_ctor(&nvkm_device_pci_func, quirk, &pci_dev->dev,
+			       pci_is_pcie(pci_dev) ? NVKM_DEVICE_PCIE :
+			       pci_find_capability(pci_dev, PCI_CAP_ID_AGP) ?
+			       NVKM_DEVICE_AGP : NVKM_DEVICE_PCI,
+			       (u64)pci_domain_nr(pci_dev->bus) << 32 |
+				    pci_dev->bus->number << 16 |
+				    PCI_SLOT(pci_dev->devfn) << 8 |
+				    PCI_FUNC(pci_dev->devfn), name,
+			       cfg, dbg, detect, mmio, subdev_mask,
+			       &pdev->device);
+
+	if (ret)
+		return ret;
+
+	/*
+	 * Set a preliminary DMA mask based on the .dma_bits member of the
+	 * MMU subdevice. This allows other subdevices to create DMA mappings
+	 * in their init() or oneinit() methods, which may be called before the
+	 * TTM layer sets the DMA mask definitively.
+	 * This is necessary for platforms where the default DMA mask of 32
+	 * does not cover any system memory, i.e., when all RAM is > 4 GB.
+	 */
+	if (subdev_mask & BIT(NVKM_SUBDEV_MMU))
+		dma_set_mask_and_coherent(&pci_dev->dev,
+				DMA_BIT_MASK(pdev->device.mmu->dma_bits));
+
+	return 0;
 }
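
For context, a minimal self-contained sketch of the ordering issue the hunk
above addresses. This is not code from the nouveau tree: example_map_scratch()
and example_set_dma_mask() are hypothetical helpers, and the 40-bit width is
an arbitrary illustration. Only the kernel DMA API calls themselves
(dma_map_page(), dma_mapping_error(), dma_set_mask_and_coherent(),
DMA_BIT_MASK()) are real; a scratch-page mapping like the first helper can
only land above 4 GB once a mask wider than the 32-bit default has been set,
which is why the patch widens the mask before subdevice init runs.

#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <linux/pci.h>

/*
 * Hypothetical subdevice-style init step: map a scratch page, as the fb
 * subdevices mentioned in the commit message do. With the default 32-bit
 * mask still in place and all RAM above 4 GB, this mapping fails.
 */
static dma_addr_t example_map_scratch(struct device *dev, struct page *page)
{
	dma_addr_t addr;

	addr = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
	if (dma_mapping_error(dev, addr))
		return 0; /* 0 means "failed" in this sketch only */
	return addr;
}

/*
 * Hypothetical early-probe step in the spirit of the hunk above: widen the
 * streaming and coherent DMA masks before any subdevice maps memory, falling
 * back to the conventional 32-bit mask if the wider one is rejected.
 */
static int example_set_dma_mask(struct pci_dev *pci_dev, unsigned int dma_bits)
{
	int ret;

	ret = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(dma_bits));
	if (ret)
		ret = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(32));
	return ret;
}

In the patch itself the mask width is not hard-coded: it is taken from
pdev->device.mmu->dma_bits, the same value TTM later uses to set the mask
definitively.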