From patchwork Wed Sep 26 09:11:30 2018
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 147557
From: Peter Ujfalusi
Subject: [PATCH v4 4/4] drm/omap: partial workaround for DRA7xx DMM errata i878
Date: Wed, 26 Sep 2018 12:11:30 +0300
Message-ID: <20180926091130.5379-5-peter.ujfalusi@ti.com>
X-Mailer: git-send-email 2.19.0
In-Reply-To: <20180926091130.5379-1-peter.ujfalusi@ti.com>
References: <20180926091130.5379-1-peter.ujfalusi@ti.com>
Cc: jsarha@ti.com, dri-devel@lists.freedesktop.org

From: Tomi Valkeinen

Errata i878 says that the MPU should not be used to access RAM and DMM
at the same time. As it's not possible to prevent the MPU from accessing
RAM, we need to access the DMM via a proxy.

This patch changes the DMM driver to access DMM registers via sDMA.
Instead of doing a normal readl/writel call to read/write a register,
we use sDMA to copy 4 bytes from/to the DMM registers.

This patch provides only a partial workaround for i878, as not only DMM
register reads/writes are affected, but also accesses to DMM-mapped
buffers (usually framebuffers).
Signed-off-by: Tomi Valkeinen
Signed-off-by: Peter Ujfalusi
---
 drivers/gpu/drm/omapdrm/omap_dmm_priv.h  |   7 ++
 drivers/gpu/drm/omapdrm/omap_dmm_tiler.c | 149 ++++++++++++++++++++++-
 2 files changed, 154 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/omapdrm/omap_dmm_priv.h b/drivers/gpu/drm/omapdrm/omap_dmm_priv.h
index c2785cc98dc9..60bb3f9297bc 100644
--- a/drivers/gpu/drm/omapdrm/omap_dmm_priv.h
+++ b/drivers/gpu/drm/omapdrm/omap_dmm_priv.h
@@ -159,6 +159,7 @@ struct dmm_platform_data {
 
 struct dmm {
 	struct device *dev;
+	dma_addr_t phys_base;
 	void __iomem *base;
 	int irq;
 
@@ -189,6 +190,12 @@ struct dmm {
 	struct list_head alloc_head;
 
 	const struct dmm_platform_data *plat_data;
+
+	bool dmm_workaround;
+	spinlock_t wa_lock;
+	u32 *wa_dma_data;
+	dma_addr_t wa_dma_handle;
+	struct dma_chan *wa_dma_chan;
 };
 
 #endif
diff --git a/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c b/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
index ef9a88c772ed..05723527bec4 100644
--- a/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
+++ b/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
@@ -18,6 +18,7 @@
 #include <linux/completion.h>
 #include <linux/delay.h>
 #include <linux/dma-mapping.h>
+#include <linux/dmaengine.h>
 #include <linux/errno.h>
 #include <linux/fs.h>
 #include <linux/init.h>
@@ -79,14 +80,138 @@ static const u32 reg[][4] = {
 			DMM_PAT_DESCR__2, DMM_PAT_DESCR__3},
 };
 
+static int dmm_dma_copy(struct dmm *dmm, dma_addr_t src, dma_addr_t dst)
+{
+	struct dma_device *dma_dev = dmm->wa_dma_chan->device;
+	struct dma_async_tx_descriptor *tx;
+	enum dma_status status;
+	dma_cookie_t cookie;
+
+	tx = dma_dev->device_prep_dma_memcpy(dmm->wa_dma_chan, dst, src, 4, 0);
+	if (!tx) {
+		dev_err(dmm->dev, "Failed to prepare DMA memcpy\n");
+		return -EIO;
+	}
+
+	cookie = tx->tx_submit(tx);
+	if (dma_submit_error(cookie)) {
+		dev_err(dmm->dev, "Failed to do DMA tx_submit\n");
+		return -EIO;
+	}
+
+	dma_async_issue_pending(dmm->wa_dma_chan);
+	status = dma_sync_wait(dmm->wa_dma_chan, cookie);
+	if (status != DMA_COMPLETE)
+		dev_err(dmm->dev, "i878 wa DMA copy failure\n");
+
+	dmaengine_terminate_all(dmm->wa_dma_chan);
+	return 0;
+}
+
+static u32 dmm_read_wa(struct dmm *dmm, u32 reg)
+{
+	dma_addr_t src, dst;
+	int r;
+
+	src = dmm->phys_base + reg;
+	dst = dmm->wa_dma_handle;
+
+	r = dmm_dma_copy(dmm, src, dst);
+	if (r) {
+		dev_err(dmm->dev, "sDMA read transfer timeout\n");
+		return readl(dmm->base + reg);
+	}
+
+	/*
+	 * As per i878 workaround, the DMA is used to access the DMM registers.
+	 * Make sure that the readl is not moved by the compiler or the CPU
+	 * earlier than the DMA finished writing the value to memory.
+	 */
+	rmb();
+	return readl(dmm->wa_dma_data);
+}
+
+static void dmm_write_wa(struct dmm *dmm, u32 val, u32 reg)
+{
+	dma_addr_t src, dst;
+	int r;
+
+	writel(val, dmm->wa_dma_data);
+	/*
+	 * As per i878 workaround, the DMA is used to access the DMM registers.
+	 * Make sure that the writel is not moved by the compiler or the CPU, so
+	 * the data will be in place before we start the DMA to do the actual
+	 * register write.
+	 */
+	wmb();
+
+	src = dmm->wa_dma_handle;
+	dst = dmm->phys_base + reg;
+
+	r = dmm_dma_copy(dmm, src, dst);
+	if (r) {
+		dev_err(dmm->dev, "sDMA write transfer timeout\n");
+		writel(val, dmm->base + reg);
+	}
+}
+
 static u32 dmm_read(struct dmm *dmm, u32 reg)
 {
-	return readl(dmm->base + reg);
+	if (dmm->dmm_workaround) {
+		u32 v;
+		unsigned long flags;
+
+		spin_lock_irqsave(&dmm->wa_lock, flags);
+		v = dmm_read_wa(dmm, reg);
+		spin_unlock_irqrestore(&dmm->wa_lock, flags);
+
+		return v;
+	} else {
+		return readl(dmm->base + reg);
+	}
 }
 
 static void dmm_write(struct dmm *dmm, u32 val, u32 reg)
 {
-	writel(val, dmm->base + reg);
+	if (dmm->dmm_workaround) {
+		unsigned long flags;
+
+		spin_lock_irqsave(&dmm->wa_lock, flags);
+		dmm_write_wa(dmm, val, reg);
+		spin_unlock_irqrestore(&dmm->wa_lock, flags);
+	} else {
+		writel(val, dmm->base + reg);
+	}
+}
+
+static int dmm_workaround_init(struct dmm *dmm)
+{
+	dma_cap_mask_t mask;
+
+	spin_lock_init(&dmm->wa_lock);
+
+	dmm->wa_dma_data = dma_alloc_coherent(dmm->dev, sizeof(u32),
+					      &dmm->wa_dma_handle, GFP_KERNEL);
+	if (!dmm->wa_dma_data)
+		return -ENOMEM;
+
+	dma_cap_zero(mask);
+	dma_cap_set(DMA_MEMCPY, mask);
+
+	dmm->wa_dma_chan = dma_request_channel(mask, NULL, NULL);
+	if (!dmm->wa_dma_chan) {
+		dma_free_coherent(dmm->dev, 4, dmm->wa_dma_data, dmm->wa_dma_handle);
+		return -ENODEV;
+	}
+
+	return 0;
+}
+
+static void dmm_workaround_uninit(struct dmm *dmm)
+{
+	dma_release_channel(dmm->wa_dma_chan);
+
+	dma_free_coherent(dmm->dev, 4, dmm->wa_dma_data, dmm->wa_dma_handle);
 }
 
 /* simple allocator to grab next 16 byte aligned memory from txn */
@@ -636,6 +761,9 @@ static int omap_dmm_remove(struct platform_device *dev)
 		if (omap_dmm->dummy_page)
 			__free_page(omap_dmm->dummy_page);
 
+		if (omap_dmm->dmm_workaround)
+			dmm_workaround_uninit(omap_dmm);
+
 		iounmap(omap_dmm->base);
 		kfree(omap_dmm);
 		omap_dmm = NULL;
@@ -681,6 +809,7 @@ static int omap_dmm_probe(struct platform_device *dev)
 		goto fail;
 	}
 
+	omap_dmm->phys_base = mem->start;
 	omap_dmm->base = ioremap(mem->start, SZ_2K);
 
 	if (!omap_dmm->base) {
@@ -696,6 +825,22 @@ static int omap_dmm_probe(struct platform_device *dev)
 
 	omap_dmm->dev = &dev->dev;
 
+	if (of_machine_is_compatible("ti,dra7")) {
+		/*
+		 * DRA7 Errata i878 says that MPU should not be used to access
+		 * RAM and DMM at the same time. As it's not possible to prevent
+		 * MPU accessing RAM, we need to access DMM via a proxy.
+		 */
+		if (!dmm_workaround_init(omap_dmm)) {
+			omap_dmm->dmm_workaround = true;
+			dev_info(&dev->dev,
+				 "workaround for errata i878 in use\n");
+		} else {
+			dev_warn(&dev->dev,
+				 "failed to initialize work-around for i878\n");
+		}
+	}
+
 	hwinfo = dmm_read(omap_dmm, DMM_PAT_HWINFO);
 	omap_dmm->num_engines = (hwinfo >> 24) & 0x1F;
 	omap_dmm->num_lut = (hwinfo >> 16) & 0x1F;
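
For readers unfamiliar with the dmaengine API this workaround relies on,
here is a minimal, self-contained sketch (not part of the patch) of the
same request-channel / prep-memcpy / submit / wait sequence used by
dmm_workaround_init() and dmm_dma_copy() above. The function name, the
'dev' pointer and the 'reg_phys' address are illustrative placeholders:

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
#include <linux/errno.h>
#include <linux/gfp.h>

/* Copy one 32-bit word from a coherent bounce buffer to 'reg_phys'. */
static int example_dma_copy_word(struct device *dev, dma_addr_t reg_phys)
{
	struct dma_async_tx_descriptor *tx;
	struct dma_chan *chan;
	dma_cap_mask_t mask;
	dma_cookie_t cookie;
	dma_addr_t buf_phys;
	u32 *buf;
	int ret = 0;

	/* Ask the dmaengine core for any channel capable of memcpy. */
	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);
	chan = dma_request_channel(mask, NULL, NULL);
	if (!chan)
		return -ENODEV;

	/* One word of DMA-coherent memory serves as the bounce buffer. */
	buf = dma_alloc_coherent(dev, sizeof(u32), &buf_phys, GFP_KERNEL);
	if (!buf) {
		ret = -ENOMEM;
		goto release;
	}
	*buf = 0xdeadbeef;	/* value to be written to the register */

	/* Prepare and submit a 4-byte copy, then wait for it to finish. */
	tx = dmaengine_prep_dma_memcpy(chan, reg_phys, buf_phys, sizeof(u32), 0);
	if (!tx) {
		ret = -EIO;
		goto free;
	}

	cookie = dmaengine_submit(tx);
	if (dma_submit_error(cookie)) {
		ret = -EIO;
		goto free;
	}

	dma_async_issue_pending(chan);
	if (dma_sync_wait(chan, cookie) != DMA_COMPLETE)
		ret = -ETIMEDOUT;

free:
	dma_free_coherent(dev, sizeof(u32), buf, buf_phys);
release:
	dma_release_channel(chan);
	return ret;
}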