From patchwork Mon Jul 22 13:09:36 2019
X-Patchwork-Submitter: "(Exiting) Baolin Wang"
X-Patchwork-Id: 169386
From: Baolin Wang
To: axboe@kernel.dk, adrian.hunter@intel.com, ulf.hansson@linaro.org
Cc: zhang.lyra@gmail.com, orsonzhai@gmail.com, arnd@arndb.de, linus.walleij@linaro.org, baolin.wang@linaro.org, vincent.guittot@linaro.org, linux-mmc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org
Subject: [RFC PATCH 1/7] blk-mq: Export blk_mq_hctx_has_pending() function
Date: Mon, 22 Jul 2019 21:09:36 +0800
Message-Id: <94a0d20e6304b909391abd9a425c71c376cad964.1563782844.git.baolin.wang@linaro.org>
Some SD/MMC host controllers can support packed commands (packed requests), which means we can package several requests and send them to the host controller at one time to improve performance. This patch set introduces the MMC packed function to support this feature in the following patches.

To support the MMC packed function, the MMC layer needs to know whether requests are pending in the hardware queue, to help combine requests as much as possible. If there are requests pending in the hardware queue, we should not send a packed request to the host controller immediately; instead we should collect more requests into the MMC packed queue, to be packed and sent to the host controller once the packing condition is met. Thus export this function for the MMC packed function.

Signed-off-by: Baolin Wang
---
 block/blk-mq.c         | 3 ++-
 include/linux/blk-mq.h | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

-- 
1.7.9.5

diff --git a/block/blk-mq.c b/block/blk-mq.c
index b038ec6..5bd4ef9 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -63,12 +63,13 @@ static int blk_mq_poll_stats_bkt(const struct request *rq)
  * Check if any of the ctx, dispatch list or elevator
  * have pending work in this hardware queue.
  */
-static bool blk_mq_hctx_has_pending(struct blk_mq_hw_ctx *hctx)
+bool blk_mq_hctx_has_pending(struct blk_mq_hw_ctx *hctx)
 {
 	return !list_empty_careful(&hctx->dispatch) ||
 		sbitmap_any_bit_set(&hctx->ctx_map) ||
 			blk_mq_sched_has_work(hctx);
 }
+EXPORT_SYMBOL_GPL(blk_mq_hctx_has_pending);
 
 /*
  * Mark this ctx as having pending work in this hardware queue
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 3fa1fa5..15a2b7b 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -334,6 +334,7 @@ int blk_mq_freeze_queue_wait_timeout(struct request_queue *q,
 void blk_mq_quiesce_queue_nowait(struct request_queue *q);
 
 unsigned int blk_mq_rq_cpu(struct request *rq);
+bool blk_mq_hctx_has_pending(struct blk_mq_hw_ctx *hctx);
 
 /*
  * Driver command data is immediately after the request. So subtract request
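As a usage illustration: the consumer added later in this series (drivers/mmc/core/queue.c in patch 2/7) refines the block layer's bd->last hint with the exported helper, so a request is treated as "last" only when the hardware queue really has no other pending work. A minimal sketch of that pattern follows; the my_queue_rq() wrapper and its surrounding driver are hypothetical.

#include <linux/blk-mq.h>

/*
 * Hypothetical ->queue_rq() fragment: treat a request as "last" only if
 * the block layer says so AND the hardware queue has no other pending
 * work, so the driver keeps batching while more requests are on the way.
 */
static blk_status_t my_queue_rq(struct blk_mq_hw_ctx *hctx,
				const struct blk_mq_queue_data *bd)
{
	bool last = bd->last && !blk_mq_hctx_has_pending(hctx);

	blk_mq_start_request(bd->rq);

	/* ...issue bd->rq to the hardware, flushing the batch if 'last'... */

	return BLK_STS_OK;
}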
From patchwork Mon Jul 22 13:09:37 2019
X-Patchwork-Submitter: "(Exiting) Baolin Wang"
X-Patchwork-Id: 169387
From: Baolin Wang
To: axboe@kernel.dk, adrian.hunter@intel.com, ulf.hansson@linaro.org
Cc: zhang.lyra@gmail.com, orsonzhai@gmail.com, arnd@arndb.de, linus.walleij@linaro.org, baolin.wang@linaro.org, vincent.guittot@linaro.org, linux-mmc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org
Subject: [RFC PATCH 2/7] mmc: core: Add MMC packed request function
Date: Mon, 22 Jul 2019 21:09:37 +0800
Message-Id: <4c2e5104da5497985c0d997934e6dc475b15c8f9.1563782844.git.baolin.wang@linaro.org>
Some SD controllers can support packed commands (packed requests), which means several requests can be packaged and handled by the host controller at one time. This reduces interrupts and improves DMA transfer, so I/O performance can be improved. Thus this patch adds the MMC packed function to support packed requests and packed commands.

The basic concept of this function is that we try to collect as many requests from the block layer as possible, linking them into the MMC packed queue via mmc_blk_packed_issue_rw_rq(). When the last request of the hardware queue arrives, or the number of collected requests grows larger than 16, or a large request arrives, we start to package a packed request for the host controller. The MMC packed function also supplies packed-algorithm operations to help package qualified requests.

After finishing the packed request, the MMC packed function helps to complete each request; at the same time, the MMC packed queue allows more requests to be collected from the block layer. After completing each request, the MMC packed function can try to package another packed request for the host controller directly in the completion path, if there are enough requests in the MMC packed queue or the request-pending flag is not set. If the pending flag was set, we let mmc_blk_packed_issue_rw_rq() collect as many requests as possible.

Signed-off-by: Baolin Wang
---
 drivers/mmc/core/Kconfig   |   2 +
 drivers/mmc/core/Makefile  |   1 +
 drivers/mmc/core/block.c   |  71 ++++++-
 drivers/mmc/core/block.h   |   3 +-
 drivers/mmc/core/core.c    |  51 +++++
 drivers/mmc/core/core.h    |   3 +
 drivers/mmc/core/packed.c  | 478 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/mmc/core/queue.c   |  28 ++-
 include/linux/mmc/core.h   |   1 +
 include/linux/mmc/host.h   |   3 +
 include/linux/mmc/packed.h | 123 ++++++++++++
 11 files changed, 760 insertions(+), 4 deletions(-)
 create mode 100644 drivers/mmc/core/packed.c
 create mode 100644 include/linux/mmc/packed.h

-- 
1.7.9.5

diff --git a/drivers/mmc/core/Kconfig b/drivers/mmc/core/Kconfig
index c12fe13..50d1a2f 100644
--- a/drivers/mmc/core/Kconfig
+++ b/drivers/mmc/core/Kconfig
@@ -81,3 +81,5 @@ config MMC_TEST
 	  This driver is only of interest to those developing or
 	  testing a host driver. Most people should say N here.
 
+config MMC_PACKED
+	bool
diff --git a/drivers/mmc/core/Makefile b/drivers/mmc/core/Makefile
index 95ffe00..dd303d9 100644
--- a/drivers/mmc/core/Makefile
+++ b/drivers/mmc/core/Makefile
@@ -18,3 +18,4 @@ obj-$(CONFIG_MMC_BLOCK)		+= mmc_block.o
 mmc_block-objs			:= block.o queue.o
 obj-$(CONFIG_MMC_TEST)		+= mmc_test.o
 obj-$(CONFIG_SDIO_UART)		+= sdio_uart.o
+obj-$(CONFIG_MMC_PACKED)	+= packed.o
diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 2c71a43..e7a8b2c 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -44,6 +44,7 @@
 #include
 #include
 #include
+#include
 
 #include
 
@@ -2208,11 +2209,77 @@ static int mmc_blk_wait_for_idle(struct mmc_queue *mq, struct mmc_host *host)
 {
 	if (mq->use_cqe)
 		return host->cqe_ops->cqe_wait_for_idle(host);
+	else if (host->packed)
+		return mmc_packed_wait_for_idle(host->packed);
 
 	return mmc_blk_rw_wait(mq, NULL);
 }
 
-enum mmc_issued mmc_blk_mq_issue_rq(struct mmc_queue *mq, struct request *req)
+static void mmc_blk_packed_req_done(struct mmc_request *mrq)
+{
+	struct mmc_queue_req *mqrq =
+		container_of(mrq, struct mmc_queue_req, brq.mrq);
+	struct request *req = mmc_queue_req_to_req(mqrq);
+	struct request_queue *q = req->q;
+	struct mmc_queue *mq = q->queuedata;
+
+	mutex_lock(&mq->complete_lock);
+	mmc_blk_mq_poll_completion(mq, req);
+	mutex_unlock(&mq->complete_lock);
+
+	mmc_blk_mq_post_req(mq, req);
+}
+
+static int mmc_blk_packed_issue_rw_rq(struct mmc_queue *mq, struct request *req,
+				      bool last)
+{
+	struct mmc_queue_req *mqrq = req_to_mmc_queue_req(req);
+	struct mmc_host *host = mq->card->host;
+	unsigned long nr_rqs;
+	int err;
+
+	/*
+	 * If the packed queue has pumped all requests, then we should first
+	 * check whether we need retuning.
+	 */
+	nr_rqs = mmc_packed_queue_length(host->packed);
+	if (!nr_rqs)
+		host->retune_now = host->need_retune && !host->hold_retune;
+
+	mutex_lock(&mq->complete_lock);
+	mmc_retune_hold(host);
+	mutex_unlock(&mq->complete_lock);
+
+	mmc_blk_rw_rq_prep(mqrq, mq->card, 0, mq);
+	mmc_pre_req(host, &mqrq->brq.mrq);
+	mqrq->brq.mrq.done = mmc_blk_packed_req_done;
+
+	err = mmc_packed_start_req(host, &mqrq->brq.mrq);
+	if (err) {
+		mutex_lock(&mq->complete_lock);
+		mmc_retune_release(host);
+		mutex_unlock(&mq->complete_lock);
+
+		mmc_post_req(host, &mqrq->brq.mrq, err);
+
+		return err;
+	}
+
+	/*
+	 * If it is the last request from the block layer, or a larger
+	 * request, or the request count is larger than
+	 * MMC_PACKED_MAX_REQUEST_COUNT, we should pump requests to the
+	 * controller. Otherwise we should try to combine requests as much
+	 * as we can.
+	 */
+	if (last || blk_rq_bytes(req) > MMC_PACKED_FLUSH_SIZE ||
+	    nr_rqs > MMC_PACKED_MAX_REQUEST_COUNT)
+		mmc_packed_pump_requests(host->packed);
+
+	return 0;
+}
+
+enum mmc_issued mmc_blk_mq_issue_rq(struct mmc_queue *mq, struct request *req,
+				    bool last)
 {
 	struct mmc_blk_data *md = mq->blkdata;
 	struct mmc_card *card = md->queue.card;
@@ -2257,6 +2324,8 @@ enum mmc_issued mmc_blk_mq_issue_rq(struct mmc_queue *mq, struct request *req)
 	case REQ_OP_WRITE:
 		if (mq->use_cqe)
 			ret = mmc_blk_cqe_issue_rw_rq(mq, req);
+		else if (host->packed)
+			ret = mmc_blk_packed_issue_rw_rq(mq, req, last);
 		else
 			ret = mmc_blk_mq_issue_rw_rq(mq, req);
 		break;
diff --git a/drivers/mmc/core/block.h b/drivers/mmc/core/block.h
index 31153f6..8bfb89f 100644
--- a/drivers/mmc/core/block.h
+++ b/drivers/mmc/core/block.h
@@ -9,7 +9,8 @@
 
 enum mmc_issued;
 
-enum mmc_issued mmc_blk_mq_issue_rq(struct mmc_queue *mq, struct request *req);
+enum mmc_issued mmc_blk_mq_issue_rq(struct mmc_queue *mq, struct request *req,
+				    bool last);
 void mmc_blk_mq_complete(struct request *req);
 void mmc_blk_mq_recovery(struct mmc_queue *mq);
 
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 2211273..924e733 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
 
@@ -329,6 +330,7 @@ static int mmc_mrq_prep(struct mmc_host *host, struct mmc_request *mrq)
 		}
 	}
 
+	INIT_LIST_HEAD(&mrq->packed_list);
 	return 0;
 }
 
@@ -487,6 +489,55 @@ int mmc_cqe_start_req(struct mmc_host *host, struct mmc_request *mrq)
 }
 EXPORT_SYMBOL(mmc_cqe_start_req);
 
+int mmc_packed_start_req(struct mmc_host *host, struct mmc_request *mrq)
+{
+	int err;
+
+	if (mmc_card_removed(host->card))
+		return -ENOMEDIUM;
+
+	err = mmc_retune(host);
+	if (err)
+		return err;
+
+	mrq->host = host;
+
+	mmc_mrq_pr_debug(host, mrq, true);
+
+	err = mmc_mrq_prep(host, mrq);
+	if (err)
+		return err;
+
+	err = mmc_packed_queue_request(host->packed, mrq);
+	if (err)
+		return err;
+
+	trace_mmc_request_start(host, mrq);
+
+	return 0;
+}
+EXPORT_SYMBOL(mmc_packed_start_req);
+
+void mmc_packed_request_done(struct mmc_host *host, struct mmc_request *mrq)
+{
+	mmc_should_fail_request(host, mrq);
+
+	/* Flag re-tuning needed on CRC errors */
+	if (mrq->data && mrq->data->error == -EILSEQ)
+		mmc_retune_needed(host);
+
+	trace_mmc_request_done(host, mrq);
+
+	if (mrq->data) {
+		pr_debug("%s: %d bytes transferred: %d\n",
+			 mmc_hostname(host),
+			 mrq->data->bytes_xfered, mrq->data->error);
+	}
+
+	mrq->done(mrq);
+}
+EXPORT_SYMBOL(mmc_packed_request_done);
+
 /**
  * mmc_cqe_request_done - CQE has finished processing an MMC request
  * @host: MMC host which completed request
diff --git a/drivers/mmc/core/core.h b/drivers/mmc/core/core.h
index 328c78d..b88b3b3 100644
--- a/drivers/mmc/core/core.h
+++ b/drivers/mmc/core/core.h
@@ -138,6 +138,9 @@ static inline void mmc_claim_host(struct mmc_host *host)
 void mmc_cqe_post_req(struct mmc_host *host, struct mmc_request *mrq);
 int mmc_cqe_recovery(struct mmc_host *host);
 
+int mmc_packed_start_req(struct mmc_host *host, struct mmc_request *mrq);
+void mmc_packed_request_done(struct mmc_host *host, struct mmc_request *mrq);
+
 /**
  * mmc_pre_req - Prepare for a new request
  * @host: MMC host to prepare command
diff --git a/drivers/mmc/core/packed.c b/drivers/mmc/core/packed.c
new file mode 100644
index 0000000..91b7e9d
--- /dev/null
+++ b/drivers/mmc/core/packed.c
@@ -0,0 +1,478 @@
+// SPDX-License-Identifier: GPL-2.0
+//
+// MMC packed request support
+//
+// Copyright (C) 2019 Linaro, Inc.
+// Author: Baolin Wang
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "block.h"
+#include "card.h"
+#include "core.h"
+#include "host.h"
+#include "queue.h"
+
+#define MMC_PACKED_REQ_DIR(mrq)	\
+	(((mrq)->cmd->opcode == MMC_READ_MULTIPLE_BLOCK || \
+	  (mrq)->cmd->opcode == MMC_READ_SINGLE_BLOCK) ? READ : WRITE)
+
+static void mmc_packed_allow_pump(struct mmc_packed *packed)
+{
+	struct mmc_packed_request *prq = &packed->prq;
+	unsigned long flags, remains;
+	bool need_pump;
+
+	/* Allow requests to be pumped after completing previous requests. */
+	spin_lock_irqsave(&packed->lock, flags);
+	prq->nr_reqs = 0;
+	need_pump = !packed->rqs_pending;
+	remains = packed->rqs_len;
+
+	if (packed->waiting_for_idle && !remains) {
+		packed->waiting_for_idle = false;
+		wake_up(&packed->wait_queue);
+	}
+
+	spin_unlock_irqrestore(&packed->lock, flags);
+
+	/*
+	 * If there are not enough requests in the queue and the request
+	 * pending flag was set, do not pump requests here; let
+	 * mmc_blk_packed_issue_rw_rq() combine more requests and pump them.
+	 */
+	if ((need_pump && remains > 0) || remains >= packed->max_entries)
+		mmc_packed_pump_requests(packed);
+}
+
+static void mmc_packed_complete_work(struct work_struct *work)
+{
+	struct mmc_packed *packed =
+		container_of(work, struct mmc_packed, complete_work);
+	struct mmc_request *mrq, *t;
+	unsigned long flags;
+	LIST_HEAD(head);
+
+	spin_lock_irqsave(&packed->lock, flags);
+	list_splice_tail_init(&packed->complete_list, &head);
+	spin_unlock_irqrestore(&packed->lock, flags);
+
+	list_for_each_entry_safe(mrq, t, &head, packed_list) {
+		list_del(&mrq->packed_list);
+		mmc_packed_request_done(packed->host, mrq);
+	}
+
+	mmc_packed_allow_pump(packed);
+}
+
+/**
+ * mmc_packed_finalize_requests - finalize one packed request
+ * if the packed request is done
+ * @host: the host controller
+ * @prq: the packed request to be finalized
+ */
+void mmc_packed_finalize_requests(struct mmc_host *host,
+				  struct mmc_packed_request *prq)
+{
+	struct mmc_packed *packed = host->packed;
+	struct mmc_request *mrq, *t;
+	LIST_HEAD(head);
+	unsigned long flags;
+
+	if (packed->ops->unprepare_hardware &&
+	    packed->ops->unprepare_hardware(packed))
+		pr_err("failed to unprepare hardware\n");
+
+	/*
+	 * Clear the busy flag to let more requests link into the MMC packed
+	 * queue. We cannot pump them to the controller yet; we should wait
+	 * until all requests are completed. While completing requests, we
+	 * should collect as many requests from the block layer as possible.
+	 */
+	spin_lock_irqsave(&packed->lock, flags);
+	list_splice_tail_init(&prq->list, &head);
+	packed->busy = false;
+	spin_unlock_irqrestore(&packed->lock, flags);
+
+	list_for_each_entry_safe(mrq, t, &head, packed_list) {
+		if (mmc_host_done_complete(host)) {
+			list_del(&mrq->packed_list);
+
+			mmc_packed_request_done(host, mrq);
+		}
+	}
+
+	/*
+	 * If we cannot complete these requests in this context, queue a
+	 * work to do it.
+	 *
+	 * Note: we must make sure all requests are completed before
+	 * pumping new requests to the host controller.
+	 */
+	if (!mmc_host_done_complete(host)) {
+		spin_lock_irqsave(&packed->lock, flags);
+		list_splice_tail_init(&head, &packed->complete_list);
+		spin_unlock_irqrestore(&packed->lock, flags);
+
+		schedule_work(&packed->complete_work);
+		return;
+	}
+
+	mmc_packed_allow_pump(packed);
+}
+EXPORT_SYMBOL_GPL(mmc_packed_finalize_requests);
+
+/**
+ * mmc_packed_queue_length - return the number of requests in the MMC packed queue
+ * @packed: the mmc_packed
+ */
+unsigned long mmc_packed_queue_length(struct mmc_packed *packed)
+{
+	unsigned long flags;
+	unsigned long len;
+
+	spin_lock_irqsave(&packed->lock, flags);
+	len = packed->rqs_len;
+	spin_unlock_irqrestore(&packed->lock, flags);
+
+	return len;
+}
+EXPORT_SYMBOL_GPL(mmc_packed_queue_length);
+
+/**
+ * mmc_packed_queue_is_busy - check whether the MMC packed queue is busy
+ * @packed: the mmc_packed
+ *
+ * If the MMC hardware is busy now, we should not add more requests into
+ * the MMC packed queue; instead we should return busy to the block layer,
+ * to make the block layer tell the MMC layer that more requests will be
+ * coming.
+ */
+bool mmc_packed_queue_is_busy(struct mmc_packed *packed)
+{
+	unsigned long flags;
+	bool busy;
+
+	spin_lock_irqsave(&packed->lock, flags);
+	busy = packed->busy;
+	spin_unlock_irqrestore(&packed->lock, flags);
+
+	return busy;
+}
+EXPORT_SYMBOL_GPL(mmc_packed_queue_is_busy);
+
+/**
+ * mmc_packed_queue_commit_rqs - tell us more requests will be coming
+ * @packed: the mmc_packed
+ */
+void mmc_packed_queue_commit_rqs(struct mmc_packed *packed)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&packed->lock, flags);
+
+	/* Set the pending flag, which indicates more requests will be coming */
+	if (!packed->rqs_pending)
+		packed->rqs_pending = true;
+
+	spin_unlock_irqrestore(&packed->lock, flags);
+}
+EXPORT_SYMBOL_GPL(mmc_packed_queue_commit_rqs);
+
+/**
+ * mmc_packed_queue_request - add one mmc request into the packed list
+ * @packed: the mmc_packed
+ * @mrq: the MMC request
+ */
+int mmc_packed_queue_request(struct mmc_packed *packed,
+			     struct mmc_request *mrq)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&packed->lock, flags);
+
+	if (!packed->running) {
+		spin_unlock_irqrestore(&packed->lock, flags);
+		return -ESHUTDOWN;
+	}
+
+	list_add_tail(&mrq->packed_list, &packed->list);
+
+	/* A new request has come, so clear the pending flag */
+	if (packed->rqs_pending)
+		packed->rqs_pending = false;
+
+	packed->rqs_len++;
+	spin_unlock_irqrestore(&packed->lock, flags);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(mmc_packed_queue_request);
+
+/**
+ * mmc_packed_queue_start - start the MMC packed queue
+ * @packed: the mmc_packed
+ */
+int mmc_packed_queue_start(struct mmc_packed *packed)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&packed->lock, flags);
+
+	if (packed->running || packed->busy) {
+		spin_unlock_irqrestore(&packed->lock, flags);
+		return -EBUSY;
+	}
+
+	packed->running = true;
+	spin_unlock_irqrestore(&packed->lock, flags);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(mmc_packed_queue_start);
+
+static bool mmc_packed_queue_is_idle(struct mmc_packed *packed)
+{
+	unsigned long flags;
+	bool is_idle;
+
+	spin_lock_irqsave(&packed->lock, flags);
+	is_idle = !packed->prq.nr_reqs && list_empty(&packed->list);
+
+	packed->waiting_for_idle = !is_idle;
+	spin_unlock_irqrestore(&packed->lock, flags);
+
+	return is_idle;
+}
+
+/**
+ * mmc_packed_queue_stop - stop the MMC packed queue
+ * @packed: the mmc_packed
+ */
+int mmc_packed_queue_stop(struct mmc_packed *packed)
+{
+	unsigned long flags;
+	u32 timeout = 500;
+	int ret;
+
+	ret = wait_event_timeout(packed->wait_queue,
+				 mmc_packed_queue_is_idle(packed),
+				 msecs_to_jiffies(timeout));
+	if (ret == 0) {
+		pr_warn("could not stop mmc packed queue\n");
+		return -ETIMEDOUT;
+	}
+
+	spin_lock_irqsave(&packed->lock, flags);
+	packed->running = false;
+	spin_unlock_irqrestore(&packed->lock, flags);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(mmc_packed_queue_stop);
+
+/**
+ * mmc_packed_wait_for_idle - wait until all requests are finished
+ * @packed: the mmc_packed
+ */
+int mmc_packed_wait_for_idle(struct mmc_packed *packed)
+{
+	wait_event(packed->wait_queue, mmc_packed_queue_is_idle(packed));
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(mmc_packed_wait_for_idle);
+
+/**
+ * mmc_packed_algo_rw - the algorithm to pack read or write requests
+ * @packed: the mmc_packed
+ *
+ * TODO: we can add more conditions to decide whether we can package
+ * a request or not.
+ */
+void mmc_packed_algo_rw(struct mmc_packed *packed)
+{
+	struct mmc_packed_request *prq = &packed->prq;
+	struct mmc_request *mrq, *t;
+	u32 i = 0;
+
+	list_for_each_entry_safe(mrq, t, &packed->list, packed_list) {
+		if (++i > packed->max_entries)
+			break;
+
+		list_move_tail(&mrq->packed_list, &prq->list);
+		prq->nr_reqs++;
+	}
+}
+EXPORT_SYMBOL_GPL(mmc_packed_algo_rw);
+
+/**
+ * mmc_packed_algo_ro - the algorithm to pack only read requests
+ * @packed: the mmc_packed
+ *
+ * TODO: more conditions need to be considered
+ */
+void mmc_packed_algo_ro(struct mmc_packed *packed)
+{
+	struct mmc_packed_request *prq = &packed->prq;
+	struct mmc_request *mrq, *t;
+	u32 i = 0;
+
+	list_for_each_entry_safe(mrq, t, &packed->list, packed_list) {
+		if (++i > packed->max_entries)
+			break;
+
+		if (MMC_PACKED_REQ_DIR(mrq) != READ) {
+			if (!prq->nr_reqs) {
+				list_move_tail(&mrq->packed_list, &prq->list);
+				prq->nr_reqs = 1;
+			}
+
+			break;
+		}
+
+		list_move_tail(&mrq->packed_list, &prq->list);
+		prq->nr_reqs++;
+	}
+}
+EXPORT_SYMBOL_GPL(mmc_packed_algo_ro);
+
+/**
+ * mmc_packed_algo_wo - the algorithm to pack only write requests
+ * @packed: the mmc_packed
+ *
+ * TODO: more conditions need to be considered
+ */
+void mmc_packed_algo_wo(struct mmc_packed *packed)
+{
+	struct mmc_packed_request *prq = &packed->prq;
+	struct mmc_request *mrq, *t;
+	u32 i = 0;
+
+	list_for_each_entry_safe(mrq, t, &packed->list, packed_list) {
+		if (++i > packed->max_entries)
+			break;
+
+		if (MMC_PACKED_REQ_DIR(mrq) != WRITE) {
+			if (!prq->nr_reqs) {
+				list_move_tail(&mrq->packed_list, &prq->list);
+				prq->nr_reqs = 1;
+			}
+
+			break;
+		}
+
+		list_move_tail(&mrq->packed_list, &prq->list);
+		prq->nr_reqs++;
+	}
+}
+EXPORT_SYMBOL_GPL(mmc_packed_algo_wo);
+
+/**
+ * mmc_packed_pump_requests - start to pump packed requests to the host controller
+ * @packed: the mmc_packed
+ */
+void mmc_packed_pump_requests(struct mmc_packed *packed)
+{
+	struct mmc_packed_request *prq = &packed->prq;
+	struct mmc_host *host = packed->host;
+	struct mmc_request *mrq;
+	unsigned long flags;
+	int ret;
+
+	spin_lock_irqsave(&packed->lock, flags);
+
+	/* Make sure we are not already running a packed request */
+	if (packed->prq.nr_reqs) {
+		spin_unlock_irqrestore(&packed->lock, flags);
+		return;
+	}
+
+	/* Make sure there are remaining requests that need to be pumped */
+	if (list_empty(&packed->list) || !packed->running) {
+		spin_unlock_irqrestore(&packed->lock, flags);
+		return;
+	}
+
+	/* Try to package requests */
+	packed->ops->packed_algo(packed);
+
+	packed->rqs_len -= packed->prq.nr_reqs;
+	packed->busy = true;
+
+	spin_unlock_irqrestore(&packed->lock, flags);
+
+	if (packed->ops->prepare_hardware) {
+		ret = packed->ops->prepare_hardware(packed);
+		if (ret) {
+			pr_err("failed to prepare hardware\n");
+			goto error;
+		}
+	}
+
+	ret = packed->ops->packed_request(packed, prq);
+	if (ret) {
+		pr_err("failed to pump packed requests\n");
+		goto error;
+	}
+
+	return;
+
+error:
+	spin_lock_irqsave(&packed->lock, flags);
+
+	list_for_each_entry(mrq, &packed->prq.list, packed_list) {
+		struct mmc_data *data = mrq->data;
+
+		data->error = ret;
+		data->bytes_xfered = 0;
+	}
+
+	spin_unlock_irqrestore(&packed->lock, flags);
+
+	mmc_packed_finalize_requests(host, prq);
+}
+EXPORT_SYMBOL_GPL(mmc_packed_pump_requests);
+
+int mmc_packed_init(struct mmc_host *host, const struct mmc_packed_ops *ops,
+		    int max_packed)
+{
+	struct mmc_packed *packed;
+
+	packed = kzalloc(sizeof(struct mmc_packed), GFP_KERNEL);
+	if (!packed)
+		return -ENOMEM;
+
+	packed->max_entries = max_packed;
+	packed->ops = ops;
+	packed->host = host;
+	spin_lock_init(&packed->lock);
+	INIT_LIST_HEAD(&packed->list);
+	INIT_LIST_HEAD(&packed->complete_list);
+	INIT_LIST_HEAD(&packed->prq.list);
+	INIT_WORK(&packed->complete_work, mmc_packed_complete_work);
+	init_waitqueue_head(&packed->wait_queue);
+
+	host->packed = packed;
+	packed->running = true;
+
+	dev_info(host->parent, "Enable MMC packed requests, max packed = %d\n",
+		 packed->max_entries);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(mmc_packed_init);
+
+void mmc_packed_exit(struct mmc_host *host)
+{
+	struct mmc_packed *packed = host->packed;
+
+	mmc_packed_queue_stop(packed);
+	kfree(packed);
+	host->packed = NULL;
+}
+EXPORT_SYMBOL_GPL(mmc_packed_exit);
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index e327f80..0a1782d 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -244,7 +244,7 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
 	struct mmc_host *host = card->host;
 	enum mmc_issue_type issue_type;
 	enum mmc_issued issued;
-	bool get_card, cqe_retune_ok;
+	bool get_card, cqe_retune_ok, last = false;
 	int ret;
 
 	if (mmc_card_removed(mq->card)) {
@@ -270,6 +270,15 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
 		}
 		break;
 	case MMC_ISSUE_ASYNC:
+		/*
+		 * If the packed request is busy now, we can return
+		 * BLK_STS_RESOURCE to tell the block layer to queue requests
+		 * later, and the MMC packed layer will try to combine
+		 * requests as much as possible.
+		 */
+		if (host->packed && mmc_packed_queue_is_busy(host->packed)) {
+			spin_unlock_irq(&mq->lock);
+			return BLK_STS_RESOURCE;
+		}
 		break;
 	default:
 		/*
@@ -305,9 +314,12 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
 			!host->hold_retune;
 	}
 
+	if (host->packed)
+		last = bd->last && !blk_mq_hctx_has_pending(hctx);
+
 	blk_mq_start_request(req);
 
-	issued = mmc_blk_mq_issue_rq(mq, req);
+	issued = mmc_blk_mq_issue_rq(mq, req, last);
 
 	switch (issued) {
 	case MMC_REQ_BUSY:
@@ -339,8 +351,20 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
 	return ret;
 }
 
+static void mmc_mq_commit_rqs(struct blk_mq_hw_ctx *hctx)
+{
+	struct mmc_queue *mq = hctx->queue->queuedata;
+	struct mmc_card *card = mq->card;
+	struct mmc_host *host = card->host;
+
+	/* Tell the MMC packed layer that more requests will be coming */
+	if (host->packed)
+		mmc_packed_queue_commit_rqs(host->packed);
+}
+
 static const struct blk_mq_ops mmc_mq_ops = {
 	.queue_rq	= mmc_mq_queue_rq,
+	.commit_rqs	= mmc_mq_commit_rqs,
 	.init_request	= mmc_mq_init_request,
 	.exit_request	= mmc_mq_exit_request,
 	.complete	= mmc_blk_mq_complete,
diff --git a/include/linux/mmc/core.h b/include/linux/mmc/core.h
index b7ba881..1602556 100644
--- a/include/linux/mmc/core.h
+++ b/include/linux/mmc/core.h
@@ -165,6 +165,7 @@ struct mmc_request {
 	bool			cap_cmd_during_tfr;
 
 	int			tag;
+	struct list_head	packed_list;
 };
 
 struct mmc_card;
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index 4a351cb..8ecc244 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -13,6 +13,7 @@
 #include
 #include
+#include
 #include
 #include
 
@@ -441,6 +442,8 @@ struct mmc_host {
 	/* Ongoing data transfer that allows commands during transfer */
 	struct mmc_request	*ongoing_mrq;
 
+	struct mmc_packed	*packed;
+
 #ifdef CONFIG_FAIL_MMC_REQUEST
 	struct fault_attr	fail_mmc_request;
 #endif
diff --git a/include/linux/mmc/packed.h b/include/linux/mmc/packed.h
new file mode 100644
index 0000000..a952889
--- /dev/null
+++ b/include/linux/mmc/packed.h
@@ -0,0 +1,123 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2019 Linaro, Inc.
+// Author: Baolin Wang
+
+#ifndef MMC_PACKED_H
+#define MMC_PACKED_H
+
+#include
+#include
+#include
+
+#define MMC_PACKED_MAX_REQUEST_COUNT	16
+#define MMC_PACKED_FLUSH_SIZE		(128 * 1024)
+
+struct mmc_packed;
+
+struct mmc_packed_request {
+	struct list_head	list;
+	u32			nr_reqs;
+};
+
+struct mmc_packed_ops {
+	void	(*packed_algo)(struct mmc_packed *packed);
+	int	(*prepare_hardware)(struct mmc_packed *packed);
+	int	(*unprepare_hardware)(struct mmc_packed *packed);
+	int	(*packed_request)(struct mmc_packed *packed,
+				  struct mmc_packed_request *prq);
+};
+
+struct mmc_packed {
+	struct list_head	list;
+	bool			busy;
+	bool			rqs_pending;
+	bool			running;
+	bool			waiting_for_idle;
+	spinlock_t		lock;
+	u32			max_entries;
+	unsigned long		rqs_len;
+
+	struct mmc_host		*host;
+	struct mmc_packed_request prq;
+	const struct mmc_packed_ops *ops;
+
+	struct work_struct	complete_work;
+	struct list_head	complete_list;
+
+	wait_queue_head_t	wait_queue;
+};
+
+#ifdef CONFIG_MMC_PACKED
+int mmc_packed_init(struct mmc_host *host, const struct mmc_packed_ops *ops,
+		    int max_packed);
+void mmc_packed_exit(struct mmc_host *host);
+void mmc_packed_finalize_requests(struct mmc_host *host,
+				  struct mmc_packed_request *prq);
+int mmc_packed_queue_request(struct mmc_packed *packed,
+			     struct mmc_request *mrq);
+void mmc_packed_pump_requests(struct mmc_packed *packed);
+bool mmc_packed_queue_is_busy(struct mmc_packed *packed);
+unsigned long mmc_packed_queue_length(struct mmc_packed *packed);
+void mmc_packed_queue_commit_rqs(struct mmc_packed *packed);
+int mmc_packed_wait_for_idle(struct mmc_packed *packed);
+
+int mmc_packed_queue_start(struct mmc_packed *packed);
+int mmc_packed_queue_stop(struct mmc_packed *packed);
+
+/* Some packed algorithm helpers */
+void mmc_packed_algo_rw(struct mmc_packed *packed);
+void mmc_packed_algo_ro(struct mmc_packed *packed);
+void mmc_packed_algo_wo(struct mmc_packed *packed);
+#else
+static inline int mmc_packed_init(struct mmc_host *host,
+				  const struct mmc_packed_ops *ops,
+				  int max_packed)
+{
+	return 0;
+}
+static inline void mmc_packed_exit(struct mmc_host *host)
+{ }
+static inline void mmc_packed_finalize_requests(struct mmc_host *host,
+						struct mmc_packed_request *prq)
+{ }
+static inline int mmc_packed_queue_request(struct mmc_packed *packed,
+					   struct mmc_request *mrq)
+{
+	return -EINVAL;
+}
+/* Stub must match the declaration above, which returns void */
+static inline void mmc_packed_pump_requests(struct mmc_packed *packed)
+{ }
+static inline bool mmc_packed_queue_is_busy(struct mmc_packed *packed)
+{
+	return false;
+}
+static inline unsigned long mmc_packed_queue_length(struct mmc_packed *packed)
+{
+	return 0;
+}
+static inline void mmc_packed_queue_commit_rqs(struct mmc_packed *packed)
+{ }
+static inline int mmc_packed_wait_for_idle(struct mmc_packed *packed)
+{
+	return -EBUSY;
+}
+static inline int mmc_packed_queue_start(struct mmc_packed *packed)
+{
+	return -EINVAL;
+}
+static inline int mmc_packed_queue_stop(struct mmc_packed *packed)
+{
+	return -EINVAL;
+}
+static inline void mmc_packed_algo_rw(struct mmc_packed *packed)
+{ }
+static inline void mmc_packed_algo_ro(struct mmc_packed *packed)
+{ }
+static inline void mmc_packed_algo_wo(struct mmc_packed *packed)
+{ }
+
+#endif
+
+#endif
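As a usage sketch (not part of this patch): a host driver would supply a struct mmc_packed_ops and call mmc_packed_init() at probe time. The my_* names below are hypothetical; only mmc_packed_init(), mmc_packed_algo_rw(), mmc_packed_finalize_requests(), MMC_PACKED_MAX_REQUEST_COUNT and struct mmc_packed_ops come from this patch.

#include <linux/mmc/host.h>
#include <linux/mmc/packed.h>

/* Hypothetical ->packed_request() callback for a host driver */
static int my_packed_request(struct mmc_packed *packed,
			     struct mmc_packed_request *prq)
{
	/*
	 * Walk prq->list, translate each mmc_request into hardware
	 * descriptors, then kick the controller. On completion, the
	 * driver calls mmc_packed_finalize_requests(packed->host, prq).
	 */
	return 0;
}

static const struct mmc_packed_ops my_packed_ops = {
	.packed_algo	= mmc_packed_algo_rw,	/* pack both reads and writes */
	.packed_request	= my_packed_request,
	/* .prepare_hardware / .unprepare_hardware are optional */
};

static int my_host_enable_packed(struct mmc_host *mmc)
{
	/* Allow up to MMC_PACKED_MAX_REQUEST_COUNT requests per packed request */
	return mmc_packed_init(mmc, &my_packed_ops,
			       MMC_PACKED_MAX_REQUEST_COUNT);
}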
From patchwork Mon Jul 22 13:09:38 2019
X-Patchwork-Submitter: "(Exiting) Baolin Wang"
X-Patchwork-Id: 169388
From: Baolin Wang
To: axboe@kernel.dk, adrian.hunter@intel.com, ulf.hansson@linaro.org
Cc: zhang.lyra@gmail.com, orsonzhai@gmail.com, arnd@arndb.de, linus.walleij@linaro.org, baolin.wang@linaro.org, vincent.guittot@linaro.org, linux-mmc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org
Subject: [RFC PATCH 3/7] mmc: host: sdhci: Introduce ADMA3 transfer mode
Date: Mon, 22 Jul 2019 21:09:38 +0800

The standard SD host controller can optionally support the ADMA3 transfer mode. ADMA3 uses a command descriptor to issue an SD command, and a multi-block data transfer is programmed by using a pair of command descriptor and ADMA2 descriptor; ADMA3 then performs multiple multi-block data transfers by using an integrated descriptor. This is a preparation patch that adds the ADMA3 structures and expands the ADMA buffer to support the ADMA3 transfer mode.
Signed-off-by: Baolin Wang
---
 drivers/mmc/host/sdhci.c | 105 ++++++++++++++++++++++++++++++++++++++--------
 drivers/mmc/host/sdhci.h |  48 +++++++++++++++++++++
 2 files changed, 136 insertions(+), 17 deletions(-)

-- 
1.7.9.5

diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index 59acf8e..e57a5b7 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -240,7 +240,7 @@ static void sdhci_do_reset(struct sdhci_host *host, u8 mask)
 	host->ops->reset(host, mask);
 
 	if (mask & SDHCI_RESET_ALL) {
-		if (host->flags & (SDHCI_USE_SDMA | SDHCI_USE_ADMA)) {
+		if (host->flags & (SDHCI_USE_SDMA | SDHCI_USE_ADMA | SDHCI_USE_ADMA3)) {
 			if (host->ops->enable_dma)
 				host->ops->enable_dma(host);
 		}
@@ -3750,10 +3750,17 @@ int sdhci_setup_host(struct sdhci_host *host)
 	    (host->caps & SDHCI_CAN_DO_ADMA2))
 		host->flags |= SDHCI_USE_ADMA;
 
+	if ((host->quirks2 & SDHCI_QUIRK2_USE_ADMA3_SUPPORT) &&
+	    (host->flags & SDHCI_USE_ADMA) &&
+	    (host->caps1 & SDHCI_CAN_DO_ADMA3)) {
+		DBG("Enable ADMA3 mode for data transfer\n");
+		host->flags |= SDHCI_USE_ADMA3;
+	}
+
 	if ((host->quirks & SDHCI_QUIRK_BROKEN_ADMA) &&
 	    (host->flags & SDHCI_USE_ADMA)) {
 		DBG("Disabling ADMA as it is marked broken\n");
-		host->flags &= ~SDHCI_USE_ADMA;
+		host->flags &= ~(SDHCI_USE_ADMA | SDHCI_USE_ADMA3);
 	}
 
 	/*
@@ -3775,7 +3782,7 @@ int sdhci_setup_host(struct sdhci_host *host)
 	if (ret) {
 		pr_warn("%s: No suitable DMA available - falling back to PIO\n",
 			mmc_hostname(mmc));
-		host->flags &= ~(SDHCI_USE_SDMA | SDHCI_USE_ADMA);
+		host->flags &= ~(SDHCI_USE_SDMA | SDHCI_USE_ADMA | SDHCI_USE_ADMA3);
 		ret = 0;
 	}
 
@@ -3799,31 +3806,68 @@ int sdhci_setup_host(struct sdhci_host *host)
 			host->desc_sz = SDHCI_ADMA2_32_DESC_SZ;
 		}
 
+		host->adma3_table_cnt = 1;
+
+		if (host->flags & SDHCI_USE_ADMA3) {
+			/* We can pack a maximum of 16 requests at once */
+			host->adma3_table_cnt = SDHCI_MAX_ADMA3_ENTRIES;
+
+			if (host->flags & SDHCI_USE_64_BIT_DMA)
+				host->integr_desc_sz = SDHCI_INTEGR_64_DESC_SZ;
+			else
+				host->integr_desc_sz = SDHCI_INTEGR_32_DESC_SZ;
+
+			host->cmd_desc_sz = SDHCI_ADMA3_CMD_DESC_SZ;
+			host->cmd_table_sz = host->adma3_table_cnt *
+				SDHCI_ADMA3_CMD_DESC_SZ * SDHCI_ADMA3_CMD_DESC_ENTRIES;
+
+			buf = dma_alloc_coherent(mmc_dev(mmc),
+						 host->adma3_table_cnt *
+						 host->integr_desc_sz,
+						 &dma, GFP_KERNEL);
+			if (!buf) {
+				pr_warn("%s: Unable to allocate ADMA3 integrated buffers - falling back to ADMA\n",
+					mmc_hostname(mmc));
+				host->flags &= ~SDHCI_USE_ADMA3;
+				host->adma3_table_cnt = 1;
+			} else {
+				host->integr_table = buf;
+				host->integr_addr = dma;
+			}
+		}
+
 		host->align_buffer_sz = SDHCI_MAX_SEGS * SDHCI_ADMA2_ALIGN;
 
 		/*
 		 * Use zalloc to zero the reserved high 32-bits of 128-bit
 		 * descriptors so that they never need to be written.
 		 */
 		buf = dma_alloc_coherent(mmc_dev(mmc),
-					 host->align_buffer_sz + host->adma_table_sz,
+					 host->align_buffer_sz *
+					 host->adma3_table_cnt +
+					 host->cmd_table_sz +
+					 host->adma_table_sz *
+					 host->adma3_table_cnt,
 					 &dma, GFP_KERNEL);
 		if (!buf) {
 			pr_warn("%s: Unable to allocate ADMA buffers - falling back to standard DMA\n",
 				mmc_hostname(mmc));
-			host->flags &= ~SDHCI_USE_ADMA;
-		} else if ((dma + host->align_buffer_sz) &
+			host->flags &= ~(SDHCI_USE_ADMA | SDHCI_USE_ADMA3);
+		} else if ((dma + host->align_buffer_sz * host->adma3_table_cnt) &
 			   (SDHCI_ADMA2_DESC_ALIGN - 1)) {
 			pr_warn("%s: unable to allocate aligned ADMA descriptor\n",
 				mmc_hostname(mmc));
-			host->flags &= ~SDHCI_USE_ADMA;
-			dma_free_coherent(mmc_dev(mmc), host->align_buffer_sz +
-					  host->adma_table_sz, buf, dma);
+			host->flags &= ~(SDHCI_USE_ADMA | SDHCI_USE_ADMA3);
+			dma_free_coherent(mmc_dev(mmc), host->align_buffer_sz *
+					  host->adma3_table_cnt +
+					  host->cmd_table_sz +
+					  host->adma_table_sz *
+					  host->adma3_table_cnt, buf, dma);
 		} else {
 			host->align_buffer = buf;
 			host->align_addr = dma;
-			host->adma_table = buf + host->align_buffer_sz;
-			host->adma_addr = dma + host->align_buffer_sz;
+			host->adma_table = buf + host->align_buffer_sz * host->adma3_table_cnt;
+			host->adma_addr = dma + host->align_buffer_sz * host->adma3_table_cnt;
 		}
 	}
 
@@ -4222,12 +4266,21 @@ int sdhci_setup_host(struct sdhci_host *host)
 		regulator_disable(mmc->supply.vqmmc);
 undma:
 	if (host->align_buffer)
-		dma_free_coherent(mmc_dev(mmc), host->align_buffer_sz +
-				  host->adma_table_sz, host->align_buffer,
+		dma_free_coherent(mmc_dev(mmc),
+				  host->align_buffer_sz * host->adma3_table_cnt +
+				  host->cmd_table_sz +
+				  host->adma_table_sz * host->adma3_table_cnt,
+				  host->align_buffer,
 				  host->align_addr);
 	host->adma_table = NULL;
 	host->align_buffer = NULL;
 
+	if (host->integr_table)
+		dma_free_coherent(mmc_dev(mmc),
+				  host->adma3_table_cnt * host->integr_desc_sz,
+				  host->integr_table, host->integr_addr);
+	host->integr_table = NULL;
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(sdhci_setup_host);
@@ -4240,11 +4293,20 @@ void sdhci_cleanup_host(struct sdhci_host *host)
 		regulator_disable(mmc->supply.vqmmc);
 
 	if (host->align_buffer)
-		dma_free_coherent(mmc_dev(mmc), host->align_buffer_sz +
-				  host->adma_table_sz, host->align_buffer,
+		dma_free_coherent(mmc_dev(mmc),
+				  host->align_buffer_sz * host->adma3_table_cnt +
+				  host->cmd_table_sz +
+				  host->adma_table_sz * host->adma3_table_cnt,
+				  host->align_buffer,
 				  host->align_addr);
 	host->adma_table = NULL;
 	host->align_buffer = NULL;
+
+	if (host->integr_table)
+		dma_free_coherent(mmc_dev(mmc),
+				  host->adma3_table_cnt * host->integr_desc_sz,
+				  host->integr_table, host->integr_addr);
+	host->integr_table = NULL;
 }
 EXPORT_SYMBOL_GPL(sdhci_cleanup_host);
 
@@ -4372,12 +4434,21 @@ void sdhci_remove_host(struct sdhci_host *host, int dead)
 		regulator_disable(mmc->supply.vqmmc);
 
 	if (host->align_buffer)
-		dma_free_coherent(mmc_dev(mmc), host->align_buffer_sz +
-				  host->adma_table_sz, host->align_buffer,
+		dma_free_coherent(mmc_dev(mmc),
+				  host->align_buffer_sz * host->adma3_table_cnt +
+				  host->cmd_table_sz +
+				  host->adma_table_sz * host->adma3_table_cnt,
+				  host->align_buffer,
 				  host->align_addr);
 	host->adma_table = NULL;
 	host->align_buffer = NULL;
+
+	if (host->integr_table)
+		dma_free_coherent(mmc_dev(mmc),
+				  host->adma3_table_cnt * host->integr_desc_sz,
+				  host->integr_table, host->integr_addr);
+	host->integr_table = NULL;
 }
 EXPORT_SYMBOL_GPL(sdhci_remove_host);
 
diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
index 89fd965..010cc29 100644
--- a/drivers/mmc/host/sdhci.h
+++ b/drivers/mmc/host/sdhci.h
@@ -273,6 +273,9 @@
 #define SDHCI_PRESET_SDCLK_FREQ_MASK	0x3FF
 #define SDHCI_PRESET_SDCLK_FREQ_SHIFT	0
 
+#define SDHCI_ADMA3_ADDRESS	0x78
+#define SDHCI_ADMA3_ADDRESS_HI	0x7c
+
 #define SDHCI_SLOT_INT_STATUS	0xFC
 
 #define SDHCI_HOST_VERSION	0xFE
@@ -345,6 +348,41 @@ struct sdhci_adma2_64_desc {
 #define ADMA2_NOP_END_VALID	0x3
 #define ADMA2_END		0x2
 
+#define SDHCI_MAX_ADMA3_ENTRIES	16
+
+/* ADMA3 command descriptor */
+struct sdhci_adma3_cmd_desc {
+	__le32	cmd;
+	__le32	reg;
+} __packed __aligned(4);
+
+#define ADMA3_TRAN_VALID	0x9
+#define ADMA3_TRAN_END		0xb
+
+/* ADMA3 command descriptor size */
+#define SDHCI_ADMA3_CMD_DESC_ENTRIES	4
+#define SDHCI_ADMA3_CMD_DESC_SZ		8
+
+/* ADMA3 integrated 32-bit descriptor */
+struct sdhci_integr_32_desc {
+	__le32	cmd;
+	__le32	addr;
+} __packed __aligned(4);
+
+#define SDHCI_INTEGR_32_DESC_SZ	8
+
+/* ADMA3 integrated 64-bit descriptor. */
+struct sdhci_integr_64_desc {
+	__le32	cmd;
+	__le32	addr_lo;
+	__le32	addr_hi;
+} __packed __aligned(4);
+
+#define SDHCI_INTEGR_64_DESC_SZ	16
+
+#define ADMA3_INTEGR_TRAN_VALID	0x39
+#define ADMA3_INTEGR_TRAN_END	0x3b
+
 /*
  * Maximum segments assuming a 512KiB maximum requisition size and a minimum
  * 4KiB page size.
@@ -481,6 +519,8 @@ struct sdhci_host {
  * block count.
  */
 #define SDHCI_QUIRK2_USE_32BIT_BLK_CNT			(1<<18)
+/* use ADMA3 for data read/write if the hardware supports it */
+#define SDHCI_QUIRK2_USE_ADMA3_SUPPORT			(1<<19)
 
 	int irq;		/* Device IRQ */
 	void __iomem *ioaddr;	/* Mapped address */
@@ -517,6 +557,7 @@ struct sdhci_host {
 #define SDHCI_SIGNALING_330	(1<<14)	/* Host is capable of 3.3V signaling */
 #define SDHCI_SIGNALING_180	(1<<15)	/* Host is capable of 1.8V signaling */
 #define SDHCI_SIGNALING_120	(1<<16)	/* Host is capable of 1.2V signaling */
+#define SDHCI_USE_ADMA3		(1<<17)	/* Host is ADMA3 capable */
 
 	unsigned int version;	/* SDHCI spec. version */
 
@@ -547,14 +588,19 @@ struct sdhci_host {
 
 	void *adma_table;	/* ADMA descriptor table */
 	void *align_buffer;	/* Bounce buffer */
+	void *integr_table;	/* ADMA3 integrated descriptor table */
 
 	size_t adma_table_sz;	/* ADMA descriptor table size */
 	size_t align_buffer_sz;	/* Bounce buffer size */
+	size_t cmd_table_sz;	/* ADMA3 command descriptor table size */
 
 	dma_addr_t adma_addr;	/* Mapped ADMA descr. table */
 	dma_addr_t align_addr;	/* Mapped bounce buffer */
+	dma_addr_t integr_addr;	/* Mapped ADMA3 integrated descr. table */
 
 	unsigned int desc_sz;	/* ADMA descriptor size */
+	unsigned int cmd_desc_sz;	/* ADMA3 command descriptor size */
+	unsigned int integr_desc_sz;	/* ADMA3 integrated descriptor size */
 
 	struct workqueue_struct *complete_wq;	/* Request completion wq */
 	struct work_struct	complete_work;	/* Request completion work */
@@ -600,6 +646,8 @@ struct sdhci_host {
 
 	/* Host ADMA table count */
 	u32			adma_table_cnt;
+	/* Host ADMA3 table count */
+	u32			adma3_table_cnt;
 
 	u64			data_timeout;
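For illustration only: using the structures added above, a driver could fill one entry of the ADMA3 integrated descriptor table as sketched below. This assumes the 64-bit DMA layout (struct sdhci_integr_64_desc); the sdhci_adma3_write_integr_desc() helper is hypothetical, and the real ADMA3 programming is done in a later patch of this series.

#include <linux/kernel.h>

/*
 * Hypothetical helper: point integrated descriptor 'index' at one
 * command-descriptor/ADMA2-descriptor pair, marking the final entry
 * so the controller stops after it.
 */
static void sdhci_adma3_write_integr_desc(struct sdhci_host *host,
					  int index,
					  dma_addr_t cmd_desc_addr,
					  bool last)
{
	struct sdhci_integr_64_desc *desc = host->integr_table;

	desc += index;
	desc->addr_lo = cpu_to_le32(lower_32_bits(cmd_desc_addr));
	desc->addr_hi = cpu_to_le32(upper_32_bits(cmd_desc_addr));
	desc->cmd = cpu_to_le32(last ? ADMA3_INTEGR_TRAN_END :
				       ADMA3_INTEGR_TRAN_VALID);
}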
From patchwork Mon Jul 22 13:09:40 2019
X-Patchwork-Submitter: "(Exiting) Baolin Wang"
X-Patchwork-Id: 169390
From: Baolin Wang
To: axboe@kernel.dk, adrian.hunter@intel.com, ulf.hansson@linaro.org
Cc: zhang.lyra@gmail.com, orsonzhai@gmail.com, arnd@arndb.de, linus.walleij@linaro.org, baolin.wang@linaro.org, vincent.guittot@linaro.org, linux-mmc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org
Subject: [RFC PATCH 5/7] mmc: host: sdhci: Remove redundant sg_count member of struct sdhci_host
Date: Mon, 22 Jul 2019 21:09:40 +0800

The mmc_data structure already has a member to save the mapped sg count, so there is no need to keep a redundant sg_count member in struct sdhci_host; remove it.
This is also a preparation patch to support the ADMA3 transfer mode.

Signed-off-by: Baolin Wang
---
 drivers/mmc/host/sdhci.c |   12 +++++-------
 drivers/mmc/host/sdhci.h |    2 --
 2 files changed, 5 insertions(+), 9 deletions(-)

--
1.7.9.5

diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index 5760b7c..9fec82f 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -696,7 +696,7 @@ static void sdhci_adma_mark_end(void *desc)
 }
 
 static void sdhci_adma_table_pre(struct sdhci_host *host,
-	struct mmc_data *data, int sg_count)
+	struct mmc_data *data)
 {
 	struct scatterlist *sg;
 	unsigned long flags;
@@ -710,14 +710,12 @@ static void sdhci_adma_table_pre(struct sdhci_host *host,
 	 * We currently guess that it is LE.
 	 */
 
-	host->sg_count = sg_count;
-
 	desc = host->adma_table;
 	align = host->align_buffer;
 
 	align_addr = host->align_addr;
 
-	for_each_sg(data->sg, sg, host->sg_count, i) {
+	for_each_sg(data->sg, sg, data->sg_count, i) {
 		addr = sg_dma_address(sg);
 		len = sg_dma_len(sg);
@@ -788,7 +786,7 @@ static void sdhci_adma_table_post(struct sdhci_host *host,
 	bool has_unaligned = false;
 
 	/* Do a quick scan of the SG list for any unaligned mappings */
-	for_each_sg(data->sg, sg, host->sg_count, i)
+	for_each_sg(data->sg, sg, data->sg_count, i)
 		if (sg_dma_address(sg) & SDHCI_ADMA2_MASK) {
 			has_unaligned = true;
 			break;
@@ -800,7 +798,7 @@ static void sdhci_adma_table_post(struct sdhci_host *host,
 		align = host->align_buffer;
 
-		for_each_sg(data->sg, sg, host->sg_count, i) {
+		for_each_sg(data->sg, sg, data->sg_count, i) {
 			if (sg_dma_address(sg) & SDHCI_ADMA2_MASK) {
 				size = SDHCI_ADMA2_ALIGN -
 					(sg_dma_address(sg) & SDHCI_ADMA2_MASK);
@@ -1094,7 +1092,7 @@ static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd)
 		WARN_ON(1);
 		host->flags &= ~SDHCI_REQ_USE_DMA;
 	} else if (host->flags & SDHCI_USE_ADMA) {
-		sdhci_adma_table_pre(host, data, sg_cnt);
+		sdhci_adma_table_pre(host, data);
 
 		sdhci_writel(host, host->adma_addr, SDHCI_ADMA_ADDRESS);
 		if (host->flags & SDHCI_USE_64_BIT_DMA)
diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
index 010cc29..4548d9c 100644
--- a/drivers/mmc/host/sdhci.h
+++ b/drivers/mmc/host/sdhci.h
@@ -584,8 +584,6 @@ struct sdhci_host {
 	struct sg_mapping_iter sg_miter;	/* SG state for PIO */
 	unsigned int blocks;	/* remaining PIO blocks */
 
-	int sg_count;		/* Mapped sg entries */
-
 	void *adma_table;	/* ADMA descriptor table */
 	void *align_buffer;	/* Bounce buffer */
 
 	void *integr_table;	/* ADMA3 integrated descriptor table */
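[Editor's note on the convention this patch relies on: dma_map_sg() can coalesce
entries, so the value it returns, not sg_len, is the iteration bound, and struct
mmc_data already records it. A minimal, self-contained sketch of that pattern;
example_map_and_walk() is an illustrative name, not a function from this series.]

```c
#include <linux/dma-mapping.h>
#include <linux/mmc/core.h>
#include <linux/mmc/host.h>
#include <linux/scatterlist.h>

/* Sketch: the mapped sg count lives with the data it describes. */
static int example_map_and_walk(struct device *dev, struct mmc_data *data)
{
	struct scatterlist *sg;
	int i;

	/*
	 * dma_map_sg() may merge entries, so its return value, not
	 * data->sg_len, is the number of valid DMA segments.
	 */
	data->sg_count = dma_map_sg(dev, data->sg, data->sg_len,
				    mmc_get_dma_dir(data));
	if (!data->sg_count)
		return -EIO;

	/* Any later walker can take the bound from the data itself. */
	for_each_sg(data->sg, sg, data->sg_count, i)
		pr_debug("seg %d: addr %pad len %u\n",
			 i, &sg_dma_address(sg), sg_dma_len(sg));

	return 0;
}
```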
From patchwork Mon Jul 22 13:09:41 2019
X-Patchwork-Submitter: "\(Exiting\) Baolin Wang"
X-Patchwork-Id: 169391
From: Baolin Wang
To: axboe@kernel.dk, adrian.hunter@intel.com, ulf.hansson@linaro.org
Cc: zhang.lyra@gmail.com, orsonzhai@gmail.com, arnd@arndb.de,
	linus.walleij@linaro.org, baolin.wang@linaro.org,
	vincent.guittot@linaro.org, linux-mmc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-block@vger.kernel.org
Subject: [RFC PATCH 6/7] mmc: host: sdhci: Add MMC packed request support
Date: Mon, 22 Jul 2019 21:09:41 +0800
Message-Id: <82e3dbec69ad25250936cd4e3fed82013ed0115e.1563782844.git.baolin.wang@linaro.org>
X-Mailer: git-send-email 1.7.9.5
X-Mailing-List: linux-kernel@vger.kernel.org

This patch adds the MMC packed operations to support packed requests,
and enables the ADMA3 transfer mode to implement this feature.

The ADMA3 transfer mode is enabled only for read and write commands. In
that mode we disable the command complete interrupt and the data
timeout interrupt, and use a software data timeout instead. Other
non-data commands still use the ADMA2 transfer mode, since there is no
benefit in using ADMA3 for them.

Signed-off-by: Baolin Wang
---
 drivers/mmc/host/sdhci.c |  329 +++++++++++++++++++++++++++++++++++++++++++---
 drivers/mmc/host/sdhci.h |    9 ++
 2 files changed, 322 insertions(+), 16 deletions(-)

--
1.7.9.5

diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index 9fec82f..3c4f701 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -109,6 +109,19 @@ void sdhci_dumpregs(struct sdhci_host *host)
 		}
 	}
 
+	if (host->adma3_enabled) {
+		if (host->flags & SDHCI_USE_64_BIT_DMA) {
+			SDHCI_DUMP("ADMA3 Err: 0x%08x | ADMA3 Ptr: 0x%08x%08x\n",
+				   sdhci_readl(host, SDHCI_ADMA_ERROR),
+				   sdhci_readl(host, SDHCI_ADMA3_ADDRESS_HI),
+				   sdhci_readl(host, SDHCI_ADMA3_ADDRESS));
+		} else {
+			SDHCI_DUMP("ADMA3 Err: 0x%08x | ADMA3 Ptr: 0x%08x\n",
+				   sdhci_readl(host, SDHCI_ADMA_ERROR),
+				   sdhci_readl(host, SDHCI_ADMA3_ADDRESS));
+		}
+	}
+
 	SDHCI_DUMP("============================================\n");
 }
 EXPORT_SYMBOL_GPL(sdhci_dumpregs);
@@ -286,7 +299,9 @@ static void sdhci_config_dma(struct sdhci_host *host)
 		goto out;
 
 	/* Note if DMA Select is zero then SDMA is selected */
-	if (host->flags & SDHCI_USE_ADMA)
+	if (host->adma3_enabled)
+		ctrl |= SDHCI_CTRL_ADMA3;
+	else if (host->flags & SDHCI_USE_ADMA)
 		ctrl |= SDHCI_CTRL_ADMA32;
 
 	if (host->flags & SDHCI_USE_64_BIT_DMA) {
@@ -445,7 +460,7 @@ static inline void sdhci_led_deactivate(struct sdhci_host *host)
 static void sdhci_mod_timer(struct sdhci_host *host, struct mmc_request *mrq,
 			    unsigned long timeout)
 {
-	if (sdhci_data_line_cmd(mrq->cmd))
+	if (host->prq || sdhci_data_line_cmd(mrq->cmd))
 		mod_timer(&host->data_timer, timeout);
 	else
 		mod_timer(&host->timer, timeout);
@@ -453,7 +468,7 @@ static void sdhci_mod_timer(struct sdhci_host *host, struct mmc_request *mrq,
 
 static void sdhci_del_timer(struct sdhci_host *host, struct mmc_request *mrq)
 {
-	if (sdhci_data_line_cmd(mrq->cmd))
+	if (host->prq || sdhci_data_line_cmd(mrq->cmd))
 		del_timer(&host->data_timer);
 	else
 		del_timer(&host->timer);
@@ -710,10 +725,16 @@ static void sdhci_adma_table_pre(struct sdhci_host *host,
 	 * We currently guess that it is LE.
 	 */
 
-	desc = host->adma_table;
-	align = host->align_buffer;
-
-	align_addr = host->align_addr;
+	if (host->adma3_enabled) {
+		desc = host->adma3_pos;
+		align = host->adma3_align_pos;
+		align_addr = host->align_addr +
+			host->adma3_align_pos - host->align_buffer;
+	} else {
+		desc = host->adma_table;
+		align = host->align_buffer;
+		align_addr = host->align_addr;
+	}
 
 	for_each_sg(data->sg, sg, data->sg_count, i) {
 		addr = sg_dma_address(sg);
@@ -771,6 +792,11 @@ static void sdhci_adma_table_pre(struct sdhci_host *host,
 		/* Add a terminating entry - nop, end, valid */
 		__sdhci_adma_write_desc(host, &desc, 0, 0, ADMA2_NOP_END_VALID);
 	}
+
+	if (host->adma3_enabled) {
+		host->adma3_pos = desc;
+		host->adma3_align_pos = align;
+	}
 }
 
 static void sdhci_adma_table_post(struct sdhci_host *host,
@@ -796,7 +822,10 @@ static void sdhci_adma_table_post(struct sdhci_host *host,
 		dma_sync_sg_for_cpu(mmc_dev(host->mmc), data->sg,
 				    data->sg_len, DMA_FROM_DEVICE);
 
-		align = host->align_buffer;
+		if (host->adma3_enabled)
+			align = host->adma3_align_pos;
+		else
+			align = host->align_buffer;
 
 		for_each_sg(data->sg, sg, data->sg_count, i) {
 			if (sg_dma_address(sg) & SDHCI_ADMA2_MASK) {
@@ -810,6 +839,9 @@ static void sdhci_adma_table_post(struct sdhci_host *host,
 				align += SDHCI_ADMA2_ALIGN;
 			}
 		}
+
+		if (host->adma3_enabled)
+			host->adma3_align_pos = align;
 	}
 }
 
@@ -1014,13 +1046,13 @@ static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd)
 
 	host->data_timeout = 0;
 
-	if (sdhci_data_line_cmd(cmd))
+	if (!host->prq && sdhci_data_line_cmd(cmd))
 		sdhci_set_timeout(host, cmd);
 
 	if (!data)
 		return;
 
-	WARN_ON(host->data);
+	WARN_ON(!host->prq && host->data);
 
 	/* Sanity checks */
 	BUG_ON(data->blksz * data->blocks > 524288);
@@ -1094,11 +1126,14 @@ static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd)
 	} else if (host->flags & SDHCI_USE_ADMA) {
 		sdhci_adma_table_pre(host, data);
 
-		sdhci_writel(host, host->adma_addr, SDHCI_ADMA_ADDRESS);
-		if (host->flags & SDHCI_USE_64_BIT_DMA)
-			sdhci_writel(host,
-				     (u64)host->adma_addr >> 32,
-				     SDHCI_ADMA_ADDRESS_HI);
+		if (!host->adma3_enabled) {
+			sdhci_writel(host, host->adma_addr,
+				     SDHCI_ADMA_ADDRESS);
+			if (host->flags & SDHCI_USE_64_BIT_DMA)
+				sdhci_writel(host,
+					     (u64)host->adma_addr >> 32,
+					     SDHCI_ADMA_ADDRESS_HI);
+		}
 	} else {
 		WARN_ON(sg_cnt != 1);
 		sdhci_set_sdma_addr(host, sdhci_sdma_address(host));
@@ -1121,6 +1156,9 @@ static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd)
 
 	sdhci_set_transfer_irqs(host);
 
+	if (host->adma3_enabled)
+		return;
+
 	/* Set the DMA boundary value and block size */
 	sdhci_writew(host, SDHCI_MAKE_BLKSZ(host->sdma_boundary, data->blksz),
 		     SDHCI_BLOCK_SIZE);
@@ -1278,6 +1316,36 @@ static void sdhci_finish_mrq(struct sdhci_host *host, struct mmc_request *mrq)
 	queue_work(host->complete_wq, &host->complete_work);
 }
 
+static void sdhci_finish_packed_data(struct sdhci_host *host, int error)
+{
+	struct mmc_request *mrq;
+
+	host->data = NULL;
+	/*
+	 * Reset the align buffer pointer address for unaligned mappings after
+	 * finishing the transfer.
+	 */
+	host->adma3_align_pos = host->align_buffer;
+
+	if (error)
+		sdhci_do_reset(host, SDHCI_RESET_DATA);
+
+	list_for_each_entry(mrq, &host->prq->list, packed_list) {
+		struct mmc_data *data = mrq->data;
+
+		sdhci_adma_table_post(host, data);
+		data->error = error;
+
+		if (data->error)
+			data->bytes_xfered = 0;
+		else
+			data->bytes_xfered = data->blksz * data->blocks;
+	}
+
+	sdhci_del_timer(host, NULL);
+	sdhci_led_deactivate(host);
+}
+
 static void sdhci_finish_data(struct sdhci_host *host)
 {
 	struct mmc_command *data_cmd = host->data_cmd;
@@ -1786,6 +1854,209 @@ void sdhci_set_power(struct sdhci_host *host, unsigned char mode,
  *                                                                           *
  *                                                                           *
 \*****************************************************************************/
 
+static void sdhci_adma3_write_cmd_desc(struct sdhci_host *host,
+				       struct mmc_command *cmd)
+{
+	struct mmc_data *data = cmd->data;
+	struct sdhci_adma3_cmd_desc *cmd_desc = host->adma3_pos;
+	int blksz, command;
+	u16 mode = 0;
+
+	/* Set block count */
+	cmd_desc->cmd = cpu_to_le32(ADMA3_TRAN_VALID);
+	cmd_desc->reg = cpu_to_le32(data->blocks);
+	cmd_desc++;
+
+	/* Set block size */
+	cmd_desc->cmd = cpu_to_le32(ADMA3_TRAN_VALID);
+	blksz = SDHCI_MAKE_BLKSZ(host->sdma_boundary, data->blksz);
+	cmd_desc->reg = cpu_to_le32(blksz);
+	cmd_desc++;
+
+	/* Set argument */
+	cmd_desc->cmd = cpu_to_le32(ADMA3_TRAN_VALID);
+	cmd_desc->reg = cpu_to_le32(cmd->arg);
+	cmd_desc++;
+
+	/* Set command and transfer mode */
+	if (data->flags & MMC_DATA_READ)
+		mode |= SDHCI_TRNS_READ;
+
+	if (!(host->quirks2 & SDHCI_QUIRK2_SUPPORT_SINGLE))
+		mode |= SDHCI_TRNS_BLK_CNT_EN;
+
+	if (mmc_op_multi(cmd->opcode) || data->blocks > 1)
+		mode |= SDHCI_TRNS_MULTI;
+
+	sdhci_auto_cmd_select(host, cmd, &mode);
+	mode |= SDHCI_TRNS_DMA;
+
+	command = sdhci_get_command(host, cmd);
+	command = (command << 16) | mode;
+	cmd_desc->cmd = cpu_to_le32(ADMA3_TRAN_END);
+	cmd_desc->reg = cpu_to_le32(command);
+
+	host->adma3_pos +=
+		SDHCI_ADMA3_CMD_DESC_SZ * SDHCI_ADMA3_CMD_DESC_ENTRIES;
+}
+
+static void sdhci_adma3_write_integr_desc(struct sdhci_host *host,
+					  dma_addr_t addr)
+{
+	struct sdhci_integr_64_desc *integr_desc = host->integr_table;
+
+	integr_desc->cmd = cpu_to_le32(ADMA3_INTEGR_TRAN_END);
+	integr_desc->addr_lo = cpu_to_le32((u32)addr);
+
+	if (host->flags & SDHCI_USE_64_BIT_DMA)
+		integr_desc->addr_hi = cpu_to_le32((u64)addr >> 32);
+}
+
+static void sdhci_set_adma3_addr(struct sdhci_host *host, dma_addr_t addr)
+{
+	sdhci_writel(host, addr, SDHCI_ADMA3_ADDRESS);
+	if (host->flags & SDHCI_USE_64_BIT_DMA)
+		sdhci_writel(host, (u64)addr >> 32, SDHCI_ADMA3_ADDRESS_HI);
+}
+
+int sdhci_prepare_packed(struct mmc_packed *packed)
+{
+	struct mmc_host *mmc = packed->host;
+	struct sdhci_host *host = mmc_priv(mmc);
+	unsigned long timeout, flags;
+	u32 mask;
+
+	spin_lock_irqsave(&host->lock, flags);
+
+	if (!(host->flags & SDHCI_USE_ADMA3) ||
+	    !(host->flags & (SDHCI_AUTO_CMD23 | SDHCI_AUTO_CMD12))) {
+		spin_unlock_irqrestore(&host->lock, flags);
+		pr_err("%s: Unsupported packed request\n",
+		       mmc_hostname(host->mmc));
+		return -EOPNOTSUPP;
+	}
+
+	/* Wait max 10 ms */
+	timeout = 10;
+	mask = SDHCI_CMD_INHIBIT | SDHCI_DATA_INHIBIT;
+
+	while (sdhci_readl(host, SDHCI_PRESENT_STATE) & mask) {
+		if (timeout == 0) {
+			sdhci_dumpregs(host);
+			spin_unlock_irqrestore(&host->lock, flags);
+
+			pr_err("%s: Controller never released inhibit bit(s).\n",
+			       mmc_hostname(host->mmc));
+			return -EIO;
+		}
+
+		timeout--;
+		mdelay(1);
+	}
+
+	/* Disable command complete event for ADMA3 mode */
+	host->ier &= ~SDHCI_INT_RESPONSE;
+	sdhci_writel(host, host->ier, SDHCI_INT_ENABLE);
+	sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE);
+
+	/*
+	 * Disable the data timeout interrupt; a software timeout is used
+	 * for packed requests instead.
+	 */
+	sdhci_set_data_timeout_irq(host, false);
+
+	/* Enable ADMA3 mode for packed request */
+	host->adma3_enabled = true;
+
+	spin_unlock_irqrestore(&host->lock, flags);
+
+	return 0;
+}
+
+int sdhci_unprepare_packed(struct mmc_packed *packed)
+{
+	struct mmc_host *mmc = packed->host;
+	struct sdhci_host *host = mmc_priv(mmc);
+	unsigned long flags;
+
+	spin_lock_irqsave(&host->lock, flags);
+
+	/* Disable ADMA3 mode after finishing packed request */
+	host->adma3_enabled = false;
+
+	/* Re-enable command complete event after ADMA3 mode */
+	host->ier |= SDHCI_INT_RESPONSE;
+
+	sdhci_writel(host, host->ier, SDHCI_INT_ENABLE);
+	sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE);
+	spin_unlock_irqrestore(&host->lock, flags);
+
+	return 0;
+}
+
+int sdhci_packed_request(struct mmc_packed *packed,
+			 struct mmc_packed_request *prq)
+{
+	struct mmc_host *mmc = packed->host;
+	struct sdhci_host *host = mmc_priv(mmc);
+	struct mmc_request *mrq;
+	unsigned long timeout, flags;
+	u64 data_timeout = 0;
+	dma_addr_t integr_addr;
+	int present;
+
+	/* First check card presence */
+	present = mmc->ops->get_cd(mmc);
+
+	spin_lock_irqsave(&host->lock, flags);
+
+	sdhci_led_activate(host);
+
+	if (!present || host->flags & SDHCI_DEVICE_DEAD) {
+		spin_unlock_irqrestore(&host->lock, flags);
+		return -ENOMEDIUM;
+	}
+
+	host->prq = prq;
+	host->adma3_pos = host->adma_table;
+	host->adma3_align_pos = host->align_buffer;
+	integr_addr = host->adma_addr;
+
+	list_for_each_entry(mrq, &prq->list, packed_list) {
+		struct mmc_command *cmd = mrq->cmd;
+
+		/* Set command descriptor */
+		sdhci_adma3_write_cmd_desc(host, cmd);
+		/* Set ADMA2 descriptors */
+		sdhci_prepare_data(host, cmd);
+		/* Set integrated descriptor */
+		sdhci_adma3_write_integr_desc(host, integr_addr);
+
+		/* Update the integrated descriptor address */
+		integr_addr =
+			host->adma_addr + (host->adma3_pos - host->adma_table);
+
+		/* Calculate each command's data timeout */
+		sdhci_calc_sw_timeout(host, cmd);
+		data_timeout += host->data_timeout;
+	}
+
+	timeout = jiffies;
+	if (data_timeout)
+		timeout += nsecs_to_jiffies(data_timeout);
+	else
+		timeout += 10 * HZ * prq->nr_reqs;
+	sdhci_mod_timer(host, NULL, timeout);
+
+	/* Start ADMA3 transfer */
+	sdhci_set_adma3_addr(host, host->integr_addr);
+
+	spin_unlock_irqrestore(&host->lock, flags);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(sdhci_packed_request);
+
 void sdhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
 {
 	struct sdhci_host *host;
@@ -2619,9 +2890,19 @@ static bool sdhci_request_done(struct sdhci_host *host)
 {
 	unsigned long flags;
 	struct mmc_request *mrq;
+	struct mmc_packed_request *prq;
 	int i;
 
 	spin_lock_irqsave(&host->lock, flags);
+	prq = host->prq;
+
+	if (prq) {
+		host->prq = NULL;
+		spin_unlock_irqrestore(&host->lock, flags);
+
+		mmc_packed_finalize_requests(host->mmc, prq);
+		return true;
+	}
 
 	for (i = 0; i < SDHCI_MAX_MRQS; i++) {
 		mrq = host->mrqs_done[i];
@@ -2763,6 +3044,17 @@ static void sdhci_timeout_data_timer(struct timer_list *t)
 
 	spin_lock_irqsave(&host->lock, flags);
 
+	if (host->prq) {
+		pr_err("%s: Packed request timeout waiting for hardware interrupt.\n",
+		       mmc_hostname(host->mmc));
+		sdhci_dumpregs(host);
+		sdhci_finish_packed_data(host, -ETIMEDOUT);
+		queue_work(host->complete_wq, &host->complete_work);
+		spin_unlock_irqrestore(&host->lock, flags);
+
+		return;
+	}
+
 	if (host->data || host->data_cmd ||
 	    (host->cmd && sdhci_data_line_cmd(host->cmd))) {
 		pr_err("%s: Timeout waiting for hardware interrupt.\n",
@@ -2965,7 +3257,9 @@ static void sdhci_data_irq(struct sdhci_host *host, u32 intmask)
 			host->ops->adma_workaround(host, intmask);
 	}
 
-	if (host->data->error)
+	if (host->prq)
+		sdhci_finish_packed_data(host, host->data->error);
+	else if (host->data->error)
 		sdhci_finish_data(host);
 	else {
 		if (intmask & (SDHCI_INT_DATA_AVAIL | SDHCI_INT_SPACE_AVAIL))
@@ -3137,6 +3431,9 @@ static irqreturn_t sdhci_irq(int irq, void *dev_id)
 				host->mrqs_done[i] = NULL;
 		}
 	}
+
+	if (host->prq)
+		result = IRQ_WAKE_THREAD;
 out:
 	spin_unlock(&host->lock);
diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
index 4548d9c..59cfa5d 100644
--- a/drivers/mmc/host/sdhci.h
+++ b/drivers/mmc/host/sdhci.h
@@ -574,6 +574,7 @@ struct sdhci_host {
 	bool pending_reset;	/* Cmd/data reset is pending */
 	bool irq_wake_enabled;	/* IRQ wakeup is enabled */
 	bool v4_mode;		/* Host Version 4 Enable */
+	bool adma3_enabled;	/* ADMA3 mode enabled */
 
 	struct mmc_request *mrqs_done[SDHCI_MAX_MRQS];	/* Requests done */
 	struct mmc_command *cmd;	/* Current command */
@@ -581,12 +582,15 @@ struct sdhci_host {
 	struct mmc_data *data;	/* Current data request */
 	unsigned int data_early:1;	/* Data finished before cmd */
 
+	struct mmc_packed_request *prq;	/* Current packed request */
 	struct sg_mapping_iter sg_miter;	/* SG state for PIO */
 	unsigned int blocks;	/* remaining PIO blocks */
 
 	void *adma_table;	/* ADMA descriptor table */
 	void *align_buffer;	/* Bounce buffer */
 
 	void *integr_table;	/* ADMA3 integrated descriptor table */
+	void *adma3_pos;	/* ADMA3 buffer position */
+	void *adma3_align_pos;	/* ADMA3 bounce buffer position */
 
 	size_t adma_table_sz;	/* ADMA descriptor table size */
 	size_t align_buffer_sz;	/* Bounce buffer size */
@@ -843,4 +847,9 @@ bool sdhci_cqe_irq(struct sdhci_host *host, u32 intmask, int *cmd_error,
 void sdhci_reset_tuning(struct sdhci_host *host);
 void sdhci_send_tuning(struct sdhci_host *host, u32 opcode);
 
+int sdhci_prepare_packed(struct mmc_packed *packed);
+int sdhci_unprepare_packed(struct mmc_packed *packed);
+int sdhci_packed_request(struct mmc_packed *packed,
+			 struct mmc_packed_request *prq);
+
 #endif /* __SDHCI_HW_H */
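[Editor's note: the exported trio above gives a host driver a complete
packed-request lifecycle. The core-side sequencing is not shown in this patch;
the sketch below reconstructs the expected calling order from the ops table in
the next patch, so the caller shape and the placement of unprepare are
assumptions, not code from this series.]

```c
/* Hypothetical caller-side sketch of the packed lifecycle. */
static int packed_dispatch_sketch(struct mmc_packed *packed,
				  struct mmc_packed_request *prq)
{
	int ret;

	/* Switch to ADMA3; mask command-complete and data-timeout IRQs. */
	ret = sdhci_prepare_packed(packed);
	if (ret)
		return ret;

	/*
	 * Build one command descriptor plus one ADMA2 table per request,
	 * chain them via integrated descriptors, and start the transfer.
	 */
	ret = sdhci_packed_request(packed, prq);
	if (ret) {
		/* On failure, restore ADMA2 mode and interrupts at once. */
		sdhci_unprepare_packed(packed);
		return ret;
	}

	/*
	 * On success the transfer completes asynchronously: the threaded
	 * IRQ handler finalizes every request in one go through
	 * mmc_packed_finalize_requests(), after which the core would call
	 * sdhci_unprepare_packed().
	 */
	return 0;
}
```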
From patchwork Mon Jul 22 13:09:42 2019
X-Patchwork-Submitter: "\(Exiting\) Baolin Wang"
X-Patchwork-Id: 169392
From: Baolin Wang
To: axboe@kernel.dk, adrian.hunter@intel.com, ulf.hansson@linaro.org
Cc: zhang.lyra@gmail.com, orsonzhai@gmail.com, arnd@arndb.de,
	linus.walleij@linaro.org, baolin.wang@linaro.org,
	vincent.guittot@linaro.org, linux-mmc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-block@vger.kernel.org
Subject: [RFC PATCH 7/7] mmc: host: sdhci-sprd: Add MMC packed request support
Date: Mon, 22 Jul 2019 21:09:42 +0800
Message-Id: <8331abb05ff0587f01093884cc2ba4f0d2a377cc.1563782844.git.baolin.wang@linaro.org>
X-Mailer: git-send-email 1.7.9.5
X-Mailing-List: linux-kernel@vger.kernel.org

Enable the ADMA3 transfer mode and add the packed operations to support
MMC packed requests, improving I/O performance.

Signed-off-by: Baolin Wang
---
 drivers/mmc/host/Kconfig      |    1 +
 drivers/mmc/host/sdhci-sprd.c |   22 ++++++++++++++++++++--
 2 files changed, 21 insertions(+), 2 deletions(-)

--
1.7.9.5

diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig
index 14d89a1..44ea3cc 100644
--- a/drivers/mmc/host/Kconfig
+++ b/drivers/mmc/host/Kconfig
@@ -619,6 +619,7 @@ config MMC_SDHCI_SPRD
 	depends on ARCH_SPRD
 	depends on MMC_SDHCI_PLTFM
 	select MMC_SDHCI_IO_ACCESSORS
+	select MMC_PACKED
 	help
 	  This selects the SDIO Host Controller in Spreadtrum
 	  SoCs, this driver supports R11(IP version: R11P0).
diff --git a/drivers/mmc/host/sdhci-sprd.c b/drivers/mmc/host/sdhci-sprd.c
index 80a9055..e5651fd 100644
--- a/drivers/mmc/host/sdhci-sprd.c
+++ b/drivers/mmc/host/sdhci-sprd.c
@@ -524,10 +524,18 @@ static void sdhci_sprd_phy_param_parse(struct sdhci_sprd_host *sprd_host,
 static const struct sdhci_pltfm_data sdhci_sprd_pdata = {
 	.quirks = SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK,
 	.quirks2 = SDHCI_QUIRK2_BROKEN_HS200 |
-		   SDHCI_QUIRK2_USE_32BIT_BLK_CNT,
+		   SDHCI_QUIRK2_USE_32BIT_BLK_CNT |
+		   SDHCI_QUIRK2_USE_ADMA3_SUPPORT,
 	.ops = &sdhci_sprd_ops,
 };
 
+static const struct mmc_packed_ops packed_ops = {
+	.packed_algo = mmc_packed_algo_rw,
+	.prepare_hardware = sdhci_prepare_packed,
+	.unprepare_hardware = sdhci_unprepare_packed,
+	.packed_request = sdhci_packed_request,
+};
+
 static int sdhci_sprd_probe(struct platform_device *pdev)
 {
 	struct sdhci_host *host;
@@ -642,10 +650,14 @@ static int sdhci_sprd_probe(struct platform_device *pdev)
 
 	sprd_host->flags = host->flags;
 
-	ret = __sdhci_add_host(host);
+	ret = mmc_packed_init(host->mmc, &packed_ops, 10);
 	if (ret)
 		goto err_cleanup_host;
 
+	ret = __sdhci_add_host(host);
+	if (ret)
+		goto err_packed;
+
 	pm_runtime_mark_last_busy(&pdev->dev);
 	pm_runtime_put_autosuspend(&pdev->dev);
 
@@ -653,6 +665,9 @@ static int sdhci_sprd_probe(struct platform_device *pdev)
 		 __func__, host->version);
 	return 0;
 
+err_packed:
+	mmc_packed_exit(host->mmc);
+
 err_cleanup_host:
 	sdhci_cleanup_host(host);
 
@@ -680,6 +695,7 @@ static int sdhci_sprd_remove(struct platform_device *pdev)
 	struct sdhci_sprd_host *sprd_host = TO_SPRD_HOST(host);
 	struct mmc_host *mmc = host->mmc;
 
+	mmc_packed_exit(mmc);
 	mmc_remove_host(mmc);
 	clk_disable_unprepare(sprd_host->clk_sdio);
 	clk_disable_unprepare(sprd_host->clk_enable);
@@ -702,6 +718,7 @@ static int sdhci_sprd_runtime_suspend(struct device *dev)
 	struct sdhci_host *host = dev_get_drvdata(dev);
 	struct sdhci_sprd_host *sprd_host = TO_SPRD_HOST(host);
 
+	mmc_packed_queue_stop(host->mmc->packed);
 	sdhci_runtime_suspend_host(host);
 
 	clk_disable_unprepare(sprd_host->clk_sdio);
@@ -730,6 +747,7 @@ static int sdhci_sprd_runtime_resume(struct device *dev)
 		goto clk_disable;
 	sdhci_runtime_resume_host(host);
+	mmc_packed_queue_start(host->mmc->packed);
 
 	return 0;
 
 clk_disable:
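[Editor's note on the probe rework above: mmc_packed_init() is called before
__sdhci_add_host(), so the host is never published without its packed queue,
and the new err_packed label unwinds in reverse order. A condensed sketch of
that ordering; sprd_probe_order_sketch() is illustrative only, and the
queue-depth argument 10 is simply the value this patch picks.]

```c
/* Condensed sketch of the probe ordering used in this patch. */
static int sprd_probe_order_sketch(struct sdhci_host *host)
{
	int ret;

	/* Initialize packed support first... */
	ret = mmc_packed_init(host->mmc, &packed_ops, 10);
	if (ret)
		return ret;

	/* ...then publish the host... */
	ret = __sdhci_add_host(host);
	if (ret) {
		/* ...and unwind in reverse order on failure. */
		mmc_packed_exit(host->mmc);
		return ret;
	}

	return 0;
}
```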