From patchwork Thu Oct 26 12:57:57 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 117233
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig,
	Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente,
	Avri Altman, Adrian Hunter, Linus Walleij
Subject: [PATCH 12/12 v4] mmc: switch MMC/SD to use blk-mq multiqueueing
Date: Thu, 26 Oct 2017 14:57:57 +0200
Message-Id: <20171026125757.10200-13-linus.walleij@linaro.org>
In-Reply-To: <20171026125757.10200-1-linus.walleij@linaro.org>
References: <20171026125757.10200-1-linus.walleij@linaro.org>
X-Mailing-List: linux-mmc@vger.kernel.org

This switches the MMC/SD stack to use the multiqueue block layer
interface. We kill off the kthread that was just calling
blk_fetch_request() and let blk-mq drive all traffic; this is how it
should work.

Because the submission mechanics have been switched around so that
completion of requests is now triggered from the host callbacks, we
keep the same performance for linear reads/writes as with the old
block layer.

The open questions from the earlier patch series v1 through v3 have
been addressed:

- mmc_[get|put]_card() is now issued across requests from .queue_rq()
  to .complete() using Adrian's nifty context lock. This means that
  the block layer does not compete with itself on getting access to
  the host, and we can let other users of the host come in. (For SDIO
  and mixed-mode cards.)

- Partial reads are handled by open coding calls to
  blk_update_request() as advised by Christoph.

Signed-off-by: Linus Walleij
---
 drivers/mmc/core/block.c |  87 ++++++++++--------
 drivers/mmc/core/queue.c | 223 ++++++++++++++++++-----------------------------
 drivers/mmc/core/queue.h |   8 +-
 3 files changed, 139 insertions(+), 179 deletions(-)

-- 
2.13.6

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index f06f381146a5..9e0fe07e098a 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -93,7 +94,6 @@ static DEFINE_IDA(mmc_rpmb_ida);
  * There is one mmc_blk_data per slot.
  */
 struct mmc_blk_data {
-	spinlock_t	lock;
 	struct device	*parent;
 	struct gendisk	*disk;
 	struct mmc_queue queue;
@@ -1204,6 +1204,18 @@ static inline void mmc_blk_reset_success(struct mmc_blk_data *md, int type)
 }
 
 /*
+ * This reports status back to the block layer for a finished request.
+ */
+static void mmc_blk_complete(struct mmc_queue_req *mq_rq,
+			     blk_status_t status)
+{
+	struct request *req = mmc_queue_req_to_req(mq_rq);
+
+	blk_mq_end_request(req, status);
+	blk_mq_complete_request(req);
+}
+
+/*
  * The non-block commands come back from the block layer after it queued it and
  * processed it with all other requests and then they get issued in this
  * function.
@@ -1262,9 +1274,9 @@ static void mmc_blk_issue_drv_op(struct mmc_queue_req *mq_rq)
 		ret = -EINVAL;
 		break;
 	}
+	mq_rq->drv_op_result = ret;
-	blk_end_request_all(mmc_queue_req_to_req(mq_rq),
-			    ret ? BLK_STS_IOERR : BLK_STS_OK);
+	mmc_blk_complete(mq_rq, ret ? BLK_STS_IOERR : BLK_STS_OK);
 }
 
 static void mmc_blk_issue_discard_rq(struct mmc_queue_req *mq_rq)
@@ -1308,7 +1320,7 @@ static void mmc_blk_issue_discard_rq(struct mmc_queue_req *mq_rq)
 	else
 		mmc_blk_reset_success(md, type);
 fail:
-	blk_end_request(req, status, blk_rq_bytes(req));
+	mmc_blk_complete(mq_rq, status);
 }
 
 static void mmc_blk_issue_secdiscard_rq(struct mmc_queue_req *mq_rq)
@@ -1378,7 +1390,7 @@ static void mmc_blk_issue_secdiscard_rq(struct mmc_queue_req *mq_rq)
 	if (!err)
 		mmc_blk_reset_success(md, type);
 out:
-	blk_end_request(req, status, blk_rq_bytes(req));
+	mmc_blk_complete(mq_rq, status);
 }
 
 static void mmc_blk_issue_flush(struct mmc_queue_req *mq_rq)
@@ -1388,8 +1400,13 @@ static void mmc_blk_issue_flush(struct mmc_queue_req *mq_rq)
 	int ret = 0;
 
 	ret = mmc_flush_cache(card);
-	blk_end_request_all(mmc_queue_req_to_req(mq_rq),
-			    ret ? BLK_STS_IOERR : BLK_STS_OK);
+	/*
+	 * NOTE: this used to call blk_end_request_all() for both
+	 * cases in the old block layer to flush all queued
+	 * transactions. I am not sure it was even correct to
+	 * do that for the success case.
+	 */
+	mmc_blk_complete(mq_rq, ret ? BLK_STS_IOERR : BLK_STS_OK);
 }
 
 /*
@@ -1768,7 +1785,6 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mq_rq,
 
 	mq_rq->areq.err_check = mmc_blk_err_check;
 	mq_rq->areq.host = card->host;
-	INIT_WORK(&mq_rq->areq.finalization_work, mmc_finalize_areq);
 }
 
 static bool mmc_blk_rw_cmd_err(struct mmc_blk_data *md, struct mmc_card *card,
@@ -1792,10 +1808,13 @@ static bool mmc_blk_rw_cmd_err(struct mmc_blk_data *md, struct mmc_card *card,
 		err = mmc_sd_num_wr_blocks(card, &blocks);
 		if (err)
 			req_pending = old_req_pending;
-		else
-			req_pending = blk_end_request(req, BLK_STS_OK, blocks << 9);
+		else {
+			req_pending = blk_update_request(req, BLK_STS_OK,
+							 blocks << 9);
+		}
 	} else {
-		req_pending = blk_end_request(req, BLK_STS_OK, brq->data.bytes_xfered);
+		req_pending = blk_update_request(req, BLK_STS_OK,
+						 brq->data.bytes_xfered);
 	}
 	return req_pending;
 }
@@ -1808,7 +1827,7 @@ static void mmc_blk_rw_cmd_abort(struct mmc_queue_req *mq_rq)
 
 	if (mmc_card_removed(card))
 		req->rq_flags |= RQF_QUIET;
-	while (blk_end_request(req, BLK_STS_IOERR, blk_rq_cur_bytes(req)));
+	mmc_blk_complete(mq_rq, BLK_STS_IOERR);
 }
 
 /**
@@ -1854,8 +1873,8 @@ static void mmc_blk_rw_done_error(struct mmc_async_req *areq,
 	case MMC_BLK_PARTIAL:
 		/* This should trigger a retransmit */
 		mmc_blk_reset_success(md, type);
-		req_pending = blk_end_request(req, BLK_STS_OK,
-					      brq->data.bytes_xfered);
+		req_pending = blk_update_request(req, BLK_STS_OK,
+						 brq->data.bytes_xfered);
 		break;
 	case MMC_BLK_CMD_ERR:
 		req_pending = mmc_blk_rw_cmd_err(md, card, brq, req, req_pending);
@@ -1906,11 +1925,13 @@ static void mmc_blk_rw_done_error(struct mmc_async_req *areq,
			 * time, so we only reach here after trying to
			 * read a single sector.
			 */
-			req_pending = blk_end_request(req, BLK_STS_IOERR,
-						      brq->data.blksz);
+			req_pending = blk_update_request(req, BLK_STS_IOERR,
+							 brq->data.blksz);
 			if (!req_pending) {
 				mmc_blk_rw_try_restart(mq_rq);
 				return;
+			} else {
+				mmc_blk_complete(mq_rq, BLK_STS_IOERR);
 			}
 			break;
 		case MMC_BLK_NOMEDIUM:
@@ -1941,10 +1962,8 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq,
 {
 	struct mmc_queue_req *mq_rq;
 	struct request *req;
-	struct mmc_blk_request *brq;
 	struct mmc_queue *mq;
 	struct mmc_blk_data *md;
-	bool req_pending;
 	int type;
 
 	/*
@@ -1957,26 +1976,13 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq,
 
 	/* The quick path if the request was successful */
 	mq_rq = container_of(areq, struct mmc_queue_req, areq);
-	brq = &mq_rq->brq;
 	mq = mq_rq->mq;
 	md = mq->blkdata;
 	req = mmc_queue_req_to_req(mq_rq);
 	type = rq_data_dir(req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE;
 	mmc_blk_reset_success(md, type);
-	req_pending = blk_end_request(req, BLK_STS_OK,
-				      brq->data.bytes_xfered);
-	/*
-	 * If the blk_end_request function returns non-zero even
-	 * though all data has been transferred and no errors
-	 * were returned by the host controller, it's a bug.
-	 */
-	if (req_pending) {
-		pr_err("%s BUG rq_tot %d d_xfer %d\n",
-		       __func__, blk_rq_bytes(req),
-		       brq->data.bytes_xfered);
-		mmc_blk_rw_cmd_abort(mq_rq);
-	}
+	mmc_blk_complete(mq_rq, BLK_STS_OK);
 }
 
 static void mmc_blk_issue_rw_rq(struct mmc_queue_req *mq_rq)
@@ -1991,7 +1997,12 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue_req *mq_rq)
 	 */
 	if (mmc_card_removed(card)) {
 		req->rq_flags |= RQF_QUIET;
-		blk_end_request_all(req, BLK_STS_IOERR);
+		/*
+		 * NOTE: this used to call blk_end_request_all()
+		 * to flush out all queued transactions to the now
+		 * non-present card.
+		 */
+		mmc_blk_complete(mq_rq, BLK_STS_IOERR);
 		return;
 	}
 
@@ -2017,8 +2028,9 @@ void mmc_blk_issue_rq(struct mmc_queue_req *mq_rq)
 {
 	int ret;
 	struct request *req = mmc_queue_req_to_req(mq_rq);
-	struct mmc_blk_data *md = mq_rq->mq->blkdata;
-	struct mmc_card *card = md->queue.card;
+	struct mmc_queue *mq = mq_rq->mq;
+	struct mmc_blk_data *md = mq->blkdata;
+	struct mmc_card *card = mq->card;
 
 	if (!req) {
 		pr_err("%s: tried to issue NULL request\n", __func__);
@@ -2027,7 +2039,7 @@ void mmc_blk_issue_rq(struct mmc_queue_req *mq_rq)
 
 	ret = mmc_blk_part_switch(card, md->part_type);
 	if (ret) {
-		blk_end_request_all(req, BLK_STS_IOERR);
+		mmc_blk_complete(mq_rq, BLK_STS_IOERR);
 		return;
 	}
 
@@ -2124,12 +2136,11 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
 		goto err_kfree;
 	}
 
-	spin_lock_init(&md->lock);
 	INIT_LIST_HEAD(&md->part);
 	INIT_LIST_HEAD(&md->rpmbs);
 	md->usage = 1;
 
-	ret = mmc_init_queue(&md->queue, card, &md->lock, subname);
+	ret = mmc_init_queue(&md->queue, card, subname);
 	if (ret)
 		goto err_putdisk;
 
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 5511e323db31..dea6b4e3f828 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -38,74 +39,6 @@ static int mmc_prep_request(struct request_queue *q, struct request *req)
 	return BLKPREP_OK;
 }
 
-static int mmc_queue_thread(void *d)
-{
-	struct mmc_queue *mq = d;
-	struct request_queue *q = mq->queue;
-	bool claimed_card = false;
-
-	current->flags |= PF_MEMALLOC;
-
-	down(&mq->thread_sem);
-	do {
-		struct request *req;
-
-		spin_lock_irq(q->queue_lock);
-		set_current_state(TASK_INTERRUPTIBLE);
-		req = blk_fetch_request(q);
-		mq->asleep = false;
-		spin_unlock_irq(q->queue_lock);
-
-		if (req) {
-			if (!claimed_card) {
-				mmc_get_card(mq->card, NULL);
-				claimed_card = true;
-			}
-			set_current_state(TASK_RUNNING);
-			mmc_blk_issue_rq(req_to_mmc_queue_req(req));
-			cond_resched();
-		} else {
-			mq->asleep = true;
-			if (kthread_should_stop()) {
-				set_current_state(TASK_RUNNING);
-				break;
-			}
-			up(&mq->thread_sem);
-			schedule();
-			down(&mq->thread_sem);
-		}
-	} while (1);
-	up(&mq->thread_sem);
-
-	if (claimed_card)
-		mmc_put_card(mq->card, NULL);
-
-	return 0;
-}
-
-/*
- * Generic MMC request handler. This is called for any queue on a
- * particular host. When the host is not busy, we look for a request
- * on any queue on this host, and attempt to issue it. This may
- * not be the queue we were asked to process.
- */
-static void mmc_request_fn(struct request_queue *q)
-{
-	struct mmc_queue *mq = q->queuedata;
-	struct request *req;
-
-	if (!mq) {
-		while ((req = blk_fetch_request(q)) != NULL) {
-			req->rq_flags |= RQF_QUIET;
-			__blk_end_request_all(req, BLK_STS_IOERR);
-		}
-		return;
-	}
-
-	if (mq->asleep)
-		wake_up_process(mq->thread);
-}
-
 static struct scatterlist *mmc_alloc_sg(int sg_len, gfp_t gfp)
 {
 	struct scatterlist *sg;
@@ -136,127 +69,158 @@ static void mmc_queue_setup_discard(struct request_queue *q,
 		queue_flag_set_unlocked(QUEUE_FLAG_SECERASE, q);
 }
 
+static blk_status_t mmc_queue_request(struct blk_mq_hw_ctx *hctx,
+				      const struct blk_mq_queue_data *bd)
+{
+	struct mmc_queue_req *mq_rq = blk_mq_rq_to_pdu(bd->rq);
+	struct mmc_queue *mq = mq_rq->mq;
+
+	/* Claim card for block queue context */
+	mmc_get_card(mq->card, &mq->blkctx);
+	mmc_blk_issue_rq(mq_rq);
+
+	return BLK_STS_OK;
+}
+
+static void mmc_complete_request(struct request *req)
+{
+	struct mmc_queue_req *mq_rq = req_to_mmc_queue_req(req);
+	struct mmc_queue *mq = mq_rq->mq;
+
+	/* Release card for block queue context */
+	mmc_put_card(mq->card, &mq->blkctx);
+}
+
 /**
  * mmc_init_request() - initialize the MMC-specific per-request data
- * @q: the request queue
+ * @set: tag set for the request
  * @req: the request
- * @gfp: memory allocation policy
+ * @hctx_idx: hardware context index
+ * @numa_node: NUMA node
  */
-static int mmc_init_request(struct request_queue *q, struct request *req,
-			    gfp_t gfp)
+static int mmc_init_request(struct blk_mq_tag_set *set, struct request *req,
+			    unsigned int hctx_idx, unsigned int numa_node)
 {
 	struct mmc_queue_req *mq_rq = req_to_mmc_queue_req(req);
-	struct mmc_queue *mq = q->queuedata;
+	struct mmc_queue *mq = set->driver_data;
 	struct mmc_card *card = mq->card;
 	struct mmc_host *host = card->host;
 
-	mq_rq->sg = mmc_alloc_sg(host->max_segs, gfp);
+	mq_rq->sg = mmc_alloc_sg(host->max_segs, GFP_KERNEL);
 	if (!mq_rq->sg)
 		return -ENOMEM;
 	mq_rq->mq = mq;
+	INIT_WORK(&mq_rq->areq.finalization_work, mmc_finalize_areq);
 
 	return 0;
 }
 
-static void mmc_exit_request(struct request_queue *q, struct request *req)
+/**
+ * mmc_exit_request() - tear down the MMC-specific per-request data
+ * @set: tag set for the request
+ * @req: the request
+ * @hctx_idx: hardware context index
+ */
+static void mmc_exit_request(struct blk_mq_tag_set *set, struct request *req,
+			     unsigned int hctx_idx)
 {
 	struct mmc_queue_req *mq_rq = req_to_mmc_queue_req(req);
 
+	flush_work(&mq_rq->areq.finalization_work);
 	kfree(mq_rq->sg);
 	mq_rq->sg = NULL;
 	mq_rq->mq = NULL;
 }
 
-static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
+static void mmc_setup_queue(struct mmc_queue *mq)
 {
+	struct request_queue *q = mq->queue;
+	struct mmc_card *card = mq->card;
 	struct mmc_host *host = card->host;
 	u64 limit = BLK_BOUNCE_HIGH;
 
 	if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
 		limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
 
-	queue_flag_set_unlocked(QUEUE_FLAG_NONROT, mq->queue);
-	queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, mq->queue);
+	blk_queue_max_segments(q, host->max_segs);
+	blk_queue_prep_rq(q, mmc_prep_request);
+	queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);
+	queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, q);
 	if (mmc_can_erase(card))
-		mmc_queue_setup_discard(mq->queue, card);
-
-	blk_queue_bounce_limit(mq->queue, limit);
-	blk_queue_max_hw_sectors(mq->queue,
+		mmc_queue_setup_discard(q, card);
+
+	blk_queue_bounce_limit(q, limit);
+	blk_queue_max_hw_sectors(q,
 		min(host->max_blk_count, host->max_req_size / 512));
-	blk_queue_max_segments(mq->queue, host->max_segs);
-	blk_queue_max_segment_size(mq->queue, host->max_seg_size);
-
-	/* Initialize thread_sem even if it is not used */
-	sema_init(&mq->thread_sem, 1);
+	blk_queue_max_segments(q, host->max_segs);
+	blk_queue_max_segment_size(q, host->max_seg_size);
 }
 
+static const struct blk_mq_ops mmc_mq_ops = {
+	.queue_rq	= mmc_queue_request,
+	.init_request	= mmc_init_request,
+	.exit_request	= mmc_exit_request,
+	.complete	= mmc_complete_request,
+};
+
 /**
  * mmc_init_queue - initialise a queue structure.
  * @mq: mmc queue
  * @card: mmc card to attach this queue
- * @lock: queue lock
  * @subname: partition subname
  *
  * Initialise a MMC card request queue.
  */
 int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
-		   spinlock_t *lock, const char *subname)
+		   const char *subname)
 {
 	struct mmc_host *host = card->host;
-	int ret = -ENOMEM;
+	int ret;
 
 	mq->card = card;
-	mq->queue = blk_alloc_queue(GFP_KERNEL);
-	if (!mq->queue)
-		return -ENOMEM;
-	mq->queue->queue_lock = lock;
-	mq->queue->request_fn = mmc_request_fn;
-	mq->queue->init_rq_fn = mmc_init_request;
-	mq->queue->exit_rq_fn = mmc_exit_request;
-	mq->queue->cmd_size = sizeof(struct mmc_queue_req);
-	mq->queue->queuedata = mq;
-	ret = blk_init_allocated_queue(mq->queue);
+	mq->tag_set.ops = &mmc_mq_ops;
+	/* The MMC/SD protocols have only one command pipe */
+	mq->tag_set.nr_hw_queues = 1;
+	/* Set this to 2 to simulate async requests, should we use 3? */
+	mq->tag_set.queue_depth = 2;
+	mq->tag_set.cmd_size = sizeof(struct mmc_queue_req);
+	mq->tag_set.numa_node = NUMA_NO_NODE;
+	/* We use blocking requests */
+	mq->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_BLOCKING;
+	/* Should we use BLK_MQ_F_SG_MERGE? */
+	mq->tag_set.driver_data = mq;
+
+	ret = blk_mq_alloc_tag_set(&mq->tag_set);
 	if (ret) {
-		blk_cleanup_queue(mq->queue);
+		dev_err(host->parent, "failed to allocate MQ tag set\n");
 		return ret;
 	}
-
-	blk_queue_prep_rq(mq->queue, mmc_prep_request);
-
-	mmc_setup_queue(mq, card);
-
-	mq->thread = kthread_run(mmc_queue_thread, mq, "mmcqd/%d%s",
-		host->index, subname ? subname : "");
-
-	if (IS_ERR(mq->thread)) {
-		ret = PTR_ERR(mq->thread);
-		goto cleanup_queue;
+	mq->queue = blk_mq_init_queue(&mq->tag_set);
+	if (!mq->queue) {
+		dev_err(host->parent, "failed to initialize block MQ\n");
+		goto cleanup_free_tag_set;
 	}
 
+	mq->queue->queuedata = mq;
+	mmc_setup_queue(mq);
+
 	return 0;
 
-cleanup_queue:
-	blk_cleanup_queue(mq->queue);
+cleanup_free_tag_set:
	blk_mq_free_tag_set(&mq->tag_set);
 	return ret;
 }
 
 void mmc_cleanup_queue(struct mmc_queue *mq)
 {
 	struct request_queue *q = mq->queue;
-	unsigned long flags;
 
 	/* Make sure the queue isn't suspended, as that will deadlock */
 	mmc_queue_resume(mq);
 
-	/* Then terminate our worker thread */
-	kthread_stop(mq->thread);
-
-	/* Empty the queue */
-	spin_lock_irqsave(q->queue_lock, flags);
 	q->queuedata = NULL;
 	blk_start_queue(q);
-	spin_unlock_irqrestore(q->queue_lock, flags);
-
+	blk_cleanup_queue(q);
+	blk_mq_free_tag_set(&mq->tag_set);
 	mq->card = NULL;
 }
 EXPORT_SYMBOL(mmc_cleanup_queue);
@@ -265,23 +229,16 @@ EXPORT_SYMBOL(mmc_cleanup_queue);
  * mmc_queue_suspend - suspend a MMC request queue
  * @mq: MMC queue to suspend
  *
- * Stop the block request queue, and wait for our thread to
- * complete any outstanding requests. This ensures that we
+ * Stop the block request queue. This ensures that we
  * won't suspend while a request is being processed.
  */
 void mmc_queue_suspend(struct mmc_queue *mq)
 {
 	struct request_queue *q = mq->queue;
-	unsigned long flags;
 
 	if (!mq->suspended) {
-		mq->suspended |= true;
-
-		spin_lock_irqsave(q->queue_lock, flags);
+		mq->suspended = true;
 		blk_stop_queue(q);
-		spin_unlock_irqrestore(q->queue_lock, flags);
-
-		down(&mq->thread_sem);
 	}
 }
 
@@ -292,16 +249,10 @@ void mmc_queue_suspend(struct mmc_queue *mq)
 void mmc_queue_resume(struct mmc_queue *mq)
 {
 	struct request_queue *q = mq->queue;
-	unsigned long flags;
 
 	if (mq->suspended) {
 		mq->suspended = false;
-
-		up(&mq->thread_sem);
-
-		spin_lock_irqsave(q->queue_lock, flags);
 		blk_start_queue(q);
-		spin_unlock_irqrestore(q->queue_lock, flags);
 	}
 }
 
diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
index 67ae311b107f..c78fbb226a90 100644
--- a/drivers/mmc/core/queue.h
+++ b/drivers/mmc/core/queue.h
@@ -61,16 +61,14 @@ struct mmc_queue_req {
 
 struct mmc_queue {
 	struct mmc_card		*card;
-	struct task_struct	*thread;
-	struct semaphore	thread_sem;
 	bool			suspended;
-	bool			asleep;
 	struct mmc_blk_data	*blkdata;
 	struct request_queue	*queue;
+	struct mmc_ctx		blkctx;
+	struct blk_mq_tag_set	tag_set;
 };
 
-extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,
-			  const char *);
+extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, const char *);
 extern void mmc_cleanup_queue(struct mmc_queue *);
 extern void mmc_queue_suspend(struct mmc_queue *);
 extern void mmc_queue_resume(struct mmc_queue *);