From patchwork Thu Feb 9 15:34:00 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 93727
From: Linus Walleij <linus.walleij@linaro.org>
To: linux-mmc@vger.kernel.org, Ulf Hansson, Adrian Hunter, Paolo Valente
Cc: Chunyan Zhang, Baolin Wang, linux-block@vger.kernel.org, Jens Axboe,
 Christoph Hellwig, Arnd Bergmann, Linus Walleij
Subject: [PATCH 13/16] mmc: queue: issue struct mmc_queue_req items
Date: Thu, 9 Feb 2017 16:34:00 +0100
Message-Id: <20170209153403.9730-14-linus.walleij@linaro.org>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20170209153403.9730-1-linus.walleij@linaro.org>
References: <20170209153403.9730-1-linus.walleij@linaro.org>
X-Mailing-List: linux-mmc@vger.kernel.org

Instead of passing two pointers around and reassigning them left and
right, issue the struct mmc_queue_req and dereference the queue from
the request where needed. The struct mmc_queue_req is the item that
has a lifecycle after all: it is what we are keeping in our queue.

Augment all users to be passed the struct mmc_queue_req as well.
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
 drivers/mmc/core/block.c | 88 ++++++++++++++++++++++++------------------------
 drivers/mmc/core/block.h |  5 ++-
 drivers/mmc/core/queue.c |  6 ++--
 3 files changed, 50 insertions(+), 49 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 4952a105780e..628a22b9bf41 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1151,9 +1151,9 @@ int mmc_access_rpmb(struct mmc_queue *mq)
 	return false;
 }
 
-static void mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req)
+static void mmc_blk_issue_discard_rq(struct mmc_queue_req *mq_rq)
 {
-	struct mmc_blk_data *md = mq->blkdata;
+	struct mmc_blk_data *md = mq_rq->mq->blkdata;
 	struct mmc_card *card = md->queue.card;
 	unsigned int from, nr, arg;
 	int err = 0, type = MMC_BLK_DISCARD;
@@ -1163,8 +1163,8 @@ static void mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req)
 		goto fail;
 	}
 
-	from = blk_rq_pos(req);
-	nr = blk_rq_sectors(req);
+	from = blk_rq_pos(mq_rq->req);
+	nr = blk_rq_sectors(mq_rq->req);
 
 	if (mmc_can_discard(card))
 		arg = MMC_DISCARD_ARG;
@@ -1188,13 +1188,12 @@ static void mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req)
 	if (!err)
 		mmc_blk_reset_success(md, type);
 fail:
-	blk_end_request(req, err, blk_rq_bytes(req));
+	blk_end_request(mq_rq->req, err, blk_rq_bytes(mq_rq->req));
 }
 
-static void mmc_blk_issue_secdiscard_rq(struct mmc_queue *mq,
-					struct request *req)
+static void mmc_blk_issue_secdiscard_rq(struct mmc_queue_req *mq_rq)
 {
-	struct mmc_blk_data *md = mq->blkdata;
+	struct mmc_blk_data *md = mq_rq->mq->blkdata;
 	struct mmc_card *card = md->queue.card;
 	unsigned int from, nr, arg;
 	int err = 0, type = MMC_BLK_SECDISCARD;
@@ -1204,8 +1203,8 @@ static void mmc_blk_issue_secdiscard_rq(struct mmc_queue *mq,
 		goto out;
 	}
 
-	from = blk_rq_pos(req);
-	nr = blk_rq_sectors(req);
+	from = blk_rq_pos(mq_rq->req);
+	nr = blk_rq_sectors(mq_rq->req);
 
 	if (mmc_can_trim(card) && !mmc_erase_group_aligned(card, from, nr))
 		arg = MMC_SECURE_TRIM1_ARG;
@@ -1253,12 +1252,12 @@ static void mmc_blk_issue_secdiscard_rq(struct mmc_queue *mq,
 	if (!err)
 		mmc_blk_reset_success(md, type);
 out:
-	blk_end_request(req, err, blk_rq_bytes(req));
+	blk_end_request(mq_rq->req, err, blk_rq_bytes(mq_rq->req));
 }
 
-static void mmc_blk_issue_flush(struct mmc_queue *mq, struct request *req)
+static void mmc_blk_issue_flush(struct mmc_queue_req *mq_rq)
 {
-	struct mmc_blk_data *md = mq->blkdata;
+	struct mmc_blk_data *md = mq_rq->mq->blkdata;
 	struct mmc_card *card = md->queue.card;
 	int ret = 0;
 
@@ -1266,7 +1265,7 @@ static void mmc_blk_issue_flush(struct mmc_queue *mq, struct request *req)
 	if (ret)
 		ret = -EIO;
 
-	blk_end_request_all(req, ret);
+	blk_end_request_all(mq_rq->req, ret);
 }
 
 /*
@@ -1614,11 +1613,13 @@ static void mmc_blk_rw_cmd_abort(struct mmc_card *card, struct request *req)
  * @mq: the queue with the card and host to restart
  * @req: a new request that want to be started after the current one
  */
-static void mmc_blk_rw_try_restart(struct mmc_queue *mq)
+static void mmc_blk_rw_try_restart(struct mmc_queue_req *mq_rq)
 {
+	struct mmc_queue *mq = mq_rq->mq;
+
 	/* Proceed and try to restart the current async request */
-	mmc_blk_rw_rq_prep(mq->mqrq_cur, mq->card, 0, mq);
-	mmc_restart_areq(mq->card->host, &mq->mqrq_cur->areq);
+	mmc_blk_rw_rq_prep(mq_rq, mq->card, 0, mq);
+	mmc_restart_areq(mq->card->host, &mq_rq->areq);
 }
 
 void mmc_blk_rw_done(struct mmc_async_req *areq,
@@ -1676,11 +1677,11 @@ void mmc_blk_rw_done(struct mmc_async_req *areq,
 		req_pending = mmc_blk_rw_cmd_err(md, card, brq, old_req, req_pending);
 		if (mmc_blk_reset(md, host, type)) {
 			mmc_blk_rw_cmd_abort(card, old_req);
-			mmc_blk_rw_try_restart(mq);
+			mmc_blk_rw_try_restart(mq_rq);
 			return;
 		}
 		if (!req_pending) {
-			mmc_blk_rw_try_restart(mq);
+			mmc_blk_rw_try_restart(mq_rq);
 			return;
 		}
 		break;
@@ -1693,7 +1694,7 @@ void mmc_blk_rw_done(struct mmc_async_req *areq,
 		if (!mmc_blk_reset(md, host, type))
 			break;
 		mmc_blk_rw_cmd_abort(card, old_req);
-		mmc_blk_rw_try_restart(mq);
+		mmc_blk_rw_try_restart(mq_rq);
 		return;
 	case MMC_BLK_DATA_ERR: {
 		int err;
@@ -1702,7 +1703,7 @@ void mmc_blk_rw_done(struct mmc_async_req *areq,
 			break;
 		if (err == -ENODEV) {
 			mmc_blk_rw_cmd_abort(card, old_req);
-			mmc_blk_rw_try_restart(mq);
+			mmc_blk_rw_try_restart(mq_rq);
 			return;
 		}
 		/* Fall through */
@@ -1723,19 +1724,19 @@ void mmc_blk_rw_done(struct mmc_async_req *areq,
 		req_pending = blk_end_request(old_req, -EIO,
 					      brq->data.blksz);
 		if (!req_pending) {
-			mmc_blk_rw_try_restart(mq);
+			mmc_blk_rw_try_restart(mq_rq);
 			return;
 		}
 		break;
 	case MMC_BLK_NOMEDIUM:
 		mmc_blk_rw_cmd_abort(card, old_req);
-		mmc_blk_rw_try_restart(mq);
+		mmc_blk_rw_try_restart(mq_rq);
 		return;
 	default:
 		pr_err("%s: Unhandled return value (%d)",
 		       old_req->rq_disk->disk_name, status);
 		mmc_blk_rw_cmd_abort(card, old_req);
-		mmc_blk_rw_try_restart(mq);
+		mmc_blk_rw_try_restart(mq_rq);
 		return;
 	}
 
@@ -1747,15 +1748,16 @@ void mmc_blk_rw_done(struct mmc_async_req *areq,
 		mmc_blk_rw_rq_prep(mq_rq, card, disable_multi, mq);
 		mq_rq->brq.retune_retry_done = retune_retry_done;
-		mmc_restart_areq(host, &mq->mqrq_cur->areq);
+		mmc_restart_areq(host, &mq_rq->areq);
 	}
 }
 
-static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
+static void mmc_blk_issue_rw_rq(struct mmc_queue_req *mq_rq)
 {
+	struct mmc_queue *mq = mq_rq->mq;
 	struct mmc_card *card = mq->card;
 
-	if (!new_req) {
+	if (!mq_rq->req) {
 		pr_err("%s: NULL request!\n", __func__);
 		return;
 	}
@@ -1765,54 +1767,52 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 	 * multiple read or write is allowed
 	 */
 	if (mmc_large_sector(card) &&
-	    !IS_ALIGNED(blk_rq_sectors(new_req), 8)) {
+	    !IS_ALIGNED(blk_rq_sectors(mq_rq->req), 8)) {
 		pr_err("%s: Transfer size is not 4KB sector size aligned\n",
-		       new_req->rq_disk->disk_name);
-		mmc_blk_rw_cmd_abort(card, new_req);
+		       mq_rq->req->rq_disk->disk_name);
+		mmc_blk_rw_cmd_abort(card, mq_rq->req);
 		return;
 	}
 
-	mmc_blk_rw_rq_prep(mq->mqrq_cur, card, 0, mq);
-	mmc_start_areq(card->host, &mq->mqrq_cur->areq);
+	mmc_blk_rw_rq_prep(mq_rq, card, 0, mq);
+	mmc_start_areq(card->host, &mq_rq->areq);
 }
 
-void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
+void mmc_blk_issue_rq(struct mmc_queue_req *mq_rq)
 {
 	int ret;
-	struct mmc_blk_data *md = mq->blkdata;
+	struct mmc_blk_data *md = mq_rq->mq->blkdata;
 	struct mmc_card *card = md->queue.card;
 
 	ret = mmc_blk_part_switch(card, md);
 	if (ret) {
-		if (req) {
-			blk_end_request_all(req, -EIO);
-		}
+		blk_end_request_all(mq_rq->req, -EIO);
 		return;
 	}
 
-	if (req && req_op(req) == REQ_OP_DISCARD) {
+	if (req_op(mq_rq->req) == REQ_OP_DISCARD) {
 		/* complete ongoing async transfer before issuing discard */
 		if (card->host->areq) {
 			wait_for_completion(&card->host->areq->complete);
 			card->host->areq = NULL;
 		}
-		mmc_blk_issue_discard_rq(mq, req);
-	} else if (req && req_op(req) == REQ_OP_SECURE_ERASE) {
+		mmc_blk_issue_discard_rq(mq_rq);
+	} else if (req_op(mq_rq->req) == REQ_OP_SECURE_ERASE) {
 		/* complete ongoing async transfer before issuing secure erase*/
 		if (card->host->areq) {
 			wait_for_completion(&card->host->areq->complete);
 			card->host->areq = NULL;
 		}
-		mmc_blk_issue_secdiscard_rq(mq, req);
-	} else if (req && req_op(req) == REQ_OP_FLUSH) {
+		mmc_blk_issue_secdiscard_rq(mq_rq);
+	} else if (req_op(mq_rq->req) == REQ_OP_FLUSH) {
 		/* complete ongoing async transfer before issuing flush */
 		if (card->host->areq) {
 			wait_for_completion(&card->host->areq->complete);
 			card->host->areq = NULL;
 		}
-		mmc_blk_issue_flush(mq, req);
+		mmc_blk_issue_flush(mq_rq);
 	} else {
-		mmc_blk_issue_rw_rq(mq, req);
+		mmc_blk_issue_rw_rq(mq_rq);
 	}
 }
diff --git a/drivers/mmc/core/block.h b/drivers/mmc/core/block.h
index b4b489911599..0326fa5d8217 100644
--- a/drivers/mmc/core/block.h
+++ b/drivers/mmc/core/block.h
@@ -3,10 +3,9 @@
 
 struct mmc_async_req;
 enum mmc_blk_status;
-struct mmc_queue;
-struct request;
+struct mmc_queue_req;
 
 void mmc_blk_rw_done(struct mmc_async_req *areq,
 		     enum mmc_blk_status status);
 
-void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req);
+void mmc_blk_issue_rq(struct mmc_queue_req *mq_rq);
 
 #endif
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index c9f28de7b0f4..c4e1ced55796 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -54,6 +54,7 @@ static int mmc_queue_thread(void *d)
 	struct mmc_queue *mq = d;
 	struct request_queue *q = mq->queue;
 	bool claimed_host = false;
+	struct mmc_queue_req *mq_rq;
 
 	current->flags |= PF_MEMALLOC;
 
@@ -65,7 +66,8 @@ static int mmc_queue_thread(void *d)
 		set_current_state(TASK_INTERRUPTIBLE);
 		req = blk_fetch_request(q);
 		mq->asleep = false;
-		mq->mqrq_cur->req = req;
+		mq_rq = mq->mqrq_cur;
+		mq_rq->req = req;
 		spin_unlock_irq(q->queue_lock);
 
 		if (req) {
@@ -74,7 +76,7 @@ static int mmc_queue_thread(void *d)
 			if (!claimed_host)
 				mmc_get_card(mq->card);
 			set_current_state(TASK_RUNNING);
-			mmc_blk_issue_rq(mq, req);
+			mmc_blk_issue_rq(mq_rq);
 			cond_resched();
 			/*
 			 * Current request becomes previous request