From patchwork Sat Jun 18 20:55:24 2011
X-Patchwork-Submitter: Per Forlin
X-Patchwork-Id: 2051
From: Per Forlin
To: linux-mmc@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linaro-dev@lists.linaro.org,
	Venkatraman S
Cc: Chris Ball, Per Forlin
Subject: [PATCH v5 10/12] mmc: add a second mmc queue request member
Date: Sat, 18 Jun 2011 22:55:24 +0200
Message-Id: <1308430526-9412-11-git-send-email-per.forlin@linaro.org>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1308430526-9412-1-git-send-email-per.forlin@linaro.org>
References: <1308430526-9412-1-git-send-email-per.forlin@linaro.org>

Add an additional mmc queue request instance to make way for two active
block requests. One request may be active while the other request is
being prepared.

Signed-off-by: Per Forlin
---
 drivers/mmc/card/queue.c |   44 ++++++++++++++++++++++++++++++++++++++++++--
 drivers/mmc/card/queue.h |    3 ++-
 2 files changed, 44 insertions(+), 3 deletions(-)

diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index 81d0eef..0757a39 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
@@ -130,6 +130,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
 	u64 limit = BLK_BOUNCE_HIGH;
 	int ret;
 	struct mmc_queue_req *mqrq_cur = &mq->mqrq[0];
+	struct mmc_queue_req *mqrq_prev = &mq->mqrq[1];
 
 	if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
 		limit = *mmc_dev(host)->dma_mask;
@@ -140,7 +141,9 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
 		return -ENOMEM;
 
 	memset(&mq->mqrq_cur, 0, sizeof(mq->mqrq_cur));
+	memset(&mq->mqrq_prev, 0, sizeof(mq->mqrq_prev));
 	mq->mqrq_cur = mqrq_cur;
+	mq->mqrq_prev = mqrq_prev;
 	mq->queue->queuedata = mq;
 
 	blk_queue_prep_rq(mq->queue, mmc_prep_request);
@@ -181,9 +184,17 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
 					"allocate bounce cur buffer\n",
 					mmc_card_name(card));
 			}
+			mqrq_prev->bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
+			if (!mqrq_prev->bounce_buf) {
+				printk(KERN_WARNING "%s: unable to "
+					"allocate bounce prev buffer\n",
+					mmc_card_name(card));
+				kfree(mqrq_cur->bounce_buf);
+				mqrq_cur->bounce_buf = NULL;
+			}
 		}
 
-		if (mqrq_cur->bounce_buf) {
+		if (mqrq_cur->bounce_buf && mqrq_prev->bounce_buf) {
 			blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_ANY);
 			blk_queue_max_hw_sectors(mq->queue, bouncesz / 512);
 			blk_queue_max_segments(mq->queue, bouncesz / 512);
@@ -198,11 +209,19 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
 
 			if (ret)
 				goto cleanup_queue;
+			mqrq_prev->sg = mmc_alloc_sg(1, &ret);
+			if (ret)
+				goto cleanup_queue;
+
+			mqrq_prev->bounce_sg =
+				mmc_alloc_sg(bouncesz / 512, &ret);
+			if (ret)
+				goto cleanup_queue;
 		}
 	}
 #endif
 
-	if (!mqrq_cur->bounce_buf) {
+	if (!mqrq_cur->bounce_buf && !mqrq_prev->bounce_buf) {
 		blk_queue_bounce_limit(mq->queue, limit);
 		blk_queue_max_hw_sectors(mq->queue,
 			min(host->max_blk_count, host->max_req_size / 512));
@@ -213,6 +232,10 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
 
 		if (ret)
 			goto cleanup_queue;
+
+		mqrq_prev->sg = mmc_alloc_sg(host->max_segs, &ret);
+		if (ret)
+			goto cleanup_queue;
 	}
 
 	sema_init(&mq->thread_sem, 1);
@@ -229,6 +252,8 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
 free_bounce_sg:
 	kfree(mqrq_cur->bounce_sg);
 	mqrq_cur->bounce_sg = NULL;
+	kfree(mqrq_prev->bounce_sg);
+	mqrq_prev->bounce_sg = NULL;
 
 cleanup_queue:
 	kfree(mqrq_cur->sg);
@@ -236,6 +261,11 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
 	kfree(mqrq_cur->bounce_buf);
 	mqrq_cur->bounce_buf = NULL;
 
+	kfree(mqrq_prev->sg);
+	mqrq_prev->sg = NULL;
+	kfree(mqrq_prev->bounce_buf);
+	mqrq_prev->bounce_buf = NULL;
+
 	blk_cleanup_queue(mq->queue);
 	return ret;
 }
@@ -245,6 +275,7 @@ void mmc_cleanup_queue(struct mmc_queue *mq)
 	struct request_queue *q = mq->queue;
 	unsigned long flags;
 	struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
+	struct mmc_queue_req *mqrq_prev = mq->mqrq_prev;
 
 	/* Make sure the queue isn't suspended, as that will deadlock */
 	mmc_queue_resume(mq);
@@ -267,6 +298,15 @@ void mmc_cleanup_queue(struct mmc_queue *mq)
 	kfree(mqrq_cur->bounce_buf);
 	mqrq_cur->bounce_buf = NULL;
 
+	kfree(mqrq_prev->bounce_sg);
+	mqrq_prev->bounce_sg = NULL;
+
+	kfree(mqrq_prev->sg);
+	mqrq_prev->sg = NULL;
+
+	kfree(mqrq_prev->bounce_buf);
+	mqrq_prev->bounce_buf = NULL;
+
 	mq->card = NULL;
 }
 EXPORT_SYMBOL(mmc_cleanup_queue);
diff --git a/drivers/mmc/card/queue.h b/drivers/mmc/card/queue.h
index a1defed..c8fb2dc 100644
--- a/drivers/mmc/card/queue.h
+++ b/drivers/mmc/card/queue.h
@@ -29,8 +29,9 @@ struct mmc_queue {
 	int			(*issue_fn)(struct mmc_queue *, struct request *);
 	void			*data;
 	struct request_queue	*queue;
-	struct mmc_queue_req	mqrq[1];
+	struct mmc_queue_req	mqrq[2];
 	struct mmc_queue_req	*mqrq_cur;
+	struct mmc_queue_req	*mqrq_prev;
 };
 
 extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *);
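
Note (not part of the patch): the hunks above only allocate and free the
second mmc_queue_req slot; the code that actually alternates between
mqrq_cur and mqrq_prev comes later in this series. As a rough
illustration of the intended ping-pong scheme, the standalone sketch
below shows how the two slots are meant to trade roles after each issued
request. It is only a userspace mock-up under assumed names: struct slot,
struct queue and swap_slots() are hypothetical stand-ins for
struct mmc_queue_req, struct mmc_queue and the driver-side swap, not
kernel API.

/*
 * Standalone sketch, not kernel code: while "prev" is still in flight on
 * the controller, "cur" is free for preparing the next request; after
 * each issue the two pointers swap roles.
 */
#include <stdio.h>

struct slot {
	int id;			/* stands in for sg lists / bounce buffers */
};

struct queue {
	struct slot slots[2];	/* mirrors mqrq[2] in struct mmc_queue */
	struct slot *cur;	/* request currently being prepared */
	struct slot *prev;	/* previously issued, possibly still active */
};

/* After issuing the prepared request, exchange the roles of the two slots. */
static void swap_slots(struct queue *q)
{
	struct slot *tmp = q->prev;

	q->prev = q->cur;	/* just-issued request becomes "previous" */
	q->cur = tmp;		/* old slot is reused for the next request */
}

int main(void)
{
	struct queue q;
	int i;

	q.slots[0].id = 0;
	q.slots[1].id = 1;
	q.cur = &q.slots[0];
	q.prev = &q.slots[1];

	for (i = 0; i < 4; i++) {
		printf("round %d: prepare slot %d while slot %d is in flight\n",
		       i, q.cur->id, q.prev->id);
		swap_slots(&q);
	}

	return 0;
}

The point of keeping two fully allocated slots, as the commit message
says, is that preparing one request can overlap with the other request
still being processed by the host controller.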