From patchwork Mon Nov 30 02:46:09 2020
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 334974
From: Bart Van Assche
To: "Martin K. Petersen"
Cc: "James E. J. Bottomley", Jens Axboe, Christoph Hellwig, Ming Lei,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org, Bart Van Assche,
 Hannes Reinecke, "David S. Miller", Alan Stern, Can Guo, Stanley Chu,
 "Rafael J. Wysocki"
Subject: [PATCH v4 3/9] ide: Do not set the RQF_PREEMPT flag for sense requests
Date: Sun, 29 Nov 2020 18:46:09 -0800
Message-Id: <20201130024615.29171-4-bvanassche@acm.org>
In-Reply-To: <20201130024615.29171-1-bvanassche@acm.org>
References: <20201130024615.29171-1-bvanassche@acm.org>

RQF_PREEMPT is used for two different purposes in the legacy IDE code:
1. To mark power management requests.
2. To mark requests that should preempt another request.
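For purpose (1), the IDE core already recognizes power management requests by
their private request type (see the ata_pm_request() check kept in the
ide-io.c hunk below), so RQF_PREEMPT is not needed for that. A minimal sketch
of such a check, approximating ata_pm_request() from include/linux/ide.h at
the time (shown for context only, not part of this patch; the exact helper
name and definition may differ):

    static inline bool ide_is_pm_request(struct request *rq)
    {
    	/* PM requests are driver-private requests tagged with a PM type. */
    	return blk_rq_is_private(rq) &&
    	       (ide_req(rq)->type == ATA_PRIV_PM_SUSPEND ||
    	        ide_req(rq)->type == ATA_PRIV_PM_RESUME);
    }

Purpose (2) is the historical "ide-preempt" mechanism.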
An (old) explanation of that feature is as follows: "The IDE driver in the
Linux kernel normally uses a series of busywait delays during its
initialization. When the driver executes these busywaits, the kernel does
nothing for the duration of the wait. The time spent in these waits could be
used for other initialization activities, if they could be run concurrently
with these waits. More specifically, busywait-style delays such as udelay()
in module init functions inhibit kernel preemption because the Big Kernel
Lock is held, while yielding APIs such as schedule_timeout() allow preemption.
This is true because the kernel handles the BKL specially and releases and
reacquires it across reschedules allowed by the current thread. This
IDE-preempt specification requires that the driver eliminate these busywaits
and replace them with a mechanism that allows other work to proceed while the
IDE driver is initializing."

Since I haven't found an implementation of (2), do not set the PREEMPT flag
for sense requests. This patch causes sense requests to be postponed while a
drive is suspended instead of being submitted to ide_queue_rq(). If it would
ever be necessary to restore the IDE PREEMPT functionality, that can be done
by introducing a new flag in struct ide_request.

Cc: Christoph Hellwig
Cc: Hannes Reinecke
Cc: David S. Miller
Cc: Alan Stern
Cc: Can Guo
Cc: Stanley Chu
Cc: Ming Lei
Cc: Rafael J. Wysocki
Signed-off-by: Bart Van Assche
Reviewed-by: Christoph Hellwig
Reviewed-by: Hannes Reinecke
---
 drivers/ide/ide-atapi.c | 1 -
 drivers/ide/ide-io.c    | 5 -----
 2 files changed, 6 deletions(-)

diff --git a/drivers/ide/ide-atapi.c b/drivers/ide/ide-atapi.c
index 2162bc80f09e..013ad33fbbc8 100644
--- a/drivers/ide/ide-atapi.c
+++ b/drivers/ide/ide-atapi.c
@@ -223,7 +223,6 @@ void ide_prep_sense(ide_drive_t *drive, struct request *rq)
 	sense_rq->rq_disk = rq->rq_disk;
 	sense_rq->cmd_flags = REQ_OP_DRV_IN;
 	ide_req(sense_rq)->type = ATA_PRIV_SENSE;
-	sense_rq->rq_flags |= RQF_PREEMPT;
 
 	req->cmd[0] = GPCMD_REQUEST_SENSE;
 	req->cmd[4] = cmd_len;

diff --git a/drivers/ide/ide-io.c b/drivers/ide/ide-io.c
index 1a53c7a75224..c210ea3bd02f 100644
--- a/drivers/ide/ide-io.c
+++ b/drivers/ide/ide-io.c
@@ -515,11 +515,6 @@ blk_status_t ide_issue_rq(ide_drive_t *drive, struct request *rq,
 	 * above to return us whatever is in the queue. Since we call
 	 * ide_do_request() ourselves, we end up taking requests while
 	 * the queue is blocked...
-	 *
-	 * We let requests forced at head of queue with ide-preempt
-	 * though. I hope that doesn't happen too much, hopefully not
-	 * unless the subdriver triggers such a thing in its own PM
-	 * state machine.
 	 */
 	if ((drive->dev_flags & IDE_DFLAG_BLOCKED) &&
 	    ata_pm_request(rq) == 0 &&

From patchwork Mon Nov 30 02:46:11 2020
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 334973
From: Bart Van Assche
To: "Martin K. Petersen"
Cc: "James E. J. Bottomley", Jens Axboe, Christoph Hellwig, Ming Lei,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org, Bart Van Assche,
 Alan Stern, Can Guo, Stanley Chu, "Rafael J. Wysocki"
Subject: [PATCH v4 5/9] scsi: Do not wait for a request in scsi_eh_lock_door()
Date: Sun, 29 Nov 2020 18:46:11 -0800
Message-Id: <20201130024615.29171-6-bvanassche@acm.org>
In-Reply-To: <20201130024615.29171-1-bvanassche@acm.org>
References: <20201130024615.29171-1-bvanassche@acm.org>

scsi_eh_lock_door() is the only function in the SCSI error handler that calls
blk_get_request(). It is not guaranteed that a request is available when
scsi_eh_lock_door() is called. Hence pass the BLK_MQ_REQ_NOWAIT flag to
blk_get_request().
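As a rough illustration (a sketch, not part of the patch itself): without
BLK_MQ_REQ_NOWAIT the allocation may sleep in blk_queue_enter() until the
queue is unfrozen or leaves the pm-only state, which can stall the error
handler; with the flag the allocation fails fast with an ERR_PTR() and the
door simply is not locked this time around.

    /* Sketch only: blocking allocation -- may sleep indefinitely here. */
    req = blk_get_request(sdev->request_queue, REQ_OP_SCSI_IN, 0);

    /* Non-blocking allocation as done by this patch: fail fast instead. */
    req = blk_get_request(sdev->request_queue, REQ_OP_SCSI_IN,
                          BLK_MQ_REQ_NOWAIT);
    if (IS_ERR(req))
        return;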
Reviewed-by: Alan Stern
Reviewed-by: Christoph Hellwig
Cc: Can Guo
Cc: Stanley Chu
Cc: Ming Lei
Cc: Rafael J. Wysocki
Signed-off-by: Bart Van Assche
---
 drivers/scsi/scsi_error.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
index d94449188270..6de6e1bf3dcb 100644
--- a/drivers/scsi/scsi_error.c
+++ b/drivers/scsi/scsi_error.c
@@ -1993,7 +1993,12 @@ static void scsi_eh_lock_door(struct scsi_device *sdev)
 	struct request *req;
 	struct scsi_request *rq;
 
-	req = blk_get_request(sdev->request_queue, REQ_OP_SCSI_IN, 0);
+	/*
+	 * It is not guaranteed that a request is available nor that
+	 * sdev->request_queue is unfrozen. Hence the BLK_MQ_REQ_NOWAIT below.
+	 */
+	req = blk_get_request(sdev->request_queue, REQ_OP_SCSI_IN,
+			      BLK_MQ_REQ_NOWAIT);
 	if (IS_ERR(req))
 		return;
 	rq = scsi_req(req);

From patchwork Mon Nov 30 02:46:14 2020
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 334972
From: Bart Van Assche
To: "Martin K. Petersen"
Cc: "James E. J. Bottomley", Jens Axboe, Christoph Hellwig, Ming Lei,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org, Bart Van Assche,
 Can Guo, Stanley Chu, Alan Stern, "Rafael J. Wysocki", Martin Kepplinger
Subject: [PATCH v4 8/9] block: Remove RQF_PREEMPT and BLK_MQ_REQ_PREEMPT
Date: Sun, 29 Nov 2020 18:46:14 -0800
Message-Id: <20201130024615.29171-9-bvanassche@acm.org>
In-Reply-To: <20201130024615.29171-1-bvanassche@acm.org>
References: <20201130024615.29171-1-bvanassche@acm.org>

Remove flag RQF_PREEMPT and BLK_MQ_REQ_PREEMPT since these are no longer used
by any kernel code.

Cc: Christoph Hellwig
Cc: Can Guo
Cc: Stanley Chu
Cc: Alan Stern
Cc: Ming Lei
Cc: Rafael J. Wysocki
Cc: Martin Kepplinger
Signed-off-by: Bart Van Assche
Reviewed-by: Christoph Hellwig
Reviewed-by: Hannes Reinecke
---
 block/blk-core.c       | 7 +++----
 block/blk-mq-debugfs.c | 1 -
 block/blk-mq.c         | 2 --
 include/linux/blk-mq.h | 2 --
 include/linux/blkdev.h | 6 +-----
 5 files changed, 4 insertions(+), 14 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 10696f9fb6ac..a00bce9f46d8 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -424,11 +424,11 @@ EXPORT_SYMBOL(blk_cleanup_queue);
 /**
  * blk_queue_enter() - try to increase q->q_usage_counter
  * @q: request queue pointer
- * @flags: BLK_MQ_REQ_NOWAIT, BLK_MQ_REQ_PM and/or BLK_MQ_REQ_PREEMPT
+ * @flags: BLK_MQ_REQ_NOWAIT and/or BLK_MQ_REQ_PM
  */
 int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
 {
-	const bool pm = flags & (BLK_MQ_REQ_PM | BLK_MQ_REQ_PREEMPT);
+	const bool pm = flags & BLK_MQ_REQ_PM;
 
 	while (true) {
 		bool success = false;
@@ -630,8 +630,7 @@ struct request *blk_get_request(struct request_queue *q, unsigned int op,
 	struct request *req;
 
 	WARN_ON_ONCE(op & REQ_NOWAIT);
-	WARN_ON_ONCE(flags & ~(BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_PM |
-			       BLK_MQ_REQ_PREEMPT));
+	WARN_ON_ONCE(flags & ~(BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_PM));
 
 	req = blk_mq_alloc_request(q, op, flags);
 	if (!IS_ERR(req) && q->mq_ops->initialize_rq_fn)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 3094542e12ae..9336a6f8d6ef 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -297,7 +297,6 @@ static const char *const rqf_name[] = {
 	RQF_NAME(MIXED_MERGE),
 	RQF_NAME(MQ_INFLIGHT),
 	RQF_NAME(DONTPREP),
-	RQF_NAME(PREEMPT),
 	RQF_NAME(FAILED),
 	RQF_NAME(QUIET),
 	RQF_NAME(ELVPRIV),

diff --git a/block/blk-mq.c b/block/blk-mq.c
index b5880a1fb38d..d50504888b68 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -294,8 +294,6 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
 	rq->cmd_flags = data->cmd_flags;
 	if (data->flags & BLK_MQ_REQ_PM)
 		rq->rq_flags |= RQF_PM;
-	if (data->flags & BLK_MQ_REQ_PREEMPT)
-		rq->rq_flags |= RQF_PREEMPT;
 	if (blk_queue_io_stat(data->q))
 		rq->rq_flags |= RQF_IO_STAT;
 	INIT_LIST_HEAD(&rq->queuelist);

diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index c00e856c6fb1..88af1df94308 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -446,8 +446,6 @@ enum {
 	BLK_MQ_REQ_RESERVED	= (__force blk_mq_req_flags_t)(1 << 1),
 	/* set RQF_PM */
 	BLK_MQ_REQ_PM		= (__force blk_mq_req_flags_t)(1 << 2),
-	/* set RQF_PREEMPT */
-	BLK_MQ_REQ_PREEMPT	= (__force blk_mq_req_flags_t)(1 << 3),
 };
 
 struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 639cae2c158b..7d4b746f7e6a 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -79,9 +79,6 @@ typedef __u32 __bitwise req_flags_t;
 #define RQF_MQ_INFLIGHT		((__force req_flags_t)(1 << 6))
 /* don't call prep for this one */
 #define RQF_DONTPREP		((__force req_flags_t)(1 << 7))
-/* set for "ide_preempt" requests and also for requests for which the SCSI
-   "quiesce" state must be ignored. */
-#define RQF_PREEMPT		((__force req_flags_t)(1 << 8))
 /* vaguely specified driver internal error. Ignored by the block layer */
 #define RQF_FAILED		((__force req_flags_t)(1 << 10))
 /* don't warn about errors */
@@ -430,8 +427,7 @@ struct request_queue {
 	unsigned long		queue_flags;
 
 	/*
 	 * Number of contexts that have called blk_set_pm_only(). If this
-	 * counter is above zero then only RQF_PM and RQF_PREEMPT requests are
-	 * processed.
+	 * counter is above zero then only RQF_PM requests are processed.
 	 */
 	atomic_t		pm_only;

From patchwork Mon Nov 30 02:46:15 2020
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 334971
From: Bart Van Assche
To: "Martin K. Petersen"
Cc: "James E. J. Bottomley", Jens Axboe, Christoph Hellwig, Ming Lei,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org, Bart Van Assche,
 Alan Stern, Can Guo, Stanley Chu, "Rafael J. Wysocki", Martin Kepplinger
Subject: [PATCH v4 9/9] block: Do not accept any requests while suspended
Date: Sun, 29 Nov 2020 18:46:15 -0800
Message-Id: <20201130024615.29171-10-bvanassche@acm.org>
In-Reply-To: <20201130024615.29171-1-bvanassche@acm.org>
References: <20201130024615.29171-1-bvanassche@acm.org>

From: Alan Stern

blk_queue_enter() accepts BLK_MQ_REQ_PM requests independent of the runtime
power management state. Now that SCSI domain validation no longer depends on
this behavior, modify the behavior of blk_queue_enter() as follows:
- Do not accept any requests while suspended.
- Only process power management requests while suspending or resuming.

Submitting BLK_MQ_REQ_PM requests to a device that is runtime suspended causes
runtime-suspended devices not to resume as they should. The request which
should cause a runtime resume instead gets issued directly, without resuming
the device first. Of course the device can't handle it properly, the I/O
fails, and the device remains suspended.

The problem is fixed by checking that the queue's runtime-PM status isn't
RPM_SUSPENDED before allowing a request to be issued, and queuing a
runtime-resume request if it is. In particular, the inline
blk_pm_request_resume() routine is renamed blk_pm_resume_queue() and the code
is unified by merging the surrounding checks into the routine. If the queue
isn't set up for runtime PM, or there currently is no restriction on allowed
requests, the request is allowed. Likewise if the BLK_MQ_REQ_PM flag is set
and the status isn't RPM_SUSPENDED. Otherwise a runtime resume is queued and
the request is blocked until conditions are more suitable.

Reviewed-by: Christoph Hellwig
Reviewed-by: Can Guo
Reviewed-by: Stanley Chu
Cc: Ming Lei
Cc: Rafael J. Wysocki
Reported-and-tested-by: Martin Kepplinger
Signed-off-by: Alan Stern
Signed-off-by: Bart Van Assche
[ bvanassche: modified commit message and removed Cc: stable because without
  the previous patches from this series this patch would break parallel SCSI
  domain validation ]
Reviewed-by: Hannes Reinecke
---
 block/blk-core.c |  6 +++---
 block/blk-pm.h   | 14 +++++++++-----
 2 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index a00bce9f46d8..230880cbf8c8 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -440,7 +440,8 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
 			 * responsible for ensuring that that counter is
 			 * globally visible before the queue is unfrozen.
 			 */
-			if (pm || !blk_queue_pm_only(q)) {
+			if ((pm && q->rpm_status != RPM_SUSPENDED) ||
+			    !blk_queue_pm_only(q)) {
 				success = true;
 			} else {
 				percpu_ref_put(&q->q_usage_counter);
@@ -465,8 +466,7 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
 
 		wait_event(q->mq_freeze_wq,
 			   (!q->mq_freeze_depth &&
-			    (pm || (blk_pm_request_resume(q),
-				    !blk_queue_pm_only(q)))) ||
+			    blk_pm_resume_queue(pm, q)) ||
 			   blk_queue_dying(q));
 		if (blk_queue_dying(q))
 			return -ENODEV;

diff --git a/block/blk-pm.h b/block/blk-pm.h
index ea5507d23e75..a2283cc9f716 100644
--- a/block/blk-pm.h
+++ b/block/blk-pm.h
@@ -6,11 +6,14 @@
 #include <linux/pm_runtime.h>
 
 #ifdef CONFIG_PM
-static inline void blk_pm_request_resume(struct request_queue *q)
+static inline int blk_pm_resume_queue(const bool pm, struct request_queue *q)
 {
-	if (q->dev && (q->rpm_status == RPM_SUSPENDED ||
-		       q->rpm_status == RPM_SUSPENDING))
-		pm_request_resume(q->dev);
+	if (!q->dev || !blk_queue_pm_only(q))
+		return 1;	/* Nothing to do */
+	if (pm && q->rpm_status != RPM_SUSPENDED)
+		return 1;	/* Request allowed */
+	pm_request_resume(q->dev);
+	return 0;
 }
 
 static inline void blk_pm_mark_last_busy(struct request *rq)
@@ -44,8 +47,9 @@ static inline void blk_pm_put_request(struct request *rq)
 		--rq->q->nr_pending;
 }
 #else
-static inline void blk_pm_request_resume(struct request_queue *q)
+static inline int blk_pm_resume_queue(const bool pm, struct request_queue *q)
 {
+	return 1;
 }
 
 static inline void blk_pm_mark_last_busy(struct request *rq)
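To summarize the resulting behavior, here is a minimal sketch (a paraphrase of
the logic in the hunks above, not code from this series) of when
blk_queue_enter() now admits a request immediately:

    /* Sketch only: mirrors the fast-path test in blk_queue_enter(). */
    static inline bool queue_admits_request(struct request_queue *q, bool pm)
    {
    	if (!blk_queue_pm_only(q))
    		return true;	/* no PM restriction is active */
    	if (pm && q->rpm_status != RPM_SUSPENDED)
    		return true;	/* PM request while suspending or resuming */
    	/* Otherwise blk_pm_resume_queue() kicks pm_request_resume() and the
    	 * caller waits in blk_queue_enter() until conditions change. */
    	return false;
    }

In other words, a BLK_MQ_REQ_PM request is still accepted while the queue is
suspending or resuming, but once the queue is fully runtime suspended even PM
requests wait until a runtime resume has been started.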