From patchwork Thu Mar 2 14:43:30 2023
X-Patchwork-Submitter: Ulf Hansson
X-Patchwork-Id: 658360
From: Ulf Hansson
To: linux-mmc@vger.kernel.org, Ulf Hansson, Jens Axboe
Cc: Wenchao Chen, Adrian Hunter, Avri Altman, Christian Lohle,
 linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH] mmc: core: Disable REQ_FUA if the eMMC supports an
 internal cache
Date: Thu, 2 Mar 2023 15:43:30 +0100
Message-Id: <20230302144330.274947-1-ulf.hansson@linaro.org>
X-Mailer: git-send-email 2.34.1

REQ_FUA is in general supported for eMMC cards, where it is translated into
so-called "reliable writes". To support these write operations, CMD23
(MMC_CAP_CMD23) needs to be supported by the mmc host too, which is common
but not always the case.

For some eMMC devices, it has been reported that reliable writes are quite
costly, leading to performance degradation.

To improve the situation, let's avoid announcing REQ_FUA support if the
eMMC supports an internal cache, as that allows us to rely solely on flush
requests (REQ_OP_FLUSH) instead, which seem to be a lot cheaper.

Note that mmc hosts that lack CMD23 support are already using this type of
configuration, whatever the implications of that may be.

Reported-by: Wenchao Chen
Signed-off-by: Ulf Hansson
Acked-by: Bean Huo
Acked-by: Avri Altman
---
Note that I haven't been able to test this patch myself, but am relying on
Wenchao and others to help out. Sharing some performance numbers from
before and after the patch would be nice.

Moreover, what is not clear to me (hence the RFC) is whether relying solely
on flush requests is sufficient, and as such, whether this is a good idea
after all. Comments are highly appreciated in this regard.

Kind regards
Ulf Hansson
---
 drivers/mmc/core/block.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 672ab90c4b2d..2a49531bf023 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -2490,15 +2490,20 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
 		md->flags |= MMC_BLK_CMD23;
 	}
 
-	if (md->flags & MMC_BLK_CMD23 &&
-	    ((card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN) ||
-	     card->ext_csd.rel_sectors)) {
+	/*
+	 * REQ_FUA is supported through eMMC reliable writes, which has been
+	 * reported to be quite costly for some eMMCs. Therefore, let's rely
+	 * on flush requests (REQ_OP_FLUSH), if an internal cache is supported.
+	 */
+	if (mmc_cache_enabled(card->host)) {
+		cache_enabled = true;
+	} else if (md->flags & MMC_BLK_CMD23 &&
+		   (card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN ||
+		    card->ext_csd.rel_sectors)) {
 		md->flags |= MMC_BLK_REL_WR;
 		fua_enabled = true;
 		cache_enabled = true;
 	}
-	if (mmc_cache_enabled(card->host))
-		cache_enabled = true;
 
 	blk_queue_write_cache(md->queue.queue, cache_enabled, fua_enabled);
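
For additional context: a "reliable write" is requested by setting bit 31
(MMC_CMD23_ARG_REL_WR) in the argument of the CMD23 (SET_BLOCK_COUNT)
command that precedes the data transfer. Below is a minimal sketch of that
setup, loosely modelled on the request preparation in block.c; the helper
name is made up for illustration and this is not code from the patch:

static void sketch_prep_reliable_write(struct mmc_queue_req *mqrq,
				       bool do_rel_wr)
{
	struct mmc_blk_request *brq = &mqrq->brq;

	/* CMD23 announces the block count for the upcoming transfer... */
	brq->sbc.opcode = MMC_SET_BLOCK_COUNT;
	/* ...and bit 31 turns the following write into a reliable write. */
	brq->sbc.arg = brq->data.blocks |
		       (do_rel_wr ? MMC_CMD23_ARG_REL_WR : 0);
	brq->sbc.flags = MMC_RSP_R1 | MMC_CMD_AC;
}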
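
As for the correctness of the fallback: when blk_queue_write_cache() is
called with cache_enabled set but fua_enabled cleared, the block layer
emulates REQ_FUA by issuing a flush after the data write. A simplified
paraphrase of the policy in block/blk-flush.c (a sketch, not verbatim
kernel code):

unsigned int sketch_flush_policy(bool has_cache, bool has_fua,
				 struct request *rq)
{
	unsigned int policy = 0;

	if (blk_rq_sectors(rq))
		policy |= REQ_FSEQ_DATA;
	if (has_cache) {
		if (rq->cmd_flags & REQ_PREFLUSH)
			policy |= REQ_FSEQ_PREFLUSH;
		/* No native FUA: emulate REQ_FUA with a post-flush. */
		if ((rq->cmd_flags & REQ_FUA) && !has_fua)
			policy |= REQ_FSEQ_POSTFLUSH;
	}
	return policy;
}

So with this patch a REQ_FUA write still reaches non-volatile storage
before completion; it just costs a flush command instead of a reliable
write.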