From patchwork Fri Sep 4 13:30:16 2020
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 264386
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, JiangYu, Daniel Meyerholt,
 Mike Christie, Bodo Stroesser, "Martin K. Petersen"
Subject: [PATCH 5.8 17/17] scsi: target: tcmu: Optimize use of flush_dcache_page
Date: Fri, 4 Sep 2020 15:30:16 +0200
Message-Id: <20200904120258.840415117@linuxfoundation.org>
In-Reply-To: <20200904120257.983551609@linuxfoundation.org>
References: <20200904120257.983551609@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Bodo Stroesser

commit 3c58f737231e2c8cbf543a09d84d8c8e80e05e43 upstream.

(scatter|gather)_data_area() need to flush the dcache after writing data
to, or before reading data from, a page in the uio data area. The two
routines handle the transfer to/from such a page in fragments, and they
flush the cache after each fragment is copied by calling the wrapper
tcmu_flush_dcache_range().

That means:

1) flush_dcache_page() can be called multiple times for the same page.

2) Calling flush_dcache_page() indirectly through the wrapper does not
   make sense, because each call of the wrapper covers one single page
   only and the calling routine already has the correct page pointer.
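For illustration, here is a minimal sketch contrasting the two schemes.
The helper names (copy_to_data_page_old/new, copy_from_data_page) are
hypothetical, not functions from the driver; tcmu_flush_dcache_range()
effectively calls flush_dcache_page() for each page covered by the
range, so the old scheme flushes the same page once per fragment:

#include <linux/highmem.h>
#include <linux/string.h>

/* Old scheme: one flush per copied fragment, same page every time. */
static void copy_to_data_page_old(struct page *page, const char *src,
				  size_t frag_size, int nr_frags)
{
	char *to = kmap_atomic(page);
	int i;

	for (i = 0; i < nr_frags; i++) {
		memcpy(to + i * frag_size, src + i * frag_size, frag_size);
		flush_dcache_page(page);	/* what the wrapper boils down to */
	}
	kunmap_atomic(to);
}

/* New scheme: a single flush once writing to the page is complete. */
static void copy_to_data_page_new(struct page *page, const char *src,
				  size_t frag_size, int nr_frags)
{
	char *to = kmap_atomic(page);
	int i;

	for (i = 0; i < nr_frags; i++)
		memcpy(to + i * frag_size, src + i * frag_size, frag_size);
	flush_dcache_page(page);	/* flush right before unmapping */
	kunmap_atomic(to);
}

/* Read side of the new scheme: flush right after mapping the page. */
static void copy_from_data_page(struct page *page, char *dst,
				size_t off, size_t len)
{
	char *from = kmap_atomic(page);

	flush_dcache_page(page);
	memcpy(dst, from + off, len);
	kunmap_atomic(from);
}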
Change (scatter|gather)_data_area() such that, instead of calling
tcmu_flush_dcache_range() before/after each memcpy, they now call
flush_dcache_page() before unmapping a page (when writing to that page
is complete) or after mapping a page (when starting to read from it).

After this change, the only remaining calls to tcmu_flush_dcache_range()
are for addresses in the vmalloc'ed command ring.

The patch was tested on ARM with kernels 4.19.118 and 5.7.2.

Link: https://lore.kernel.org/r/20200618131632.32748-2-bstroesser@ts.fujitsu.com
Tested-by: JiangYu
Tested-by: Daniel Meyerholt
Acked-by: Mike Christie
Signed-off-by: Bodo Stroesser
Signed-off-by: Martin K. Petersen
Signed-off-by: Greg Kroah-Hartman
---
 drivers/target/target_core_user.c |   11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -676,8 +676,10 @@ static void scatter_data_area(struct tcm
 		from = kmap_atomic(sg_page(sg)) + sg->offset;
 		while (sg_remaining > 0) {
 			if (block_remaining == 0) {
-				if (to)
+				if (to) {
+					flush_dcache_page(page);
 					kunmap_atomic(to);
+				}
 
 				block_remaining = DATA_BLOCK_SIZE;
 				dbi = tcmu_cmd_get_dbi(tcmu_cmd);
@@ -722,7 +724,6 @@ static void scatter_data_area(struct tcm
 				memcpy(to + offset,
 				       from + sg->length - sg_remaining,
 				       copy_bytes);
-				tcmu_flush_dcache_range(to, copy_bytes);
 			}
 
 			sg_remaining -= copy_bytes;
@@ -731,8 +732,10 @@ static void scatter_data_area(struct tcm
 		kunmap_atomic(from - sg->offset);
 	}
 
-	if (to)
+	if (to) {
+		flush_dcache_page(page);
 		kunmap_atomic(to);
+	}
 }
 
 static void gather_data_area(struct tcmu_dev *udev, struct tcmu_cmd *cmd,
@@ -778,13 +781,13 @@ static void gather_data_area(struct tcmu
 			dbi = tcmu_cmd_get_dbi(cmd);
 			page = tcmu_get_block_page(udev, dbi);
 			from = kmap_atomic(page);
+			flush_dcache_page(page);
 		}
 		copy_bytes = min_t(size_t, sg_remaining,
 				block_remaining);
 		if (read_len < copy_bytes)
 			copy_bytes = read_len;
 		offset = DATA_BLOCK_SIZE - block_remaining;
-		tcmu_flush_dcache_range(from, copy_bytes);
 		memcpy(to + sg->length - sg_remaining,
 		       from + offset, copy_bytes);