From patchwork Thu May 20 09:21:54 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 444617
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Nicolas Pitre,
 Ard Biesheuvel, Russell King, Sasha Levin
Subject: [PATCH 5.12 06/45] ARM: 9058/1: cache-v7: refactor v7_invalidate_l1 to avoid clobbering r5/r6
Date: Thu, 20 May 2021 11:21:54 +0200
Message-Id: <20210520092053.731407333@linuxfoundation.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210520092053.516042993@linuxfoundation.org>
References: <20210520092053.516042993@linuxfoundation.org>
User-Agent: quilt/0.66
Precedence: bulk
List-ID:
X-Mailing-List: stable@vger.kernel.org

From: Ard Biesheuvel

[ Upstream commit f9e7a99fb6b86aa6a00e53b34ee6973840e005aa ]

The cache invalidation code in v7_invalidate_l1 can be tweaked to
re-read the associativity from CCSIDR, and keep the way identifier
component in a single register that is assigned in the outer loop. This
way, we need two fewer registers.

Given that the number of sets is typically much larger than the
associativity, rearrange the code so that the outer loop has the
smaller number of iterations, ensuring that the re-read of CCSIDR only
occurs a handful of times in practice.

Fix the whitespace while at it, and update the comment to indicate that
this code is no longer a clone of anything else.

Acked-by: Nicolas Pitre
Signed-off-by: Ard Biesheuvel
Signed-off-by: Russell King
Signed-off-by: Sasha Levin
---
 arch/arm/mm/cache-v7.S | 51 +++++++++++++++++++++---------------------
 1 file changed, 25 insertions(+), 26 deletions(-)

diff --git a/arch/arm/mm/cache-v7.S b/arch/arm/mm/cache-v7.S
index dc8f152f3556..e3bc1d6e13d0 100644
--- a/arch/arm/mm/cache-v7.S
+++ b/arch/arm/mm/cache-v7.S
@@ -33,41 +33,40 @@ icache_size:
  * processor.  We fix this by performing an invalidate, rather than a
  * clean + invalidate, before jumping into the kernel.
  *
- * This function is cloned from arch/arm/mach-tegra/headsmp.S, and needs
- * to be called for both secondary cores startup and primary core resume
- * procedures.
+ * This function needs to be called for both secondary cores startup and
+ * primary core resume procedures.
  */
 ENTRY(v7_invalidate_l1)
        mov     r0, #0
        mcr     p15, 2, r0, c0, c0, 0
        mrc     p15, 1, r0, c0, c0, 0

-       movw    r1, #0x7fff
-       and     r2, r1, r0, lsr #13
+       movw    r3, #0x3ff
+       and     r3, r3, r0, lsr #3      @ 'Associativity' in CCSIDR[12:3]
+       clz     r1, r3                  @ WayShift
+       mov     r2, #1
+       mov     r3, r3, lsl r1          @ NumWays-1 shifted into bits [31:...]
+       movs    r1, r2, lsl r1          @ #1 shifted left by same amount
+       moveq   r1, #1                  @ r1 needs value > 0 even if only 1 way

-       movw    r1, #0x3ff
+       and     r2, r0, #0x7
+       add     r2, r2, #4              @ SetShift

-       and     r3, r1, r0, lsr #3      @ NumWays - 1
-       add     r2, r2, #1              @ NumSets
+1:     movw    r4, #0x7fff
+       and     r0, r4, r0, lsr #13     @ 'NumSets' in CCSIDR[27:13]

-       and     r0, r0, #0x7
-       add     r0, r0, #4              @ SetShift
-
-       clz     r1, r3                  @ WayShift
-       add     r4, r3, #1              @ NumWays
-1:     sub     r2, r2, #1              @ NumSets--
-       mov     r3, r4                  @ Temp = NumWays
-2:     subs    r3, r3, #1              @ Temp--
-       mov     r5, r3, lsl r1
-       mov     r6, r2, lsl r0
-       orr     r5, r5, r6              @ Reg = (Temp<<WayShift)|(NumSets<<SetShift)
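
[Editor's note: the following is not part of the patch. It is a rough,
illustrative C sketch of the loop restructuring described in the commit
message: the old code walks sets in the outer loop and ways in the inner
loop, rebuilding both components of the set/way operand each iteration,
while the new code fixes the way bits per outer iteration and only varies
the set bits in the inner loop. The helper dcisw() and the geometry
parameters are made-up names standing in for the DCISW set/way invalidate
("mcr p15, 0, <reg>, c7, c6, 2") and the fields decoded from CCSIDR; the
authoritative version is the assembly in the diff above.]

#include <stdint.h>

/* Hypothetical stand-in for the DCISW set/way invalidate operation. */
static void dcisw(uint32_t setway)
{
        (void)setway;
}

/* Old structure: sets outer, ways inner; both components rebuilt each time
 * (this is what needed the extra r5/r6 temporaries). */
static void invalidate_l1_old_order(uint32_t num_sets, uint32_t num_ways,
                                    uint32_t set_shift, uint32_t way_shift)
{
        for (uint32_t set = num_sets; set-- > 0;)
                for (uint32_t way = num_ways; way-- > 0;)
                        dcisw((way << way_shift) | (set << set_shift));
}

/* New structure: ways outer, sets inner; the way component is computed once
 * per outer iteration, and since num_sets >> num_ways, the outer loop (and
 * the re-read of the set count from CCSIDR) runs only a handful of times. */
static void invalidate_l1_new_order(uint32_t num_sets, uint32_t num_ways,
                                    uint32_t set_shift, uint32_t way_shift)
{
        for (uint32_t way = num_ways; way-- > 0;) {
                uint32_t way_bits = way << way_shift;

                for (uint32_t set = num_sets; set-- > 0;)
                        dcisw(way_bits | (set << set_shift));
        }
}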