From patchwork Mon Feb 24 12:17:29 2020
From: Ard Biesheuvel
To: linux-efi@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel, Russell King,
    Marc Zyngier, Nicolas Pitre, Catalin Marinas, Tony Lindgren,
    Linus Walleij
Subject: [PATCH v3 1/5] efi/arm: Work around missing cache maintenance in
 decompressor handover
Date: Mon, 24 Feb 2020 13:17:29 +0100
Message-Id: <20200224121733.2202-2-ardb@kernel.org>
In-Reply-To: <20200224121733.2202-1-ardb@kernel.org>
References: <20200224121733.2202-1-ardb@kernel.org>

The EFI stub executes within the context of the zImage as it was loaded
by the firmware: it is treated as an ordinary PE/COFF executable, which
is loaded into memory and cleaned to the PoU, so that it can be executed
safely while the MMU and caches are on.

When the EFI stub hands over to the decompressor, we clean the caches
by set/way and disable the MMU and D-cache, to comply with the Linux
boot protocol for ARM. However, cache maintenance by set/way is not
sufficient to ensure that subsequent instruction fetches and data
accesses done with the MMU off see the correct data.
This means that proceeding as we do currently is not safe, especially
since we also perform data accesses with the MMU off, from a literal
pool as well as the stack.

So let's kick this can down the road a bit, and jump into the relocated
zImage before disabling the caches. This removes the requirement to
perform any by-VA cache maintenance on the original PE/COFF executable,
but it does require that the relocated zImage is cleaned to the PoC,
which is currently not the case. This will be addressed in a subsequent
patch.

Signed-off-by: Ard Biesheuvel
---
 arch/arm/boot/compressed/head.S | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
index 088b0a060876..39f7071d47c7 100644
--- a/arch/arm/boot/compressed/head.S
+++ b/arch/arm/boot/compressed/head.S
@@ -1461,6 +1461,17 @@ ENTRY(efi_stub_entry)
 		@ Preserve return value of efi_entry() in r4
 		mov	r4, r0
 		bl	cache_clean_flush
+
+		@ The PE/COFF loader might not have cleaned the code we are
+		@ running beyond the PoU, and so calling cache_off below from
+		@ inside the PE/COFF loader allocated region is unsafe. Let's
+		@ assume our own zImage relocation code did a better job, and
+		@ jump into its version of this routine before proceeding.
+		ldr	r0, [sp]		@ relocated zImage
+		ldr	r1, .Ljmp
+		sub	r1, r0, r1
+		mov	pc, r1			@ no mode switch
+0:
 		bl	cache_off
 
 		@ Set parameters for booting zImage according to boot protocol
@@ -1469,18 +1480,15 @@ ENTRY(efi_stub_entry)
 		mov	r0, #0
 		mov	r1, #0xFFFFFFFF
 		mov	r2, r4
-
-		@ Branch to (possibly) relocated zImage that is in [sp]
-		ldr	lr, [sp]
-		ldr	ip, =start_offset
-		add	lr, lr, ip
-		mov	pc, lr			@ no mode switch
+		b	__efi_start
 
 efi_load_fail:
 		@ Return EFI_LOAD_ERROR to EFI firmware on error.
 		ldr	r0, =0x80000001
 		ldmfd	sp!, {ip, pc}
 ENDPROC(efi_stub_entry)
 
+		.align	2
+.Ljmp:		.long	start - 0b
 #endif
 
 		.align

From patchwork Mon Feb 24 12:17:31 2020
From: Ard Biesheuvel
To: linux-efi@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel, Russell King,
    Marc Zyngier, Nicolas Pitre, Catalin Marinas, Tony Lindgren,
    Linus Walleij
Subject: [PATCH v3 3/5] ARM: decompressor: factor out routine to obtain the
 inflated image size
Date: Mon, 24 Feb 2020 13:17:31 +0100
Message-Id: <20200224121733.2202-4-ardb@kernel.org>
In-Reply-To: <20200224121733.2202-1-ardb@kernel.org>
References: <20200224121733.2202-1-ardb@kernel.org>

Before adding another reference to the inflated image size, factor out
the slightly complicated way of loading the unaligned little-endian
constant from the end of the compressed data.
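For illustration only (a sketch, not kernel code): in C terms, the load
performed by the new macro is an unaligned little-endian 32-bit read,
along the lines of the kernel's get_unaligned_le32() helper; the
standalone function name below is mine:

#include <stdint.h>

/* Assemble a 32-bit little-endian value one byte at a time, so the
 * access is safe regardless of the pointer's alignment. */
static uint32_t read_unaligned_le32(const uint8_t *p)
{
	return (uint32_t)p[0] |
	       ((uint32_t)p[1] << 8) |
	       ((uint32_t)p[2] << 16) |
	       ((uint32_t)p[3] << 24);
}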
Signed-off-by: Ard Biesheuvel
---
 arch/arm/boot/compressed/head.S | 43 ++++++++++++--------
 1 file changed, 26 insertions(+), 17 deletions(-)

diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
index 8487221bedb0..674e55400cfd 100644
--- a/arch/arm/boot/compressed/head.S
+++ b/arch/arm/boot/compressed/head.S
@@ -151,6 +151,25 @@
 .L_\@:
 		.endm
 
+		/*
+		 * The kernel build system appends the size of the
+		 * decompressed kernel at the end of the compressed data
+		 * in little-endian form.
+		 */
+		.macro	get_inflated_image_size, res:req, tmp1:req, tmp2:req
+		adr	\res, .Linflated_image_size_offset
+		ldr	\tmp1, [\res]
+		add	\tmp1, \tmp1, \res	@ offset of inflated image size
+
+		ldrb	\res, [\tmp1]		@ get_unaligned_le32
+		ldrb	\tmp2, [\tmp1, #1]
+		orr	\res, \res, \tmp2, lsl #8
+		ldrb	\tmp2, [\tmp1, #2]
+		ldrb	\tmp1, [\tmp1, #3]
+		orr	\res, \res, \tmp2, lsl #16
+		orr	\res, \res, \tmp1, lsl #24
+		.endm
+
 		.section ".start", "ax"
 /*
  * sort out different calling conventions
@@ -268,15 +287,15 @@ not_angel:
 		 */
 		mov	r0, pc
 		cmp	r0, r4
-		ldrcc	r0, LC0+32
+		ldrcc	r0, LC0+28
 		addcc	r0, r0, pc
 		cmpcc	r4, r0
 		orrcc	r4, r4, #1		@ remember we skipped cache_on
 		blcs	cache_on
 
 restart:	adr	r0, LC0
-		ldmia	r0, {r1, r2, r3, r6, r10, r11, r12}
-		ldr	sp, [r0, #28]
+		ldmia	r0, {r1, r2, r3, r6, r11, r12}
+		ldr	sp, [r0, #24]
 
 /*
  * We might be running at a different address.  We need
@@ -284,20 +303,8 @@ restart:	adr	r0, LC0
 */
 		sub	r0, r0, r1		@ calculate the delta offset
 		add	r6, r6, r0		@ _edata
-		add	r10, r10, r0		@ inflated kernel size location
 
-		/*
-		 * The kernel build system appends the size of the
-		 * decompressed kernel at the end of the compressed data
-		 * in little-endian form.
-		 */
-		ldrb	r9, [r10, #0]
-		ldrb	lr, [r10, #1]
-		orr	r9, r9, lr, lsl #8
-		ldrb	lr, [r10, #2]
-		ldrb	r10, [r10, #3]
-		orr	r9, r9, lr, lsl #16
-		orr	r9, r9, r10, lsl #24
+		get_inflated_image_size	r9, r10, lr
 
 #ifndef CONFIG_ZBOOT_ROM
 		/* malloc space is above the relocated stack (64k max) */
@@ -652,13 +659,15 @@ LC0:		.word	LC0			@ r1
 		.word	__bss_start		@ r2
 		.word	_end			@ r3
 		.word	_edata			@ r6
-		.word	input_data_end - 4	@ r10 (inflated size location)
 		.word	_got_start		@ r11
 		.word	_got_end		@ ip
 		.word	.L_user_stack_end	@ sp
 		.word	_end - restart + 16384 + 1024*1024
 		.size	LC0, . - LC0
 
+.Linflated_image_size_offset:
+		.long	(input_data_end - 4) - .
+
 #ifdef CONFIG_ARCH_RPC
 		.globl	params
 params:	ldr	r0, =0x10000100		@ params_phys for RPC

From patchwork Mon Feb 24 12:17:33 2020
From: Ard Biesheuvel
To: linux-efi@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel, Russell King,
    Marc Zyngier, Nicolas Pitre, Catalin Marinas, Tony Lindgren,
    Linus Walleij
Subject: [PATCH v3 5/5] ARM: decompressor: switch to by-VA cache maintenance
 for v7 cores
Date: Mon, 24 Feb 2020 13:17:33 +0100
Message-Id: <20200224121733.2202-6-ardb@kernel.org>
In-Reply-To: <20200224121733.2202-1-ardb@kernel.org>
References: <20200224121733.2202-1-ardb@kernel.org>

Update the v7 cache_clean_flush routine to take into account the memory
range passed in r0/r1, and perform cache maintenance by virtual address
on that range, instead of set/way maintenance, which is inappropriate
for the purpose of maintaining the cached state of memory contents.

Since this removes any use of the stack in the implementation of
cache_clean_flush(), we can also drop the code that manages the value
of the stack pointer before calling it.
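As a rough C sketch of the by-VA loop added below (dccimvac() is a
hypothetical stand-in for the DCCIMVAC operation, i.e. the
'mcr p15, 0, <va>, c7, c14, 1' instruction, and line_size is the value
the dcache_line_size macro derives from CTR):

#include <stdint.h>

/* Hypothetical stand-in for DCCIMVAC; a no-op here for illustration. */
static void dccimvac(uintptr_t va) { (void)va; }

/* Clean+invalidate the D-cache by VA over [start, end), rounding both
 * bounds down to a cache line boundary first. */
static void clean_inval_range(uintptr_t start, uintptr_t end,
			      uintptr_t line_size)
{
	uintptr_t mask = line_size - 1;

	start &= ~mask;			/* round start down to line size */
	end = (end - 1) & ~mask;	/* end address is exclusive */

	for (; start <= end; start += line_size)
		dccimvac(start);	/* D-cache clean+invalidate by VA */
}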
Signed-off-by: Ard Biesheuvel
---
 arch/arm/boot/compressed/head.S | 82 +++++++-------------
 1 file changed, 30 insertions(+), 52 deletions(-)

diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
index 12d631503bfa..aedc9bdb1719 100644
--- a/arch/arm/boot/compressed/head.S
+++ b/arch/arm/boot/compressed/head.S
@@ -528,10 +528,6 @@ dtb_check_done:
 /* Preserve offset to relocated code. */
 		sub	r6, r9, r6
 
-#ifndef CONFIG_ZBOOT_ROM
-		/* cache_clean_flush may use the stack, so relocate it */
-		add	sp, sp, r6
-#endif
 
 		adr	r0, restart
 		ldr	r1, .Lclean_size
@@ -688,6 +684,24 @@ params:	ldr	r0, =0x10000100		@ params_phys for RPC
 		.align
 #endif
 
+/*
+ * dcache_line_size - get the minimum D-cache line size from the CTR
+ * register on ARMv7.
+ */
+		.macro	dcache_line_size, reg, tmp
+#ifdef CONFIG_CPU_V7M
+		movw	\tmp, #:lower16:BASEADDR_V7M_SCB + V7M_SCB_CTR
+		movt	\tmp, #:upper16:BASEADDR_V7M_SCB + V7M_SCB_CTR
+		ldr	\tmp, [\tmp]
+#else
+		mrc	p15, 0, \tmp, c0, c0, 1	@ read ctr
+#endif
+		lsr	\tmp, \tmp, #16
+		and	\tmp, \tmp, #0xf	@ cache line size encoding
+		mov	\reg, #4		@ bytes per word
+		mov	\reg, \reg, lsl \tmp	@ actual cache line size
+		.endm
+
 /*
  * Turn on the cache. We need to setup some page tables so that we
  * can have both the I and D caches on.
@@ -1180,8 +1194,6 @@ __armv7_mmu_cache_off:
 		bic	r0, r0, #0x000c
 #endif
 		mcr	p15, 0, r0, c1, c0	@ turn MMU and cache off
-		mov	r12, lr
-		bl	__armv7_mmu_cache_flush
 		mov	r0, #0
 #ifdef CONFIG_MMU
 		mcr	p15, 0, r0, c8, c7, 0	@ invalidate whole TLB
 #endif
 		mcr	p15, 0, r0, c7, c5, 6	@ invalidate BTC
 		mcr	p15, 0, r0, c7, c10, 4	@ DSB
 		mcr	p15, 0, r0, c7, c5, 4	@ ISB
-		mov	pc, r12
+		mov	pc, lr
 
 /*
  * Clean and flush the cache to maintain consistency.
 *
@@ -1205,6 +1217,7 @@ __armv7_mmu_cache_off:
 		.align	5
 cache_clean_flush:
 		mov	r3, #16
+		mov	r11, r1
 		b	call_cache_fn
 
 __armv4_mpu_cache_flush:
@@ -1255,51 +1268,16 @@ __armv7_mmu_cache_flush:
 		mcr	p15, 0, r10, c7, c14, 0	@ clean+invalidate D
 		b	iflush
 hierarchical:
-		mcr	p15, 0, r10, c7, c10, 5	@ DMB
-		stmfd	sp!, {r0-r7, r9-r11}
-		mrc	p15, 1, r0, c0, c0, 1	@ read clidr
-		ands	r3, r0, #0x7000000	@ extract loc from clidr
-		mov	r3, r3, lsr #23		@ left align loc bit field
-		beq	finished		@ if loc is 0, then no need to clean
-		mov	r10, #0			@ start clean at cache level 0
-loop1:
-		add	r2, r10, r10, lsr #1	@ work out 3x current cache level
-		mov	r1, r0, lsr r2		@ extract cache type bits from clidr
-		and	r1, r1, #7		@ mask of the bits for current cache only
-		cmp	r1, #2			@ see what cache we have at this level
-		blt	skip			@ skip if no cache, or just i-cache
-		mcr	p15, 2, r10, c0, c0, 0	@ select current cache level in cssr
-		mcr	p15, 0, r10, c7, c5, 4	@ isb to sych the new cssr&csidr
-		mrc	p15, 1, r1, c0, c0, 0	@ read the new csidr
-		and	r2, r1, #7		@ extract the length of the cache lines
-		add	r2, r2, #4		@ add 4 (line length offset)
-		ldr	r4, =0x3ff
-		ands	r4, r4, r1, lsr #3	@ find maximum number on the way size
-		clz	r5, r4			@ find bit position of way size increment
-		ldr	r7, =0x7fff
-		ands	r7, r7, r1, lsr #13	@ extract max number of the index size
-loop2:
-		mov	r9, r4			@ create working copy of max way size
-loop3:
- ARM(		orr	r11, r10, r9, lsl r5	) @ factor way and cache number into r11
- ARM(		orr	r11, r11, r7, lsl r2	) @ factor index number into r11
 THUMB(		lsl	r6, r9, r5		)
 THUMB(		orr	r11, r10, r6		) @ factor way and cache number into r11
 THUMB(		lsl	r6, r7, r2		)
 THUMB(		orr	r11, r11, r6		) @ factor index number into r11
-		mcr	p15, 0, r11, c7, c14, 2	@ clean & invalidate by set/way
-		subs	r9, r9, #1		@ decrement the way
-		bge	loop3
-		subs	r7, r7, #1		@ decrement the index
-		bge	loop2
-skip:
-		add	r10, r10, #2		@ increment cache number
-		cmp	r3, r10
-		bgt	loop1
-finished:
-		ldmfd	sp!, {r0-r7, r9-r11}
-		mov	r10, #0			@ switch back to cache level 0
-		mcr	p15, 2, r10, c0, c0, 0	@ select current cache level in cssr
+		dcache_line_size r1, r2		@ r1 := dcache min line size
+		sub	r2, r1, #1		@ r2 := line size mask
+		bic	r0, r0, r2		@ round down start to line size
+		sub	r11, r11, #1		@ end address is exclusive
+		bic	r11, r11, r2		@ round down end to line size
+0:		cmp	r0, r11			@ finished?
+		bgt	iflush
+		mcr	p15, 0, r0, c7, c14, 1	@ Dcache clean/invalidate by VA
+		add	r0, r0, r1
+		b	0b
 iflush:
 		mcr	p15, 0, r10, c7, c10, 4	@ DSB
 		mcr	p15, 0, r10, c7, c5, 0	@ invalidate I+BTB
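
For reference (a sketch, not kernel code): the computation done by the
dcache_line_size macro above corresponds to the following C, where ctr
is the raw value read from the Cache Type Register, whose DminLine
field (CTR[19:16]) encodes log2 of the smallest D-cache line size in
words:

#include <stdint.h>

/* Decode the minimum D-cache line size, in bytes, from a raw CTR
 * value: line size = 4 bytes per word << DminLine. */
static uint32_t dcache_line_size_bytes(uint32_t ctr)
{
	return 4u << ((ctr >> 16) & 0xf);
}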