From patchwork Tue Nov 6 12:41:51 2012
X-Patchwork-Submitter: Dave Martin
X-Patchwork-Id: 12680
From: Dave Martin <dave.martin@linaro.org>
To: linux-arm-kernel@lists.infradead.org
Cc: patches@linaro.org, Nicolas Pitre, Rob Herring
Subject: [PATCH] ARM: decompressor: Enable unaligned memory access for v6 and above
Date: Tue, 6 Nov 2012 12:41:51 +0000
Message-Id: <1352205711-15787-1-git-send-email-dave.martin@linaro.org>
X-Mailer: git-send-email 1.7.4.1

Modern GCC can generate code which makes use of the CPU's native
unaligned memory access capabilities.  This is useful for the C
decompressor implementations used for unpacking compressed kernels.

This patch disables alignment faults and enables the v6 unaligned
access model on CPUs which support these features (i.e., v6 and
later), allowing full unaligned access support for C code in the
decompressor.

The decompressor C code must not be built to assume that unaligned
access works if support for v5 or older platforms is included in
the kernel.
For correct code generation, C decompressor code must always use
the get_unaligned and put_unaligned accessors when dealing with
unaligned pointers, regardless of this patch.

Signed-off-by: Dave Martin
Acked-by: Nicolas Pitre
---
This is the same as the previous post, with an additional comment in
the commit message regarding the use of {get,put}_unaligned, as
suggested by Nico.

Tested on ARM1136JF-S (Integrator/CP) and ARM1176JZF-S (RealView
PB1176JZF-S).  ARM1176 is like v7 regarding the MIDR and SCTLR
alignment control bits, so this tests the v7 code path.

 arch/arm/boot/compressed/head.S |   14 +++++++++++++-
 1 files changed, 13 insertions(+), 1 deletions(-)

diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
index 90275f0..49ca86e 100644
--- a/arch/arm/boot/compressed/head.S
+++ b/arch/arm/boot/compressed/head.S
@@ -652,6 +652,15 @@ __setup_mmu:	sub	r3, r4, #16384		@ Page directory size
 		mov	pc, lr
 ENDPROC(__setup_mmu)
 
+@ Enable unaligned access on v6, to allow better code generation
+@ for the decompressor C code:
+__armv6_mmu_cache_on:
+		mrc	p15, 0, r0, c1, c0, 0	@ read SCTLR
+		bic	r0, r0, #2		@ A (no unaligned access fault)
+		orr	r0, r0, #1 << 22	@ U (v6 unaligned access model)
+		mcr	p15, 0, r0, c1, c0, 0	@ write SCTLR
+		b	__armv4_mmu_cache_on
+
 __arm926ejs_mmu_cache_on:
 #ifdef CONFIG_CPU_DCACHE_WRITETHROUGH
 		mov	r0, #4			@ put dcache in WT mode
@@ -694,6 +703,9 @@ __armv7_mmu_cache_on:
 		bic	r0, r0, #1 << 28	@ clear SCTLR.TRE
 		orr	r0, r0, #0x5000		@ I-cache enable, RR cache replacement
 		orr	r0, r0, #0x003c		@ write buffer
+		bic	r0, r0, #2		@ A (no unaligned access fault)
+		orr	r0, r0, #1 << 22	@ U (v6 unaligned access model)
+						@ (needed for ARM1176)
 #ifdef CONFIG_MMU
 #ifdef CONFIG_CPU_ENDIAN_BE8
 		orr	r0, r0, #1 << 25	@ big-endian page tables
@@ -914,7 +926,7 @@ proc_types:
 		.word	0x0007b000		@ ARMv6
 		.word	0x000ff000
-		W(b)	__armv4_mmu_cache_on
+		W(b)	__armv6_mmu_cache_on
 		W(b)	__armv4_mmu_cache_off
 		W(b)	__armv6_mmu_cache_flush