From patchwork Tue Mar 27 13:54:02 2012
X-Patchwork-Submitter: Dave Martin
X-Patchwork-Id: 7483
From: Dave Martin <dave.martin@linaro.org>
To: linux-arm-kernel@lists.infradead.org
Cc: patches@linaro.org, Stefano Stabellini, Ian Campbell, Rusty Russell,
	Christoffer Dall, Will Deacon, Marc Zyngier, Rabin Vincent, Jon Medhurst
Subject: [PATCH v2 1/4] ARM: opcodes: Don't define the thumb32 byteswapping
	macros for BE32
Date: Tue, 27 Mar 2012 14:54:02 +0100
Message-Id: <1332856445-7007-2-git-send-email-dave.martin@linaro.org>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1332856445-7007-1-git-send-email-dave.martin@linaro.org>
References: <1332856445-7007-1-git-send-email-dave.martin@linaro.org>

The existing __mem_to_opcode_thumb32() is incorrect for BE32 platforms.
However, those platforms don't support Thumb-2 kernels, so the macro is
of little use there anyway.

This operation is complicated by the lack of unaligned memory access
support prior to ARMv6.

Rather than provide a "working" macro which probably won't get used (or
worse, will get misused), this patch removes the macro for BE32 kernels.

People manipulating Thumb opcodes prior to ARMv6 should almost certainly
be splitting these operations into halfwords anyway, using
__opcode_thumb32_{first,second,compose}() and the 16-bit opcode
transformations.

Signed-off-by: Dave Martin
Acked-by: Nicolas Pitre
---
 arch/arm/include/asm/opcodes.h |   15 ++++++++++++++-
 1 files changed, 14 insertions(+), 1 deletions(-)

diff --git a/arch/arm/include/asm/opcodes.h b/arch/arm/include/asm/opcodes.h
index 19c48de..6bf54f9 100644
--- a/arch/arm/include/asm/opcodes.h
+++ b/arch/arm/include/asm/opcodes.h
@@ -49,18 +49,31 @@ extern asmlinkage unsigned int arm_check_condition(u32 opcode, u32 psr);
 #include <linux/swab.h>
 
 #ifdef CONFIG_CPU_ENDIAN_BE8
+
 #define __opcode_to_mem_arm(x) swab32(x)
 #define __opcode_to_mem_thumb16(x) swab16(x)
 #define __opcode_to_mem_thumb32(x) swahb32(x)
-#else
+
+#else /* ! CONFIG_CPU_ENDIAN_BE8 */
+
 #define __opcode_to_mem_arm(x) ((u32)(x))
 #define __opcode_to_mem_thumb16(x) ((u16)(x))
+#ifndef CONFIG_CPU_ENDIAN_BE32
+/*
+ * On BE32 systems, using 32-bit accesses to store Thumb instructions will not
+ * work in all cases, due to alignment constraints.  For now, a correct
+ * version is not provided for BE32.
+ */
 #define __opcode_to_mem_thumb32(x) swahw32(x)
 #endif
 
+#endif /* ! CONFIG_CPU_ENDIAN_BE8 */
+
 #define __mem_to_opcode_arm(x) __opcode_to_mem_arm(x)
 #define __mem_to_opcode_thumb16(x) __opcode_to_mem_thumb16(x)
+#ifndef CONFIG_CPU_ENDIAN_BE32
 #define __mem_to_opcode_thumb32(x) __opcode_to_mem_thumb32(x)
+#endif
 
 /* Operations specific to Thumb opcodes */
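
For readers unfamiliar with the halfword-splitting approach the commit message
recommends, a minimal sketch is included below.  It only assumes the
__opcode_thumb32_{first,second}() and __opcode_to_mem_thumb16() helpers named
above; the function name write_thumb32_insn() and the u16 * destination are
illustrative, and cache/coherency maintenance is deliberately omitted.

#include <linux/types.h>
#include <asm/opcodes.h>

/*
 * Sketch only: store a 32-bit Thumb-2 opcode as two halfword accesses,
 * which works regardless of 32-bit alignment and so remains usable on
 * pre-ARMv6 and BE32 configurations.
 */
static void write_thumb32_insn(u16 *addr, u32 insn)
{
	/* Split the opcode into its leading and trailing halfwords, then
	 * apply the 16-bit memory transformation to each before storing. */
	addr[0] = __opcode_to_mem_thumb16(__opcode_thumb32_first(insn));
	addr[1] = __opcode_to_mem_thumb16(__opcode_thumb32_second(insn));
}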