From patchwork Thu Mar 15 16:32:09 2012
X-Patchwork-Submitter: Dave Martin
X-Patchwork-Id: 7310
From: Dave Martin <dave.martin@linaro.org>
To: linux-arm-kernel@lists.infradead.org
Cc: patches@linaro.org, Stefano Stabellini, Ian Campbell, Rusty Russell,
	Christoffer Dall, Will Deacon, Marc Zyngier, Rabin Vincent,
	Jon Medhurst
Subject: [PATCH 1/4] ARM: opcodes: Don't define the thumb32 byteswapping macros for BE32
Date: Thu, 15 Mar 2012 16:32:09 +0000
Message-Id: <1331829132-9762-2-git-send-email-dave.martin@linaro.org>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1331829132-9762-1-git-send-email-dave.martin@linaro.org>
References: <1331829132-9762-1-git-send-email-dave.martin@linaro.org>

The existing __mem_to_opcode_thumb32() is incorrect for BE32 platforms.
However, these don't support Thumb-2 kernels, so the macro is not very
relevant for those platforms anyway.

This operation is complicated by the lack of unaligned memory access
support prior to ARMv6.

Rather than provide a "working" macro which probably won't get used (or
worse, will get misused), this patch removes the macro for BE32 kernels.

People manipulating Thumb opcodes prior to ARMv6 should almost certainly
be splitting these operations into halfwords anyway, using
__opcode_thumb32_{first,second,compose}() and the 16-bit opcode
transformations.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre
---
 arch/arm/include/asm/opcodes.h |   15 ++++++++++++++-
 1 files changed, 14 insertions(+), 1 deletions(-)

diff --git a/arch/arm/include/asm/opcodes.h b/arch/arm/include/asm/opcodes.h
index 19c48de..cf877c8 100644
--- a/arch/arm/include/asm/opcodes.h
+++ b/arch/arm/include/asm/opcodes.h
@@ -49,18 +49,31 @@ extern asmlinkage unsigned int arm_check_condition(u32 opcode, u32 psr);
 
 #include <linux/swab.h>
 
 #ifdef CONFIG_CPU_ENDIAN_BE8
+
 #define __opcode_to_mem_arm(x) swab32(x)
 #define __opcode_to_mem_thumb16(x) swab16(x)
 #define __opcode_to_mem_thumb32(x) swahb32(x)
-#else
+
+#else /* ! CONFIG_CPU_ENDIAN_BE8 */
+
 #define __opcode_to_mem_arm(x) ((u32)(x))
 #define __opcode_to_mem_thumb16(x) ((u16)(x))
+#ifndef CONFIG_CPU_ENDIAN_BE32
+/*
+ * On BE32 systems, using 32-bit accesses to store Thumb instructions will not
+ * work in all cases, due to alignment constraints.  For now, a correct
+ * version is not provided for BE32.
+ */
 #define __opcode_to_mem_thumb32(x) swahw32(x)
 #endif
 
+#endif /* ! CONFIG_CPU_ENDIAN_BE8 */
+
 #define __mem_to_opcode_arm(x) __opcode_to_mem_arm(x)
 #define __mem_to_opcode_thumb16(x) __opcode_to_mem_thumb16(x)
+#ifndef CONFIG_CPU_ENDIAN_BE32
 #define __mem_to_opcode_thumb32(x) __opcode_to_mem_thumb32(x)
+#endif
 
 /* Operations specific to Thumb opcodes */
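
A note on the byteswapping above, since the two non-trivial macros are
easy to confuse: swahb32() swaps the bytes within each halfword of a
32-bit value (the BE8 case, where instructions are stored little-endian
but read with big-endian data loads), while swahw32() swaps the two
halfwords (the LE case, where a 32-bit load reverses the order of the
two instruction halfwords). A minimal user-space sketch, using
illustrative stand-ins for the kernel's linux/swab.h helpers and an
arbitrary opcode value:

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-ins for the kernel's swahb32()/swahw32() from
 * linux/swab.h, so that this sketch builds in user space. */
static uint32_t swahb32(uint32_t x)	/* swap bytes within each halfword */
{
	return ((x & 0x00ff00ffU) << 8) | ((x & 0xff00ff00U) >> 8);
}

static uint32_t swahw32(uint32_t x)	/* swap the two halfwords */
{
	return (x << 16) | (x >> 16);
}

int main(void)
{
	uint32_t op = 0xf000b800;	/* arbitrary canonical Thumb-2 opcode value */

	/* BE8: each halfword's bytes are swapped, halfword order kept */
	printf("BE8 memory image: %08x\n", swahb32(op));	/* 00f000b8 */

	/* LE: halfword order is swapped, bytes within halfwords kept */
	printf("LE memory image:  %08x\n", swahw32(op));	/* b800f000 */

	return 0;
}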
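
For the halfword-based approach recommended in the commit message, a
minimal sketch of how a caller might store a canonical 32-bit Thumb-2
opcode using two 16-bit accesses; store_thumb32() is a hypothetical
helper shown for illustration only, while the __opcode_*() macros are
the ones from <asm/opcodes.h> referenced above:

#include <linux/types.h>
#include <asm/opcodes.h>

/*
 * Hypothetical helper: write a canonical 32-bit Thumb-2 opcode to a
 * halfword-aligned location as two 16-bit stores.  This sidesteps the
 * 32-bit alignment problem entirely, so it works on pre-ARMv6 and BE32
 * configurations where __opcode_to_mem_thumb32() is unavailable.
 */
static void store_thumb32(u16 *addr, u32 opcode)
{
	addr[0] = __opcode_to_mem_thumb16(__opcode_thumb32_first(opcode));
	addr[1] = __opcode_to_mem_thumb16(__opcode_thumb32_second(opcode));
}

As usual for code that rewrites instructions, any real caller would
still need the appropriate cache maintenance after the stores.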