From patchwork Wed Aug 8 12:23:46 2012
X-Patchwork-Submitter: Dave Martin <dave.martin@linaro.org>
X-Patchwork-Id: 10576
From: Dave Martin <dave.martin@linaro.org>
To: linux-arm-kernel@lists.infradead.org
Cc: patches@linaro.org, Stefano Stabellini, Ian Campbell, Rusty Russell,
	Christoffer Dall, Will Deacon, Marc Zyngier, Rabin Vincent,
	Jon Medhurst, Nicolas Pitre
Subject: [PATCH v2 REPOST 1/4] ARM: opcodes: Don't define the thumb32 byteswapping macros for BE32
Date: Wed, 8 Aug 2012 13:23:46 +0100
Message-Id: <1344428629-12787-2-git-send-email-dave.martin@linaro.org>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1344428629-12787-1-git-send-email-dave.martin@linaro.org>
References: <1344428629-12787-1-git-send-email-dave.martin@linaro.org>

The existing __mem_to_opcode_thumb32() is incorrect for BE32 platforms.
However, these don't support Thumb-2 kernels, so this option is not so
relevant for those platforms anyway.

This operation is complicated by the lack of unaligned memory access
support prior to ARMv6.

Rather than provide a "working" macro which probably won't get used (or
worse, will get misused), this patch removes the macro for BE32 kernels.

People manipulating Thumb opcodes prior to ARMv6 should almost certainly
be splitting these operations into halfwords anyway, using
__opcode_thumb32_{first,second,compose}() and the 16-bit opcode
transformations.

Signed-off-by: Dave Martin
Acked-by: Nicolas Pitre
---
 arch/arm/include/asm/opcodes.h |   15 ++++++++++++++-
 1 files changed, 14 insertions(+), 1 deletions(-)

diff --git a/arch/arm/include/asm/opcodes.h b/arch/arm/include/asm/opcodes.h
index 19c48de..6bf54f9 100644
--- a/arch/arm/include/asm/opcodes.h
+++ b/arch/arm/include/asm/opcodes.h
@@ -49,18 +49,31 @@ extern asmlinkage unsigned int arm_check_condition(u32 opcode, u32 psr);
 #include <linux/swab.h>
 
 #ifdef CONFIG_CPU_ENDIAN_BE8
+
 #define __opcode_to_mem_arm(x) swab32(x)
 #define __opcode_to_mem_thumb16(x) swab16(x)
 #define __opcode_to_mem_thumb32(x) swahb32(x)
-#else
+
+#else /* ! CONFIG_CPU_ENDIAN_BE8 */
+
 #define __opcode_to_mem_arm(x) ((u32)(x))
 #define __opcode_to_mem_thumb16(x) ((u16)(x))
+#ifndef CONFIG_CPU_ENDIAN_BE32
+/*
+ * On BE32 systems, using 32-bit accesses to store Thumb instructions will not
+ * work in all cases, due to alignment constraints. For now, a correct
+ * version is not provided for BE32.
+ */
 #define __opcode_to_mem_thumb32(x) swahw32(x)
 #endif
 
+#endif /* ! CONFIG_CPU_ENDIAN_BE8 */
+
 #define __mem_to_opcode_arm(x) __opcode_to_mem_arm(x)
 #define __mem_to_opcode_thumb16(x) __opcode_to_mem_thumb16(x)
+#ifndef CONFIG_CPU_ENDIAN_BE32
 #define __mem_to_opcode_thumb32(x) __opcode_to_mem_thumb32(x)
+#endif
 
 /* Operations specific to Thumb opcodes */
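
[Editor's note] As background to the last paragraph of the commit message, here is a
minimal sketch (not part of this patch) of the halfword-at-a-time approach it
recommends: the canonical 32-bit Thumb-2 opcode is split with
__opcode_thumb32_first()/__opcode_thumb32_second() and each half is stored via
__opcode_to_mem_thumb16(), so no 32-bit access to a possibly halfword-aligned
address is needed. The function name, the "addr" parameter and the cache
maintenance comment are illustrative assumptions, not code from this series.

/* Illustrative only: not part of this patch. */
#include <linux/types.h>
#include <asm/opcodes.h>

static void store_thumb32_sketch(u16 *addr, u32 insn)
{
	/* Split the canonical 32-bit Thumb-2 opcode into its two halfwords. */
	u16 first = __opcode_thumb32_first(insn);
	u16 second = __opcode_thumb32_second(insn);

	/* Each halfword store is naturally aligned, so it also works where
	 * 32-bit accesses to halfword-aligned addresses are not safe. */
	addr[0] = __opcode_to_mem_thumb16(first);
	addr[1] = __opcode_to_mem_thumb16(second);

	/* A real caller would also need I-cache/D-cache maintenance here. */
}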