From patchwork Mon Sep 3 12:49:22 2012
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dave Martin
X-Patchwork-Id: 11150
From: Dave Martin
To: patches@arm.linux.org.uk
Cc: patches@linaro.org, Dave Martin, Stefano Stabellini, Marc Zyngier
Subject: ARM: opcodes: Don't define the thumb32 byteswapping macros for BE32
Date: Mon, 3 Sep 2012 13:49:22 +0100
Message-Id: <1346676569-4085-1-git-send-email-dave.martin@linaro.org>
X-Mailer: git-send-email 1.7.4.1

The existing __mem_to_opcode_thumb32() is incorrect for BE32 platforms.
However, these don't support Thumb-2 kernels, so this option is not so
relevant for those platforms anyway.

This operation is complicated by the lack of unaligned memory access
support prior to ARMv6.

Rather than provide a "working" macro which probably won't get used (or
worse, will get misused), this patch removes the macro for BE32 kernels.

People manipulating Thumb opcodes prior to ARMv6 should almost certainly
be splitting these operations into halfwords anyway, using
__opcode_thumb32_{first,second,compose}() and the 16-bit opcode
transformations.
Signed-off-by: Dave Martin
Acked-by: Nicolas Pitre
---
KernelVersion: 3.6-rc4

diff --git a/arch/arm/include/asm/opcodes.h b/arch/arm/include/asm/opcodes.h
index 19c48de..6bf54f9 100644
--- a/arch/arm/include/asm/opcodes.h
+++ b/arch/arm/include/asm/opcodes.h
@@ -49,18 +49,31 @@ extern asmlinkage unsigned int arm_check_condition(u32 opcode, u32 psr);
 
 #include <linux/swab.h>
 
 #ifdef CONFIG_CPU_ENDIAN_BE8
+
 #define __opcode_to_mem_arm(x) swab32(x)
 #define __opcode_to_mem_thumb16(x) swab16(x)
 #define __opcode_to_mem_thumb32(x) swahb32(x)
-#else
+
+#else /* ! CONFIG_CPU_ENDIAN_BE8 */
+
 #define __opcode_to_mem_arm(x) ((u32)(x))
 #define __opcode_to_mem_thumb16(x) ((u16)(x))
+#ifndef CONFIG_CPU_ENDIAN_BE32
+/*
+ * On BE32 systems, using 32-bit accesses to store Thumb instructions will not
+ * work in all cases, due to alignment constraints. For now, a correct
+ * version is not provided for BE32.
+ */
 #define __opcode_to_mem_thumb32(x) swahw32(x)
 #endif
+#endif /* ! CONFIG_CPU_ENDIAN_BE8 */
+
 #define __mem_to_opcode_arm(x) __opcode_to_mem_arm(x)
 #define __mem_to_opcode_thumb16(x) __opcode_to_mem_thumb16(x)
+#ifndef CONFIG_CPU_ENDIAN_BE32
 #define __mem_to_opcode_thumb32(x) __opcode_to_mem_thumb32(x)
+#endif
 
 /* Operations specific to Thumb opcodes */
 