From patchwork Wed Feb 1 09:42:22 2012
X-Patchwork-Submitter: Dave Martin
X-Patchwork-Id: 6492
From: Dave Martin
To: patches@arm.linux.org.uk
Cc: patches@linaro.org, Dave Martin
Subject: [PATCH] ARM: Add generic instruction opcode manipulation helpers
Date: Wed, 1 Feb 2012 09:42:22 +0000
Message-Id: <1328089342-3703-1-git-send-email-dave.martin@linaro.org>
X-Mailer: git-send-email 1.7.4.1

This patch adds some endianness-agnostic helpers to convert machine
instructions between canonical integer form and in-memory representation.
A canonical integer form for representing instructions is also formalised
here.

Signed-off-by: Dave Martin
Acked-by: Nicolas Pitre
Tested-by: Jon Medhurst
---
KernelVersion: v3.3-rc1

diff --git a/arch/arm/include/asm/opcodes.h b/arch/arm/include/asm/opcodes.h
index c0efdd6..19c48de 100644
--- a/arch/arm/include/asm/opcodes.h
+++ b/arch/arm/include/asm/opcodes.h
@@ -17,4 +17,63 @@ extern asmlinkage unsigned int arm_check_condition(u32 opcode, u32 psr);
 #define ARM_OPCODE_CONDTEST_PASS 1
 #define ARM_OPCODE_CONDTEST_UNCOND 2
 
+
+/*
+ * Opcode byteswap helpers
+ *
+ * These macros help with converting instructions between a canonical integer
+ * format and in-memory representation, in an endianness-agnostic manner.
+ *
+ * __mem_to_opcode_*() convert from in-memory representation to canonical form.
+ * __opcode_to_mem_*() convert from canonical form to in-memory representation.
+ *
+ *
+ * Canonical instruction representation:
+ *
+ *      ARM:            0xKKLLMMNN
+ *      Thumb 16-bit:   0x0000KKLL, where KK < 0xE8
+ *      Thumb 32-bit:   0xKKLLMMNN, where KK >= 0xE8
+ *
+ * There is no way to distinguish an ARM instruction in canonical representation
+ * from a Thumb instruction (just as these cannot be distinguished in memory).
+ * Where this distinction is important, it needs to be tracked separately.
+ *
+ * Note that values in the range 0x0000E800..0xE7FFFFFF intentionally do not
+ * represent any valid Thumb-2 instruction.  For this range,
+ * __opcode_is_thumb32() and __opcode_is_thumb16() will both be false.
+ */
+
+#ifndef __ASSEMBLY__
+
+#include <linux/types.h>
+#include <linux/swab.h>
+
+#ifdef CONFIG_CPU_ENDIAN_BE8
+#define __opcode_to_mem_arm(x) swab32(x)
+#define __opcode_to_mem_thumb16(x) swab16(x)
+#define __opcode_to_mem_thumb32(x) swahb32(x)
+#else
+#define __opcode_to_mem_arm(x) ((u32)(x))
+#define __opcode_to_mem_thumb16(x) ((u16)(x))
+#define __opcode_to_mem_thumb32(x) swahw32(x)
+#endif
+
+#define __mem_to_opcode_arm(x) __opcode_to_mem_arm(x)
+#define __mem_to_opcode_thumb16(x) __opcode_to_mem_thumb16(x)
+#define __mem_to_opcode_thumb32(x) __opcode_to_mem_thumb32(x)
+
+/* Operations specific to Thumb opcodes */
+
+/* Instruction size checks: */
+#define __opcode_is_thumb32(x) ((u32)(x) >= 0xE8000000UL)
+#define __opcode_is_thumb16(x) ((u32)(x) < 0xE800UL)
+
+/* Operations to construct or split 32-bit Thumb instructions: */
+#define __opcode_thumb32_first(x) ((u16)((x) >> 16))
+#define __opcode_thumb32_second(x) ((u16)(x))
+#define __opcode_thumb32_compose(first, second) \
+	(((u32)(u16)(first) << 16) | (u32)(u16)(second))
+
+#endif /* __ASSEMBLY__ */
+
 #endif /* __ASM_ARM_OPCODES_H */
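
[Editor's note] For readers unfamiliar with these helpers, below is a minimal
usage sketch, not part of the patch itself. It shows how a caller might decode
one Thumb instruction from memory into the canonical form described in the
comment block, using only the macros added above. The read_thumb_insn() name
and the surrounding code are hypothetical and for illustration only (it also
ignores alignment and access-checking concerns a real caller would have).

#include <linux/types.h>
#include <asm/opcodes.h>

/*
 * Illustrative only: read one Thumb instruction starting at addr and
 * return it in canonical form, i.e. 0x0000KKLL for a 16-bit encoding
 * or 0xKKLLMMNN for a 32-bit encoding.
 */
static u32 read_thumb_insn(const u16 *addr)
{
	u16 first = __mem_to_opcode_thumb16(*addr);

	/* First halfwords below 0xE800 are complete 16-bit instructions. */
	if (__opcode_is_thumb16(first))
		return first;

	/* Otherwise combine both halfwords into the 32-bit canonical form. */
	return __opcode_thumb32_compose(first,
					__mem_to_opcode_thumb16(*(addr + 1)));
}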