From patchwork Wed Aug  8 12:23:47 2012
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dave Martin <dave.martin@linaro.org>
X-Patchwork-Id: 10577
From: Dave Martin <dave.martin@linaro.org>
To: linux-arm-kernel@lists.infradead.org
Cc: patches@linaro.org, Stefano Stabellini, Ian Campbell, Rusty Russell,
	Christoffer Dall, Will Deacon, Marc Zyngier, Rabin Vincent,
	Jon Medhurst, Nicolas Pitre
Subject: [PATCH v2 REPOST 2/4] ARM: opcodes: Make opcode byteswapping macros assembly-compatible
Date: Wed,  8 Aug 2012 13:23:47 +0100
Message-Id: <1344428629-12787-3-git-send-email-dave.martin@linaro.org>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1344428629-12787-1-git-send-email-dave.martin@linaro.org>
References: <1344428629-12787-1-git-send-email-dave.martin@linaro.org>

Most of the existing macros don't work with assembler, due to the use
of type casts and C functions from <linux/swab.h>.

This patch abstracts out those operations and provides simple
explicit versions for use in assembly code.

__opcode_is_thumb32() and __opcode_is_thumb16() are also converted to
do bitmask-based testing, to avoid confusion if these are used in
assembly code (the assembler typically treats all arithmetic values
as signed).

These changes avoid the need for the compiler to pre-evaluate
constant expressions used to generate opcodes.  By ensuring that the
forms of these expressions can be evaluated directly by the
assembler, we can just stringify the expressions directly into the
asm during the preprocessing pass.

The alternative approach (passing the evaluated expression via an
inline asm "i" constraint) gets painful because the contents of the
asm and the constraints must be kept in sync.  This makes the
resulting macros awkward to use.

Retaining the C forms of the macros allows more efficient code to be
generated when opcodes are generated programmatically at run-time,
but there is no way to embed run-time-generated opcodes in asm()
blocks.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre
---
 arch/arm/include/asm/opcodes.h |   97 +++++++++++++++++++++++++++++++++------
 1 files changed, 82 insertions(+), 15 deletions(-)

diff --git a/arch/arm/include/asm/opcodes.h b/arch/arm/include/asm/opcodes.h
index 6bf54f9..f57e417 100644
--- a/arch/arm/include/asm/opcodes.h
+++ b/arch/arm/include/asm/opcodes.h
@@ -19,6 +19,33 @@ extern asmlinkage unsigned int arm_check_condition(u32 opcode, u32 psr);
 
 
 /*
+ * Assembler opcode byteswap helpers.
+ * These are only intended for use by this header: don't use them directly,
+ * because they will be suboptimal in most cases.
+ */
+#define ___asm_opcode_swab32(x) (	\
+	  (((x) << 24) & 0xFF000000)	\
+	| (((x) << 8) & 0x00FF0000)	\
+	| (((x) >> 8) & 0x0000FF00)	\
+	| (((x) >> 24) & 0x000000FF)	\
+)
+#define ___asm_opcode_swab16(x) (	\
+	  (((x) << 8) & 0xFF00)		\
+	| (((x) >> 8) & 0x00FF)		\
+)
+#define ___asm_opcode_swahb32(x) (	\
+	  (((x) << 8) & 0xFF00FF00)	\
+	| (((x) >> 8) & 0x00FF00FF)	\
+)
+#define ___asm_opcode_swahw32(x) (	\
+	  (((x) << 16) & 0xFFFF0000)	\
+	| (((x) >> 16) & 0x0000FFFF)	\
+)
+#define ___asm_opcode_identity32(x) ((x) & 0xFFFFFFFF)
+#define ___asm_opcode_identity16(x) ((x) & 0xFFFF)
+
+
+/*
  * Opcode byteswap helpers
  *
  * These macros help with converting instructions between a canonical integer
@@ -41,30 +68,58 @@ extern asmlinkage unsigned int arm_check_condition(u32 opcode, u32 psr);
  * Note that values in the range 0x0000E800..0xE7FFFFFF intentionally do not
  * represent any valid Thumb-2 instruction.  For this range,
 * __opcode_is_thumb32() and __opcode_is_thumb16() will both be false.
+ *
+ * The ___asm variants are intended only for use by this header, in situations
+ * involving inline assembler.  For .S files, the normal __opcode_*() macros
+ * should do the right thing.
  */
 
-#ifndef __ASSEMBLY__
+#ifdef __ASSEMBLY__
+
+#define ___opcode_swab32(x) ___asm_opcode_swab32(x)
+#define ___opcode_swab16(x) ___asm_opcode_swab16(x)
+#define ___opcode_swahb32(x) ___asm_opcode_swahb32(x)
+#define ___opcode_swahw32(x) ___asm_opcode_swahw32(x)
+#define ___opcode_identity32(x) ___asm_opcode_identity32(x)
+#define ___opcode_identity16(x) ___asm_opcode_identity16(x)
+
+#else /* ! __ASSEMBLY__ */
 
 #include <linux/types.h>
 #include <linux/swab.h>
 
+#define ___opcode_swab32(x) swab32(x)
+#define ___opcode_swab16(x) swab16(x)
+#define ___opcode_swahb32(x) swahb32(x)
+#define ___opcode_swahw32(x) swahw32(x)
+#define ___opcode_identity32(x) ((u32)(x))
+#define ___opcode_identity16(x) ((u16)(x))
+
+#endif /* ! __ASSEMBLY__ */
+
+
 #ifdef CONFIG_CPU_ENDIAN_BE8
-#define __opcode_to_mem_arm(x) swab32(x)
-#define __opcode_to_mem_thumb16(x) swab16(x)
-#define __opcode_to_mem_thumb32(x) swahb32(x)
+#define __opcode_to_mem_arm(x) ___opcode_swab32(x)
+#define __opcode_to_mem_thumb16(x) ___opcode_swab16(x)
+#define __opcode_to_mem_thumb32(x) ___opcode_swahb32(x)
+#define ___asm_opcode_to_mem_arm(x) ___asm_opcode_swab32(x)
+#define ___asm_opcode_to_mem_thumb16(x) ___asm_opcode_swab16(x)
+#define ___asm_opcode_to_mem_thumb32(x) ___asm_opcode_swahb32(x)
 #else /* ! CONFIG_CPU_ENDIAN_BE8 */
-#define __opcode_to_mem_arm(x) ((u32)(x))
-#define __opcode_to_mem_thumb16(x) ((u16)(x))
+#define __opcode_to_mem_arm(x) ___opcode_identity32(x)
+#define __opcode_to_mem_thumb16(x) ___opcode_identity16(x)
+#define ___asm_opcode_to_mem_arm(x) ___asm_opcode_identity32(x)
+#define ___asm_opcode_to_mem_thumb16(x) ___asm_opcode_identity16(x)
 #ifndef CONFIG_CPU_ENDIAN_BE32
 /*
  * On BE32 systems, using 32-bit accesses to store Thumb instructions will not
  * work in all cases, due to alignment constraints.  For now, a correct
  * version is not provided for BE32.
  */
-#define __opcode_to_mem_thumb32(x) swahw32(x)
+#define __opcode_to_mem_thumb32(x) ___opcode_swahw32(x)
+#define ___asm_opcode_to_mem_thumb32(x) ___asm_opcode_swahw32(x)
 #endif
 
 #endif /* ! CONFIG_CPU_ENDIAN_BE8 */
@@ -78,15 +133,27 @@ extern asmlinkage unsigned int arm_check_condition(u32 opcode, u32 psr);
 /* Operations specific to Thumb opcodes */
 
 /* Instruction size checks: */
-#define __opcode_is_thumb32(x) ((u32)(x) >= 0xE8000000UL)
-#define __opcode_is_thumb16(x) ((u32)(x) < 0xE800UL)
+#define __opcode_is_thumb32(x) (	\
+	   ((x) & 0xF8000000) == 0xE8000000	\
+	|| ((x) & 0xF0000000) == 0xF0000000	\
+)
+#define __opcode_is_thumb16(x) (	\
+	   ((x) & 0xFFFF0000) == 0	\
+	&& !(((x) & 0xF800) == 0xE800 || ((x) & 0xF000) == 0xF000)	\
+)
 
 /* Operations to construct or split 32-bit Thumb instructions: */
-#define __opcode_thumb32_first(x) ((u16)((x) >> 16))
-#define __opcode_thumb32_second(x) ((u16)(x))
-#define __opcode_thumb32_compose(first, second) \
-	(((u32)(u16)(first) << 16) | (u32)(u16)(second))
-
-#endif /* __ASSEMBLY__ */
+#define __opcode_thumb32_first(x) (___opcode_identity16((x) >> 16))
+#define __opcode_thumb32_second(x) (___opcode_identity16(x))
+#define __opcode_thumb32_compose(first, second) (	\
+	  (___opcode_identity32(___opcode_identity16(first)) << 16)	\
+	| ___opcode_identity32(___opcode_identity16(second))	\
+)
+#define ___asm_opcode_thumb32_first(x) (___asm_opcode_identity16((x) >> 16))
+#define ___asm_opcode_thumb32_second(x) (___asm_opcode_identity16(x))
+#define ___asm_opcode_thumb32_compose(first, second) (	\
+	  (___asm_opcode_identity32(___asm_opcode_identity16(first)) << 16)	\
+	| ___asm_opcode_identity32(___asm_opcode_identity16(second))	\
+)
 
 #endif /* __ASM_ARM_OPCODES_H */
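
As a usage sketch of the stringification described above (not part of
the patch itself): a constant Thumb-2 opcode can be stringified
straight into an asm() block, because the ___asm_opcode_* helpers
expand to pure shift/mask/or arithmetic that the assembler can
evaluate itself.  The EMIT_THUMB32() wrapper name and the opcode value
below are invented for illustration; only the ___asm_opcode_* helpers
come from this header:

#include <linux/stringify.h>
#include <asm/opcodes.h>

/*
 * Illustrative only: plant a 32-bit Thumb instruction, given in
 * canonical integer form, directly in the instruction stream.  The
 * stringified expressions are plain arithmetic, so no inline asm
 * "i" constraint plumbing is needed.
 */
#define EMIT_THUMB32(opcode)						\
	asm volatile(							\
		".short " __stringify(___asm_opcode_thumb32_first(opcode)) "\n\t" \
		".short " __stringify(___asm_opcode_thumb32_second(opcode)))

static inline void emit_wide_nop(void)
{
	EMIT_THUMB32(0xF3AF8000);	/* nop.w */
}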
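
On the C side, the retained C forms cover run-time-generated opcodes.
Again a sketch, with the function and patching context invented for
illustration; storing halfword-by-halfword sidesteps the 32-bit
access caveat noted in the header comment:

#include <linux/types.h>
#include <asm/opcodes.h>

/*
 * Illustrative only: compose a 32-bit Thumb instruction at run-time
 * and store it in in-memory representation, one halfword at a time.
 */
static void write_thumb32(u16 *addr, u16 first, u16 second)
{
	u32 insn = __opcode_thumb32_compose(first, second);

	if (!__opcode_is_thumb32(insn))
		return;		/* not a valid 32-bit encoding */

	addr[0] = __opcode_to_mem_thumb16(__opcode_thumb32_first(insn));
	addr[1] = __opcode_to_mem_thumb16(__opcode_thumb32_second(insn));
}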