From patchwork Fri Jun 3 03:26:16 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Long
X-Patchwork-Id: 69213
From: David Long <dave.long@linaro.org>
To: Catalin Marinas, Huang Shijie, James Morse, Marc Zyngier,
        Pratyush Anand, Sandeepa Prabhu, Will Deacon, William Cohen,
        linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
        Steve Capper, Masami Hiramatsu, Li Bin
Cc: Adam Buchbinder, Alex Bennée, Andrew Morton, Andrey Ryabinin,
        Ard Biesheuvel, Christoffer Dall, Daniel Thompson, Dave P Martin,
        Jens Wiklander, Jisheng Zhang, John Blackwood, Mark Rutland,
        Petr Mladek, Robin Murphy, Suzuki K Poulose, Vladimir Murzin,
        Yang Shi, Zi Shen Lim, yalin wang, Mark Brown
Subject: [PATCH v13 02/10] arm64: Add more test functions to insn.c
Date: Thu, 2 Jun 2016 23:26:16 -0400
Message-Id: <1464924384-15269-3-git-send-email-dave.long@linaro.org>
X-Mailer: git-send-email 2.5.0
In-Reply-To: <1464924384-15269-1-git-send-email-dave.long@linaro.org>
References: <1464924384-15269-1-git-send-email-dave.long@linaro.org>

From: "David A. Long" <dave.long@linaro.org>

Certain instructions are hard to execute correctly out-of-line (as in
kprobes). Test functions are added to insn.[hc] to identify these. The
instructions include any that use PC-relative addressing, change the PC,
or change interrupt masking. For efficiency and simplicity test functions
are also added for small collections of related instructions.

Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm64/include/asm/insn.h | 36 ++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/insn.c      | 34 ++++++++++++++++++++++++++++++++++
 2 files changed, 70 insertions(+)

--
2.5.0

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index 30e50eb..9785d10 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -120,6 +120,29 @@ enum aarch64_insn_register {
         AARCH64_INSN_REG_SP = 31  /* Stack pointer: as load/store base reg */
 };
 
+enum aarch64_insn_special_register {
+        AARCH64_INSN_SPCLREG_SPSR_EL1   = 0xC200,
+        AARCH64_INSN_SPCLREG_ELR_EL1    = 0xC201,
+        AARCH64_INSN_SPCLREG_SP_EL0     = 0xC208,
+        AARCH64_INSN_SPCLREG_SPSEL      = 0xC210,
+        AARCH64_INSN_SPCLREG_CURRENTEL  = 0xC212,
+        AARCH64_INSN_SPCLREG_DAIF       = 0xDA11,
+        AARCH64_INSN_SPCLREG_NZCV       = 0xDA10,
+        AARCH64_INSN_SPCLREG_FPCR       = 0xDA20,
+        AARCH64_INSN_SPCLREG_DSPSR_EL0  = 0xDA28,
+        AARCH64_INSN_SPCLREG_DLR_EL0    = 0xDA29,
+        AARCH64_INSN_SPCLREG_SPSR_EL2   = 0xE200,
+        AARCH64_INSN_SPCLREG_ELR_EL2    = 0xE201,
+        AARCH64_INSN_SPCLREG_SP_EL1     = 0xE208,
+        AARCH64_INSN_SPCLREG_SPSR_INQ   = 0xE218,
+        AARCH64_INSN_SPCLREG_SPSR_ABT   = 0xE219,
+        AARCH64_INSN_SPCLREG_SPSR_UND   = 0xE21A,
+        AARCH64_INSN_SPCLREG_SPSR_FIQ   = 0xE21B,
+        AARCH64_INSN_SPCLREG_SPSR_EL3   = 0xF200,
+        AARCH64_INSN_SPCLREG_ELR_EL3    = 0xF201,
+        AARCH64_INSN_SPCLREG_SP_EL2     = 0xF210
+};
+
 enum aarch64_insn_variant {
         AARCH64_INSN_VARIANT_32BIT,
         AARCH64_INSN_VARIANT_64BIT
@@ -223,8 +246,13 @@ static __always_inline bool aarch64_insn_is_##abbr(u32 code) \
 static __always_inline u32 aarch64_insn_get_##abbr##_value(void) \
 { return (val); }
 
+__AARCH64_INSN_FUNCS(adr_adrp, 0x1F000000, 0x10000000)
+__AARCH64_INSN_FUNCS(prfm_lit, 0xFF000000, 0xD8000000)
 __AARCH64_INSN_FUNCS(str_reg, 0x3FE0EC00, 0x38206800)
 __AARCH64_INSN_FUNCS(ldr_reg, 0x3FE0EC00, 0x38606800)
+__AARCH64_INSN_FUNCS(ldr_lit, 0xBF000000, 0x18000000)
+__AARCH64_INSN_FUNCS(ldrsw_lit, 0xFF000000, 0x98000000)
+__AARCH64_INSN_FUNCS(exclusive, 0x3F800000, 0x08000000)
 __AARCH64_INSN_FUNCS(stp_post, 0x7FC00000, 0x28800000)
 __AARCH64_INSN_FUNCS(ldp_post, 0x7FC00000, 0x28C00000)
 __AARCH64_INSN_FUNCS(stp_pre, 0x7FC00000, 0x29800000)
@@ -273,10 +301,15 @@ __AARCH64_INSN_FUNCS(svc, 0xFFE0001F, 0xD4000001)
 __AARCH64_INSN_FUNCS(hvc, 0xFFE0001F, 0xD4000002)
 __AARCH64_INSN_FUNCS(smc, 0xFFE0001F, 0xD4000003)
 __AARCH64_INSN_FUNCS(brk, 0xFFE0001F, 0xD4200000)
+__AARCH64_INSN_FUNCS(exception, 0xFF000000, 0xD4000000)
 __AARCH64_INSN_FUNCS(hint, 0xFFFFF01F, 0xD503201F)
 __AARCH64_INSN_FUNCS(br, 0xFFFFFC1F, 0xD61F0000)
 __AARCH64_INSN_FUNCS(blr, 0xFFFFFC1F, 0xD63F0000)
 __AARCH64_INSN_FUNCS(ret, 0xFFFFFC1F, 0xD65F0000)
+__AARCH64_INSN_FUNCS(eret, 0xFFFFFFFF, 0xD69F03E0)
+__AARCH64_INSN_FUNCS(mrs, 0xFFF00000, 0xD5300000)
+__AARCH64_INSN_FUNCS(msr_imm, 0xFFF8F01F, 0xD500401F)
+__AARCH64_INSN_FUNCS(msr_reg, 0xFFF00000, 0xD5100000)
 
 #undef __AARCH64_INSN_FUNCS
 
@@ -286,6 +319,8 @@ bool aarch64_insn_is_branch_imm(u32 insn);
 int aarch64_insn_read(void *addr, u32 *insnp);
 int aarch64_insn_write(void *addr, u32 insn);
 enum aarch64_insn_encoding_class aarch64_get_insn_class(u32 insn);
+bool aarch64_insn_uses_literal(u32 insn);
+bool aarch64_insn_is_branch(u32 insn);
 u64 aarch64_insn_decode_immediate(enum aarch64_insn_imm_type type, u32 insn);
 u32 aarch64_insn_encode_immediate(enum aarch64_insn_imm_type type,
                                   u32 insn, u64 imm);
@@ -367,6 +402,7 @@ bool aarch32_insn_is_wide(u32 insn);
 #define A32_RT_OFFSET 12
 #define A32_RT2_OFFSET 0
 
+u32 aarch64_insn_extract_system_reg(u32 insn);
 u32 aarch32_insn_extract_reg_num(u32 insn, int offset);
 u32 aarch32_insn_mcr_extract_opc2(u32 insn);
 u32 aarch32_insn_mcr_extract_crm(u32 insn);
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 368c082..28c6110f 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -162,6 +162,32 @@ static bool __kprobes __aarch64_insn_hotpatch_safe(u32 insn)
                aarch64_insn_is_nop(insn);
 }
 
+bool __kprobes aarch64_insn_uses_literal(u32 insn)
+{
+        /* ldr/ldrsw (literal), prfm */
+
+        return aarch64_insn_is_ldr_lit(insn) ||
+                aarch64_insn_is_ldrsw_lit(insn) ||
+                aarch64_insn_is_adr_adrp(insn) ||
+                aarch64_insn_is_prfm_lit(insn);
+}
+
+bool __kprobes aarch64_insn_is_branch(u32 insn)
+{
+        /* b, bl, cb*, tb*, b.cond, br, blr */
+
+        return aarch64_insn_is_b(insn) ||
+                aarch64_insn_is_bl(insn) ||
+                aarch64_insn_is_cbz(insn) ||
+                aarch64_insn_is_cbnz(insn) ||
+                aarch64_insn_is_tbz(insn) ||
+                aarch64_insn_is_tbnz(insn) ||
+                aarch64_insn_is_ret(insn) ||
+                aarch64_insn_is_br(insn) ||
+                aarch64_insn_is_blr(insn) ||
+                aarch64_insn_is_bcond(insn);
+}
+
 /*
  * ARM Architecture Reference Manual for ARMv8 Profile-A, Issue A.a
  * Section B2.6.5 "Concurrent modification and execution of instructions":
@@ -1175,6 +1201,14 @@ u32 aarch64_set_branch_offset(u32 insn, s32 offset)
         BUG();
 }
 
+/*
+ * Extract the Op/CR data from a msr/mrs instruction.
+ */
+u32 aarch64_insn_extract_system_reg(u32 insn)
+{
+        return (insn & 0x1FFFE0) >> 5;
+}
+
 bool aarch32_insn_is_wide(u32 insn)
 {
         return insn >= 0xe800;
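
Editorial illustration (not part of the submitted patch): the __AARCH64_INSN_FUNCS
tests above work by ANDing the 32-bit opcode with a mask and comparing against a
fixed value, and aarch64_insn_extract_system_reg() pulls bits [20:5]
(op0:op1:CRn:CRm:op2) out of an MRS/MSR encoding so the result can be compared
against the aarch64_insn_special_register values. The standalone userspace sketch
below mirrors that arithmetic outside the kernel; the helper names and the sample
encoding 0xd53b4200 (assumed here to be "mrs x0, nzcv") are illustrative and not
part of the kernel API.

/*
 * Standalone sketch mirroring the patch's mrs mask/value check and
 * system-register field extraction.  Build with any C compiler.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SPCLREG_NZCV 0xDA10   /* AARCH64_INSN_SPCLREG_NZCV in the patch */

/* Same mask/value pair as __AARCH64_INSN_FUNCS(mrs, 0xFFF00000, 0xD5300000) */
static bool insn_is_mrs(uint32_t insn)
{
        return (insn & 0xFFF00000u) == 0xD5300000u;
}

/*
 * Same bit twiddling as aarch64_insn_extract_system_reg() in the patch:
 * bits [20:5] hold the op0:op1:CRn:CRm:op2 system-register identifier.
 */
static uint32_t insn_extract_system_reg(uint32_t insn)
{
        return (insn & 0x1FFFE0u) >> 5;
}

int main(void)
{
        uint32_t insn = 0xd53b4200;   /* assumed encoding of "mrs x0, nzcv" */

        assert(insn_is_mrs(insn));
        assert(insn_extract_system_reg(insn) == SPCLREG_NZCV);
        printf("system register field: 0x%x\n",
               (unsigned int)insn_extract_system_reg(insn));
        return 0;
}

The same mask/value idea underlies aarch64_insn_uses_literal() and
aarch64_insn_is_branch(): as the commit message notes, such PC-relative and
PC-changing instructions are hard to execute out-of-line, so later kprobes
patches can presumably use these predicates to reject or simulate them instead
of single-stepping a copied instruction.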