From patchwork Wed Aug 31 20:52:22 2016
X-Patchwork-Submitter: David Long
X-Patchwork-Id: 75123
From: David Long <dave.long@linaro.org>
To: Masami Hiramatsu, Ananth N Mavinakayanahalli, Anil S Keshavamurthy,
	"David S. Miller", Will Deacon, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com,
	Sandeepa Prabhu, William Cohen, Pratyush Anand
Cc: Mark Brown
Subject: [PATCH] arm64: Improve kprobes test for atomic sequence
Date: Wed, 31 Aug 2016 16:52:22 -0400
Message-Id: <1472676742-2250-1-git-send-email-dave.long@linaro.org>
X-Mailer: git-send-email 2.5.0

From: "David A. Long" <dave.long@linaro.org>

Kprobes searches backwards a finite number of instructions to determine if
there is an attempt to probe a load/store exclusive sequence. It stops when
it hits the maximum number of instructions or a load or store exclusive.
However, this means it can run up past the beginning of the function and
start looking at literal constants.

This has been shown to cause a false positive and block insertion of the
probe. To fix this, add a test to see if the typical "stp x29, x30,
[sp, #n]!" instruction that begins a function gets hit. This also improves
efficiency by not testing code that is not part of the function. There is
some possibility that a function will not begin with this instruction, in
which case the fixed code will behave no worse than before. There could
also be a case where the stp instruction is found further in the body of
the function, which could theoretically allow probing of an atomic
sequence. The likelihood of this seems low, and this would not be the only
aspect of kprobes where the user needs to be careful to avoid problems.

Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm64/kernel/probes/decode-insn.c | 25 ++++++++++++++++++-------
 1 file changed, 18 insertions(+), 7 deletions(-)

-- 
2.5.0

diff --git a/arch/arm64/kernel/probes/decode-insn.c b/arch/arm64/kernel/probes/decode-insn.c
index 37e47a9..248e820 100644
--- a/arch/arm64/kernel/probes/decode-insn.c
+++ b/arch/arm64/kernel/probes/decode-insn.c
@@ -122,16 +122,28 @@ arm_probe_decode_insn(kprobe_opcode_t insn, struct arch_specific_insn *asi)
 static bool __kprobes
 is_probed_address_atomic(kprobe_opcode_t *scan_start, kprobe_opcode_t *scan_end)
 {
+	const u32 stp_x29_x30_sp_pre = 0xa9807bfd;
+	const u32 stp_ignore_index_mask = 0xffc07fff;
+	u32 instruction = le32_to_cpu(*scan_start);
+
 	while (scan_start > scan_end) {
 		/*
-		 * atomic region starts from exclusive load and ends with
-		 * exclusive store.
+		 * Atomic region starts from exclusive load and ends with
+		 * exclusive store. If we hit a "stp x29, x30, [sp, #n]!"
+		 * assume it is the beginning of the function and end the
+		 * search. This helps avoid false positives from literal
+		 * constants that look like a load-exclusive, in addition
+		 * to being more efficient.
 		 */
-		if (aarch64_insn_is_store_ex(le32_to_cpu(*scan_start)))
+		if ((instruction & stp_ignore_index_mask) == stp_x29_x30_sp_pre)
 			return false;
-		else if (aarch64_insn_is_load_ex(le32_to_cpu(*scan_start)))
-			return true;
+
 		scan_start--;
+		instruction = le32_to_cpu(*scan_start);
+		if (aarch64_insn_is_store_ex(instruction))
+			return false;
+		else if (aarch64_insn_is_load_ex(instruction))
+			return true;
 	}
 
 	return false;
@@ -142,7 +154,6 @@ arm_kprobe_decode_insn(kprobe_opcode_t *addr, struct arch_specific_insn *asi)
 {
 	enum kprobe_insn decoded;
 	kprobe_opcode_t insn = le32_to_cpu(*addr);
-	kprobe_opcode_t *scan_start = addr - 1;
 	kprobe_opcode_t *scan_end = addr - MAX_ATOMIC_CONTEXT_SIZE;
 #if defined(CONFIG_MODULES) && defined(MODULES_VADDR)
 	struct module *mod;
@@ -167,7 +178,7 @@ arm_kprobe_decode_insn(kprobe_opcode_t *addr, struct arch_specific_insn *asi)
 	decoded = arm_probe_decode_insn(insn, asi);
 
 	if (decoded == INSN_REJECTED ||
-	    is_probed_address_atomic(scan_start, scan_end))
+	    is_probed_address_atomic(addr, scan_end))
 		return INSN_REJECTED;
 
 	return decoded;
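
For reference, a quick way to convince oneself that the mask/value pair
above matches "stp x29, x30, [sp, #n]!" for any pre-index offset n is the
stand-alone user-space sketch below (not part of the patch). The
encode_stp_pre() helper is hypothetical, written only for this
illustration; the two constants mirror stp_x29_x30_sp_pre and
stp_ignore_index_mask from the diff.

#include <stdint.h>
#include <stdio.h>

#define STP_X29_X30_SP_PRE	0xa9807bfdu	/* encoding with imm7 == 0 */
#define STP_IGNORE_INDEX_MASK	0xffc07fffu	/* clears imm7, bits [21:15] */

/* Encode "stp x29, x30, [sp, #offset]!" for a given byte offset. */
static uint32_t encode_stp_pre(int offset)
{
	uint32_t imm7 = ((uint32_t)(offset / 8)) & 0x7f;	/* scaled by 8 */

	/* 64-bit store-pair, pre-index opcode bits, plus Rt2/Rn/Rt fields. */
	return 0xa9800000u | (imm7 << 15) | (30u << 10) | (31u << 5) | 29u;
}

int main(void)
{
	int offsets[] = { -16, -32, -96, -512 };
	unsigned int i;

	for (i = 0; i < sizeof(offsets) / sizeof(offsets[0]); i++) {
		uint32_t insn = encode_stp_pre(offsets[i]);

		printf("stp x29, x30, [sp, #%d]! -> 0x%08x matches: %d\n",
		       offsets[i], (unsigned int)insn,
		       (insn & STP_IGNORE_INDEX_MASK) == STP_X29_X30_SP_PRE);
	}
	return 0;
}

For the common prologue "stp x29, x30, [sp, #-16]!" this prints 0xa9bf7bfd,
which masks down to 0xa9807bfd and matches; only the imm7 field differs
between prologues with different frame sizes.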