From patchwork Tue Oct 15 21:04:18 2013
X-Patchwork-Submitter: David Long
X-Patchwork-Id: 21048
From: David Long
To: linux-arm-kernel@lists.infradead.org
Cc: Rabin Vincent, "Jon Medhurst (Tixy)", Oleg Nesterov, Srikar Dronamraju,
	Ingo Molnar, linux-kernel@vger.kernel.org
Subject: [PATCH v2 03/13] uprobes: allow arch access to xol slot
Date: Tue, 15 Oct 2013 17:04:18 -0400
Message-Id: <1381871068-27660-4-git-send-email-dave.long@linaro.org>
X-Mailer: git-send-email 1.8.1.2
In-Reply-To: <1381871068-27660-1-git-send-email-dave.long@linaro.org>
References: <1381871068-27660-1-git-send-email-dave.long@linaro.org>

From: "David A. Long"

Allow arches to customize how the instruction is filled into the xol
slot.  ARM will use this to insert an undefined instruction after the
real instruction in order to simulate a single step of the instruction
without hardware support.

Signed-off-by: Rabin Vincent
Signed-off-by: David A. Long
---
 include/linux/uprobes.h |  1 +
 kernel/events/uprobes.c | 10 +++++++++-
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index 80116c9..2556ab6 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -134,6 +134,7 @@ extern int arch_uprobe_exception_notify(struct notifier_block *self, unsigned l
 extern void arch_uprobe_abort_xol(struct arch_uprobe *aup, struct pt_regs *regs);
 extern unsigned long arch_uretprobe_hijack_return_addr(unsigned long trampoline_vaddr, struct pt_regs *regs);
 extern bool __weak arch_uprobe_ignore(struct arch_uprobe *aup, struct pt_regs *regs);
+extern void __weak arch_uprobe_xol_copy(struct arch_uprobe *auprobe, void *vaddr);
 #else /* !CONFIG_UPROBES */
 struct uprobes_state {
 };
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 3955172..22d0121 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1238,6 +1238,11 @@ static unsigned long xol_take_insn_slot(struct xol_area *area)
 	return slot_addr;
 }
 
+void __weak arch_uprobe_xol_copy(struct arch_uprobe *auprobe, void *vaddr)
+{
+	memcpy(vaddr, auprobe->insn, MAX_UINSN_BYTES);
+}
+
 /*
  * xol_get_insn_slot - allocate a slot for xol.
  * Returns the allocated slot address or 0.
@@ -1246,6 +1251,7 @@ static unsigned long xol_get_insn_slot(struct uprobe *uprobe)
 {
 	struct xol_area *area;
 	unsigned long xol_vaddr;
+	void *kaddr;
 
 	area = get_xol_area();
 	if (!area)
@@ -1256,7 +1262,9 @@ static unsigned long xol_get_insn_slot(struct uprobe *uprobe)
 		return 0;
 
 	/* Initialize the slot */
-	copy_to_page(area->page, xol_vaddr, uprobe->arch.insn, MAX_UINSN_BYTES);
+	kaddr = kmap_atomic(area->page);
+	arch_uprobe_xol_copy(&uprobe->arch, kaddr + (xol_vaddr & ~PAGE_MASK));
+	kunmap_atomic(kaddr);
 	/*
 	 * We probably need flush_icache_user_range() but it needs vma.
 	 * This should work on supported architectures too.
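
For context, the sketch below shows the kind of override an architecture
could supply for the new weak hook.  It is illustration only, not part of
this patch: UPROBE_SS_BREAK_INSN is a placeholder encoding, and it assumes
the xol slot leaves room for one extra 32-bit word after the copied
instruction.  The real ARM implementation later in this series uses its own
encoding and slot layout.

/*
 * Illustrative override of the weak hook -- not part of this patch.
 * UPROBE_SS_BREAK_INSN is a placeholder value from the ARM
 * permanently-undefined instruction space; the real port defines its own.
 */
#include <linux/types.h>
#include <linux/string.h>
#include <linux/uprobes.h>

#define UPROBE_SS_BREAK_INSN	0xe7f001f9	/* placeholder undef encoding */

void arch_uprobe_xol_copy(struct arch_uprobe *auprobe, void *vaddr)
{
	u32 *slot = vaddr;

	/* Copy the (possibly rewritten) probed instruction into the slot. */
	memcpy(slot, auprobe->insn, MAX_UINSN_BYTES);

	/*
	 * Plant an undefined instruction directly after it, so execution of
	 * the slot traps back into the kernel once the single probed
	 * instruction has run, simulating a hardware single step.
	 */
	slot[MAX_UINSN_BYTES / sizeof(u32)] = UPROBE_SS_BREAK_INSN;
}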