From patchwork Wed Apr 16 01:42:39 2014
X-Patchwork-Submitter: vkamensky
X-Patchwork-Id: 28438
In-Reply-To: <20140415.155330.2253279045572024595.davem@davemloft.net>
References: <534D6A1F.70102@linaro.org> <20140415174330.GA10558@redhat.com> <534D8AE6.3090609@linaro.org> <20140415.155330.2253279045572024595.davem@davemloft.net>
Date: Tue, 15 Apr 2014 18:42:39 -0700
Subject: Re: [RFC PATCH] uprobes: copy to user-space xol page with proper cache flushing
From: Victor Kamensky
To: David Miller
Cc: Jon Medhurst, "linaro-kernel@lists.linaro.org", Russell King - ARM Linux, Ananth N Mavinakayanahalli, Peter Zijlstra, Taras Kondratiuk, Oleg Nesterov, rabin@rab.in, Dave Long, Linus Torvalds, Dave Martin, "linux-arm-kernel@lists.infradead.org"
On 15 April 2014 12:53, David Miller wrote:
> From: David Long
> Date: Tue, 15 Apr 2014 15:39:18 -0400
>
>> Yes, I have that coded for 14 out of 24 architectures (the easy
>> ones). The remaining ones need more work. Some may prove problematic.
>> The whole approach could yet prove impractical. More recent suggested
>> approaches may be better too (or Linus's relatively simple change).
>
> BTW, another reason perhaps to write directly to userspace and have
> a uprobes_*() specific interface is that you'll only need to do
> 3 architectures, the ones that support uprobes.
>
> That way, this is just part of what every arch needs to implement in
> order to support uprobes.

David, Russell,

I hope I correctly understood your idea of writing directly into user
space. Please check the patch below; in my tests it works fine. Look
for arch_uprobe_copy_ixol in arch/arm/kernel/uprobes.c.

However, I ran into an issue while trying this: I had to add VM_WRITE
to the xol area mapping, i.e. the area must be both writable and
executable for put_user or __copy_to_user to work. It does not seem
right to have such a mapping inside a user-space process; it looks
like a possible exploitation point, especially since the xol page is
left around even after tracing is complete. I feel very uneasy about
this direction, unless I am missing something and there is another way
to do it.

Another concern about this approach: will the flush_cache_user_range
function take care of the SMP case where remote snooping is not
supported?
I think the use case Russell mentioned, self-modifying code and the
special system call, implies yes.

From e325a1a1bdddc7bd95301f0031d868bc69ddcddb Mon Sep 17 00:00:00 2001
From: Victor Kamensky
Date: Mon, 7 Apr 2014 17:57:36 -0700
Subject: [PATCH] ARM: uprobes need icache flush after xol write

After an instruction write into the xol area, on the ARMv7
architecture the code needs to flush the dcache and icache to sync
them up for the given set of addresses. Having just the
'flush_dcache_page(page)' call is not enough; it is possible to have a
stale instruction sitting in the icache for the given xol area slot
address.

Introduce an arch_uprobe_copy_ixol weak function that by default calls
the uprobes copy_to_page function and then flush_dcache_page, and on
ARM define a new one that handles the xol slot copy in an ARM-specific
way.

The ARM implementation of arch_uprobe_copy_ixol simply does a
__copy_to_user, which does not have dcache aliasing issues, and then a
flush_cache_user_range to push the dcache out and invalidate the
corresponding icache entries.

Note that in order to write into the uprobes xol area, VM_WRITE had to
be added to the xol area mapping.

Signed-off-by: Victor Kamensky
---
 arch/arm/kernel/uprobes.c |  8 ++++++++
 include/linux/uprobes.h   |  3 +++
 kernel/events/uprobes.c   | 27 ++++++++++++++++++---------
 3 files changed, 29 insertions(+), 9 deletions(-)

@@ -1296,14 +1296,8 @@ static unsigned long xol_get_insn_slot(struct uprobe *uprobe)
 	if (unlikely(!xol_vaddr))
 		return 0;
 
-	/* Initialize the slot */
-	copy_to_page(area->page, xol_vaddr,
-			&uprobe->arch.ixol, sizeof(uprobe->arch.ixol));
-	/*
-	 * We probably need flush_icache_user_range() but it needs vma.
-	 * This should work on supported architectures too.
-	 */
-	flush_dcache_page(area->page);
+	arch_uprobe_copy_ixol(area->page, xol_vaddr,
+			&uprobe->arch.ixol, sizeof(uprobe->arch.ixol));
 
 	return xol_vaddr;
 }
@@ -1346,6 +1340,21 @@ static void xol_free_insn_slot(struct task_struct *tsk)
 	}
 }
 
+void __weak arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
+				  void *src, unsigned long len)
+{
+	/* Initialize the slot */
+	copy_to_page(page, vaddr, src, len);
+
+	/*
+	 * We probably need flush_icache_user_range() but it needs vma.
+	 * This should work on most of architectures by default. If
+	 * architecture needs to do something different it can define
+	 * its own version of the function.
+	 */
+	flush_dcache_page(page);
+}
+
 /**
  * uprobe_get_swbp_addr - compute address of swbp given post-swbp regs
  * @regs: Reflects the saved state of the task after it has hit a breakpoint
diff --git a/arch/arm/kernel/uprobes.c b/arch/arm/kernel/uprobes.c
index f9bacee..4836e54 100644
--- a/arch/arm/kernel/uprobes.c
+++ b/arch/arm/kernel/uprobes.c
@@ -113,6 +113,14 @@ int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe, struct mm_struct *mm,
 	return 0;
 }
 
+void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
+			   void *src, unsigned long len)
+{
+	if (!__copy_to_user((void *) vaddr, src, len))
+		flush_cache_user_range(vaddr, len);
+}
+
+
 int arch_uprobe_pre_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
 {
 	struct uprobe_task *utask = current->utask;
diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index edff2b9..c52f827 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -32,6 +32,7 @@ struct vm_area_struct;
 struct mm_struct;
 struct inode;
 struct notifier_block;
+struct page;
 
 #define UPROBE_HANDLER_REMOVE		1
 #define UPROBE_HANDLER_MASK		1
@@ -127,6 +128,8 @@ extern int arch_uprobe_exception_notify(struct notifier_block *self, unsigned l
 extern void arch_uprobe_abort_xol(struct arch_uprobe *aup, struct pt_regs *regs);
 extern unsigned long arch_uretprobe_hijack_return_addr(unsigned long trampoline_vaddr, struct pt_regs *regs);
 extern bool __weak arch_uprobe_ignore(struct arch_uprobe *aup, struct pt_regs *regs);
+extern void __weak arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
+					 void *src, unsigned long len);
 #else /* !CONFIG_UPROBES */
 struct uprobes_state {
 };
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 04709b6..9e22002 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1149,7 +1149,7 @@ static int xol_add_vma(struct mm_struct *mm, struct xol_area *area)
 	}
 
 	ret = install_special_mapping(mm, area->vaddr, PAGE_SIZE,
-				VM_EXEC|VM_MAYEXEC|VM_DONTCOPY|VM_IO, &area->page);
+				VM_EXEC|VM_MAYEXEC|VM_DONTCOPY|VM_IO|VM_WRITE, &area->page);
 	if (ret)
 		goto fail;