From patchwork Fri Apr 11 14:26:51 2014
X-Patchwork-Submitter: vkamensky
X-Patchwork-Id: 28280
In-Reply-To: <20140411.003636.272212797007496394.davem@davemloft.net>
References: <20140409184507.GA1058@redhat.com> <5347655B.3080307@linaro.org> <20140411.003636.272212797007496394.davem@davemloft.net>
Date: Fri, 11 Apr 2014 07:26:51 -0700
Subject: Re: [RFC PATCH] uprobes: copy to user-space xol page with proper cache flushing
From: Victor Kamensky <victor.kamensky@linaro.org>
To: David Miller
Cc: Jon Medhurst, linaro-kernel@lists.linaro.org, Russell King - ARM Linux, ananth@in.ibm.com, Taras Kondratiuk, Oleg Nesterov, rabin@rab.in, Dave Long, Dave Martin, linux-arm-kernel@lists.infradead.org
On 10 April 2014 21:36, David Miller wrote:
> From: David Long
> Date: Thu, 10 Apr 2014 23:45:31 -0400
>
>> Replace memcpy and dcache flush in generic uprobes with a call to
>> copy_to_user_page(), which will do a proper flushing of kernel and
>> user cache. Also modify the implementation of copy_to_user_page
>> to assume a NULL vma pointer means the user icache corresponding
>> to this write is stale and needs to be flushed. Note that this patch
>> does not fix copy_to_user_page for the sh, alpha, sparc, or mips
>> architectures (which do not currently support uprobes).
>>
>> Signed-off-by: David A. Long
>
> You really need to pass the proper VMA down to the call site
> rather than pass NULL, that's extremely ugly and totally
> unnecessary.

Agreed that the VMA is really needed. Here is a variant that I tried
while waiting for Oleg's response:

From 4a6a9043e0910041dd8842835a528cbdc39fad34 Mon Sep 17 00:00:00 2001
From: Victor Kamensky <victor.kamensky@linaro.org>
Date: Thu, 10 Apr 2014 17:06:39 -0700
Subject: [PATCH] uprobes: use copy_to_user_page function to copy instr to xol area

Use the copy_to_user_page function to copy the instruction into the xol
area. copy_to_user_page guarantees that all caches are correctly flushed
during such a write (including the icache, if needed). Because
copy_to_user_page needs a vm_area_struct, a vma field was added to
struct xol_area; it holds the cached vma for the xol area. Using
copy_to_user_page also makes sure we go through the same code that the
ptrace write path uses.

Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
---
 kernel/events/uprobes.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 04709b6..1ae4563 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -117,6 +117,7 @@ struct xol_area {
 	 * the vma go away, and we must handle that reasonably gracefully.
 	 */
 	unsigned long		vaddr;		/* Page(s) of instruction slots */
+	struct vm_area_struct	*vma;		/* VMA that holds above address */
 };
 
 /*
@@ -1150,6 +1151,7 @@ static int xol_add_vma(struct mm_struct *mm, struct xol_area *area)
 
 	ret = install_special_mapping(mm, area->vaddr, PAGE_SIZE,
 				VM_EXEC|VM_MAYEXEC|VM_DONTCOPY|VM_IO, &area->page);
+	area->vma = find_vma(mm, area->vaddr);
 	if (ret)
 		goto fail;
 
@@ -1287,6 +1289,7 @@ static unsigned long xol_get_insn_slot(struct uprobe *uprobe)
 {
 	struct xol_area *area;
 	unsigned long xol_vaddr;
+	void *xol_page_kaddr;
 
 	area = get_xol_area();
 	if (!area)
@@ -1297,8 +1300,11 @@ static unsigned long xol_get_insn_slot(struct uprobe *uprobe)
 		return 0;
 
 	/* Initialize the slot */
-	copy_to_page(area->page, xol_vaddr,
-			&uprobe->arch.ixol, sizeof(uprobe->arch.ixol));
+	xol_page_kaddr = kmap_atomic(area->page);
+	copy_to_user_page(area->vma, area->page, xol_vaddr,
+			  xol_page_kaddr + (xol_vaddr & ~PAGE_MASK),
+			  &uprobe->arch.ixol, sizeof(uprobe->arch.ixol));
+	kunmap_atomic(xol_page_kaddr);
 	/*
 	 * We probably need flush_icache_user_range() but it needs vma.
 	 * This should work on supported architectures too.
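
For reference, the "same code that the ptrace write path uses" above refers
to the pattern in mm/memory.c: when ptrace pokes another process's text it
maps the page with kmap() and then calls copy_to_user_page(), which lets
each architecture do whatever dcache/icache maintenance the user mapping
needs. The fragment below is only an illustrative sketch of that pattern,
not code from the patch; write_remote_text() is a made-up name for this
example, and the locking and error handling done by the real remote-access
code is omitted.

/*
 * Illustrative sketch only: roughly how the ptrace write path uses
 * copy_to_user_page() on kernels of this era.  write_remote_text() is a
 * hypothetical helper name, not an existing kernel function.
 */
#include <linux/mm.h>
#include <linux/highmem.h>
#include <asm/cacheflush.h>

static void write_remote_text(struct vm_area_struct *vma, struct page *page,
			      unsigned long addr, const void *buf, int bytes)
{
	int offset = addr & (PAGE_SIZE - 1);
	void *maddr = kmap(page);

	/*
	 * Copy into the kernel mapping of the page, then let the
	 * architecture flush the caches so the user mapping described
	 * by @vma sees the new instructions.
	 */
	copy_to_user_page(vma, page, addr, maddr + offset, buf, bytes);
	set_page_dirty_lock(page);

	kunmap(page);
}

The patch above uses kmap_atomic()/kunmap_atomic() instead of
kmap()/kunmap(), but the copy_to_user_page() call in the middle, and
therefore the cache maintenance, is the same.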