From patchwork Wed Apr 9 05:58:52 2014
X-Patchwork-Submitter: vkamensky
X-Patchwork-Id: 28059
From: Victor Kamensky <victor.kamensky@linaro.org>
To: rmk@arm.linux.org.uk, dave.long@linaro.org, oleg@redhat.com, Dave.Martin@arm.com, linux-arm-kernel@lists.infradead.org
Cc: tixy@linaro.org, linaro-kernel@lists.linaro.org, ananth@in.ibm.com, Victor Kamensky <victor.kamensky@linaro.org>, taras.kondratiuk@linaro.org, will.deacon@arm.com, rabin@rab.in
Subject: [PATCH v2] ARM: uprobes need icache flush after xol write
Date: Tue, 8 Apr 2014 22:58:52 -0700
Message-Id: <1397023132-10313-2-git-send-email-victor.kamensky@linaro.org>
In-Reply-To: <1397023132-10313-1-git-send-email-victor.kamensky@linaro.org>
References: <1397023132-10313-1-git-send-email-victor.kamensky@linaro.org>
MIME-Version: 1.0
After writing an instruction into the xol area, the ARMv7 architecture requires flushing both the dcache and the icache so they are in sync for the affected addresses. A bare 'flush_dcache_page(page)' call is not enough: a stale instruction may still sit in the icache for the xol area slot address.

Introduce a weak function, arch_uprobe_flush_xol_access, that by default calls 'flush_dcache_page(page)'; on ARM, define a version that calls flush_uprobe_xol_access instead. flush_uprobe_xol_access reuses the implementation of flush_ptrace_access and takes care of making an instruction written to a user-space address visible across the variety of cache types found on ARM CPUs.

Because flush_uprobe_xol_access has no vma at hand, flush_ptrace_access was split into two parts: one that derives a set of conditions from the vma, and a common part that receives those conditions as flags.

Because on ARM the cache flush functions need the kernel address through which the instruction was written, xol_get_insn_slot now maps the page explicitly and uses memcpy rather than the copy_to_page helper. This way xol_get_insn_slot knows the kernel address and can pass it to arch_uprobe_flush_xol_access.
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
---
 arch/arm/include/asm/cacheflush.h |  2 ++
 arch/arm/kernel/uprobes.c         |  6 ++++++
 arch/arm/mm/flush.c               | 41 +++++++++++++++++++++++++++++++++------
 include/linux/uprobes.h           |  3 +++
 kernel/events/uprobes.c           | 33 +++++++++++++++++++++++++------
 5 files changed, 73 insertions(+), 12 deletions(-)

diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index 8b8b616..e02712a 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -487,4 +487,6 @@ int set_memory_rw(unsigned long addr, int numpages);
 int set_memory_x(unsigned long addr, int numpages);
 int set_memory_nx(unsigned long addr, int numpages);
 
+void flush_uprobe_xol_access(struct page *page, unsigned long uaddr,
+			     void *kaddr, unsigned long len);
 #endif
diff --git a/arch/arm/kernel/uprobes.c b/arch/arm/kernel/uprobes.c
index f9bacee..a0339a6 100644
--- a/arch/arm/kernel/uprobes.c
+++ b/arch/arm/kernel/uprobes.c
@@ -113,6 +113,12 @@ int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe, struct mm_struct *mm,
 	return 0;
 }
 
+void arch_uprobe_flush_xol_access(struct page *page, unsigned long vaddr,
+				  void *kaddr, unsigned long len)
+{
+	flush_uprobe_xol_access(page, vaddr, kaddr, len);
+}
+
 int arch_uprobe_pre_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
 {
 	struct uprobe_task *utask = current->utask;
diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
index 3387e60..69a0bd08 100644
--- a/arch/arm/mm/flush.c
+++ b/arch/arm/mm/flush.c
@@ -104,17 +104,21 @@ void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsig
 #define flush_icache_alias(pfn,vaddr,len)	do { } while (0)
 #endif
 
+#define FLAG_UA_IS_EXEC    1
+#define FLAG_UA_CORE_IN_MM 2
+#define FLAG_UA_BROADCAST  4
+
 static void flush_ptrace_access_other(void *args)
 {
 	__flush_icache_all();
 }
 
-static
-void flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
-			 unsigned long uaddr, void *kaddr, unsigned long len)
+static inline
+void __flush_ptrace_access(struct page *page, unsigned long uaddr, void *kaddr,
+			   unsigned long len, unsigned int flags)
 {
 	if (cache_is_vivt()) {
-		if (cpumask_test_cpu(smp_processor_id(), mm_cpumask(vma->vm_mm))) {
+		if (flags & FLAG_UA_CORE_IN_MM) {
 			unsigned long addr = (unsigned long)kaddr;
 			__cpuc_coherent_kern_range(addr, addr + len);
 		}
@@ -128,18 +132,43 @@ void flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
 	}
 
 	/* VIPT non-aliasing D-cache */
-	if (vma->vm_flags & VM_EXEC) {
+	if (flags & FLAG_UA_IS_EXEC) {
 		unsigned long addr = (unsigned long)kaddr;
 		if (icache_is_vipt_aliasing())
 			flush_icache_alias(page_to_pfn(page), uaddr, len);
 		else
 			__cpuc_coherent_kern_range(addr, addr + len);
-		if (cache_ops_need_broadcast())
+		if (flags & FLAG_UA_BROADCAST)
 			smp_call_function(flush_ptrace_access_other,
 					  NULL, 1);
 	}
 }
 
+static
+void flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
+			 unsigned long uaddr, void *kaddr, unsigned long len)
+{
+	unsigned int flags = 0;
+	if (cpumask_test_cpu(smp_processor_id(), mm_cpumask(vma->vm_mm))) {
+		flags |= FLAG_UA_CORE_IN_MM;
+	}
+	if (vma->vm_flags & VM_EXEC) {
+		flags |= FLAG_UA_IS_EXEC;
+	}
+	if (cache_ops_need_broadcast()) {
+		flags |= FLAG_UA_BROADCAST;
+	}
+	__flush_ptrace_access(page, uaddr, kaddr, len, flags);
+}
+
+void flush_uprobe_xol_access(struct page *page, unsigned long uaddr,
+			     void *kaddr, unsigned long len)
+{
+	unsigned int flags = FLAG_UA_CORE_IN_MM|FLAG_UA_IS_EXEC;
+
+	__flush_ptrace_access(page, uaddr, kaddr, len, flags);
+}
+
 /*
  * Copy user data from/to a page which is mapped into a different
  * processes address space.  Really, we want to allow our "user
diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index edff2b9..534e083 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -32,6 +32,7 @@ struct vm_area_struct;
 struct mm_struct;
 struct inode;
 struct notifier_block;
+struct page;
 
 #define UPROBE_HANDLER_REMOVE		1
 #define UPROBE_HANDLER_MASK		1
@@ -127,6 +128,8 @@ extern int arch_uprobe_exception_notify(struct notifier_block *self, unsigned l
 extern void arch_uprobe_abort_xol(struct arch_uprobe *aup, struct pt_regs *regs);
 extern unsigned long arch_uretprobe_hijack_return_addr(unsigned long trampoline_vaddr, struct pt_regs *regs);
 extern bool __weak arch_uprobe_ignore(struct arch_uprobe *aup, struct pt_regs *regs);
+extern void __weak arch_uprobe_flush_xol_access(struct page *page, unsigned long vaddr,
+						void *kaddr, unsigned long len);
 #else /* !CONFIG_UPROBES */
 struct uprobes_state {
 };
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 04709b6..b9142d5 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1287,6 +1287,7 @@ static unsigned long xol_get_insn_slot(struct uprobe *uprobe)
 {
 	struct xol_area *area;
 	unsigned long xol_vaddr;
+	void *xol_page_kaddr;
 
 	area = get_xol_area();
 	if (!area)
@@ -1296,14 +1297,22 @@ static unsigned long xol_get_insn_slot(struct uprobe *uprobe)
 	if (unlikely(!xol_vaddr))
 		return 0;
 
-	/* Initialize the slot */
-	copy_to_page(area->page, xol_vaddr,
-			&uprobe->arch.ixol, sizeof(uprobe->arch.ixol));
 	/*
-	 * We probably need flush_icache_user_range() but it needs vma.
-	 * This should work on supported architectures too.
+	 * We don't use copy_to_page here because we need kernel page
+	 * addr to invalidate caches correctly
 	 */
-	flush_dcache_page(area->page);
+	xol_page_kaddr = kmap_atomic(area->page);
+
+	/* Initialize the slot */
+	memcpy(xol_page_kaddr + (xol_vaddr & ~PAGE_MASK),
+	       &uprobe->arch.ixol,
+	       sizeof(uprobe->arch.ixol));
+
+	arch_uprobe_flush_xol_access(area->page, xol_vaddr,
+				     xol_page_kaddr + (xol_vaddr & ~PAGE_MASK),
+				     sizeof(uprobe->arch.ixol));
+
+	kunmap_atomic(xol_page_kaddr);
 
 	return xol_vaddr;
 }
@@ -1346,6 +1355,18 @@ static void xol_free_insn_slot(struct task_struct *tsk)
 	}
 }
 
+void __weak arch_uprobe_flush_xol_access(struct page *page, unsigned long vaddr,
+					 void *kaddr, unsigned long len)
+{
+	/*
+	 * We probably need flush_icache_user_range() but it needs vma.
+	 * This should work on most of architectures by default. If
+	 * architecture needs to do something different it can define
+	 * its own version of the function.
+	 */
+	flush_dcache_page(page);
+}
+
 /**
  * uprobe_get_swbp_addr - compute address of swbp given post-swbp regs
  * @regs: Reflects the saved state of the task after it has hit a breakpoint