From patchwork Fri Apr 11 03:45:31 2014
From: David Long <dave.long@linaro.org>
Date: Thu, 10 Apr 2014 23:45:31 -0400
To: Victor Kamensky, Russell King
Cc: Jon Medhurst, linaro-kernel@lists.linaro.org, ananth@in.ibm.com,
 Taras Kondratiuk, Oleg Nesterov, "David S. Miller", rabin@rab.in,
 Dave Martin, linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH] uprobes: copy to user-space xol page with proper cache flushing
Message-ID: <5347655B.3080307@linaro.org>
References: <1397023132-10313-1-git-send-email-victor.kamensky@linaro.org>
 <1397023132-10313-2-git-send-email-victor.kamensky@linaro.org>
 <20140409184507.GA1058@redhat.com>
Replace the memcpy and dcache flush in generic uprobes with a call to
copy_to_user_page(), which does proper flushing of both the kernel and
user caches. Also modify the implementation of copy_to_user_page() to
assume that a NULL vma pointer means the user icache corresponding to
this write is stale and needs to be flushed.

Note that this patch does not fix copy_to_user_page() for the sh,
alpha, sparc, or mips architectures (which do not currently support
uprobes).

Signed-off-by: David A. Long <dave.long@linaro.org>
---
(A brief illustration of the NULL-vma calling convention follows the
patch.)

 arch/arc/include/asm/cacheflush.h        | 2 +-
 arch/arm/mm/flush.c                      | 4 ++--
 arch/arm64/mm/flush.c                    | 2 +-
 arch/avr32/mm/cache.c                    | 2 +-
 arch/hexagon/include/asm/cacheflush.h    | 2 +-
 arch/m68k/include/asm/cacheflush_mm.h    | 2 +-
 arch/microblaze/include/asm/cacheflush.h | 2 +-
 arch/mips/mm/init.c                      | 2 +-
 arch/parisc/include/asm/tlbflush.h       | 4 ++++
 arch/parisc/kernel/cache.c               | 4 ++--
 arch/score/include/asm/cacheflush.h      | 2 +-
 arch/score/mm/cache.c                    | 2 +-
 arch/sh/mm/cache.c                       | 2 +-
 arch/sparc/mm/leon_mm.c                  | 2 +-
 arch/tile/include/asm/cacheflush.h       | 2 +-
 arch/unicore32/mm/flush.c                | 2 +-
 arch/xtensa/mm/cache.c                   | 4 ++--
 kernel/events/uprobes.c                  | 7 +------
 18 files changed, 24 insertions(+), 25 deletions(-)

diff --git a/arch/arc/include/asm/cacheflush.h b/arch/arc/include/asm/cacheflush.h
index 6abc497..64d67e4 100644
--- a/arch/arc/include/asm/cacheflush.h
+++ b/arch/arc/include/asm/cacheflush.h
@@ -110,7 +110,7 @@ static inline int cache_is_vipt_aliasing(void)
 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
 do { \
 	memcpy(dst, src, len); \
-	if (vma->vm_flags & VM_EXEC) \
+	if (!vma || vma->vm_flags & VM_EXEC) \
 		__sync_icache_dcache((unsigned long)(dst), vaddr, len); \
 } while (0)
 
diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
index 3387e60..dd19ad4 100644
--- a/arch/arm/mm/flush.c
+++ b/arch/arm/mm/flush.c
@@ -114,7 +114,7 @@ void flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
 			 unsigned long uaddr, void *kaddr, unsigned long len)
 {
 	if (cache_is_vivt()) {
-		if (cpumask_test_cpu(smp_processor_id(), mm_cpumask(vma->vm_mm))) {
+		if (!vma || cpumask_test_cpu(smp_processor_id(), mm_cpumask(vma->vm_mm))) {
 			unsigned long addr = (unsigned long)kaddr;
 			__cpuc_coherent_kern_range(addr, addr + len);
 		}
@@ -128,7 +128,7 @@ void flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
 	}
 
 	/* VIPT non-aliasing D-cache */
-	if (vma->vm_flags & VM_EXEC) {
+	if (!vma || vma->vm_flags & VM_EXEC) {
 		unsigned long addr = (unsigned long)kaddr;
 		if (icache_is_vipt_aliasing())
 			flush_icache_alias(page_to_pfn(page), uaddr, len);
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index e4193e3..cde3cb4 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -38,7 +38,7 @@ static void flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
 				unsigned long uaddr, void *kaddr,
 				unsigned long len)
 {
-	if (vma->vm_flags & VM_EXEC) {
+	if (!vma || vma->vm_flags & VM_EXEC) {
 		unsigned long addr = (unsigned long)kaddr;
 		if (icache_is_aliasing()) {
 			__flush_dcache_area(kaddr, len);
diff --git a/arch/avr32/mm/cache.c b/arch/avr32/mm/cache.c
index 6a46ecd..cd3d378 100644
--- a/arch/avr32/mm/cache.c
+++ b/arch/avr32/mm/cache.c
@@ -156,7 +156,7 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
 		       unsigned long len)
 {
 	memcpy(dst, src, len);
-	if (vma->vm_flags & VM_EXEC)
+	if (!vma || vma->vm_flags & VM_EXEC)
 		flush_icache_range((unsigned long)dst,
 				   (unsigned long)dst + len);
 }
diff --git a/arch/hexagon/include/asm/cacheflush.h b/arch/hexagon/include/asm/cacheflush.h
index 49e0896..9bea768 100644
--- a/arch/hexagon/include/asm/cacheflush.h
+++ b/arch/hexagon/include/asm/cacheflush.h
@@ -86,7 +86,7 @@ static inline void copy_to_user_page(struct vm_area_struct *vma,
 					     void *dst, void *src, int len)
 {
 	memcpy(dst, src, len);
-	if (vma->vm_flags & VM_EXEC) {
+	if (!vma || vma->vm_flags & VM_EXEC) {
 		flush_icache_range((unsigned long) dst,
 		(unsigned long) dst + len);
 	}
diff --git a/arch/m68k/include/asm/cacheflush_mm.h b/arch/m68k/include/asm/cacheflush_mm.h
index fa2c3d6..afefdeb 100644
--- a/arch/m68k/include/asm/cacheflush_mm.h
+++ b/arch/m68k/include/asm/cacheflush_mm.h
@@ -212,7 +212,7 @@ static inline void flush_cache_range(struct vm_area_struct *vma,
 static inline void flush_cache_page(struct vm_area_struct *vma,
 				    unsigned long vmaddr, unsigned long pfn)
 {
-	if (vma->vm_mm == current->mm)
+	if (!vma || vma->vm_mm == current->mm)
 		__flush_cache_030();
 }
 
diff --git a/arch/microblaze/include/asm/cacheflush.h b/arch/microblaze/include/asm/cacheflush.h
index ffea82a..9eef956 100644
--- a/arch/microblaze/include/asm/cacheflush.h
+++ b/arch/microblaze/include/asm/cacheflush.h
@@ -108,7 +108,7 @@ static inline void copy_to_user_page(struct vm_area_struct *vma,
 {
 	u32 addr = virt_to_phys(dst);
 	memcpy(dst, src, len);
-	if (vma->vm_flags & VM_EXEC) {
+	if (!vma || vma->vm_flags & VM_EXEC) {
 		invalidate_icache_range(addr, addr + PAGE_SIZE);
 		flush_dcache_range(addr, addr + PAGE_SIZE);
 	}
diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
index 6b59617..e428551 100644
--- a/arch/mips/mm/init.c
+++ b/arch/mips/mm/init.c
@@ -232,7 +232,7 @@ void copy_to_user_page(struct vm_area_struct *vma,
 		if (cpu_has_dc_aliases)
 			SetPageDcacheDirty(page);
 	}
-	if ((vma->vm_flags & VM_EXEC) && !cpu_has_ic_fills_f_dc)
+	if ((!vma || vma->vm_flags & VM_EXEC) && !cpu_has_ic_fills_f_dc)
 		flush_cache_page(vma, vaddr, page_to_pfn(page));
 }
 
diff --git a/arch/parisc/include/asm/tlbflush.h b/arch/parisc/include/asm/tlbflush.h
index 9d086a5..7aad1b6 100644
--- a/arch/parisc/include/asm/tlbflush.h
+++ b/arch/parisc/include/asm/tlbflush.h
@@ -68,6 +68,10 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
 	/* For one page, it's not worth testing the split_tlb variable */
 
 	mb();
+	if (!vma) {
+		flush_tlb_all();
+		return;
+	}
 	sid = vma->vm_mm->context;
 	purge_tlb_start(flags);
 	mtsp(sid, 1);
diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
index ac87a40..ff09f05 100644
--- a/arch/parisc/kernel/cache.c
+++ b/arch/parisc/kernel/cache.c
@@ -278,7 +278,7 @@ __flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr,
 {
 	preempt_disable();
 	flush_dcache_page_asm(physaddr, vmaddr);
-	if (vma->vm_flags & VM_EXEC)
+	if (!vma || vma->vm_flags & VM_EXEC)
 		flush_icache_page_asm(physaddr, vmaddr);
 	preempt_enable();
 }
@@ -574,7 +574,7 @@ void flush_cache_range(struct vm_area_struct *vma,
 void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr,
 		      unsigned long pfn)
 {
-	BUG_ON(!vma->vm_mm->context);
+	BUG_ON(vma && !vma->vm_mm->context);
 
 	if (pfn_valid(pfn)) {
 		flush_tlb_page(vma, vmaddr);
diff --git a/arch/score/include/asm/cacheflush.h b/arch/score/include/asm/cacheflush.h
index 1d545d0..63e7b4e 100644
--- a/arch/score/include/asm/cacheflush.h
+++ b/arch/score/include/asm/cacheflush.h
@@ -41,7 +41,7 @@ static inline void flush_icache_page(struct vm_area_struct *vma,
 #define copy_to_user_page(vma, page, vaddr, dst, src, len)	\
 	do {							\
 		memcpy(dst, src, len);				\
-		if ((vma->vm_flags & VM_EXEC))			\
+		if (!vma || (vma->vm_flags & VM_EXEC))		\
 			flush_cache_page(vma, vaddr, page_to_pfn(page));\
 	} while (0)
 
diff --git a/arch/score/mm/cache.c b/arch/score/mm/cache.c
index f85ec1a..8464575 100644
--- a/arch/score/mm/cache.c
+++ b/arch/score/mm/cache.c
@@ -210,7 +210,7 @@ void flush_cache_range(struct vm_area_struct *vma,
 void flush_cache_page(struct vm_area_struct *vma,
 		unsigned long addr, unsigned long pfn)
 {
-	int exec = vma->vm_flags & VM_EXEC;
+	int exec = !vma || vma->vm_flags & VM_EXEC;
 	unsigned long kaddr = 0xa0000000 | (pfn << PAGE_SHIFT);
 
 	flush_dcache_range(kaddr, kaddr + PAGE_SIZE);
diff --git a/arch/sh/mm/cache.c b/arch/sh/mm/cache.c
index 616966a..ba2313a 100644
--- a/arch/sh/mm/cache.c
+++ b/arch/sh/mm/cache.c
@@ -70,7 +70,7 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
 		clear_bit(PG_dcache_clean, &page->flags);
 	}
 
-	if (vma->vm_flags & VM_EXEC)
+	if (!vma || vma->vm_flags & VM_EXEC)
 		flush_cache_page(vma, vaddr, page_to_pfn(page));
 }
 
diff --git a/arch/sparc/mm/leon_mm.c b/arch/sparc/mm/leon_mm.c
index 5bed085..dca5e18 100644
--- a/arch/sparc/mm/leon_mm.c
+++ b/arch/sparc/mm/leon_mm.c
@@ -192,7 +192,7 @@ void leon_flush_dcache_all(void)
 
 void leon_flush_pcache_all(struct vm_area_struct *vma, unsigned long page)
 {
-	if (vma->vm_flags & VM_EXEC)
+	if (!vma || vma->vm_flags & VM_EXEC)
 		leon_flush_icache_all();
 	leon_flush_dcache_all();
 }
diff --git a/arch/tile/include/asm/cacheflush.h b/arch/tile/include/asm/cacheflush.h
index 92ee4c8..7b7022c 100644
--- a/arch/tile/include/asm/cacheflush.h
+++ b/arch/tile/include/asm/cacheflush.h
@@ -66,7 +66,7 @@ static inline void copy_to_user_page(struct vm_area_struct *vma,
 				     void *dst, void *src, int len)
 {
 	memcpy(dst, src, len);
-	if (vma->vm_flags & VM_EXEC) {
+	if (!vma || vma->vm_flags & VM_EXEC) {
 		flush_icache_range((unsigned long) dst,
 				   (unsigned long) dst + len);
 	}
diff --git a/arch/unicore32/mm/flush.c b/arch/unicore32/mm/flush.c
index 6d4c096..10ddab3 100644
--- a/arch/unicore32/mm/flush.c
+++ b/arch/unicore32/mm/flush.c
@@ -36,7 +36,7 @@ static void flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
 		unsigned long uaddr, void *kaddr, unsigned long len)
 {
 	/* VIPT non-aliasing D-cache */
-	if (vma->vm_flags & VM_EXEC) {
+	if (!vma || vma->vm_flags & VM_EXEC) {
 		unsigned long addr = (unsigned long)kaddr;
 
 		__cpuc_coherent_kern_range(addr, addr + len);
diff --git a/arch/xtensa/mm/cache.c b/arch/xtensa/mm/cache.c
index ba4c47f..d34c06c 100644
--- a/arch/xtensa/mm/cache.c
+++ b/arch/xtensa/mm/cache.c
@@ -221,10 +221,10 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
 		unsigned long t = TLBTEMP_BASE_1 + (vaddr & DCACHE_ALIAS_MASK);
 
 		__flush_invalidate_dcache_range((unsigned long) dst, len);
-		if ((vma->vm_flags & VM_EXEC) != 0)
+		if (!vma || (vma->vm_flags & VM_EXEC) != 0)
 			__invalidate_icache_page_alias(t, phys);
 
-	} else if ((vma->vm_flags & VM_EXEC) != 0) {
+	} else if (!vma || (vma->vm_flags & VM_EXEC) != 0) {
 		__flush_dcache_range((unsigned long)dst,len);
 		__invalidate_icache_range((unsigned long) dst, len);
 	}
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 04709b6..2e976fb 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -241,7 +241,7 @@ static void copy_from_page(struct page *page, unsigned long vaddr, void *dst, in
 static void copy_to_page(struct page *page, unsigned long vaddr, const void *src, int len)
 {
 	void *kaddr = kmap_atomic(page);
-	memcpy(kaddr + (vaddr & ~PAGE_MASK), src, len);
+	copy_to_user_page(NULL, page, vaddr, kaddr + (vaddr & ~PAGE_MASK), src, len);
 	kunmap_atomic(kaddr);
 }
 
@@ -1299,11 +1299,6 @@ static unsigned long xol_get_insn_slot(struct uprobe *uprobe)
 
 	/* Initialize the slot */
 	copy_to_page(area->page, xol_vaddr,
 			&uprobe->arch.ixol, sizeof(uprobe->arch.ixol));
-	/*
-	 * We probably need flush_icache_user_range() but it needs vma.
-	 * This should work on supported architectures too.
-	 */
-	flush_dcache_page(area->page);
 
 	return xol_vaddr;
 }
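
A note on the convention this patch establishes (the following
illustration is not part of the patch): a caller that writes
instructions into a page but has no vma at hand passes a NULL vma, and
the architecture's copy_to_user_page() must then assume the range may
be executed and flush the icache unconditionally. Below is a minimal
sketch modeled on the new copy_to_page() above, against the
3.15-era kmap_atomic()/copy_to_user_page() interfaces; the function
name write_insn_slot() is made up for illustration:

	#include <linux/highmem.h>	/* kmap_atomic(), kunmap_atomic() */
	#include <asm/cacheflush.h>	/* copy_to_user_page() */

	static void write_insn_slot(struct page *page, unsigned long vaddr,
				    const void *insn, int len)
	{
		void *kaddr = kmap_atomic(page);

		/*
		 * NULL vma: the arch code cannot test vma->vm_flags for
		 * VM_EXEC, so it must treat the user icache covering this
		 * write as stale and do i-cache as well as d-cache
		 * maintenance.
		 */
		copy_to_user_page(NULL, page, vaddr,
				  kaddr + (vaddr & ~PAGE_MASK), insn, len);
		kunmap_atomic(kaddr);
	}

This keeps the cache-maintenance policy in one arch-provided helper
instead of the open-coded memcpy() plus flush_dcache_page() that the
patch removes from xol_get_insn_slot().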