From patchwork Mon Jul 30 08:28:18 2012
X-Patchwork-Submitter: Marek Szyprowski
X-Patchwork-Id: 10350
From: Marek Szyprowski
To: linux-arm-kernel@lists.infradead.org, linaro-mm-sig@lists.linaro.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Russell King - ARM Linux, Arnd Bergmann, Konrad Rzeszutek Wilk,
	Kyungmin Park, Minchan Kim
Date: Mon, 30 Jul 2012 10:28:18 +0200
Message-id: <1343636899-19508-2-git-send-email-m.szyprowski@samsung.com>
In-reply-to: <1343636899-19508-1-git-send-email-m.szyprowski@samsung.com>
References: <1343636899-19508-1-git-send-email-m.szyprowski@samsung.com>
X-Mailer: git-send-email 1.7.10
Subject: [Linaro-mm-sig] [PATCHv6 1/2] mm: vmalloc: use const void * for caller argument
List-Id: "Unified memory management interest group."

'const void *' is a safer type for the caller argument: the pointer is
only stored and compared, never written through. This patch updates all
references to the caller argument type accordingly.
Signed-off-by: Marek Szyprowski
Reviewed-by: Kyungmin Park
Reviewed-by: Minchan Kim
---
 include/linux/vmalloc.h |  8 ++++----
 mm/vmalloc.c            | 18 +++++++++---------
 2 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index dcdfc2b..2e28f4d 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -32,7 +32,7 @@ struct vm_struct {
 	struct page		**pages;
 	unsigned int		nr_pages;
 	phys_addr_t		phys_addr;
-	void			*caller;
+	const void		*caller;
 };
 
 /*
@@ -62,7 +62,7 @@ extern void *vmalloc_32_user(unsigned long size);
 extern void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot);
 extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, void *caller);
+			pgprot_t prot, int node, const void *caller);
 extern void vfree(const void *addr);
 
 extern void *vmap(struct page **pages, unsigned int count,
@@ -85,13 +85,13 @@ static inline size_t get_vm_area_size(const struct vm_struct *area)
 
 extern struct vm_struct *get_vm_area(unsigned long size, unsigned long flags);
 extern struct vm_struct *get_vm_area_caller(unsigned long size,
-					unsigned long flags, void *caller);
+					unsigned long flags, const void *caller);
 extern struct vm_struct *__get_vm_area(unsigned long size, unsigned long flags,
 					unsigned long start, unsigned long end);
 extern struct vm_struct *__get_vm_area_caller(unsigned long size,
 					unsigned long flags,
 					unsigned long start, unsigned long end,
-					void *caller);
+					const void *caller);
 extern struct vm_struct *remove_vm_area(const void *addr);
 
 extern int map_vm_area(struct vm_struct *area, pgprot_t prot,
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 2aad499..11308f0 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1280,7 +1280,7 @@ DEFINE_RWLOCK(vmlist_lock);
 struct vm_struct *vmlist;
 
 static void setup_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va,
-			      unsigned long flags, void *caller)
+			      unsigned long flags, const void *caller)
 {
 	vm->flags = flags;
 	vm->addr = (void *)va->va_start;
@@ -1306,7 +1306,7 @@ static void insert_vmalloc_vmlist(struct vm_struct *vm)
 }
 
 static void insert_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va,
-			      unsigned long flags, void *caller)
+			      unsigned long flags, const void *caller)
 {
 	setup_vmalloc_vm(vm, va, flags, caller);
 	insert_vmalloc_vmlist(vm);
@@ -1314,7 +1314,7 @@ static void insert_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va,
 
 static struct vm_struct *__get_vm_area_node(unsigned long size,
 		unsigned long align, unsigned long flags, unsigned long start,
-		unsigned long end, int node, gfp_t gfp_mask, void *caller)
+		unsigned long end, int node, gfp_t gfp_mask, const void *caller)
 {
 	struct vmap_area *va;
 	struct vm_struct *area;
@@ -1375,7 +1375,7 @@ EXPORT_SYMBOL_GPL(__get_vm_area);
 
 struct vm_struct *__get_vm_area_caller(unsigned long size, unsigned long flags,
 				       unsigned long start, unsigned long end,
-				       void *caller)
+				       const void *caller)
 {
 	return __get_vm_area_node(size, 1, flags, start, end, -1,
 				  GFP_KERNEL, caller);
@@ -1397,7 +1397,7 @@ struct vm_struct *get_vm_area(unsigned long size, unsigned long flags)
 }
 
 struct vm_struct *get_vm_area_caller(unsigned long size, unsigned long flags,
-				     void *caller)
+				     const void *caller)
 {
 	return __get_vm_area_node(size, 1, flags, VMALLOC_START, VMALLOC_END,
 				  -1, GFP_KERNEL, caller);
@@ -1568,9 +1568,9 @@ EXPORT_SYMBOL(vmap);
 
 static void *__vmalloc_node(unsigned long size, unsigned long align,
 			    gfp_t gfp_mask, pgprot_t prot,
-			    int node, void *caller);
+			    int node, const void *caller);
 static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
-				 pgprot_t prot, int node, void *caller)
+				 pgprot_t prot, int node, const void *caller)
 {
 	const int order = 0;
 	struct page **pages;
@@ -1643,7 +1643,7 @@ fail:
  */
 void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
-			pgprot_t prot, int node, void *caller)
+			pgprot_t prot, int node, const void *caller)
 {
 	struct vm_struct *area;
 	void *addr;
@@ -1699,7 +1699,7 @@ fail:
  */
 static void *__vmalloc_node(unsigned long size, unsigned long align,
 			    gfp_t gfp_mask, pgprot_t prot,
-			    int node, void *caller)
+			    int node, const void *caller)
 {
 	return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
 				gfp_mask, prot, node, caller);