From patchwork Fri Dec 16 02:51:03 2011
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 5794
From: John Stultz
To: Arve Hjønnevåg
Cc: John Stultz, Brian Swetland, Colin Cross, Arve Hjønnevåg,
    Dima Zavin, Robert Love, Greg KH
Subject: [PATCH 09/10] ashmem: Whitespace cleanups
Date: Thu, 15 Dec 2011 18:51:03 -0800
Message-Id: <1324003864-26776-10-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.7.3.2.146.gca209
In-Reply-To: <1324003864-26776-1-git-send-email-john.stultz@linaro.org>
References: <1324003864-26776-1-git-send-email-john.stultz@linaro.org>

Fixes checkpatch warnings in the ashmem.c file.

CC: Brian Swetland
CC: Colin Cross
CC: Arve Hjønnevåg
CC: Dima Zavin
CC: Robert Love
CC: Greg KH
Signed-off-by: John Stultz
---
 drivers/staging/android/ashmem.c |   46 +++++++++++++++++--------------------
 1 files changed, 21 insertions(+), 25 deletions(-)

diff --git a/drivers/staging/android/ashmem.c b/drivers/staging/android/ashmem.c
index a78ba21..99052bf 100644
--- a/drivers/staging/android/ashmem.c
+++ b/drivers/staging/android/ashmem.c
@@ -41,11 +41,11 @@
  * Big Note: Mappings do NOT pin this structure; it dies on close()
  */
 struct ashmem_area {
-	char name[ASHMEM_FULL_NAME_LEN];/* optional name for /proc/pid/maps */
-	struct list_head unpinned_list; /* list of all ashmem areas */
-	struct file *file;		/* the shmem-based backing file */
-	size_t size;			/* size of the mapping, in bytes */
-	unsigned long prot_mask;	/* allowed prot bits, as vm_flags */
+	char name[ASHMEM_FULL_NAME_LEN];	/* optional name in /proc/pid/maps */
+	struct list_head unpinned_list;		/* list of all ashmem areas */
+	struct file *file;			/* the shmem-based backing file */
+	size_t size;				/* size of the mapping, in bytes */
+	unsigned long prot_mask;		/* allowed prot bits, as vm_flags */
 };
 
 /*
@@ -79,26 +79,26 @@ static struct kmem_cache *ashmem_area_cachep __read_mostly;
 static struct kmem_cache *ashmem_range_cachep __read_mostly;
 
 #define range_size(range) \
-  ((range)->pgend - (range)->pgstart + 1)
+	((range)->pgend - (range)->pgstart + 1)
 
 #define range_on_lru(range) \
-  ((range)->purged == ASHMEM_NOT_PURGED)
+	((range)->purged == ASHMEM_NOT_PURGED)
 
 #define page_range_subsumes_range(range, start, end) \
-  (((range)->pgstart >= (start)) && ((range)->pgend <= (end)))
+	(((range)->pgstart >= (start)) && ((range)->pgend <= (end)))
 
 #define page_range_subsumed_by_range(range, start, end) \
-  (((range)->pgstart <= (start)) && ((range)->pgend >= (end)))
+	(((range)->pgstart <= (start)) && ((range)->pgend >= (end)))
 
 #define page_in_range(range, page) \
-  (((range)->pgstart <= (page)) && ((range)->pgend >= (page)))
+	(((range)->pgstart <= (page)) && ((range)->pgend >= (page)))
 
 #define page_range_in_range(range, start, end) \
-  (page_in_range(range, start) || page_in_range(range, end) || \
-  page_range_subsumes_range(range, start, end))
+	(page_in_range(range, start) || page_in_range(range, end) || \
+		page_range_subsumes_range(range, start, end))
 
 #define range_before_page(range, page) \
-  ((range)->pgend < (page))
+	((range)->pgend < (page))
 
 #define PROT_MASK		(PROT_EXEC | PROT_READ | PROT_WRITE)
 
@@ -220,9 +220,8 @@ static ssize_t ashmem_read(struct file *file, char __user *buf,
 	mutex_lock(&ashmem_mutex);
 
 	/* If size is not set, or set to 0, always return EOF. */
-	if (asma->size == 0) {
+	if (asma->size == 0)
 		goto out;
-	}
 
 	if (!asma->file) {
 		ret = -EBADF;
@@ -230,9 +229,8 @@
 	}
 
 	ret = asma->file->f_op->read(asma->file, buf, len, pos);
-	if (ret < 0) {
+	if (ret < 0)
 		goto out;
-	}
 
 	/** Update backing file pos, since f_ops->read() doesn't */
 	asma->file->f_pos = *pos;
@@ -260,9 +258,8 @@ static loff_t ashmem_llseek(struct file *file, loff_t offset, int origin)
 	}
 
 	ret = asma->file->f_op->llseek(asma->file, offset, origin);
-	if (ret < 0) {
+	if (ret < 0)
 		goto out;
-	}
 
 	/** Copy f_pos from backing file, since f_ops->llseek() sets it */
 	file->f_pos = asma->file->f_pos;
@@ -272,10 +269,9 @@ out:
 	return ret;
 }
 
-static inline unsigned long
-calc_vm_may_flags(unsigned long prot)
+static inline unsigned long calc_vm_may_flags(unsigned long prot)
 {
-	return _calc_vm_trans(prot, PROT_READ,  VM_MAYREAD ) |
+	return _calc_vm_trans(prot, PROT_READ,  VM_MAYREAD) |
 	       _calc_vm_trans(prot, PROT_WRITE, VM_MAYWRITE) |
 	       _calc_vm_trans(prot, PROT_EXEC,  VM_MAYEXEC);
 }
@@ -295,7 +291,7 @@ static int ashmem_mmap(struct file *file, struct vm_area_struct *vma)
 
 	/* requested protection bits must match our allowed protection mask */
 	if (unlikely((vma->vm_flags & ~calc_vm_prot_bits(asma->prot_mask)) &
-			calc_vm_prot_bits(PROT_MASK))) {
+		     calc_vm_prot_bits(PROT_MASK))) {
 		ret = -EPERM;
 		goto out;
 	}
@@ -688,8 +684,8 @@ static struct file_operations ashmem_fops = {
 	.owner = THIS_MODULE,
 	.open = ashmem_open,
 	.release = ashmem_release,
-        .read = ashmem_read,
-        .llseek = ashmem_llseek,
+	.read = ashmem_read,
+	.llseek = ashmem_llseek,
 	.mmap = ashmem_mmap,
 	.unlocked_ioctl = ashmem_ioctl,
 	.compat_ioctl = ashmem_ioctl,
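
For readers who have not run scripts/checkpatch.pl themselves: the hunks above are all instances of a few kernel coding-style rules, namely dropping braces around single-statement if bodies, indenting with tabs rather than spaces, and aligning continuation lines with the opening parenthesis. A minimal, illustrative C sketch of the brace rule is below; the functions and values are hypothetical and are not taken from ashmem.c.

/* Hypothetical example contrasting the style checkpatch warns about
 * with the preferred kernel style for single-statement blocks.
 */
#include <stdio.h>

static int flagged_style(int ret)
{
	if (ret < 0) {		/* checkpatch warns: braces unnecessary here */
		return ret;
	}
	return 0;
}

static int preferred_style(int ret)
{
	if (ret < 0)		/* single statement, so no braces */
		return ret;
	return 0;
}

int main(void)
{
	/* Both variants behave identically; only the style differs. */
	printf("%d %d\n", flagged_style(-1), preferred_style(-1));
	return 0;
}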