From patchwork Wed Dec 21 00:49:54 2011
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 5912
From: John Stultz
To: Greg KH
Cc: John Stultz, Brian Swetland, Colin Cross, Arve Hjønnevåg,
 Dima Zavin, Robert Love, Greg KH
Subject: [PATCH 7/7] ashmem: Whitespace cleanups
Date: Tue, 20 Dec 2011 16:49:54 -0800
Message-Id: <1324428595-9253-8-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.7.3.2.146.gca209
In-Reply-To: <1324428595-9253-1-git-send-email-john.stultz@linaro.org>
References: <1324428595-9253-1-git-send-email-john.stultz@linaro.org>

Fixes checkpatch warnings in the ashmem.c file

CC: Brian Swetland
CC: Colin Cross
CC: Arve Hjønnevåg
CC: Dima Zavin
CC: Robert Love
CC: Greg KH
Signed-off-by: John Stultz
---
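Note: the warnings referred to above are the ones checkpatch reports when run
directly against the source file; an invocation along the lines of the one
below should reproduce them (the exact command line is illustrative, only the
use of checkpatch itself is stated in the patch description):

  ./scripts/checkpatch.pl -f drivers/staging/android/ashmem.c  # illustrative; -f checks a source file rather than a patch
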
 drivers/staging/android/ashmem.c |   46 +++++++++++++++++--------------------
 1 files changed, 21 insertions(+), 25 deletions(-)

diff --git a/drivers/staging/android/ashmem.c b/drivers/staging/android/ashmem.c
index a78ba21..99052bf 100644
--- a/drivers/staging/android/ashmem.c
+++ b/drivers/staging/android/ashmem.c
@@ -41,11 +41,11 @@
  * Big Note: Mappings do NOT pin this structure; it dies on close()
  */
 struct ashmem_area {
-	char name[ASHMEM_FULL_NAME_LEN];/* optional name for /proc/pid/maps */
-	struct list_head unpinned_list; /* list of all ashmem areas */
-	struct file *file;		/* the shmem-based backing file */
-	size_t size;			/* size of the mapping, in bytes */
-	unsigned long prot_mask;	/* allowed prot bits, as vm_flags */
+	char name[ASHMEM_FULL_NAME_LEN];	/* optional name in /proc/pid/maps */
+	struct list_head unpinned_list;		/* list of all ashmem areas */
+	struct file *file;			/* the shmem-based backing file */
+	size_t size;				/* size of the mapping, in bytes */
+	unsigned long prot_mask;		/* allowed prot bits, as vm_flags */
 };
 
 /*
@@ -79,26 +79,26 @@ static struct kmem_cache *ashmem_area_cachep __read_mostly;
 static struct kmem_cache *ashmem_range_cachep __read_mostly;
 
 #define range_size(range) \
-  ((range)->pgend - (range)->pgstart + 1)
+	((range)->pgend - (range)->pgstart + 1)
 
 #define range_on_lru(range) \
-  ((range)->purged == ASHMEM_NOT_PURGED)
+	((range)->purged == ASHMEM_NOT_PURGED)
 
 #define page_range_subsumes_range(range, start, end) \
-  (((range)->pgstart >= (start)) && ((range)->pgend <= (end)))
+	(((range)->pgstart >= (start)) && ((range)->pgend <= (end)))
 
 #define page_range_subsumed_by_range(range, start, end) \
-  (((range)->pgstart <= (start)) && ((range)->pgend >= (end)))
+	(((range)->pgstart <= (start)) && ((range)->pgend >= (end)))
 
 #define page_in_range(range, page) \
-  (((range)->pgstart <= (page)) && ((range)->pgend >= (page)))
+	(((range)->pgstart <= (page)) && ((range)->pgend >= (page)))
 
 #define page_range_in_range(range, start, end) \
-  (page_in_range(range, start) || page_in_range(range, end) || \
-    page_range_subsumes_range(range, start, end))
+	(page_in_range(range, start) || page_in_range(range, end) || \
+		page_range_subsumes_range(range, start, end))
 
 #define range_before_page(range, page) \
-  ((range)->pgend < (page))
+	((range)->pgend < (page))
 
 #define PROT_MASK		(PROT_EXEC | PROT_READ | PROT_WRITE)
@@ -220,9 +220,8 @@ static ssize_t ashmem_read(struct file *file, char __user *buf,
 	mutex_lock(&ashmem_mutex);
 
 	/* If size is not set, or set to 0, always return EOF. */
-	if (asma->size == 0) {
+	if (asma->size == 0)
 		goto out;
-	}
 
 	if (!asma->file) {
 		ret = -EBADF;
@@ -230,9 +229,8 @@
 	}
 
 	ret = asma->file->f_op->read(asma->file, buf, len, pos);
-	if (ret < 0) {
+	if (ret < 0)
 		goto out;
-	}
 
 	/** Update backing file pos, since f_ops->read() doesn't */
 	asma->file->f_pos = *pos;
@@ -260,9 +258,8 @@ static loff_t ashmem_llseek(struct file *file, loff_t offset, int origin)
 	}
 
 	ret = asma->file->f_op->llseek(asma->file, offset, origin);
-	if (ret < 0) {
+	if (ret < 0)
 		goto out;
-	}
 
 	/** Copy f_pos from backing file, since f_ops->llseek() sets it */
 	file->f_pos = asma->file->f_pos;
@@ -272,10 +269,9 @@ out:
 	return ret;
 }
 
-static inline unsigned long
-calc_vm_may_flags(unsigned long prot)
+static inline unsigned long calc_vm_may_flags(unsigned long prot)
 {
-	return _calc_vm_trans(prot, PROT_READ, VM_MAYREAD ) |
+	return _calc_vm_trans(prot, PROT_READ, VM_MAYREAD) |
 	       _calc_vm_trans(prot, PROT_WRITE, VM_MAYWRITE) |
 	       _calc_vm_trans(prot, PROT_EXEC, VM_MAYEXEC);
 }
@@ -295,7 +291,7 @@ static int ashmem_mmap(struct file *file, struct vm_area_struct *vma)
 
 	/* requested protection bits must match our allowed protection mask */
 	if (unlikely((vma->vm_flags & ~calc_vm_prot_bits(asma->prot_mask)) &
-                     calc_vm_prot_bits(PROT_MASK))) {
+		     calc_vm_prot_bits(PROT_MASK))) {
 		ret = -EPERM;
 		goto out;
 	}
@@ -688,8 +684,8 @@ static struct file_operations ashmem_fops = {
 	.owner = THIS_MODULE,
 	.open = ashmem_open,
 	.release = ashmem_release,
-        .read = ashmem_read,
-        .llseek = ashmem_llseek,
+	.read = ashmem_read,
+	.llseek = ashmem_llseek,
 	.mmap = ashmem_mmap,
 	.unlocked_ioctl = ashmem_ioctl,
 	.compat_ioctl = ashmem_ioctl,