From patchwork Thu Aug 22 14:13:18 2013
X-Patchwork-Submitter: Robert Richter
X-Patchwork-Id: 19410
From: Robert Richter
To: Peter Zijlstra
Cc: Ingo Molnar, Arnaldo Carvalho de Melo, Borislav Petkov, Jiri Olsa,
 linux-kernel@vger.kernel.org, Robert Richter, Robert Richter
Subject: [PATCH v3 03/12] perf, mmap: Factor out perf_alloc/free_rb()
Date: Thu, 22 Aug 2013 16:13:18 +0200
Message-Id: <1377180807-12758-4-git-send-email-rric@kernel.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1377180807-12758-1-git-send-email-rric@kernel.org>
References: <1377180807-12758-1-git-send-email-rric@kernel.org>

From: Robert Richter

Factor out the code that allocates and deallocates ring buffers. We
need this later to set up the sampling buffer for persistent events.
While at it, replace get_current_user() with get_uid(user).
Signed-off-by: Robert Richter
Signed-off-by: Robert Richter
---
 kernel/events/core.c     | 75 +++++++++++++++++++++++++++++-------------------
 kernel/events/internal.h |  3 ++
 2 files changed, 48 insertions(+), 30 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index c9a5d4c..24810d5 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3124,8 +3124,44 @@ static void free_event_rcu(struct rcu_head *head)
 }
 
 static void ring_buffer_put(struct ring_buffer *rb);
+static void ring_buffer_attach(struct perf_event *event, struct ring_buffer *rb);
 static void ring_buffer_detach(struct perf_event *event, struct ring_buffer *rb);
 
+/*
+ * Must be called with &event->mmap_mutex held. event->rb must be
+ * NULL. perf_alloc_rb() requires &event->mmap_count to be incremented
+ * on success which corresponds to &rb->mmap_count that is initialized
+ * with 1.
+ */
+int perf_alloc_rb(struct perf_event *event, int nr_pages, int flags)
+{
+	struct ring_buffer *rb;
+
+	rb = rb_alloc(nr_pages,
+		      event->attr.watermark ? event->attr.wakeup_watermark : 0,
+		      event->cpu, flags);
+	if (!rb)
+		return -ENOMEM;
+
+	atomic_set(&rb->mmap_count, 1);
+	ring_buffer_attach(event, rb);
+	rcu_assign_pointer(event->rb, rb);
+
+	perf_event_update_userpage(event);
+
+	return 0;
+}
+
+/*
+ * Must be called with &event->mmap_mutex held. event->rb must be set.
+ */
+void perf_free_rb(struct perf_event *event)
+{
+	struct ring_buffer *rb = event->rb;
+
+	rcu_assign_pointer(event->rb, NULL);
+	ring_buffer_detach(event, rb);
+	ring_buffer_put(rb);
+}
+
 static void unaccount_event_cpu(struct perf_event *event, int cpu)
 {
 	if (event->parent)
@@ -3177,6 +3213,7 @@ static void __free_event(struct perf_event *event)
 	call_rcu(&event->rcu_head, free_event_rcu);
 }
 
+
 static void free_event(struct perf_event *event)
 {
 	irq_work_sync(&event->pending);
@@ -3184,8 +3221,6 @@ static void free_event(struct perf_event *event)
 	unaccount_event(event);
 
 	if (event->rb) {
-		struct ring_buffer *rb;
-
 		/*
 		 * Can happen when we close an event with re-directed output.
 		 *
@@ -3193,12 +3228,8 @@ static void free_event(struct perf_event *event)
 		 * over us; possibly making our ring_buffer_put() the last.
 		 */
 		mutex_lock(&event->mmap_mutex);
-		rb = event->rb;
-		if (rb) {
-			rcu_assign_pointer(event->rb, NULL);
-			ring_buffer_detach(event, rb);
-			ring_buffer_put(rb); /* could be last */
-		}
+		if (event->rb)
+			perf_free_rb(event);
 		mutex_unlock(&event->mmap_mutex);
 	}
 
@@ -3798,11 +3829,8 @@ static void ring_buffer_detach_all(struct ring_buffer *rb)
 		 * still restart the iteration to make sure we're not now
 		 * iterating the wrong list.
 		 */
-		if (event->rb == rb) {
-			rcu_assign_pointer(event->rb, NULL);
-			ring_buffer_detach(event, rb);
-			ring_buffer_put(rb); /* can't be last, we still have one */
-		}
+		if (event->rb == rb)
+			perf_free_rb(event);
 		mutex_unlock(&event->mmap_mutex);
 
 		put_event(event);
@@ -3938,7 +3966,6 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 	unsigned long user_locked, user_lock_limit;
 	struct user_struct *user = current_user();
 	unsigned long locked, lock_limit;
-	struct ring_buffer *rb;
 	unsigned long vma_size;
 	unsigned long nr_pages;
 	long user_extra, extra;
@@ -4022,27 +4049,15 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 	if (vma->vm_flags & VM_WRITE)
 		flags |= RING_BUFFER_WRITABLE;
 
-	rb = rb_alloc(nr_pages,
-		event->attr.watermark ? event->attr.wakeup_watermark : 0,
-		event->cpu, flags);
-
-	if (!rb) {
-		ret = -ENOMEM;
+	ret = perf_alloc_rb(event, nr_pages, flags);
+	if (ret)
 		goto unlock;
-	}
 
-	atomic_set(&rb->mmap_count, 1);
-	rb->mmap_locked = extra;
-	rb->mmap_user = get_current_user();
+	event->rb->mmap_locked = extra;
+	event->rb->mmap_user = get_uid(user);
 
 	atomic_long_add(user_extra, &user->locked_vm);
 	vma->vm_mm->pinned_vm += extra;
-
-	ring_buffer_attach(event, rb);
-	rcu_assign_pointer(event->rb, rb);
-
-	perf_event_update_userpage(event);
-
 unlock:
 	if (!ret)
 		atomic_inc(&event->mmap_count);

diff --git a/kernel/events/internal.h b/kernel/events/internal.h
index 96a07d2..8ddaf57 100644
--- a/kernel/events/internal.h
+++ b/kernel/events/internal.h
@@ -190,4 +190,7 @@ static inline void put_event(struct perf_event *event)
 	__put_event(event);
 }
 
+extern int perf_alloc_rb(struct perf_event *event, int nr_pages, int flags);
+extern void perf_free_rb(struct perf_event *event);
+
 #endif /* _KERNEL_EVENTS_INTERNAL_H */