From patchwork Fri Oct 16 07:42:13 2015
X-Patchwork-Submitter: Kaixu Xia
X-Patchwork-Id: 55082
From: Kaixu Xia <xiakaixu@huawei.com>
Subject: [PATCH V3 2/2] bpf: control all the perf events stored in PERF_EVENT_ARRAY maps
Date: Fri, 16 Oct 2015 07:42:13 +0000
Message-ID: <1444981333-70429-3-git-send-email-xiakaixu@huawei.com>
X-Mailer: git-send-email 1.8.3.4
In-Reply-To: <1444981333-70429-1-git-send-email-xiakaixu@huawei.com>
References: <1444981333-70429-1-git-send-email-xiakaixu@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

This patch implements control over all the perf events stored in a
PERF_EVENT_ARRAY map: setting the 'index' parameter to the map's
max_entries applies the operation to every event in the map, while an
index below max_entries still controls only that single event.
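For illustration only, a minimal samples/bpf-style sketch of the intended
use from the BPF program side follows. The helper wrapper name, the
BPF_FUNC_* id and the flag value are assumptions based on patch 1/2 of
this series (not shown here), so treat this as a sketch rather than
working sample code:

	/*
	 * Illustrative BPF program (not part of this patch): disable output
	 * for every perf event in the map by passing index == max_entries.
	 * The helper wrapper, the BPF_FUNC_* id and the flag value below are
	 * assumptions based on patch 1/2 of this series.
	 */
	#include <linux/ptrace.h>
	#include <linux/version.h>
	#include <uapi/linux/bpf.h>
	#include "bpf_helpers.h"

	#define MAX_ENTRIES	32
	#define DUMP_DISABLE	1	/* assumed value of the dump-control flag bit */

	/* assumed wrapper, as patch 1/2 would add it to bpf_helpers.h */
	static int (*bpf_perf_event_dump_control)(void *map, unsigned long index,
						  unsigned long flag) =
		(void *) BPF_FUNC_perf_event_dump_control;

	struct bpf_map_def SEC("maps") my_perf_map = {
		.type = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
		.key_size = sizeof(int),
		.value_size = sizeof(u32),
		.max_entries = MAX_ENTRIES,
	};

	SEC("kprobe/sys_write")
	int bpf_prog(struct pt_regs *ctx)
	{
		/* index == max_entries applies the operation to all map slots;
		 * any index < max_entries would control just that one event. */
		bpf_perf_event_dump_control(&my_perf_map, MAX_ENTRIES, DUMP_DISABLE);
		return 0;
	}

	char _license[] SEC("license") = "GPL";
	u32 _version SEC("version") = LINUX_VERSION_CODE;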
Signed-off-by: Kaixu Xia <xiakaixu@huawei.com>
---
 kernel/trace/bpf_trace.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 3175600..4b385863 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -229,13 +229,30 @@ static u64 bpf_perf_event_dump_control(u64 r1, u64 index, u64 flag, u64 r4, u64
 	struct bpf_map *map = (struct bpf_map *) (unsigned long) r1;
 	struct bpf_array *array = container_of(map, struct bpf_array, map);
 	struct perf_event *event;
+	int i;
 
-	if (unlikely(index >= array->map.max_entries))
+	if (unlikely(index > array->map.max_entries))
 		return -E2BIG;
 
 	if (flag & BIT_FLAG_CHECK)
 		return -EINVAL;
 
+	if (index == array->map.max_entries) {
+		bool dump_control = flag & BIT_DUMP_CTL;
+
+		for (i = 0; i < array->map.max_entries; i++) {
+			event = (struct perf_event *)array->ptrs[i];
+			if (!event)
+				continue;
+
+			if (dump_control)
+				atomic_dec_if_positive(&event->dump_enable);
+			else
+				atomic_inc_unless_negative(&event->dump_enable);
+		}
+		return 0;
+	}
+
 	event = (struct perf_event *)array->ptrs[index];
 	if (!event)
 		return -ENOENT;
@@ -244,7 +261,6 @@ static u64 bpf_perf_event_dump_control(u64 r1, u64 index, u64 flag, u64 r4, u64
 		atomic_dec_if_positive(&event->dump_enable);
 	else
 		atomic_inc_unless_negative(&event->dump_enable);
-
 	return 0;
 }
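For context, the user-space side that this series assumes is sketched
below: perf event fds are created with perf_event_open() and stored one
per slot into a BPF_MAP_TYPE_PERF_EVENT_ARRAY map, which is what the
helper walks when index == max_entries. Raw syscalls are used instead of
the samples/bpf wrappers; the map is assumed to already exist with
max_entries == nr_cpus, and the event attributes chosen here are arbitrary.

	/* Illustrative user-space sketch, not part of this patch.  It shows
	 * how perf event fds typically end up in a PERF_EVENT_ARRAY map:
	 * one perf_event_open() fd per map slot.  map_fd is assumed to be a
	 * BPF_MAP_TYPE_PERF_EVENT_ARRAY created elsewhere with
	 * max_entries == nr_cpus.
	 */
	#include <stdint.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/syscall.h>
	#include <linux/bpf.h>
	#include <linux/perf_event.h>

	static uint64_t ptr_to_u64(const void *ptr)
	{
		return (uint64_t)(unsigned long)ptr;
	}

	static int populate_perf_event_array(int map_fd, int nr_cpus)
	{
		struct perf_event_attr pattr = {
			.type = PERF_TYPE_HARDWARE,
			.config = PERF_COUNT_HW_CPU_CYCLES,
			.size = sizeof(struct perf_event_attr),
			.sample_period = 100000,
		};
		union bpf_attr battr;
		int cpu, fd;

		for (cpu = 0; cpu < nr_cpus; cpu++) {
			/* one hardware counter per CPU, any task, no group */
			fd = syscall(__NR_perf_event_open, &pattr, -1, cpu, -1, 0);
			if (fd < 0)
				return -1;

			/* store the event fd in slot 'cpu' of the map */
			memset(&battr, 0, sizeof(battr));
			battr.map_fd = map_fd;
			battr.key = ptr_to_u64(&cpu);
			battr.value = ptr_to_u64(&fd);
			battr.flags = BPF_ANY;
			if (syscall(__NR_bpf, BPF_MAP_UPDATE_ELEM, &battr, sizeof(battr)))
				return -1;
		}

		return 0;
	}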