From patchwork Wed Jan 21 17:03:41 2015
X-Patchwork-Submitter: Daniel Thompson
X-Patchwork-Id: 43475
From: Daniel Thompson <daniel.thompson@linaro.org>
To: Thomas Gleixner, Jason Cooper, Russell King
Cc: Daniel Thompson, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, patches@linaro.org,
 linaro-kernel@lists.linaro.org, John Stultz, Sumit Semwal, Dirk Behme,
 Daniel Drake, Dmitry Pervushin, Tim Sander, Stephen Boyd, Will Deacon
Subject: [RFC PATCH v2 4/5] arm: perf: Make v7 support FIQ-safe
Date: Wed, 21 Jan 2015 17:03:41 +0000
Message-Id: <1421859822-3621-5-git-send-email-daniel.thompson@linaro.org>
X-Mailer: git-send-email 1.9.3
In-Reply-To: <1421859822-3621-1-git-send-email-daniel.thompson@linaro.org>
References: <1421166931-14134-1-git-send-email-daniel.thompson@linaro.org>
 <1421859822-3621-1-git-send-email-daniel.thompson@linaro.org>

armv7pmu_disable_event() is called during IRQ handling. If IRQ handling
switches over to FIQ then the spin locks in this function risk deadlock.
Both armv7_pmnc_disable_counter() and armv7_pmnc_disable_intens() are
unconditional co-processor writes. I haven't yet come up with a schedule
where other users of pmu_lock would break if interleaved with these
calls, so I have simply removed the locking.

The other change is to avoid calling irq_work_run() when running from a
FIQ handler. The pended work will instead be dispatched by the irq work
IPI or by a timer handler.

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
---
 arch/arm/kernel/perf_event_v7.c | 11 ++---------
 1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/arch/arm/kernel/perf_event_v7.c b/arch/arm/kernel/perf_event_v7.c
index 8993770c47de..08f426486d3e 100644
--- a/arch/arm/kernel/perf_event_v7.c
+++ b/arch/arm/kernel/perf_event_v7.c
@@ -744,7 +744,6 @@ static void armv7pmu_enable_event(struct perf_event *event)
 
 static void armv7pmu_disable_event(struct perf_event *event)
 {
-	unsigned long flags;
 	struct hw_perf_event *hwc = &event->hw;
 	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
 	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
@@ -757,11 +756,6 @@ static void armv7pmu_disable_event(struct perf_event *event)
 	}
 
 	/*
-	 * Disable counter and interrupt
-	 */
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
-
-	/*
 	 * Disable counter
 	 */
 	armv7_pmnc_disable_counter(idx);
@@ -770,8 +764,6 @@ static void armv7pmu_disable_event(struct perf_event *event)
 	 * Disable interrupt for this counter
 	 */
 	armv7_pmnc_disable_intens(idx);
-
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
 static irqreturn_t armv7pmu_handle_irq(int irq_num, void *dev)
@@ -831,7 +823,8 @@ static irqreturn_t armv7pmu_handle_irq(int irq_num, void *dev)
 	 * platforms that can have the PMU interrupts raised as an NMI, this
 	 * will not work.
 	 */
-	irq_work_run();
+	if (!in_nmi())
+		irq_work_run();
 
 	return IRQ_HANDLED;
 }
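
For anyone reviewing the in_nmi() test, here is a minimal sketch (not
part of this patch; example_fiq_entry() and handle_pmu_fiq() are
hypothetical names) of the kind of FIQ entry path the change assumes.
The FIQ path is bracketed by nmi_enter()/nmi_exit(), which is what makes
in_nmi() return true inside armv7pmu_handle_irq(). It is also why the
pmu_lock had to go: raw_spin_lock_irqsave() masks IRQs but not FIQs, so
a FIQ arriving while the lock is held on the same CPU would spin forever
if its handler retook the lock.

#include <linux/hardirq.h>	/* nmi_enter(), nmi_exit(), in_nmi() */
#include <asm/ptrace.h>		/* struct pt_regs */

static void handle_pmu_fiq(struct pt_regs *regs);	/* hypothetical */

static void example_fiq_entry(struct pt_regs *regs)
{
	nmi_enter();		/* from here on, in_nmi() is true */

	/*
	 * Code reached from here must not take spin locks that are
	 * also taken in IRQ context (such as pmu_lock), and with the
	 * in_nmi() test above armv7pmu_handle_irq() will skip
	 * irq_work_run(), leaving pended work to the irq work IPI or
	 * to a timer handler.
	 */
	handle_pmu_fiq(regs);

	nmi_exit();
}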