From patchwork Sat Oct 25 11:06:53 2014
X-Patchwork-Submitter: Marc Zyngier
X-Patchwork-Id: 39519
From: Marc Zyngier
To: Thomas Gleixner, Jason Cooper
Subject: [PATCH 1/3] genirq: Add support for priority-drop/deactivate interrupt controllers
Date: Sat, 25 Oct 2014 12:06:53 +0100
Message-Id: <1414235215-10468-2-git-send-email-marc.zyngier@arm.com>
In-Reply-To: <1414235215-10468-1-git-send-email-marc.zyngier@arm.com>
References: <1414235215-10468-1-git-send-email-marc.zyngier@arm.com>
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Moderately recent ARM interrupt controllers can use a "split mode" EOI: instead of a single write notifying the controller of the end of interrupt, completion happens in two phases:

- priority-drop: the interrupt is still active, but other interrupts can now be taken
- deactivate: the interrupt is no longer active, and can be taken again.

This is very useful for threaded interrupts, as it avoids the usual mask/unmask dance (and has the potential of being more efficient on ARM, as it uses the CPU interface instead of the global distributor).

To implement this, a new optional irqchip method is added (irq_priority_drop). The usual irq_eoi is expected to implement the deactivate phase. Non-threaded interrupts use these two callbacks back to back, but threaded ones only perform the irq_priority_drop call in interrupt context, leaving the irq_eoi call to the thread context (such chips are expected to use the IRQCHIP_EOI_THREADED flag).
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 include/linux/irq.h |  1 +
 kernel/irq/chip.c   | 53 +++++++++++++++++++++++++++++++++++++----------------
 2 files changed, 38 insertions(+), 16 deletions(-)

diff --git a/include/linux/irq.h b/include/linux/irq.h
index 257d59a..64d3756 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -330,6 +330,7 @@ struct irq_chip {
 	void		(*irq_mask)(struct irq_data *data);
 	void		(*irq_mask_ack)(struct irq_data *data);
 	void		(*irq_unmask)(struct irq_data *data);
+	void		(*irq_priority_drop)(struct irq_data *data);
 	void		(*irq_eoi)(struct irq_data *data);
 	int		(*irq_set_affinity)(struct irq_data *data,
 				const struct cpumask *dest, bool force);
diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
index e5202f0..cf9d001 100644
--- a/kernel/irq/chip.c
+++ b/kernel/irq/chip.c
@@ -272,12 +272,25 @@ void mask_irq(struct irq_desc *desc)
 	}
 }
 
+static void mask_threaded_irq(struct irq_desc *desc)
+{
+	struct irq_chip *chip = desc->irq_data.chip;
+
+	/* If we can do priority drop, then masking comes for free */
+	if (chip->irq_priority_drop)
+		irq_state_set_masked(desc);
+	else
+		mask_irq(desc);
+}
+
 void unmask_irq(struct irq_desc *desc)
 {
-	if (desc->irq_data.chip->irq_unmask) {
-		desc->irq_data.chip->irq_unmask(&desc->irq_data);
+	struct irq_chip *chip = desc->irq_data.chip;
+
+	if (chip->irq_unmask && !chip->irq_priority_drop)
+		chip->irq_unmask(&desc->irq_data);
+	if (chip->irq_unmask || chip->irq_priority_drop)
 		irq_state_clr_masked(desc);
-	}
 }
 
 void unmask_threaded_irq(struct irq_desc *desc)
@@ -287,10 +300,7 @@ void unmask_threaded_irq(struct irq_desc *desc)
 	if (chip->flags & IRQCHIP_EOI_THREADED)
 		chip->irq_eoi(&desc->irq_data);
 
-	if (chip->irq_unmask) {
-		chip->irq_unmask(&desc->irq_data);
-		irq_state_clr_masked(desc);
-	}
+	unmask_irq(desc);
 }
 
 /*
@@ -470,12 +480,24 @@ static inline void preflow_handler(struct irq_desc *desc)
 static inline void preflow_handler(struct irq_desc *desc) { }
 #endif
 
+static void eoi_irq(struct irq_desc *desc, struct irq_chip *chip)
+{
+	if (chip->irq_priority_drop)
+		chip->irq_priority_drop(&desc->irq_data);
+	if (chip->irq_eoi)
+		chip->irq_eoi(&desc->irq_data);
+}
+
 static void cond_unmask_eoi_irq(struct irq_desc *desc, struct irq_chip *chip)
 {
 	if (!(desc->istate & IRQS_ONESHOT)) {
-		chip->irq_eoi(&desc->irq_data);
+		eoi_irq(desc, chip);
 		return;
 	}
+
+	if (chip->irq_priority_drop)
+		chip->irq_priority_drop(&desc->irq_data);
+
 	/*
 	 * We need to unmask in the following cases:
 	 * - Oneshot irq which did not wake the thread (caused by a
@@ -485,7 +507,8 @@ static void cond_unmask_eoi_irq(struct irq_desc *desc, struct irq_chip *chip)
 	if (!irqd_irq_disabled(&desc->irq_data) &&
 	    irqd_irq_masked(&desc->irq_data) && !desc->threads_oneshot) {
 		chip->irq_eoi(&desc->irq_data);
-		unmask_irq(desc);
+		if (!chip->irq_priority_drop)
+			unmask_irq(desc);
 	} else if (!(chip->flags & IRQCHIP_EOI_THREADED)) {
 		chip->irq_eoi(&desc->irq_data);
 	}
@@ -525,7 +548,7 @@ handle_fasteoi_irq(unsigned int irq, struct irq_desc *desc)
 	}
 
 	if (desc->istate & IRQS_ONESHOT)
-		mask_irq(desc);
+		mask_threaded_irq(desc);
 
 	preflow_handler(desc);
 	handle_irq_event(desc);
@@ -536,7 +559,7 @@ handle_fasteoi_irq(unsigned int irq, struct irq_desc *desc)
 	return;
 out:
 	if (!(chip->flags & IRQCHIP_EOI_IF_HANDLED))
-		chip->irq_eoi(&desc->irq_data);
+		eoi_irq(desc, chip);
 	raw_spin_unlock(&desc->lock);
 }
 EXPORT_SYMBOL_GPL(handle_fasteoi_irq);
@@ -655,7 +678,7 @@ void handle_edge_eoi_irq(unsigned int irq, struct irq_desc *desc)
 	       !irqd_irq_disabled(&desc->irq_data));
 
 out_eoi:
-	chip->irq_eoi(&desc->irq_data);
+	eoi_irq(desc, chip);
 	raw_spin_unlock(&desc->lock);
 }
 #endif
@@ -679,8 +702,7 @@ handle_percpu_irq(unsigned int irq, struct irq_desc *desc)
 
 	handle_irq_event_percpu(desc, desc->action);
 
-	if (chip->irq_eoi)
-		chip->irq_eoi(&desc->irq_data);
+	eoi_irq(desc, chip);
 }
 
 /**
@@ -711,8 +733,7 @@ void handle_percpu_devid_irq(unsigned int irq, struct irq_desc *desc)
 	res = action->handler(irq, dev_id);
 	trace_irq_handler_exit(irq, action, res);
 
-	if (chip->irq_eoi)
-		chip->irq_eoi(&desc->irq_data);
+	eoi_irq(desc, chip);
 }
 
 void