From patchwork Wed Jan 21 17:03:39 2015
X-Patchwork-Submitter: Daniel Thompson
X-Patchwork-Id: 43473
From: Daniel Thompson
To: Thomas Gleixner, Jason Cooper, Russell King
Cc: Daniel Thompson, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, patches@linaro.org,
	linaro-kernel@lists.linaro.org, John Stultz, Sumit Semwal, Dirk Behme,
	Daniel Drake, Dmitry Pervushin, Tim Sander, Stephen Boyd, Will Deacon
Subject: [RFC PATCH v2 2/5] irq: Allow interrupts to be routed to NMI (or similar)
Date: Wed, 21 Jan 2015 17:03:39 +0000
Message-Id: <1421859822-3621-3-git-send-email-daniel.thompson@linaro.org>
X-Mailer: git-send-email 1.9.3
In-Reply-To: <1421859822-3621-1-git-send-email-daniel.thompson@linaro.org>
References: <1421166931-14134-1-git-send-email-daniel.thompson@linaro.org>
	<1421859822-3621-1-git-send-email-daniel.thompson@linaro.org>

Some combinations of architectures and interrupt controllers make it
possible for arbitrary interrupt signals to be selectively made immune to
masking by local_irq_disable(). For example, on ARM platforms, many
interrupt controllers allow interrupts to be routed to FIQ rather than IRQ.

These features could be exploited to implement debug and tracing
facilities that are implemented using NMI on x86 platforms (perf, hard
lockup detector, kgdb).

Signed-off-by: Daniel Thompson
---
 include/linux/interrupt.h |  5 +++++
 include/linux/irq.h       |  2 ++
 include/linux/irqdesc.h   |  3 +++
 kernel/irq/irqdesc.c      | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
 kernel/irq/manage.c       | 46 ++++++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 104 insertions(+)

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index d9b05b5bf8c7..839ad225bc97 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -57,6 +57,8 @@
  * IRQF_NO_THREAD - Interrupt cannot be threaded
  * IRQF_EARLY_RESUME - Resume IRQ early during syscore instead of at device
  *                resume time.
+ * IRQF_NMI - Route the interrupt to an NMI or some similar signal that is not
+ *                masked by local_irq_disable().
  */
 #define IRQF_DISABLED		0x00000020
 #define IRQF_SHARED		0x00000080
@@ -70,8 +72,10 @@
 #define IRQF_FORCE_RESUME	0x00008000
 #define IRQF_NO_THREAD		0x00010000
 #define IRQF_EARLY_RESUME	0x00020000
+#define __IRQF_NMI		0x00040000
 
 #define IRQF_TIMER		(__IRQF_TIMER | IRQF_NO_SUSPEND | IRQF_NO_THREAD)
+#define IRQF_NMI		(__IRQF_NMI | IRQF_NO_THREAD)
 
 /*
  * These values can be returned by request_any_context_irq() and
@@ -649,5 +653,6 @@ int arch_show_interrupts(struct seq_file *p, int prec);
 extern int early_irq_init(void);
 extern int arch_probe_nr_irqs(void);
 extern int arch_early_irq_init(void);
+extern int arch_filter_nmi_handler(irq_handler_t);
 
 #endif
diff --git a/include/linux/irq.h b/include/linux/irq.h
index d09ec7a1243e..695eb37f04ae 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -307,6 +307,7 @@ static inline irq_hw_number_t irqd_to_hwirq(struct irq_data *d)
  * @irq_eoi:		end of interrupt
  * @irq_set_affinity:	set the CPU affinity on SMP machines
  * @irq_retrigger:	resend an IRQ to the CPU
+ * @irq_set_nmi_routing:set whether interrupt can act like NMI
  * @irq_set_type:	set the flow type (IRQ_TYPE_LEVEL/etc.) of an IRQ
  * @irq_set_wake:	enable/disable power-management wake-on of an IRQ
  * @irq_bus_lock:	function to lock access to slow bus (i2c) chips
@@ -341,6 +342,7 @@ struct irq_chip {
 
 	int		(*irq_set_affinity)(struct irq_data *data, const struct cpumask *dest, bool force);
 	int		(*irq_retrigger)(struct irq_data *data);
+	int		(*irq_set_nmi_routing)(struct irq_data *data, unsigned int nmi);
 	int		(*irq_set_type)(struct irq_data *data, unsigned int flow_type);
 	int		(*irq_set_wake)(struct irq_data *data, unsigned int on);
 
diff --git a/include/linux/irqdesc.h b/include/linux/irqdesc.h
index faf433af425e..408d2e4ed40f 100644
--- a/include/linux/irqdesc.h
+++ b/include/linux/irqdesc.h
@@ -213,4 +213,7 @@ __irq_set_preflow_handler(unsigned int irq, irq_preflow_handler_t handler)
 }
 #endif
 
+int handle_nmi_irq_desc(unsigned int irq, struct irq_desc *desc);
+int handle_nmi_irq(unsigned int irq);
+
 #endif
diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
index 99793b9b6d23..876d01a6ad74 100644
--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
@@ -646,3 +646,51 @@ unsigned int kstat_irqs_usr(unsigned int irq)
 	irq_unlock_sparse();
 	return sum;
 }
+
+/**
+ * handle_nmi_irq_desc - Call an NMI handler
+ * @irq:	the interrupt number
+ * @desc:	the interrupt description structure for this irq
+ *
+ * To the caller this function is similar in scope to generic_handle_irq_desc()
+ * but without any attempt to manage the handler flow. We assume that if the
+ * flow is complex then NMI routing is a bad idea; the caller is expected to
+ * handle the ack, clear, mask and unmask issues if necessary.
+ *
+ * Note that this function does not take any of the usual locks. Instead
+ * it relies on NMIs being prohibited from sharing interrupts (i.e.
+ * there will be exactly one irqaction) and that no call to free_irq()
+ * will be made whilst the handler is running.
+ */
+int handle_nmi_irq_desc(unsigned int irq, struct irq_desc *desc)
+{
+	struct irqaction *action = desc->action;
+
+	BUG_ON(action->next);
+
+	return action->handler(irq, action->dev_id);
+}
+EXPORT_SYMBOL_GPL(handle_nmi_irq_desc);
+
+/**
+ * handle_nmi_irq - Call an NMI handler
+ * @irq:	the interrupt number
+ * @desc:	the interrupt description structure for this irq
+ *
+ * To the caller this function is similar in scope to generic_handle_irq(),
+ * see handle_nmi_irq_desc for more detail.
+ */
+int handle_nmi_irq(unsigned int irq)
+{
+	/*
+	 * irq_to_desc is either simple arithmetic (no locking) or a radix
+	 * tree lookup (RCU). Both are safe from NMI.
+	 */
+	struct irq_desc *desc = irq_to_desc(irq);
+
+	if (!desc)
+		return -EINVAL;
+	handle_nmi_irq_desc(irq, desc);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(handle_nmi_irq);
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 80692373abd6..96212a0493c0 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -571,6 +571,17 @@ int can_request_irq(unsigned int irq, unsigned long irqflags)
 	return canrequest;
 }
 
+int __irq_set_nmi_routing(struct irq_desc *desc, unsigned int irq,
+			  unsigned int nmi)
+{
+	struct irq_chip *chip = desc->irq_data.chip;
+
+	if (!chip || !chip->irq_set_nmi_routing)
+		return -EINVAL;
+
+	return chip->irq_set_nmi_routing(&desc->irq_data, nmi);
+}
+
 int __irq_set_trigger(struct irq_desc *desc, unsigned int irq,
 		      unsigned long flags)
 {
@@ -966,6 +977,16 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
 
 	if (desc->irq_data.chip == &no_irq_chip)
 		return -ENOSYS;
+
+	if (new->flags & __IRQF_NMI) {
+		if (new->flags & IRQF_SHARED)
+			return -EINVAL;
+
+		ret = arch_filter_nmi_handler(new->handler);
+		if (ret < 0)
+			return ret;
+	}
+
 	if (!try_module_get(desc->owner))
 		return -ENODEV;
 
@@ -1153,6 +1174,19 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
 
 		init_waitqueue_head(&desc->wait_for_threads);
 
+		if (new->flags & __IRQF_NMI) {
+			ret = __irq_set_nmi_routing(desc, irq, true);
+			if (ret != 1)
+				goto out_mask;
+		} else {
+			ret = __irq_set_nmi_routing(desc, irq, false);
+			if (ret == 1) {
+				pr_err("Failed to disable NMI routing for irq %d\n",
+				       irq);
+				goto out_mask;
+			}
+		}
+
 		/* Setup the type (level, edge polarity) if configured: */
 		if (new->flags & IRQF_TRIGGER_MASK) {
 			ret = __irq_set_trigger(desc, irq,
@@ -1758,3 +1792,15 @@ int request_percpu_irq(unsigned int irq, irq_handler_t handler,
 
 	return retval;
 }
+
+/*
+ * Allows architectures to deny requests to set __IRQF_NMI.
+ *
+ * Typically this is used to restrict the use of NMI handlers that do not
+ * originate from arch code. However the default implementation is
+ * extremely permissive.
+ */
+int __weak arch_filter_nmi_handler(irq_handler_t handler)
+{
+	return 0;
+}
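
For reference, a minimal usage sketch of the interface added above. This is not part of the patch: the driver, the irq number and the FIQ entry point (my_nmi_handler, MY_IRQ, my_arch_fiq_entry) are hypothetical names for illustration; only request_irq(), IRQF_NMI and handle_nmi_irq() are real/new API. The handler runs outside local_irq_disable() protection, so it must not sleep or take ordinary spinlocks, and the low-level entry code remains responsible for ack/EOI at the interrupt controller.

/* Hypothetical consumer of IRQF_NMI / handle_nmi_irq() -- sketch only. */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/irqdesc.h>

#define MY_IRQ	42	/* illustrative Linux irq number */

/* Runs in NMI/FIQ context: keep it short and NMI-safe. */
static irqreturn_t my_nmi_handler(int irq, void *dev_id)
{
	/* e.g. sample a counter, poke a watchdog, enter the debugger, ... */
	return IRQ_HANDLED;
}

static int __init my_init(void)
{
	/*
	 * IRQF_NMI implies IRQF_NO_THREAD; __setup_irq() rejects IRQF_SHARED
	 * and lets arch_filter_nmi_handler() veto the handler.
	 */
	return request_irq(MY_IRQ, my_nmi_handler, IRQF_NMI, "my-nmi", NULL);
}
module_init(my_init);

/*
 * Arch glue (illustrative): the low-level FIQ/NMI entry dispatches into the
 * generic layer without taking the normal flow-handler path.
 */
void my_arch_fiq_entry(void)
{
	handle_nmi_irq(MY_IRQ);
}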