From patchwork Wed Aug 13 16:01:28 2014
X-Patchwork-Submitter: Lina Iyer
X-Patchwork-Id: 35371
From: Lina Iyer <lina.iyer@linaro.org>
To: daniel.lezcano@linaro.org, khilman@linaro.org, ulf.hansson@linaro.org,
	linux-pm@vger.kernel.org, tglx@linutronix.de, rjw@rjwysocki.net
Cc: Lina Iyer <lina.iyer@linaro.org>
Subject: [PATCH v2 3/4] irq: Allow multiple clients to register for irq affinity notification
Date: Wed, 13 Aug 2014 10:01:28 -0600
Message-Id: <1407945689-18494-4-git-send-email-lina.iyer@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1407945689-18494-1-git-send-email-lina.iyer@linaro.org>
References: <1407945689-18494-1-git-send-email-lina.iyer@linaro.org>
X-Mailing-List: linux-pm@vger.kernel.org

The current implementation allows only a single notification callback
whenever the IRQ's SMP affinity is changed. Registering a second notifier
silently knocks the existing one out of registration. Add a list of
notifiers instead, allowing multiple clients to register for irq affinity
notifications.
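The heart of the change is the list walk in irq_affinity_notify(): each registered entry is notified only if a reference can still be taken on it, so entries whose refcount has already dropped to zero are skipped rather than called after release. A minimal userspace sketch of that pattern follows; all names here (notifier, get_unless_zero, emit) are illustrative stand-ins for struct irq_affinity_notify, kref_get_unless_zero() and the workqueue callback, not the kernel API:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Userspace model of the notifier-list pattern in this patch:
 * "refcount" stands in for struct kref, "next" for the list_head
 * chain, emit() for the irq_affinity_notify() work function.
 */
struct notifier {
	int refcount;          /* models struct kref */
	int notified;          /* set when the callback "fires" */
	struct notifier *next; /* models the list_head chain */
};

/* Models kref_get_unless_zero(): take a reference only if still live. */
static int get_unless_zero(struct notifier *n)
{
	if (n->refcount == 0)
		return 0;
	n->refcount++;
	return 1;
}

/* Models kref_put(); a real release() callback would run at zero. */
static void put(struct notifier *n)
{
	n->refcount--;
}

/*
 * Models the list walk: entries already released (refcount 0) are
 * skipped; live entries are pinned for the duration of the callback.
 */
static int emit(struct notifier *head)
{
	int calls = 0;
	struct notifier *n;

	for (n = head; n; n = n->next) {
		if (!get_unless_zero(n))
			continue;
		n->notified = 1; /* stands in for notify->notify(...) */
		calls++;
		put(n);
	}
	return calls;
}
```

With two chained entries, one live (refcount 1) and one already released (refcount 0), emit() notifies only the live one; that is why the patch uses kref_get_unless_zero() rather than the unconditional kref_get() of the old single-notifier code.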
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 drivers/infiniband/hw/qib/qib_iba7322.c |  4 +-
 include/linux/interrupt.h               | 12 ++++-
 include/linux/irq.h                     |  1 +
 include/linux/irqdesc.h                 |  6 ++-
 kernel/irq/irqdesc.c                    |  1 +
 kernel/irq/manage.c                     | 85 ++++++++++++++++++++++-----------
 lib/cpu_rmap.c                          |  2 +-
 7 files changed, 74 insertions(+), 37 deletions(-)

diff --git a/drivers/infiniband/hw/qib/qib_iba7322.c b/drivers/infiniband/hw/qib/qib_iba7322.c
index a7eb325..62cb77d 100644
--- a/drivers/infiniband/hw/qib/qib_iba7322.c
+++ b/drivers/infiniband/hw/qib/qib_iba7322.c
@@ -3345,9 +3345,7 @@ static void reset_dca_notifier(struct qib_devdata *dd, struct qib_msix_entry *m)
 		"Disabling notifier on HCA %d irq %d\n",
 		dd->unit,
 		m->msix.vector);
-	irq_set_affinity_notifier(
-		m->msix.vector,
-		NULL);
+	irq_release_affinity_notifier(m->notifier);
 	m->notifier = NULL;
 }
 
diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 698ad05..c1e227c 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -203,7 +203,7 @@ static inline int check_wakeup_irqs(void) { return 0; }
 * struct irq_affinity_notify - context for notification of IRQ affinity changes
 * @irq:		Interrupt to which notification applies
 * @kref:		Reference count, for internal use
- * @work:		Work item, for internal use
+ * @list:		Add to the notifier list, for internal use
 * @notify:		Function to be called on change. This will be
 *			called in process context.
 * @release:		Function to be called on release. This will be
@@ -214,7 +214,7 @@ static inline int check_wakeup_irqs(void) { return 0; }
 struct irq_affinity_notify {
 	unsigned int irq;
 	struct kref kref;
-	struct work_struct work;
+	struct list_head list;
 	void (*notify)(struct irq_affinity_notify *, const cpumask_t *mask);
 	void (*release)(struct kref *ref);
 };
@@ -265,6 +265,8 @@ extern int irq_set_affinity_hint(unsigned int irq, const struct cpumask *m);
 extern int
 irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify);
+extern int
+irq_release_affinity_notifier(struct irq_affinity_notify *notify);
 
 #else /* CONFIG_SMP */
 
 static inline int irq_set_affinity(unsigned int irq, const struct cpumask *m)
@@ -295,6 +297,12 @@ irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify)
 {
 	return 0;
 }
+
+static inline int
+irq_release_affinity_notifier(struct irq_affinity_notify *notify)
+{
+	return 0;
+}
 #endif /* CONFIG_SMP */
 
 /*
diff --git a/include/linux/irq.h b/include/linux/irq.h
index 62af592..2634a48 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
diff --git a/include/linux/irqdesc.h b/include/linux/irqdesc.h
index 472c021..db3509e 100644
--- a/include/linux/irqdesc.h
+++ b/include/linux/irqdesc.h
@@ -31,7 +31,8 @@ struct irq_desc;
 * @threads_handled_last: comparator field for deferred spurious detection of theraded handlers
 * @lock:		locking for SMP
 * @affinity_hint:	hint to user space for preferred irq affinity
- * @affinity_notify:	context for notification of affinity changes
+ * @affinity_notify:	list of notification clients for affinity changes
+ * @affinity_work:	Work queue for handling affinity change notifications
 * @pending_mask:	pending rebalanced interrupts
 * @threads_oneshot:	bitfield to handle shared oneshot threads
 * @threads_active:	number of irqaction threads currently running
@@ -60,7 +61,8 @@ struct irq_desc {
 	struct cpumask		*percpu_enabled;
 #ifdef CONFIG_SMP
 	const struct cpumask	*affinity_hint;
-	struct irq_affinity_notify *affinity_notify;
+	struct list_head	affinity_notify;
+	struct work_struct	affinity_work;
 #ifdef CONFIG_GENERIC_PENDING_IRQ
 	cpumask_var_t		pending_mask;
 #endif
diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
index 1487a12..c95e1f3 100644
--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
@@ -91,6 +91,7 @@ static void desc_set_defaults(unsigned int irq, struct irq_desc *desc, int node,
 	for_each_possible_cpu(cpu)
 		*per_cpu_ptr(desc->kstat_irqs, cpu) = 0;
 	desc_smp_init(desc, node);
+	INIT_LIST_HEAD(&desc->affinity_notify);
 }
 
 int nr_irqs = NR_IRQS;
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 88657d7..99fc0e7 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -209,10 +209,9 @@ int irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask,
 		irq_copy_pending(desc, mask);
 	}
 
-	if (desc->affinity_notify) {
-		kref_get(&desc->affinity_notify->kref);
-		schedule_work(&desc->affinity_notify->work);
-	}
+	if (!list_empty(&desc->affinity_notify))
+		schedule_work(&desc->affinity_work);
+
 	irqd_set(data, IRQD_AFFINITY_SET);
 
 	return ret;
@@ -248,14 +247,14 @@ EXPORT_SYMBOL_GPL(irq_set_affinity_hint);
 
 static void irq_affinity_notify(struct work_struct *work)
 {
-	struct irq_affinity_notify *notify =
-		container_of(work, struct irq_affinity_notify, work);
-	struct irq_desc *desc = irq_to_desc(notify->irq);
+	struct irq_desc *desc =
+		container_of(work, struct irq_desc, affinity_work);
 	cpumask_var_t cpumask;
 	unsigned long flags;
+	struct irq_affinity_notify *notify;
 
 	if (!desc || !alloc_cpumask_var(&cpumask, GFP_KERNEL))
-		goto out;
+		return;
 
 	raw_spin_lock_irqsave(&desc->lock, flags);
 	if (irq_move_pending(&desc->irq_data))
@@ -264,11 +263,20 @@ static void irq_affinity_notify(struct work_struct *work)
 		cpumask_copy(cpumask, desc->irq_data.affinity);
 	raw_spin_unlock_irqrestore(&desc->lock, flags);
 
-	notify->notify(notify, cpumask);
+	list_for_each_entry(notify, &desc->affinity_notify, list) {
+		/*
+		 * Check and get the kref only if the kref has not been
+		 * released by now. It's possible that the reference count
+		 * is already 0; we don't want to notify clients that have
+		 * already been released.
+		 */
+		if (!kref_get_unless_zero(&notify->kref))
+			continue;
+		notify->notify(notify, cpumask);
+		kref_put(&notify->kref, notify->release);
+	}
 
 	free_cpumask_var(cpumask);
-out:
-	kref_put(&notify->kref, notify->release);
 }
 
 /**
@@ -286,34 +294,50 @@ int irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify)
 {
 	struct irq_desc *desc = irq_to_desc(irq);
-	struct irq_affinity_notify *old_notify;
 	unsigned long flags;
 
-	/* The release function is promised process context */
-	might_sleep();
-
 	if (!desc)
 		return -EINVAL;
 
-	/* Complete initialisation of *notify */
-	if (notify) {
-		notify->irq = irq;
-		kref_init(&notify->kref);
-		INIT_WORK(&notify->work, irq_affinity_notify);
+	if (!notify) {
+		WARN(1, "%s called with NULL notifier - use irq_release_affinity_notifier instead.\n",
+		     __func__);
+		return -EINVAL;
 	}
 
+	notify->irq = irq;
+	kref_init(&notify->kref);
+	INIT_LIST_HEAD(&notify->list);
 	raw_spin_lock_irqsave(&desc->lock, flags);
-	old_notify = desc->affinity_notify;
-	desc->affinity_notify = notify;
+	list_add(&notify->list, &desc->affinity_notify);
 	raw_spin_unlock_irqrestore(&desc->lock, flags);
 
-	if (old_notify)
-		kref_put(&old_notify->kref, old_notify->release);
-
 	return 0;
 }
 EXPORT_SYMBOL_GPL(irq_set_affinity_notifier);
 
+/**
+ * irq_release_affinity_notifier - Remove us from notifications
+ * @notify:	Context for notification
+ */
+int irq_release_affinity_notifier(struct irq_affinity_notify *notify)
+{
+	struct irq_desc *desc;
+	unsigned long flags;
+
+	if (!notify)
+		return -EINVAL;
+
+	desc = irq_to_desc(notify->irq);
+	raw_spin_lock_irqsave(&desc->lock, flags);
+	list_del(&notify->list);
+	raw_spin_unlock_irqrestore(&desc->lock, flags);
+	kref_put(&notify->kref, notify->release);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(irq_release_affinity_notifier);
+
 #ifndef CONFIG_AUTO_IRQ_AFFINITY
 /*
  * Generic version of the affinity autoselector.
@@ -348,6 +372,8 @@ setup_affinity(unsigned int irq, struct irq_desc *desc, struct cpumask *mask)
 		if (cpumask_intersects(mask, nodemask))
 			cpumask_and(mask, mask, nodemask);
 	}
+	INIT_LIST_HEAD(&desc->affinity_notify);
+	INIT_WORK(&desc->affinity_work, irq_affinity_notify);
 	irq_do_set_affinity(&desc->irq_data, mask, false);
 	return 0;
 }
@@ -1414,14 +1440,15 @@ EXPORT_SYMBOL_GPL(remove_irq);
 void free_irq(unsigned int irq, void *dev_id)
 {
 	struct irq_desc *desc = irq_to_desc(irq);
+	struct irq_affinity_notify *notify;
 
 	if (!desc || WARN_ON(irq_settings_is_per_cpu_devid(desc)))
 		return;
 
-#ifdef CONFIG_SMP
-	if (WARN_ON(desc->affinity_notify))
-		desc->affinity_notify = NULL;
-#endif
+	WARN_ON(!list_empty(&desc->affinity_notify));
+
+	list_for_each_entry(notify, &desc->affinity_notify, list)
+		kref_put(&notify->kref, notify->release);
 
 	chip_bus_lock(desc);
 	kfree(__free_irq(irq, dev_id));
diff --git a/lib/cpu_rmap.c b/lib/cpu_rmap.c
index 4f134d8..0c8da50 100644
--- a/lib/cpu_rmap.c
+++ b/lib/cpu_rmap.c
@@ -235,7 +235,7 @@ void free_irq_cpu_rmap(struct cpu_rmap *rmap)
 
 	for (index = 0; index < rmap->used; index++) {
 		glue = rmap->obj[index];
-		irq_set_affinity_notifier(glue->notify.irq, NULL);
+		irq_release_affinity_notifier(&glue->notify);
 	}
 
 	cpu_rmap_put(rmap);