From patchwork Mon Jul 21 15:52:02 2014
X-Patchwork-Submitter: Lina Iyer
X-Patchwork-Id: 33989
From: Lina Iyer <lina.iyer@linaro.org>
To: linux-pm@vger.kernel.org
Cc: Thomas Gleixner, Kevin Hilman, Lina Iyer
Subject: [PATCH, RFC 1/3] irq: Allow multiple clients to register for irq affinity notification
Date: Mon, 21 Jul 2014 09:52:02 -0600
Message-Id: <1405957924-41292-2-git-send-email-lina.iyer@linaro.org>
In-Reply-To: <1405957924-41292-1-git-send-email-lina.iyer@linaro.org>
References: <1405957924-41292-1-git-send-email-lina.iyer@linaro.org>
X-Mailing-List: linux-pm@vger.kernel.org

The current implementation allows only a single notification callback to be
registered for IRQ affinity changes; registering a second notifier silently
unregisters the first. Add a per-descriptor list, allowing multiple clients
to register for irq affinity notifications.
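For illustration only (this block is not part of the patch): with the list-based API, each client owns its own struct irq_affinity_notify and unregisters by releasing it, instead of passing NULL to irq_set_affinity_notifier(). The callback and variable names below are invented for the sketch; only the two API calls come from this series.

```c
/* Hypothetical client of the patched API -- a sketch, not code from
 * this series.  my_affinity_changed/my_notify_release are invented. */
static void my_affinity_changed(struct irq_affinity_notify *notify,
				const cpumask_t *mask)
{
	/* called in process context after the irq's affinity changes */
}

static void my_notify_release(struct kref *ref)
{
	/* last reference dropped; free per-client state here */
}

static struct irq_affinity_notify my_notify = {
	.notify		= my_affinity_changed,
	.release	= my_notify_release,
};

/* register: my_notify joins the per-descriptor notifier list */
irq_set_affinity_notifier(irq, &my_notify);

/* unregister: replaces the old irq_set_affinity_notifier(irq, NULL) idiom */
irq_release_affinity_notifier(&my_notify);
```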
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 drivers/infiniband/hw/qib/qib_iba7322.c |  4 +-
 include/linux/interrupt.h               | 12 +++++-
 include/linux/irq.h                     |  1 +
 include/linux/irqdesc.h                 |  5 ++-
 kernel/irq/irqdesc.c                    |  1 +
 kernel/irq/manage.c                     | 69 +++++++++++++++++++--------------
 lib/cpu_rmap.c                          |  2 +-
 7 files changed, 56 insertions(+), 38 deletions(-)

diff --git a/drivers/infiniband/hw/qib/qib_iba7322.c b/drivers/infiniband/hw/qib/qib_iba7322.c
index a7eb325..62cb77d 100644
--- a/drivers/infiniband/hw/qib/qib_iba7322.c
+++ b/drivers/infiniband/hw/qib/qib_iba7322.c
@@ -3345,9 +3345,7 @@ static void reset_dca_notifier(struct qib_devdata *dd, struct qib_msix_entry *m)
 		"Disabling notifier on HCA %d irq %d\n",
 		dd->unit,
 		m->msix.vector);
-	irq_set_affinity_notifier(
-		m->msix.vector,
-		NULL);
+	irq_release_affinity_notifier(m->notifier);
 	m->notifier = NULL;
 }

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 698ad05..c1e227c 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -203,7 +203,7 @@ static inline int check_wakeup_irqs(void) { return 0; }
  * struct irq_affinity_notify - context for notification of IRQ affinity changes
  * @irq:	Interrupt to which notification applies
  * @kref:	Reference count, for internal use
- * @work:	Work item, for internal use
+ * @list:	Add to the notifier list, for internal use
  * @notify:	Function to be called on change. This will be
  *		called in process context.
  * @release:	Function to be called on release. This will be
@@ -214,7 +214,7 @@ static inline int check_wakeup_irqs(void) { return 0; }
 struct irq_affinity_notify {
 	unsigned int irq;
 	struct kref kref;
-	struct work_struct work;
+	struct list_head list;
 	void (*notify)(struct irq_affinity_notify *, const cpumask_t *mask);
 	void (*release)(struct kref *ref);
 };
@@ -265,6 +265,8 @@ extern int irq_set_affinity_hint(unsigned int irq, const struct cpumask *m);
 extern int
 irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify);
+extern int
+irq_release_affinity_notifier(struct irq_affinity_notify *notify);

 #else /* CONFIG_SMP */
@@ -295,6 +297,12 @@ irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify)
 {
 	return 0;
 }
+
+static inline int
+irq_release_affinity_notifier(struct irq_affinity_notify *notify)
+{
+	return 0;
+}
 #endif /* CONFIG_SMP */

 /*
diff --git a/include/linux/irq.h b/include/linux/irq.h
index 62af592..2634a48 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include

 #include
 #include
diff --git a/include/linux/irqdesc.h b/include/linux/irqdesc.h
index 472c021..10d8155 100644
--- a/include/linux/irqdesc.h
+++ b/include/linux/irqdesc.h
@@ -31,7 +31,7 @@ struct irq_desc;
  * @threads_handled_last: comparator field for deferred spurious detection of theraded handlers
  * @lock:		locking for SMP
  * @affinity_hint:	hint to user space for preferred irq affinity
- * @affinity_notify:	context for notification of affinity changes
+ * @affinity_notify:	list head for notification of affinity changes
  * @pending_mask:	pending rebalanced interrupts
  * @threads_oneshot:	bitfield to handle shared oneshot threads
  * @threads_active:	number of irqaction threads currently running
@@ -60,7 +60,8 @@ struct irq_desc {
 	struct cpumask		*percpu_enabled;
 #ifdef CONFIG_SMP
 	const struct cpumask	*affinity_hint;
-	struct irq_affinity_notify *affinity_notify;
+	struct list_head	affinity_notify;
+	struct work_struct	affinity_work;
 #ifdef CONFIG_GENERIC_PENDING_IRQ
 	cpumask_var_t		pending_mask;
 #endif
diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
index 1487a12..c95e1f3 100644
--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
@@ -91,6 +91,7 @@ static void desc_set_defaults(unsigned int irq, struct irq_desc *desc, int node,
 	for_each_possible_cpu(cpu)
 		*per_cpu_ptr(desc->kstat_irqs, cpu) = 0;
 	desc_smp_init(desc, node);
+	INIT_LIST_HEAD(&desc->affinity_notify);
 }

 int nr_irqs = NR_IRQS;
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 88657d7..de5008b4 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -209,10 +209,9 @@ int irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask,
 		irq_copy_pending(desc, mask);
 	}

-	if (desc->affinity_notify) {
-		kref_get(&desc->affinity_notify->kref);
-		schedule_work(&desc->affinity_notify->work);
-	}
+	if (!list_empty(&desc->affinity_notify))
+		schedule_work(&desc->affinity_work);
+
 	irqd_set(data, IRQD_AFFINITY_SET);

 	return ret;
@@ -248,14 +247,14 @@ EXPORT_SYMBOL_GPL(irq_set_affinity_hint);

 static void irq_affinity_notify(struct work_struct *work)
 {
-	struct irq_affinity_notify *notify =
-		container_of(work, struct irq_affinity_notify, work);
-	struct irq_desc *desc = irq_to_desc(notify->irq);
+	struct irq_desc *desc =
+		container_of(work, struct irq_desc, affinity_work);
 	cpumask_var_t cpumask;
 	unsigned long flags;
+	struct irq_affinity_notify *notify;

 	if (!desc || !alloc_cpumask_var(&cpumask, GFP_KERNEL))
-		goto out;
+		return;

 	raw_spin_lock_irqsave(&desc->lock, flags);
 	if (irq_move_pending(&desc->irq_data))
@@ -264,11 +263,14 @@ static void irq_affinity_notify(struct work_struct *work)
 		cpumask_copy(cpumask, desc->irq_data.affinity);
 	raw_spin_unlock_irqrestore(&desc->lock, flags);

-	notify->notify(notify, cpumask);
+	list_for_each_entry(notify, &desc->affinity_notify, list) {
+		if (!kref_get_unless_zero(&notify->kref))
+			continue;
+		notify->notify(notify, cpumask);
+		kref_put(&notify->kref, notify->release);
+	}

 	free_cpumask_var(cpumask);
-out:
-	kref_put(&notify->kref, notify->release);
 }

 /**
@@ -286,8 +288,6 @@ int
 irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify)
 {
 	struct irq_desc *desc = irq_to_desc(irq);
-	struct irq_affinity_notify *old_notify;
-	unsigned long flags;

 	/* The release function is promised process context */
 	might_sleep();
@@ -295,25 +295,37 @@ irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify)
 	if (!desc)
 		return -EINVAL;

-	/* Complete initialisation of *notify */
-	if (notify) {
-		notify->irq = irq;
-		kref_init(&notify->kref);
-		INIT_WORK(&notify->work, irq_affinity_notify);
+	if (!notify) {
+		WARN(1, "%s: Use irq_release_affinity_notifier instead.\n",
+			__func__);
+		return -EINVAL;
 	}

-	raw_spin_lock_irqsave(&desc->lock, flags);
-	old_notify = desc->affinity_notify;
-	desc->affinity_notify = notify;
-	raw_spin_unlock_irqrestore(&desc->lock, flags);
-
-	if (old_notify)
-		kref_put(&old_notify->kref, old_notify->release);
+	notify->irq = irq;
+	kref_init(&notify->kref);
+	INIT_LIST_HEAD(&notify->list);
+	list_add(&notify->list, &desc->affinity_notify);

 	return 0;
 }
 EXPORT_SYMBOL_GPL(irq_set_affinity_notifier);

+/**
+ * irq_release_affinity_notifier - Remove us from notifications
+ * @notify:	Context for notification
+ */
+int irq_release_affinity_notifier(struct irq_affinity_notify *notify)
+{
+	if (!notify)
+		return -EINVAL;
+
+	list_del(&notify->list);
+	kref_put(&notify->kref, notify->release);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(irq_release_affinity_notifier);
+
 #ifndef CONFIG_AUTO_IRQ_AFFINITY
 /*
  * Generic version of the affinity autoselector.
@@ -348,6 +360,8 @@ setup_affinity(unsigned int irq, struct irq_desc *desc, struct cpumask *mask)
 		if (cpumask_intersects(mask, nodemask))
 			cpumask_and(mask, mask, nodemask);
 	}
+	INIT_LIST_HEAD(&desc->affinity_notify);
+	INIT_WORK(&desc->affinity_work, irq_affinity_notify);
 	irq_do_set_affinity(&desc->irq_data, mask, false);
 	return 0;
 }
@@ -1418,11 +1432,6 @@ void free_irq(unsigned int irq, void *dev_id)
 	if (!desc || WARN_ON(irq_settings_is_per_cpu_devid(desc)))
 		return;

-#ifdef CONFIG_SMP
-	if (WARN_ON(desc->affinity_notify))
-		desc->affinity_notify = NULL;
-#endif
-
 	chip_bus_lock(desc);
 	kfree(__free_irq(irq, dev_id));
 	chip_bus_sync_unlock(desc);
diff --git a/lib/cpu_rmap.c b/lib/cpu_rmap.c
index 4f134d8..0c8da50 100644
--- a/lib/cpu_rmap.c
+++ b/lib/cpu_rmap.c
@@ -235,7 +235,7 @@ void free_irq_cpu_rmap(struct cpu_rmap *rmap)

 	for (index = 0; index < rmap->used; index++) {
 		glue = rmap->obj[index];
-		irq_set_affinity_notifier(glue->notify.irq, NULL);
+		irq_release_affinity_notifier(&glue->notify);
 	}

 	cpu_rmap_put(rmap);