From patchwork Mon Jan 5 14:54:55 2015
X-Patchwork-Submitter: Daniel Thompson
X-Patchwork-Id: 42743
From: Daniel Thompson
To: Thomas Gleixner, Jason Cooper
Cc: Daniel Thompson, Russell King, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, patches@linaro.org,
 linaro-kernel@lists.linaro.org, John Stultz, Sumit Semwal, Dirk Behme,
 Daniel Drake, Dmitry Pervushin, Tim Sander, Stephen Boyd, Marc Zyngier
Subject: [PATCH 3.19-rc2 v13 1/5] irqchip: gic: Optimize locking in gic_raise_softirq
Date: Mon, 5 Jan 2015 14:54:55 +0000
Message-Id: <1420469699-25350-2-git-send-email-daniel.thompson@linaro.org>
X-Mailer: git-send-email 1.9.3
In-Reply-To: <1420469699-25350-1-git-send-email-daniel.thompson@linaro.org>
References: <1415968543-29469-1-git-send-email-daniel.thompson@linaro.org>
 <1420469699-25350-1-git-send-email-daniel.thompson@linaro.org>

Currently gic_raise_softirq() is locked using irq_controller_lock. This
lock is primarily used to make register read-modify-write sequences
atomic, but gic_raise_softirq() uses it instead to ensure that the
big.LITTLE migration logic can figure out when it is safe to migrate
interrupts between physical cores.

This is sub-optimal in two closely related ways:

1. No locking at all is required on systems where the b.L switcher is
   not configured.

2. Finer grain locking can be used on systems where the b.L switcher is
   present.

This patch resolves both of the above by introducing a separate finer
grain lock and providing conditionally compiled inlines to lock/unlock
it.

Signed-off-by: Daniel Thompson
Cc: Thomas Gleixner
Cc: Jason Cooper
Cc: Russell King
Cc: Marc Zyngier
---
 drivers/irqchip/irq-gic.c | 36 +++++++++++++++++++++++++++++++++---
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
index d617ee5a3d8a..a9ed64dcc84b 100644
--- a/drivers/irqchip/irq-gic.c
+++ b/drivers/irqchip/irq-gic.c
@@ -73,6 +73,27 @@ struct gic_chip_data {
 static DEFINE_RAW_SPINLOCK(irq_controller_lock);
 
 /*
+ * This lock is used by the big.LITTLE migration code to ensure no IPIs
+ * can be pended on the old core after the map has been updated.
+ */
+#ifdef CONFIG_BL_SWITCHER
+static DEFINE_RAW_SPINLOCK(cpu_map_migration_lock);
+
+static inline void bl_migration_lock(unsigned long *flags)
+{
+	raw_spin_lock_irqsave(&cpu_map_migration_lock, *flags);
+}
+
+static inline void bl_migration_unlock(unsigned long flags)
+{
+	raw_spin_unlock_irqrestore(&cpu_map_migration_lock, flags);
+}
+#else
+static inline void bl_migration_lock(unsigned long *flags) {}
+static inline void bl_migration_unlock(unsigned long flags) {}
+#endif
+
+/*
  * The GIC mapping of CPU interfaces does not necessarily match
  * the logical CPU numbering. Let's use a mapping as returned
  * by the GIC itself.
@@ -624,7 +645,7 @@ static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
 	int cpu;
 	unsigned long flags, map = 0;
 
-	raw_spin_lock_irqsave(&irq_controller_lock, flags);
+	bl_migration_lock(&flags);
 
 	/* Convert our logical CPU mask into a physical one. */
 	for_each_cpu(cpu, mask)
@@ -639,7 +660,7 @@ static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
 	/* this always happens on GIC0 */
 	writel_relaxed(map << 16 | irq, gic_data_dist_base(&gic_data[0]) + GIC_DIST_SOFTINT);
 
-	raw_spin_unlock_irqrestore(&irq_controller_lock, flags);
+	bl_migration_unlock(flags);
 }
 #endif
 
@@ -710,8 +731,17 @@ void gic_migrate_target(unsigned int new_cpu_id)
 
 	raw_spin_lock(&irq_controller_lock);
 
-	/* Update the target interface for this logical CPU */
+	/*
+	 * Update the target interface for this logical CPU
+	 *
+	 * From the point we release the cpu_map_migration_lock any new
+	 * SGIs will be pended on the new cpu which makes the set of SGIs
+	 * pending on the old cpu static. That means we can defer the
+	 * migration until after we have released the irq_controller_lock.
+	 */
+	raw_spin_lock(&cpu_map_migration_lock);
 	gic_cpu_map[cpu] = 1 << new_cpu_id;
+	raw_spin_unlock(&cpu_map_migration_lock);
 
 	/*
 	 * Find all the peripheral interrupts targetting the current
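
For readers unfamiliar with the pattern the patch relies on, the sketch
below is a minimal user-space analogue, not kernel code: the names
HAVE_BL_SWITCHER, migration_lock(), raise_ipi() and migrate() are made
up for illustration, and pthread mutexes stand in for raw spinlocks.
It only shows the shape of the idea: lock helpers that compile away
when the feature is absent, and a lock that is held only around reads
and writes of the CPU map.

#include <pthread.h>
#include <stdio.h>

/*
 * When the switcher-like feature is compiled out, the lock helpers
 * collapse to empty functions, so the IPI-raising fast path pays no
 * locking cost at all (point 1 above).
 */
#ifdef HAVE_BL_SWITCHER
static pthread_mutex_t map_migration_lock = PTHREAD_MUTEX_INITIALIZER;
static void migration_lock(void)   { pthread_mutex_lock(&map_migration_lock); }
static void migration_unlock(void) { pthread_mutex_unlock(&map_migration_lock); }
#else
static void migration_lock(void)   { }
static void migration_unlock(void) { }
#endif

/* Logical-to-physical CPU map, analogous to gic_cpu_map[]. */
static unsigned int cpu_map[4] = { 1 << 0, 1 << 1, 1 << 2, 1 << 3 };

/* Fast path: only needs the map to stay stable while it is read. */
static void raise_ipi(int cpu, int irq)
{
	migration_lock();
	printf("IPI %d -> physical mask 0x%x\n", irq, cpu_map[cpu]);
	migration_unlock();
}

/*
 * Slow path: once the map update is published and the lock released,
 * every new IPI targets the new core, so the set of IPIs pending on
 * the old core can no longer grow.
 */
static void migrate(int cpu, unsigned int new_mask)
{
	migration_lock();
	cpu_map[cpu] = new_mask;
	migration_unlock();
}

int main(void)
{
	raise_ipi(0, 1);        /* delivered to the old core */
	migrate(0, 1u << 2);    /* retarget logical CPU 0 */
	raise_ipi(0, 1);        /* delivered to the new core */
	return 0;
}

The property mirrored from the patch is that only the paths which read
or write the map take the new lock; register read-modify-write
sequences elsewhere keep using their own lock (irq_controller_lock in
the driver).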