From patchwork Fri Dec 6 14:35:03 2019
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 180924
From: John Garry <john.garry@huawei.com>
Subject: [PATCH RFC 0/1] Threaded handler uses irq affinity for when the interrupt is managed
Date: Fri, 6 Dec 2019 22:35:03 +0800
Message-ID: <1575642904-58295-1-git-send-email-john.garry@huawei.com>

Hi,

As mentioned in [0], we are experiencing a scenario where data throughput
can be limited by a single CPU being fully consumed handling both the hard
and threaded parts of a managed interrupt, while throughput could be
improved by allowing another CPU to handle the threaded part. The same
link also includes, further into the discussion, some CPU load figures for
the same change as in the patch here.

As some more background, commit cbf8699996a6 ("genirq: Let irq thread
follow the effective hard irq affinity") enforced that the threaded and
hard parts be kept on the same CPU, on the basis that the threaded part
should not be allowed to stray from the CPU of the hard handler. Again in
[0], Thomas said that it could be made optional whether we allow the full
irq affinity mask to be used; what that option should be based on, I am
not sure. Ming Lei said it would be sensible to do it when the interrupt
is managed, so that is the basis of this change.

Aside from this, it is worth noting that there has been another discussion
on CPU lockup from relentless handling of hard interrupts [1]. Using
threaded interrupts was discussed there but seemingly rejected due to too
much context switching hurting performance. The conclusion of that
discussion appeared to be to use IRQ polling instead, but I have seen no
recent update.

[0] https://lore.kernel.org/lkml/e0e9478e-62a5-ca24-3b12-58f7d056383e@huawei.com/
[1] https://lore.kernel.org/lkml/CACVXFVPCiTU0mtXKS0fyMccPXN6hAdZNHv6y-f8-tz=FE=BV=g@mail.gmail.com/

John Garry (1):
  genirq: Make threaded handler use irq affinity for managed interrupt

 kernel/irq/manage.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

-- 
2.17.1
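
For reference, the idea amounts to roughly the following in
irq_thread_check_affinity() in kernel/irq/manage.c. This is only a sketch
of the approach described above, not the patch itself:

	if (cpumask_available(desc->irq_common_data.affinity)) {
		const struct cpumask *m;

		/*
		 * For a managed interrupt, let the irq thread follow the
		 * full irq affinity mask rather than the effective mask,
		 * so the threaded part may run on a CPU other than the
		 * one servicing the hard interrupt.
		 */
		if (irqd_affinity_is_managed(&desc->irq_data))
			m = desc->irq_common_data.affinity;
		else
			m = irq_data_get_effective_affinity_mask(&desc->irq_data);
		cpumask_copy(mask, m);
	} else {
		valid = false;
	}

For non-managed interrupts the behaviour from cbf8699996a6 is kept
unchanged; only managed interrupts pick up the wider mask.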