From patchwork Tue May 26 18:52:50 2020
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 225411
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Mathias Krause,
 Herbert Xu, Daniel Jordan, Sasha Levin
Subject: [PATCH 4.9 21/64] padata: set cpu_index of unused CPUs to -1
Date: Tue, 26 May 2020 20:52:50 +0200
Message-Id: <20200526183919.699140377@linuxfoundation.org>
In-Reply-To: <20200526183913.064413230@linuxfoundation.org>
References: <20200526183913.064413230@linuxfoundation.org>
X-Mailing-List: stable@vger.kernel.org

From: Mathias Krause

[ Upstream commit 1bd845bcb41d5b7f83745e0cb99273eb376f2ec5 ]

The parallel queue per-cpu data structure gets initialized only for CPUs
in the 'pcpu' CPU mask set. This is not sufficient as the reorder timer
may run on a different CPU and might wrongly decide it's the target CPU
for the next reorder item as per-cpu memory gets memset(0) and we might
be waiting for the first CPU in cpumask.pcpu, i.e. cpu_index 0.

Make the '__this_cpu_read(pd->pqueue->cpu_index) == next_queue->cpu_index'
compare in padata_get_next() fail in this case by initializing the
cpu_index member of all per-cpu parallel queues.
Use -1 for unused ones.

Signed-off-by: Mathias Krause
Signed-off-by: Herbert Xu
Signed-off-by: Daniel Jordan
Signed-off-by: Sasha Levin
---
 kernel/padata.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/kernel/padata.c b/kernel/padata.c
index 693536efccf9..52a1d3fd13b5 100644
--- a/kernel/padata.c
+++ b/kernel/padata.c
@@ -462,8 +462,14 @@ static void padata_init_pqueues(struct parallel_data *pd)
 	struct padata_parallel_queue *pqueue;
 
 	cpu_index = 0;
-	for_each_cpu(cpu, pd->cpumask.pcpu) {
+	for_each_possible_cpu(cpu) {
 		pqueue = per_cpu_ptr(pd->pqueue, cpu);
+
+		if (!cpumask_test_cpu(cpu, pd->cpumask.pcpu)) {
+			pqueue->cpu_index = -1;
+			continue;
+		}
+
 		pqueue->pd = pd;
 		pqueue->cpu_index = cpu_index;
 		cpu_index++;
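
For reference, a minimal user-space sketch of the failure mode described in the
commit message. It is not kernel code: the fixed-size pqueue array, the NR_CPUS
value of 4 and the this_cpu_owns_next() helper are illustrative stand-ins, and
only the cpu_index comparison mirrors the
'__this_cpu_read(pd->pqueue->cpu_index) == next_queue->cpu_index' check in
padata_get_next().

/* Simplified user-space model of the reorder check fixed by this patch.
 * The per-CPU array, NR_CPUS value and main() driver are illustrative only.
 */
#include <stdio.h>
#include <string.h>

#define NR_CPUS 4

struct padata_parallel_queue {
	int cpu_index;
};

/* Stand-in for the real per-cpu pqueue storage. */
static struct padata_parallel_queue pqueue[NR_CPUS];

/* Models the compare in padata_get_next(): does the CPU running the
 * reorder believe it owns the next reorder item?
 */
static int this_cpu_owns_next(int this_cpu,
			      const struct padata_parallel_queue *next_queue)
{
	return pqueue[this_cpu].cpu_index == next_queue->cpu_index;
}

int main(void)
{
	/* Per-cpu memory starts out zeroed, as the commit message notes. */
	memset(pqueue, 0, sizeof(pqueue));

	/* Say only CPUs 0 and 1 are in cpumask.pcpu; they get indices 0 and 1. */
	pqueue[0].cpu_index = 0;
	pqueue[1].cpu_index = 1;

	/* Before the fix: CPU 3 is unused, but its zeroed cpu_index is 0, so a
	 * reorder timer running there wrongly matches CPU 0's queue.
	 */
	printf("unused CPU 3 matches CPU 0's queue: %d\n",
	       this_cpu_owns_next(3, &pqueue[0]));	/* prints 1: the bug */

	/* After the fix: unused CPUs get cpu_index = -1 and the compare fails. */
	pqueue[2].cpu_index = -1;
	pqueue[3].cpu_index = -1;
	printf("unused CPU 3 matches CPU 0's queue: %d\n",
	       this_cpu_owns_next(3, &pqueue[0]));	/* prints 0: fixed */

	return 0;
}

The first printf reports a false match for the unused CPU; the second shows the
-1 sentinel making the compare fail, which is what the cpumask_test_cpu() branch
added in the hunk above guarantees for every possible CPU.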