From patchwork Mon Feb 10 12:31:03 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 231725
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Herbert Xu,
 Daniel Jordan
Subject: [PATCH 5.4 107/309] padata: Remove broken queue flushing
Date: Mon, 10 Feb 2020 04:31:03 -0800
Message-Id: <20200210122416.640349969@linuxfoundation.org>
X-Mailer: git-send-email 2.25.0
In-Reply-To: <20200210122406.106356946@linuxfoundation.org>
References: <20200210122406.106356946@linuxfoundation.org>
User-Agent: quilt/0.66
Sender: stable-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org

From: Herbert Xu

commit 07928d9bfc81640bab36f5190e8725894d93b659 upstream.

The function padata_flush_queues is fundamentally broken because it
cannot force padata users to complete the request that is underway.
IOW padata has to passively wait for the completion of any outstanding
work.

As it stands, flushing is used in two places. Its use in padata_stop
is simply unnecessary because nothing depends on the queues being
flushed afterwards. The other use, in padata_replace, is more
substantial, as we depend on it to free the old pd structure.

This patch instead uses pd->refcnt to free the pd structure
dynamically once all outstanding requests have completed.
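To make the new lifetime rule concrete: pd->refcnt now starts at 1 for
the base reference held by the padata instance itself, each in-flight
request holds a reference (taken at submission in padata_do_parallel,
which this patch does not change), the serial worker drops the
completed requests' references in one batched subtraction, and
padata_replace drops the base reference. Whichever side brings the
count to zero frees the pd. The sketch below is a minimal userspace
analogue of that scheme, using C11 atomics in place of the kernel's
atomic_t; apart from the pd/refcnt/padata_free_pd names taken from the
patch, the harness (submit_request, serial_worker, replace_pd, main)
is purely illustrative.

	/* Userspace sketch of the refcount-based pd lifetime. */
	#include <stdatomic.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct parallel_data {
		atomic_int refcnt;
	};

	static void padata_free_pd(struct parallel_data *pd)
	{
		printf("pd %p freed\n", (void *)pd);
		free(pd);
	}

	static struct parallel_data *padata_alloc_pd(void)
	{
		struct parallel_data *pd = malloc(sizeof(*pd));

		/* Start at 1: the instance holds the base reference. */
		atomic_init(&pd->refcnt, 1);
		return pd;
	}

	/* Each submitted request takes a reference. */
	static void submit_request(struct parallel_data *pd)
	{
		atomic_fetch_add(&pd->refcnt, 1);
	}

	/*
	 * Serial-worker analogue of atomic_sub_and_test(): drop the
	 * references of 'completed' finished requests in one batch
	 * and free the pd if that was the last reference.
	 */
	static void serial_worker(struct parallel_data *pd, int completed)
	{
		if (atomic_fetch_sub(&pd->refcnt, completed) == completed)
			padata_free_pd(pd);
	}

	/* padata_replace() analogue: drop the base reference. */
	static void replace_pd(struct parallel_data *pd)
	{
		if (atomic_fetch_sub(&pd->refcnt, 1) == 1)
			padata_free_pd(pd);
	}

	int main(void)
	{
		struct parallel_data *pd = padata_alloc_pd();

		submit_request(pd);
		submit_request(pd);
		replace_pd(pd);       /* two requests still in flight */
		serial_worker(pd, 2); /* last reference dropped: freed */
		return 0;
	}

Batching matters for the same reason it does in the patch: the serial
worker only counts completions while softirqs are disabled, then does
a single atomic subtraction after local_bh_enable(), so padata_free_pd
is never called from the bh-disabled section and each batch costs one
atomic operation instead of one per request.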
Fixes: 2b73b07ab8a4 ("padata: Flush the padata queues actively")
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu
Reviewed-by: Daniel Jordan
Signed-off-by: Herbert Xu
Signed-off-by: Greg Kroah-Hartman
---
 kernel/padata.c |   43 ++++++++++++-------------------------------
 1 file changed, 12 insertions(+), 31 deletions(-)

--- a/kernel/padata.c
+++ b/kernel/padata.c
@@ -35,6 +35,8 @@
 
 #define MAX_OBJ_NUM 1000
 
+static void padata_free_pd(struct parallel_data *pd);
+
 static int padata_index_to_cpu(struct parallel_data *pd, int cpu_index)
 {
 	int cpu, target_cpu;
@@ -283,6 +285,7 @@ static void padata_serial_worker(struct
 	struct padata_serial_queue *squeue;
 	struct parallel_data *pd;
 	LIST_HEAD(local_list);
+	int cnt;
 
 	local_bh_disable();
 	squeue = container_of(serial_work, struct padata_serial_queue, work);
@@ -292,6 +295,8 @@ static void padata_serial_worker(struct
 	list_replace_init(&squeue->serial.list, &local_list);
 	spin_unlock(&squeue->serial.lock);
 
+	cnt = 0;
+
 	while (!list_empty(&local_list)) {
 		struct padata_priv *padata;
 
@@ -301,9 +306,12 @@ static void padata_serial_worker(struct
 		list_del_init(&padata->list);
 
 		padata->serial(padata);
-		atomic_dec(&pd->refcnt);
+		cnt++;
 	}
 	local_bh_enable();
+
+	if (atomic_sub_and_test(cnt, &pd->refcnt))
+		padata_free_pd(pd);
 }
 
 /**
@@ -440,7 +448,7 @@ static struct parallel_data *padata_allo
 	padata_init_squeues(pd);
 	atomic_set(&pd->seq_nr, -1);
 	atomic_set(&pd->reorder_objects, 0);
-	atomic_set(&pd->refcnt, 0);
+	atomic_set(&pd->refcnt, 1);
 	spin_lock_init(&pd->lock);
 	pd->cpu = cpumask_first(pd->cpumask.pcpu);
 	INIT_WORK(&pd->reorder_work, invoke_padata_reorder);
@@ -466,29 +474,6 @@ static void padata_free_pd(struct parall
 	kfree(pd);
 }
 
-/* Flush all objects out of the padata queues. */
-static void padata_flush_queues(struct parallel_data *pd)
-{
-	int cpu;
-	struct padata_parallel_queue *pqueue;
-	struct padata_serial_queue *squeue;
-
-	for_each_cpu(cpu, pd->cpumask.pcpu) {
-		pqueue = per_cpu_ptr(pd->pqueue, cpu);
-		flush_work(&pqueue->work);
-	}
-
-	if (atomic_read(&pd->reorder_objects))
-		padata_reorder(pd);
-
-	for_each_cpu(cpu, pd->cpumask.cbcpu) {
-		squeue = per_cpu_ptr(pd->squeue, cpu);
-		flush_work(&squeue->work);
-	}
-
-	BUG_ON(atomic_read(&pd->refcnt) != 0);
-}
-
 static void __padata_start(struct padata_instance *pinst)
 {
 	pinst->flags |= PADATA_INIT;
@@ -502,10 +487,6 @@ static void __padata_stop(struct padata_
 	pinst->flags &= ~PADATA_INIT;
 
 	synchronize_rcu();
-
-	get_online_cpus();
-	padata_flush_queues(pinst->pd);
-	put_online_cpus();
 }
 
 /* Replace the internal control structure with a new one. */
@@ -526,8 +507,8 @@ static void padata_replace(struct padata
 	if (!cpumask_equal(pd_old->cpumask.cbcpu, pd_new->cpumask.cbcpu))
 		notification_mask |= PADATA_CPU_SERIAL;
 
-	padata_flush_queues(pd_old);
-	padata_free_pd(pd_old);
+	if (atomic_dec_and_test(&pd_old->refcnt))
+		padata_free_pd(pd_old);
 
 	if (notification_mask)
 		blocking_notifier_call_chain(&pinst->cpumask_change_notifier,