From patchwork Tue Sep 29 10:59:03 2020
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 290812
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman , stable@vger.kernel.org, James Morse , Catalin Marinas , Sasha Levin
Subject: [PATCH 4.19 092/245] firmware: arm_sdei: Use cpus_read_lock() to avoid races with cpuhp
Date: Tue, 29 Sep 2020 12:59:03 +0200
Message-Id: <20200929105951.467556850@linuxfoundation.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20200929105946.978650816@linuxfoundation.org>
References: <20200929105946.978650816@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: James Morse

[ Upstream commit 54f529a6806c9710947a4f2cdc15d6ea54121ccd ]

SDEI has private events that need registering and enabling on each CPU.
CPUs can come and go while we are trying to do this. SDEI tries to avoid
these problems by setting the reregister flag before the register call,
so any CPUs that come online register the event too.

Sticking plaster like this doesn't work, as if the register call fails,
a CPU that subsequently comes online will register the event before
reregister is cleared.

Take cpus_read_lock() around the register and enable calls.
We don't want surprise CPUs to do the wrong thing if they race with
these calls failing.

Signed-off-by: James Morse
Signed-off-by: Catalin Marinas
Signed-off-by: Sasha Levin
---
 drivers/firmware/arm_sdei.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/drivers/firmware/arm_sdei.c b/drivers/firmware/arm_sdei.c
index 05b528c7ed8fd..e809f4d9a9e93 100644
--- a/drivers/firmware/arm_sdei.c
+++ b/drivers/firmware/arm_sdei.c
@@ -410,14 +410,19 @@ int sdei_event_enable(u32 event_num)
 		return -ENOENT;
 	}
 
-	spin_lock(&sdei_list_lock);
-	event->reenable = true;
-	spin_unlock(&sdei_list_lock);
+	cpus_read_lock();
 
 	if (event->type == SDEI_EVENT_TYPE_SHARED)
 		err = sdei_api_event_enable(event->event_num);
 	else
 		err = sdei_do_cross_call(_local_event_enable, event);
+
+	if (!err) {
+		spin_lock(&sdei_list_lock);
+		event->reenable = true;
+		spin_unlock(&sdei_list_lock);
+	}
+	cpus_read_unlock();
 	mutex_unlock(&sdei_events_lock);
 
 	return err;
@@ -619,21 +624,18 @@ int sdei_event_register(u32 event_num, sdei_event_callback *cb, void *arg)
 			break;
 		}
 
-		spin_lock(&sdei_list_lock);
-		event->reregister = true;
-		spin_unlock(&sdei_list_lock);
-
+		cpus_read_lock();
 		err = _sdei_event_register(event);
 		if (err) {
-			spin_lock(&sdei_list_lock);
-			event->reregister = false;
-			event->reenable = false;
-			spin_unlock(&sdei_list_lock);
-
 			sdei_event_destroy(event);
 			pr_warn("Failed to register event %u: %d\n",
 				event_num, err);
+		} else {
+			spin_lock(&sdei_list_lock);
+			event->reregister = true;
+			spin_unlock(&sdei_list_lock);
 		}
+		cpus_read_unlock();
 	} while (0);
 	mutex_unlock(&sdei_events_lock);
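
[ Editor's note, not part of the patch: below is a minimal sketch of the
locking pattern the fix relies on, for readers without the SDEI driver in
front of them. Take cpus_read_lock() around the per-CPU firmware call and
only publish the hotplug re-register flag once that call has succeeded, so
a CPU that comes online during a failed registration never acts on a stale
flag. struct my_event, my_list_lock, my_register_on_each_cpu() and
my_event_register() are hypothetical stand-ins, not SDEI symbols. ]

#include <linux/cpu.h>        /* cpus_read_lock()/cpus_read_unlock() */
#include <linux/spinlock.h>
#include <linux/printk.h>
#include <linux/types.h>

/* Hypothetical bookkeeping, standing in for struct sdei_event. */
struct my_event {
	u32 event_num;
	bool reregister;	/* read by the CPU-hotplug online callback */
};

static DEFINE_SPINLOCK(my_list_lock);

/* Assumed helper, stubbed here: would run the firmware call on each online CPU. */
static int my_register_on_each_cpu(struct my_event *event)
{
	return 0;	/* pretend the firmware call succeeded everywhere */
}

static int my_event_register(struct my_event *event)
{
	int err;

	/*
	 * Hold the hotplug read lock so no CPU can come online (and run
	 * the online callback that checks ->reregister) while we are
	 * half-way through registering the event.
	 */
	cpus_read_lock();
	err = my_register_on_each_cpu(event);
	if (err) {
		pr_warn("Failed to register event %u: %d\n",
			event->event_num, err);
	} else {
		/* Advertise the event to hotplug only once it really exists. */
		spin_lock(&my_list_lock);
		event->reregister = true;
		spin_unlock(&my_list_lock);
	}
	cpus_read_unlock();

	return err;
}

[ The pre-patch code did the opposite: it set ->reregister before the
register call and tried to clear it again on failure, which is exactly the
window the commit message describes. ]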