From patchwork Wed May 4 16:47:21 2022
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 569618
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Daniel Starke
Subject: [PATCH 5.17 203/225] tty: n_gsm: fix decoupled mux resource
Date: Wed, 4 May 2022 18:47:21 +0200
Message-Id: <20220504153128.246294160@linuxfoundation.org>
In-Reply-To: <20220504153110.096069935@linuxfoundation.org>
References: <20220504153110.096069935@linuxfoundation.org>
X-Mailing-List: stable@vger.kernel.org

From: Daniel Starke

commit 1ec92e9742774bf42614fceea3bf6b50c9409225 upstream.

The active mux instances are managed in the gsm_mux array and via the
mux_get() and mux_put() functions separately. This gives a very loose
coupling between the actual instance and the gsm_mux array which manages
it. It also results in unnecessary locking, which makes the code prone to
failures, and it creates a race condition if more than the maximum number
of mux instances are requested while the user changes the parameters of an
active instance. The user may lose ownership of the current mux instance
in this case.

Fix this by moving the gsm_mux array handling to the mux allocation and
deallocation functions.
Fixes: e1eaea46bb40 ("tty: n_gsm line discipline")
Cc: stable@vger.kernel.org
Signed-off-by: Daniel Starke
Link: https://lore.kernel.org/r/20220414094225.4527-3-daniel.starke@siemens.com
Signed-off-by: Greg Kroah-Hartman
---
 drivers/tty/n_gsm.c | 63 +++++++++++++++++++++++++++++++---------------------
 1 file changed, 38 insertions(+), 25 deletions(-)

--- a/drivers/tty/n_gsm.c
+++ b/drivers/tty/n_gsm.c
@@ -2136,18 +2136,6 @@ static void gsm_cleanup_mux(struct gsm_m
 	/* Finish outstanding timers, making sure they are done */
 	del_timer_sync(&gsm->t2_timer);
 
-	spin_lock(&gsm_mux_lock);
-	for (i = 0; i < MAX_MUX; i++) {
-		if (gsm_mux[i] == gsm) {
-			gsm_mux[i] = NULL;
-			break;
-		}
-	}
-	spin_unlock(&gsm_mux_lock);
-	/* open failed before registering => nothing to do */
-	if (i == MAX_MUX)
-		return;
-
 	/* Free up any link layer users */
 	for (i = 0; i < NUM_DLCI; i++)
 		if (gsm->dlci[i])
@@ -2171,7 +2159,6 @@ static void gsm_cleanup_mux(struct gsm_m
 static int gsm_activate_mux(struct gsm_mux *gsm)
 {
 	struct gsm_dlci *dlci;
-	int i = 0;
 
 	timer_setup(&gsm->t2_timer, gsm_control_retransmit, 0);
 	init_waitqueue_head(&gsm->event);
@@ -2183,18 +2170,6 @@ static int gsm_activate_mux(struct gsm_m
 	else
 		gsm->receive = gsm1_receive;
 
-	spin_lock(&gsm_mux_lock);
-	for (i = 0; i < MAX_MUX; i++) {
-		if (gsm_mux[i] == NULL) {
-			gsm->num = i;
-			gsm_mux[i] = gsm;
-			break;
-		}
-	}
-	spin_unlock(&gsm_mux_lock);
-	if (i == MAX_MUX)
-		return -EBUSY;
-
 	dlci = gsm_dlci_alloc(gsm, 0);
 	if (dlci == NULL)
 		return -ENOMEM;
@@ -2210,6 +2185,15 @@ static int gsm_activate_mux(struct gsm_m
  */
 static void gsm_free_mux(struct gsm_mux *gsm)
 {
+	int i;
+
+	for (i = 0; i < MAX_MUX; i++) {
+		if (gsm == gsm_mux[i]) {
+			gsm_mux[i] = NULL;
+			break;
+		}
+	}
+	mutex_destroy(&gsm->mutex);
 	kfree(gsm->txframe);
 	kfree(gsm->buf);
 	kfree(gsm);
@@ -2229,12 +2213,20 @@ static void gsm_free_muxr(struct kref *r
 
 static inline void mux_get(struct gsm_mux *gsm)
 {
+	unsigned long flags;
+
+	spin_lock_irqsave(&gsm_mux_lock, flags);
 	kref_get(&gsm->ref);
+	spin_unlock_irqrestore(&gsm_mux_lock, flags);
 }
 
 static inline void mux_put(struct gsm_mux *gsm)
 {
+	unsigned long flags;
+
+	spin_lock_irqsave(&gsm_mux_lock, flags);
 	kref_put(&gsm->ref, gsm_free_muxr);
+	spin_unlock_irqrestore(&gsm_mux_lock, flags);
 }
 
 static inline unsigned int mux_num_to_base(struct gsm_mux *gsm)
@@ -2255,6 +2247,7 @@ static inline unsigned int mux_line_to_n
 
 static struct gsm_mux *gsm_alloc_mux(void)
 {
+	int i;
 	struct gsm_mux *gsm = kzalloc(sizeof(struct gsm_mux), GFP_KERNEL);
 	if (gsm == NULL)
 		return NULL;
@@ -2284,6 +2277,26 @@ static struct gsm_mux *gsm_alloc_mux(voi
 	gsm->mtu = 64;
 	gsm->dead = true;	/* Avoid early tty opens */
 
+	/* Store the instance to the mux array or abort if no space is
+	 * available.
+	 */
+	spin_lock(&gsm_mux_lock);
+	for (i = 0; i < MAX_MUX; i++) {
+		if (!gsm_mux[i]) {
+			gsm_mux[i] = gsm;
+			gsm->num = i;
+			break;
+		}
+	}
+	spin_unlock(&gsm_mux_lock);
+	if (i == MAX_MUX) {
+		mutex_destroy(&gsm->mutex);
+		kfree(gsm->txframe);
+		kfree(gsm->buf);
+		kfree(gsm);
+		return NULL;
+	}
+
 	return gsm;
 }
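
As a rough illustration of the pattern the patch moves to (not part of the
patch itself): the slot table is claimed inside allocation and released by
the final put, so a table entry exists exactly as long as its object. The
userspace sketch below uses invented names (demo_mux, demo_table,
demo_alloc, demo_put), a pthread mutex standing in for gsm_mux_lock and a
plain counter standing in for the kref; it is a minimal sketch, not the
driver's code.

/* Minimal userspace sketch of alloc-time slot registration. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_SLOTS 4

struct demo_mux {
	int num;	/* index in demo_table, like gsm->num */
	int ref;	/* simplified stand-in for the kref */
};

static struct demo_mux *demo_table[MAX_SLOTS];		/* like gsm_mux[] */
static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;

/* Allocate and claim a slot atomically; fail outright if the table is full. */
static struct demo_mux *demo_alloc(void)
{
	struct demo_mux *m = calloc(1, sizeof(*m));
	int i;

	if (!m)
		return NULL;
	m->ref = 1;

	pthread_mutex_lock(&demo_lock);
	for (i = 0; i < MAX_SLOTS; i++) {
		if (!demo_table[i]) {
			demo_table[i] = m;
			m->num = i;
			break;
		}
	}
	pthread_mutex_unlock(&demo_lock);

	if (i == MAX_SLOTS) {	/* no free slot: undo the allocation */
		free(m);
		return NULL;
	}
	return m;
}

/* Drop the reference; the last put releases the slot and the memory together. */
static void demo_put(struct demo_mux *m)
{
	pthread_mutex_lock(&demo_lock);
	if (--m->ref == 0) {
		demo_table[m->num] = NULL;
		pthread_mutex_unlock(&demo_lock);
		free(m);
		return;
	}
	pthread_mutex_unlock(&demo_lock);
}

int main(void)
{
	struct demo_mux *m = demo_alloc();

	if (!m)
		return 1;
	printf("claimed slot %d\n", m->num);
	demo_put(m);	/* releases both the slot and the memory */
	return 0;
}

Because allocation either claims a slot or fails, a caller can never be left
holding an instance that has silently lost its table entry, which is the
ownership race described in the commit message above.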