From patchwork Tue Nov 3 20:35:50 2020
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 316822
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Bob Peterson, Andreas Gruenbacher
Subject: [PATCH 5.9 299/391] gfs2: Only access gl_delete for iopen glocks
Date: Tue, 3 Nov 2020 21:35:50 +0100
Message-Id: <20201103203407.257675059@linuxfoundation.org>
In-Reply-To: <20201103203348.153465465@linuxfoundation.org>
References: <20201103203348.153465465@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Bob Peterson

commit 2ffed5290b3bff7562d29fd06621be4705704242 upstream.

Only initialize gl_delete for iopen glocks, but more importantly, only
access it for iopen glocks in flush_delete_work: flush_delete_work is
called for different types of glocks including rgrp glocks, and those
use gl_vm, which is in a union with gl_delete.  Without this fix, we'll
end up clobbering gl_vm, which results in general memory corruption.
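To make the failure mode concrete, here is a minimal userspace sketch of
the union aliasing described above.  It is not the real struct gfs2_glock
from fs/gfs2/incore.h; fake_glock, fake_delayed_work, noop_work, and the
FAKE_LM_TYPE_* constants are invented stand-ins.  The point it illustrates
is that because gl_delete and gl_vm share storage, any code path that
touches gl_delete on an rgrp glock overwrites the bytes holding gl_vm.

/*
 * Hypothetical, simplified stand-in for the gfs2 glock union -- for
 * illustration only, not the kernel's definitions.
 */
#include <stdio.h>

/* Stand-in for struct delayed_work; the real one is larger. */
struct fake_delayed_work {
	unsigned long pending;
	void (*func)(struct fake_delayed_work *);
};

/* Simplified glock: gl_delete (iopen) and gl_vm (rgrp) share storage. */
struct fake_glock {
	int ln_type;                             /* which glock type */
	union {
		struct fake_delayed_work gl_delete;  /* iopen glocks only */
		struct {                             /* rgrp glocks only  */
			unsigned long start;
			unsigned long end;
		} gl_vm;
	};
};

#define FAKE_LM_TYPE_RGRP  3
#define FAKE_LM_TYPE_IOPEN 5

static void noop_work(struct fake_delayed_work *w) { (void)w; }

int main(void)
{
	struct fake_glock rgrp = { .ln_type = FAKE_LM_TYPE_RGRP };

	/* An rgrp glock legitimately uses gl_vm... */
	rgrp.gl_vm.start = 4096;
	rgrp.gl_vm.end   = 8192;

	/*
	 * ...but if code that runs for *all* glock types initializes or
	 * cancels gl_delete (as the pre-patch INIT_DELAYED_WORK and
	 * flush_delete_work paths did), it writes over the same bytes
	 * that hold gl_vm.
	 */
	rgrp.gl_delete.pending = 0;
	rgrp.gl_delete.func = noop_work;

	printf("gl_vm.start = %lu, gl_vm.end = %lu (no longer the stored values)\n",
	       rgrp.gl_vm.start, rgrp.gl_vm.end);
	return 0;
}

Built with any C11 compiler, this prints start/end values that no longer
match what was stored, mirroring the corruption the patch below avoids by
checking ln_type before touching gl_delete.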
Fixes: a0e3cc65fa29 ("gfs2: Turn gl_delete into a delayed work")
Cc: stable@vger.kernel.org # v5.8+
Signed-off-by: Bob Peterson
Signed-off-by: Andreas Gruenbacher
Signed-off-by: Greg Kroah-Hartman
---
 fs/gfs2/glock.c |   11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

--- a/fs/gfs2/glock.c
+++ b/fs/gfs2/glock.c
@@ -1054,7 +1054,8 @@ int gfs2_glock_get(struct gfs2_sbd *sdp,
 	gl->gl_object = NULL;
 	gl->gl_hold_time = GL_GLOCK_DFT_HOLD;
 	INIT_DELAYED_WORK(&gl->gl_work, glock_work_func);
-	INIT_DELAYED_WORK(&gl->gl_delete, delete_work_func);
+	if (gl->gl_name.ln_type == LM_TYPE_IOPEN)
+		INIT_DELAYED_WORK(&gl->gl_delete, delete_work_func);
 
 	mapping = gfs2_glock2aspace(gl);
 	if (mapping) {
@@ -1906,9 +1907,11 @@ bool gfs2_delete_work_queued(const struc
 
 static void flush_delete_work(struct gfs2_glock *gl)
 {
-	if (cancel_delayed_work(&gl->gl_delete)) {
-		queue_delayed_work(gfs2_delete_workqueue,
-				   &gl->gl_delete, 0);
+	if (gl->gl_name.ln_type == LM_TYPE_IOPEN) {
+		if (cancel_delayed_work(&gl->gl_delete)) {
+			queue_delayed_work(gfs2_delete_workqueue,
+					   &gl->gl_delete, 0);
+		}
 	}
 	gfs2_glock_queue_work(gl, 0);
 }