From patchwork Wed Aug 18 08:06:01 2021
X-Patchwork-Submitter: Xiubo Li
X-Patchwork-Id: 499050
From: xiubli@redhat.com
To: jlayton@kernel.org
Cc: idryomov@gmail.com, pdonnell@redhat.com, ceph-devel@vger.kernel.org, Xiubo Li
Subject: [PATCH 1/3] ceph: remove the capsnaps when removing the caps
Date: Wed, 18 Aug 2021 16:06:01 +0800
Message-Id: <20210818080603.195722-2-xiubli@redhat.com>
In-Reply-To: <20210818080603.195722-1-xiubli@redhat.com>
References: <20210818080603.195722-1-xiubli@redhat.com>
X-Mailing-List: ceph-devel@vger.kernel.org

From: Xiubo Li

The capsnaps hold inode references (ihold) while they are queued to be
flushed. On a forced umount the sessions are closed first, and if the
MDSes respond quickly the session connections may be closed just before
the superblock is killed and the msgr queue is flushed. In that case the
flush capsnap callback is never called, the inode references are never
dropped, and the ceph_inode_info is leaked.

URL: https://tracker.ceph.com/issues/52295
Signed-off-by: Xiubo Li
---
 fs/ceph/caps.c       | 47 +++++++++++++++++++++++++++++---------------
 fs/ceph/mds_client.c | 23 +++++++++++++++++++++-
 fs/ceph/super.h      |  3 +++
 3 files changed, 56 insertions(+), 17 deletions(-)
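
Not part of the patch itself: below is a minimal userspace sketch of the
refcount problem being fixed, using made-up toy types (toy_inode,
toy_capsnap, queue_capsnap, remove_capsnaps_toy) rather than the real
kernel/ceph APIs. Each queued capsnap pins the inode; if the flush ack
can never arrive, the cap-removal path has to walk the remaining capsnaps
and drop those references itself, otherwise the inode refcount never
reaches zero and the ceph_inode_info is leaked.

#include <stdio.h>
#include <stdlib.h>

/* Toy stand-ins for ceph_inode_info and ceph_cap_snap. */
struct toy_capsnap { struct toy_capsnap *next; };

struct toy_inode {
	int refcount;              /* like the VFS inode reference count */
	struct toy_capsnap *snaps; /* like ci->i_cap_snaps               */
};

static void toy_iput(struct toy_inode *in)
{
	if (--in->refcount == 0) {
		printf("inode freed\n");
		free(in);
	}
}

/* Queuing a capsnap for flushing takes an inode reference. */
static void queue_capsnap(struct toy_inode *in)
{
	struct toy_capsnap *cs = calloc(1, sizeof(*cs));

	cs->next = in->snaps;
	in->snaps = cs;
	in->refcount++;            /* the queued capsnap pins the inode */
}

/* Teardown: drop every queued capsnap and the reference it holds. */
static void remove_capsnaps_toy(struct toy_inode *in)
{
	while (in->snaps) {
		struct toy_capsnap *cs = in->snaps;

		in->snaps = cs->next;
		free(cs);
		toy_iput(in);      /* without this, the inode leaks */
	}
}

int main(void)
{
	struct toy_inode *in = calloc(1, sizeof(*in));

	in->refcount = 1;          /* caller's reference                     */
	queue_capsnap(in);         /* flush queued; the ack will never come  */
	remove_capsnaps_toy(in);   /* teardown path drops the queued ref     */
	toy_iput(in);              /* caller's ref: prints "inode freed"     */
	return 0;
}

Running the sketch prints "inode freed" only because remove_capsnaps_toy()
drops the queued reference first; without that call the final toy_iput()
would leave the refcount at 1, which models the leak.
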
diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
index e239f06babbc..7def99fbdca6 100644
--- a/fs/ceph/caps.c
+++ b/fs/ceph/caps.c
@@ -3663,6 +3663,34 @@ static void handle_cap_flush_ack(struct inode *inode, u64 flush_tid,
 	iput(inode);
 }
 
+/*
+ * Caller hold s_mutex and i_ceph_lock.
+ */
+void ceph_remove_capsnap(struct inode *inode, struct ceph_cap_snap *capsnap,
+			 bool *wake_ci, bool *wake_mdsc)
+{
+	struct ceph_inode_info *ci = ceph_inode(inode);
+	struct ceph_mds_client *mdsc = ceph_sb_to_client(inode->i_sb)->mdsc;
+	bool ret;
+
+	dout("removing capsnap %p, inode %p ci %p\n", capsnap, inode, ci);
+
+	WARN_ON(capsnap->dirty_pages || capsnap->writing);
+	list_del(&capsnap->ci_item);
+	ret = __detach_cap_flush_from_ci(ci, &capsnap->cap_flush);
+	if (wake_ci)
+		*wake_ci = ret;
+
+	spin_lock(&mdsc->cap_dirty_lock);
+	if (list_empty(&ci->i_cap_flush_list))
+		list_del_init(&ci->i_flushing_item);
+
+	ret = __detach_cap_flush_from_mdsc(mdsc, &capsnap->cap_flush);
+	if (wake_mdsc)
+		*wake_mdsc = ret;
+	spin_unlock(&mdsc->cap_dirty_lock);
+}
+
 /*
  * Handle FLUSHSNAP_ACK.  MDS has flushed snap data to disk and we can
  * throw away our cap_snap.
@@ -3700,23 +3728,10 @@ static void handle_cap_flushsnap_ack(struct inode *inode, u64 flush_tid,
 			     capsnap, capsnap->follows);
 		}
 	}
-	if (flushed) {
-		WARN_ON(capsnap->dirty_pages || capsnap->writing);
-		dout(" removing %p cap_snap %p follows %lld\n",
-		     inode, capsnap, follows);
-		list_del(&capsnap->ci_item);
-		wake_ci |= __detach_cap_flush_from_ci(ci, &capsnap->cap_flush);
-
-		spin_lock(&mdsc->cap_dirty_lock);
-
-		if (list_empty(&ci->i_cap_flush_list))
-			list_del_init(&ci->i_flushing_item);
-
-		wake_mdsc |= __detach_cap_flush_from_mdsc(mdsc,
-							  &capsnap->cap_flush);
-		spin_unlock(&mdsc->cap_dirty_lock);
-	}
+	if (flushed)
+		ceph_remove_capsnap(inode, capsnap, &wake_ci, &wake_mdsc);
 	spin_unlock(&ci->i_ceph_lock);
+
 	if (flushed) {
 		ceph_put_snap_context(capsnap->context);
 		ceph_put_cap_snap(capsnap);
diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index fa4c0fe294c1..a632e1c7cef2 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -1604,10 +1604,30 @@ int ceph_iterate_session_caps(struct ceph_mds_session *session,
 	return ret;
 }
 
+static void remove_capsnaps(struct ceph_mds_client *mdsc, struct inode *inode)
+{
+	struct ceph_inode_info *ci = ceph_inode(inode);
+	struct ceph_cap_snap *capsnap;
+
+	dout("removing capsnaps, ci is %p, inode is %p\n", ci, inode);
+
+	while (!list_empty(&ci->i_cap_snaps)) {
+		capsnap = list_first_entry(&ci->i_cap_snaps,
+					   struct ceph_cap_snap, ci_item);
+		ceph_remove_capsnap(inode, capsnap, NULL, NULL);
+		ceph_put_snap_context(capsnap->context);
+		ceph_put_cap_snap(capsnap);
+		iput(inode);
+	}
+	wake_up_all(&ci->i_cap_wq);
+	wake_up_all(&mdsc->cap_flushing_wq);
+}
+
 static int remove_session_caps_cb(struct inode *inode, struct ceph_cap *cap,
 				  void *arg)
 {
 	struct ceph_fs_client *fsc = (struct ceph_fs_client *)arg;
+	struct ceph_mds_client *mdsc = fsc->mdsc;
 	struct ceph_inode_info *ci = ceph_inode(inode);
 	LIST_HEAD(to_remove);
 	bool dirty_dropped = false;
@@ -1619,7 +1639,6 @@ static int remove_session_caps_cb(struct inode *inode, struct ceph_cap *cap,
 	__ceph_remove_cap(cap, false);
 	if (!ci->i_auth_cap) {
 		struct ceph_cap_flush *cf;
-		struct ceph_mds_client *mdsc = fsc->mdsc;
 
 		if (READ_ONCE(fsc->mount_state) >= CEPH_MOUNT_SHUTDOWN) {
 			if (inode->i_data.nrpages > 0)
@@ -1684,6 +1703,8 @@ static int remove_session_caps_cb(struct inode *inode, struct ceph_cap *cap,
 			ci->i_prealloc_cap_flush = NULL;
 		}
 	}
+	if (!list_empty(&ci->i_cap_snaps))
+		remove_capsnaps(mdsc, inode);
 	spin_unlock(&ci->i_ceph_lock);
 	while (!list_empty(&to_remove)) {
 		struct ceph_cap_flush *cf;
diff --git a/fs/ceph/super.h b/fs/ceph/super.h
index 0bc36cf4c683..51ec17d12b26 100644
--- a/fs/ceph/super.h
+++ b/fs/ceph/super.h
@@ -1168,6 +1168,9 @@ extern void ceph_put_cap_refs_no_check_caps(struct ceph_inode_info *ci,
 						int had);
 extern void ceph_put_wrbuffer_cap_refs(struct ceph_inode_info *ci, int nr,
 				       struct ceph_snap_context *snapc);
+extern void ceph_remove_capsnap(struct inode *inode,
+				struct ceph_cap_snap *capsnap,
+				bool *wake_ci, bool *wake_mdsc);
 extern void ceph_flush_snaps(struct ceph_inode_info *ci,
 			     struct ceph_mds_session **psession);
 extern bool __ceph_should_report_size(struct ceph_inode_info *ci);

From patchwork Wed Aug 18 08:06:02 2021
X-Patchwork-Submitter: Xiubo Li
X-Patchwork-Id: 499813
From: xiubli@redhat.com
To: jlayton@kernel.org
Cc: idryomov@gmail.com, pdonnell@redhat.com, ceph-devel@vger.kernel.org, Xiubo Li
Subject: [PATCH 2/3] ceph: don't WARN if we're force umounting
Date: Wed, 18 Aug 2021 16:06:02 +0800
Message-Id: <20210818080603.195722-3-xiubli@redhat.com>
In-Reply-To: <20210818080603.195722-1-xiubli@redhat.com>
References: <20210818080603.195722-1-xiubli@redhat.com>
X-Mailing-List: ceph-devel@vger.kernel.org

From: Xiubo Li

A forced umount closes the sessions by setting the session state to
_CLOSING, so when ceph_kill_sb() runs after that, the WARN in
check_session_state() fires. Don't warn when we are force unmounting.

URL: https://tracker.ceph.com/issues/52295
Signed-off-by: Xiubo Li
---
 fs/ceph/mds_client.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)
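
Not part of the patch: a tiny self-contained sketch of the pattern
applied below, with made-up names (TOY_WARN_ON, handle_closing_session,
the mount_state enum) standing in for the kernel's WARN_ON_ONCE(),
check_session_state() and fsc->mount_state. A session found in the
CLOSING state is only suspicious when the filesystem is not being force
unmounted, so the warning is gated on the mount state.

#include <stdio.h>
#include <stdbool.h>

/* Toy stand-ins for fsc->mount_state. */
enum mount_state { MOUNT_MOUNTED, MOUNT_SHUTDOWN };

/* Simplified: the kernel's WARN_ON_ONCE() also fires only once. */
#define TOY_WARN_ON(cond) \
	((cond) ? (void)printf("WARNING: unexpected CLOSING session\n") : (void)0)

/*
 * Shape of the change: a session found in the CLOSING state is only
 * worth warning about when the filesystem is not being force unmounted.
 */
static void handle_closing_session(enum mount_state mount_state, bool stale)
{
	if (mount_state != MOUNT_SHUTDOWN)
		TOY_WARN_ON(stale);
}

int main(void)
{
	handle_closing_session(MOUNT_SHUTDOWN, true); /* forced umount: silent */
	handle_closing_session(MOUNT_MOUNTED, true);  /* normal case: warns    */
	return 0;
}

The real code additionally reads the mount state with READ_ONCE() and
warns at most once; the sketch keeps just the control flow.
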
diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index a632e1c7cef2..0302af53e079 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -4558,6 +4558,8 @@ static void maybe_recover_session(struct ceph_mds_client *mdsc)
 
 bool check_session_state(struct ceph_mds_session *s)
 {
+	struct ceph_fs_client *fsc = s->s_mdsc->fsc;
+
 	switch (s->s_state) {
 	case CEPH_MDS_SESSION_OPEN:
 		if (s->s_ttl && time_after(jiffies, s->s_ttl)) {
@@ -4566,8 +4568,11 @@ bool check_session_state(struct ceph_mds_session *s)
 		}
 		break;
 	case CEPH_MDS_SESSION_CLOSING:
-		/* Should never reach this when we're unmounting */
-		WARN_ON_ONCE(s->s_ttl);
+		/*
+		 * Should never reach this when none force unmounting
+		 */
+		if (READ_ONCE(fsc->mount_state) != CEPH_MOUNT_SHUTDOWN)
+			WARN_ON_ONCE(s->s_ttl);
 		fallthrough;
 	case CEPH_MDS_SESSION_NEW:
 	case CEPH_MDS_SESSION_RESTARTING:

From patchwork Wed Aug 18 08:06:03 2021
X-Patchwork-Submitter: Xiubo Li
X-Patchwork-Id: 499049
From: xiubli@redhat.com
To: jlayton@kernel.org
Cc: idryomov@gmail.com, pdonnell@redhat.com, ceph-devel@vger.kernel.org, Xiubo Li
Subject: [PATCH 3/3] ceph: don't WARN if we're iterate removing the session caps
Date: Wed, 18 Aug 2021 16:06:03 +0800
Message-Id: <20210818080603.195722-4-xiubli@redhat.com>
In-Reply-To: <20210818080603.195722-1-xiubli@redhat.com>
References: <20210818080603.195722-1-xiubli@redhat.com>
X-Mailing-List: ceph-devel@vger.kernel.org

From: Xiubo Li

When force umounting, for example, the session caps are removed one by
one even when a cap is still dirty, so don't warn about dirty caps on
that path.

URL: https://tracker.ceph.com/issues/52295
Signed-off-by: Xiubo Li
---
 fs/ceph/caps.c       | 15 ++++++++-------
 fs/ceph/mds_client.c |  4 ++--
 fs/ceph/super.h      |  3 ++-
 3 files changed, 12 insertions(+), 10 deletions(-)
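
Not part of the patch: a small self-contained sketch of why
__ceph_remove_cap() grows a 'warn' argument, using invented toy types
(toy_cap, remove_cap, TOY_WARN_ON). On normal removal paths a still-dirty
auth cap indicates a bug and is worth a warning, but the session-teardown
path in remove_session_caps_cb() removes caps that are legitimately still
dirty, so that caller can pass warn = false.

#include <stdio.h>
#include <stdbool.h>

/* Toy stand-ins for ceph_cap / ceph_inode_info. */
struct toy_cap {
	bool is_auth;
	bool dirty;	/* like a non-empty ci->i_dirty_item */
};

#define TOY_WARN_ON(cond) \
	((cond) ? (void)printf("WARNING: removing dirty auth cap\n") : (void)0)

/*
 * Same shape as __ceph_remove_cap(cap, queue_release, warn): the caller
 * decides whether a still-dirty auth cap deserves a warning.
 */
static void remove_cap(struct toy_cap *cap, bool warn)
{
	if (cap->is_auth)
		TOY_WARN_ON(warn && cap->dirty);
	cap->is_auth = false;
	cap->dirty = false;
}

int main(void)
{
	struct toy_cap cap = { .is_auth = true, .dirty = true };
	struct toy_cap cap2 = { .is_auth = true, .dirty = true };

	/* Session teardown (e.g. forced umount): dirty caps are expected. */
	remove_cap(&cap, false);
	/* Normal removal path: a dirty auth cap indicates a bug, warn. */
	remove_cap(&cap2, true);
	return 0;
}

Threading a boolean from the caller keeps the warning in one place
instead of duplicating the dirty check at every call site.
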
diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
index 7def99fbdca6..1ed9b9d57dd3 100644
--- a/fs/ceph/caps.c
+++ b/fs/ceph/caps.c
@@ -1101,7 +1101,7 @@ int ceph_is_any_caps(struct inode *inode)
  * caller should hold i_ceph_lock.
  * caller will not hold session s_mutex if called from destroy_inode.
  */
-void __ceph_remove_cap(struct ceph_cap *cap, bool queue_release)
+void __ceph_remove_cap(struct ceph_cap *cap, bool queue_release, bool warn)
 {
 	struct ceph_mds_session *session = cap->session;
 	struct ceph_inode_info *ci = cap->ci;
@@ -1121,7 +1121,7 @@ void __ceph_remove_cap(struct ceph_cap *cap, bool queue_release)
 	/* remove from inode's cap rbtree, and clear auth cap */
 	rb_erase(&cap->ci_node, &ci->i_caps);
 	if (ci->i_auth_cap == cap) {
-		WARN_ON_ONCE(!list_empty(&ci->i_dirty_item) &&
+		WARN_ON_ONCE(warn && !list_empty(&ci->i_dirty_item) &&
 			     !mdsc->fsc->blocklisted);
 		ci->i_auth_cap = NULL;
 	}
@@ -1304,7 +1304,7 @@ void __ceph_remove_caps(struct ceph_inode_info *ci)
 	while (p) {
 		struct ceph_cap *cap = rb_entry(p, struct ceph_cap, ci_node);
 		p = rb_next(p);
-		__ceph_remove_cap(cap, true);
+		__ceph_remove_cap(cap, true, true);
 	}
 	spin_unlock(&ci->i_ceph_lock);
 }
@@ -3815,7 +3815,7 @@ static void handle_cap_export(struct inode *inode, struct ceph_mds_caps *ex,
 		goto out_unlock;
 
 	if (target < 0) {
-		__ceph_remove_cap(cap, false);
+		__ceph_remove_cap(cap, false, true);
 		goto out_unlock;
 	}
 
@@ -3850,7 +3850,7 @@ static void handle_cap_export(struct inode *inode, struct ceph_mds_caps *ex,
 				change_auth_cap_ses(ci, tcap->session);
 			}
 		}
-		__ceph_remove_cap(cap, false);
+		__ceph_remove_cap(cap, false, true);
 		goto out_unlock;
 	} else if (tsession) {
 		/* add placeholder for the export tagert */
@@ -3867,7 +3867,7 @@ static void handle_cap_export(struct inode *inode, struct ceph_mds_caps *ex,
 			spin_unlock(&mdsc->cap_dirty_lock);
 		}
 
-		__ceph_remove_cap(cap, false);
+		__ceph_remove_cap(cap, false, true);
 		goto out_unlock;
 	}
 
@@ -3978,7 +3978,8 @@ static void handle_cap_import(struct ceph_mds_client *mdsc,
 			ocap->mseq, mds, le32_to_cpu(ph->seq),
 			le32_to_cpu(ph->mseq));
 	}
-	__ceph_remove_cap(ocap, (ph->flags & CEPH_CAP_FLAG_RELEASE));
+	__ceph_remove_cap(ocap, (ph->flags & CEPH_CAP_FLAG_RELEASE),
+			  true);
 	}
 
 	*old_issued = issued;
diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index 0302af53e079..d99ec2618585 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -1636,7 +1636,7 @@ static int remove_session_caps_cb(struct inode *inode, struct ceph_cap *cap,
 	dout("removing cap %p, ci is %p, inode is %p\n",
 	     cap, ci, &ci->vfs_inode);
 	spin_lock(&ci->i_ceph_lock);
-	__ceph_remove_cap(cap, false);
+	__ceph_remove_cap(cap, false, false);
 	if (!ci->i_auth_cap) {
 		struct ceph_cap_flush *cf;
 
@@ -2008,7 +2008,7 @@ static int trim_caps_cb(struct inode *inode, struct ceph_cap *cap, void *arg)
 
 	if (oissued) {
 		/* we aren't the only cap.. just remove us */
-		__ceph_remove_cap(cap, true);
+		__ceph_remove_cap(cap, true, true);
 		(*remaining)--;
 	} else {
 		struct dentry *dentry;
diff --git a/fs/ceph/super.h b/fs/ceph/super.h
index 51ec17d12b26..106ddfd1ce92 100644
--- a/fs/ceph/super.h
+++ b/fs/ceph/super.h
@@ -1142,7 +1142,8 @@ extern void ceph_add_cap(struct inode *inode,
 			 unsigned issued, unsigned wanted,
 			 unsigned cap, unsigned seq, u64 realmino, int flags,
 			 struct ceph_cap **new_cap);
-extern void __ceph_remove_cap(struct ceph_cap *cap, bool queue_release);
+extern void __ceph_remove_cap(struct ceph_cap *cap, bool queue_release,
+			      bool warn);
 extern void __ceph_remove_caps(struct ceph_inode_info *ci);
 extern void ceph_put_cap(struct ceph_mds_client *mdsc,
 			 struct ceph_cap *cap);