From patchwork Tue Oct 27 13:47:53 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 313046
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Junaid Shahid,
    Sean Christopherson, Paolo Bonzini
Subject: [PATCH 4.14 018/191] KVM: x86/mmu: Commit zap of remaining invalid pages when recovering lpages
Date: Tue, 27 Oct 2020 14:47:53 +0100
Message-Id: <20201027134910.599243921@linuxfoundation.org>
X-Mailer: git-send-email 2.29.1
In-Reply-To: <20201027134909.701581493@linuxfoundation.org>
References: <20201027134909.701581493@linuxfoundation.org>
User-Agent: quilt/0.66
Precedence: bulk
List-ID:
X-Mailing-List: stable@vger.kernel.org

From: Sean Christopherson

commit e89505698c9f70125651060547da4ff5046124fc upstream.

Call kvm_mmu_commit_zap_page() after exiting the "prepare zap" loop in
kvm_recover_nx_lpages() to finish zapping pages in the unlikely event
that the loop exited due to lpage_disallowed_mmu_pages being empty.
Because the recovery thread drops mmu_lock() when rescheduling, it's
possible that lpage_disallowed_mmu_pages could be emptied by a different
thread without to_zap reaching zero despite to_zap being derived from
the number of disallowed lpages.
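
For illustration only, a simplified sketch of what the recovery loop looks
like with the fix applied; it paraphrases the upstream
kvm_recover_nx_lpages() and omits some checks, so the exact 4.14 backport
may differ slightly:

/*
 * Paraphrased sketch, not the literal file contents.
 */
static void kvm_recover_nx_lpages(struct kvm *kvm)
{
	int rcu_idx;
	struct kvm_mmu_page *sp;
	unsigned int ratio;
	LIST_HEAD(invalid_list);
	ulong to_zap;

	rcu_idx = srcu_read_lock(&kvm->srcu);
	spin_lock(&kvm->mmu_lock);

	ratio = READ_ONCE(nx_huge_pages_recovery_ratio);
	to_zap = ratio ? DIV_ROUND_UP(kvm->stat.nx_lpage_splits, ratio) : 0;
	while (to_zap && !list_empty(&kvm->arch.lpage_disallowed_mmu_pages)) {
		sp = list_first_entry(&kvm->arch.lpage_disallowed_mmu_pages,
				      struct kvm_mmu_page,
				      lpage_disallowed_link);
		kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);

		if (!--to_zap || need_resched() ||
		    spin_needbreak(&kvm->mmu_lock)) {
			kvm_mmu_commit_zap_page(kvm, &invalid_list);
			if (to_zap)
				/* Drops and re-acquires mmu_lock. */
				cond_resched_lock(&kvm->mmu_lock);
		}
	}
	/*
	 * Added by this patch: if other threads drained
	 * lpage_disallowed_mmu_pages while mmu_lock was dropped above, the
	 * loop exits via list_empty() with to_zap != 0, leaving pages
	 * prepared in the final iterations still on invalid_list.
	 */
	kvm_mmu_commit_zap_page(kvm, &invalid_list);

	spin_unlock(&kvm->mmu_lock);
	srcu_read_unlock(&kvm->srcu, rcu_idx);
}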
Fixes: 1aa9b9572b105 ("kvm: x86: mmu: Recovery of shattered NX large pages")
Cc: Junaid Shahid
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
Message-Id: <20200923183735.584-2-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/kvm/mmu.c |    1 +
 1 file changed, 1 insertion(+)

--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -5846,6 +5846,7 @@ static void kvm_recover_nx_lpages(struct
 				cond_resched_lock(&kvm->mmu_lock);
 		}
 	}
+	kvm_mmu_commit_zap_page(kvm, &invalid_list);
 
 	spin_unlock(&kvm->mmu_lock);
 	srcu_read_unlock(&kvm->srcu, rcu_idx);