From patchwork Tue Mar 10 12:41:19 2020
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 229588
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Jim Mattson,
    Andrew Honig, Sean Christopherson, Paolo Bonzini
Subject: [PATCH 4.14 058/126] KVM: Check for a bad hva before dropping into
 the ghc slow path
Date: Tue, 10 Mar 2020 13:41:19 +0100
Message-Id: <20200310124207.872563697@linuxfoundation.org>
In-Reply-To: <20200310124203.704193207@linuxfoundation.org>
References: <20200310124203.704193207@linuxfoundation.org>

From: Sean Christopherson

commit fcfbc617547fc6d9552cb6c1c563b6a90ee98085 upstream.

When reading/writing using the guest/host cache, check for a bad hva
before checking for a NULL memslot, which triggers the slow path for
handling cross-page accesses. Because the memslot is nullified on error
by __kvm_gfn_to_hva_cache_init(), if the bad hva is encountered after
crossing into a new page, then the kvm_{read,write}_guest() slow path
could potentially write/access the first chunk prior to detecting the
bad hva.
Arguably, performing a partial access is semantically correct from an
architectural perspective, but that behavior is certainly not intended.
In the original implementation, memslot was not explicitly nullified
and therefore the partial access behavior varied based on whether the
memslot itself was null, or if the hva was simply bad. The current
behavior was introduced as a seemingly unintentional side effect in
commit f1b9dd5eb86c ("kvm: Disallow wraparound in
kvm_gfn_to_hva_cache_init"), which justified the change with "since
some callers don't check the return code from this function, it seems
prudent to clear ghc->memslot in the event of an error".

Regardless of intent, the partial access is dependent on _not_ checking
the result of the cache initialization, which is arguably a bug in its
own right, at best simply weird.

Fixes: 8f964525a121 ("KVM: Allow cross page reads and writes from cached translations.")
Cc: Jim Mattson
Cc: Andrew Honig
Signed-off-by: Sean Christopherson
Signed-off-by: Paolo Bonzini
Signed-off-by: Greg Kroah-Hartman

---
 virt/kvm/kvm_main.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2027,12 +2027,12 @@ int kvm_write_guest_offset_cached(struct
 	if (slots->generation != ghc->generation)
 		__kvm_gfn_to_hva_cache_init(slots, ghc, ghc->gpa, ghc->len);
 
-	if (unlikely(!ghc->memslot))
-		return kvm_write_guest(kvm, gpa, data, len);
-
 	if (kvm_is_error_hva(ghc->hva))
 		return -EFAULT;
 
+	if (unlikely(!ghc->memslot))
+		return kvm_write_guest(kvm, gpa, data, len);
+
 	r = __copy_to_user((void __user *)ghc->hva + offset, data, len);
 	if (r)
 		return -EFAULT;
@@ -2060,12 +2060,12 @@ int kvm_read_guest_cached(struct kvm *kv
 	if (slots->generation != ghc->generation)
 		__kvm_gfn_to_hva_cache_init(slots, ghc, ghc->gpa, ghc->len);
 
-	if (unlikely(!ghc->memslot))
-		return kvm_read_guest(kvm, ghc->gpa, data, len);
-
 	if (kvm_is_error_hva(ghc->hva))
 		return -EFAULT;
 
+	if (unlikely(!ghc->memslot))
+		return kvm_read_guest(kvm, ghc->gpa, data, len);
+
 	r = __copy_from_user(data, (void __user *)ghc->hva, len);
 	if (r)
 		return -EFAULT;
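[Editor's note: for readers outside the kernel tree, below is a minimal
userspace C sketch of the check ordering this patch establishes. The
names here (ghc_sketch, HVA_ERR_BAD, write_cached, slow_path_write) are
simplified stand-ins invented for illustration, not the real KVM API;
the only point it demonstrates is that a bad cached hva must be
rejected before a NULL memslot is allowed to route the access to the
slow-path fallback, which could otherwise complete the first chunk of a
cross-page write before noticing the error.]

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for the kernel's bad-hva sentinel (hypothetical name). */
#define HVA_ERR_BAD ((unsigned long)-1UL)

/* Simplified stand-in for struct gfn_to_hva_cache. */
struct ghc_sketch {
	unsigned long hva;	/* cached host virtual address, or HVA_ERR_BAD */
	void *memslot;		/* NULL when init failed or access crosses pages */
};

static int is_error_hva(unsigned long hva)
{
	return hva == HVA_ERR_BAD;
}

/* Stand-in for the kvm_write_guest() slow path. */
static int slow_path_write(const void *data, size_t len)
{
	(void)data; (void)len;
	puts("slow path taken (may partially complete a cross-page write)");
	return 0;
}

/*
 * Fixed ordering from the patch: reject a bad hva *before* falling
 * back to the slow path on a NULL memslot.
 */
static int write_cached(struct ghc_sketch *ghc, const void *data, size_t len)
{
	if (is_error_hva(ghc->hva))
		return -EFAULT;	/* bad hva: fail before touching guest memory */

	if (ghc->memslot == NULL)
		return slow_path_write(data, len);	/* valid hva, uncached */

	memcpy((void *)ghc->hva, data, len);	/* fast path via cached hva */
	return 0;
}

int main(void)
{
	/* A failed init nullifies the memslot AND marks the hva bad. */
	struct ghc_sketch bad = { .hva = HVA_ERR_BAD, .memslot = NULL };
	char data[8] = "payload";

	/*
	 * With the pre-patch ordering, the NULL memslot alone would have
	 * routed this access to the slow path; here the bad hva is caught
	 * first and the access fails cleanly with -EFAULT.
	 */
	printf("write_cached() = %d (expect %d)\n",
	       write_cached(&bad, data, sizeof(data)), -EFAULT);
	return 0;
}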