From patchwork Fri Apr 8 21:05:41 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 561092
Date: Fri, 8 Apr 2022 21:05:41 +0000
In-Reply-To: <20220408210545.3915712-1-vannapurve@google.com>
Message-Id: <20220408210545.3915712-2-vannapurve@google.com>
References: <20220408210545.3915712-1-vannapurve@google.com>
Subject: [RFC V1 PATCH 1/5] x86: kvm: HACK: Allow testing of priv memfd approach
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shauh@kernel.org, yang.zhong@intel.com,
 drjones@redhat.com, ricarkol@google.com, aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com, corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org, akpm@linux-foundation.org, chao.p.peng@linux.intel.com, yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com, michael.roth@amd.com, qperret@google.com, steven.price@arm.com, ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz, marcorr@google.com, erdemaktas@google.com, pgonda@google.com, seanjc@google.com, diviness@google.com, Vishal Annapurve

Add plumbing in KVM logic to allow private memfd series:
https://lore.kernel.org/linux-mm/20220310140911.50924-1-chao.p.peng@linux.intel.com/
to be tested with non-confidential VMs.

1) Existing hypercall KVM_HC_MAP_GPA_RANGE is modified to support marking
   pages of the guest memory as privately accessed or accessed in a shared
   fashion.

2) kvm_vcpu_is_private_gfn is defined to allow guest accesses to be
   categorized as shared or private based on the values set by
   KVM_HC_MAP_GPA_RANGE hypercall.

3) KVM_MEM_PRIVATE flag for memslots is marked as always supported.

Signed-off-by: Vishal Annapurve
---
 arch/x86/include/uapi/asm/kvm_para.h |  1 +
 arch/x86/kvm/mmu/mmu.c               |  9 +++++----
 arch/x86/kvm/x86.c                   | 16 ++++++++++++++--
 include/linux/kvm_host.h             |  3 +++
 virt/kvm/kvm_main.c                  |  2 +-
 5 files changed, 24 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
index 6e64b27b2c1e..3bc9add4095d 100644
--- a/arch/x86/include/uapi/asm/kvm_para.h
+++ b/arch/x86/include/uapi/asm/kvm_para.h
@@ -102,6 +102,7 @@ struct kvm_clock_pairing {
 #define KVM_MAP_GPA_RANGE_PAGE_SZ_2M	(1 << 0)
 #define KVM_MAP_GPA_RANGE_PAGE_SZ_1G	(1 << 1)
 #define KVM_MAP_GPA_RANGE_ENC_STAT(n)	(n << 4)
+#define KVM_MARK_GPA_RANGE_ENC_ACCESS	(1 << 8)
 #define KVM_MAP_GPA_RANGE_ENCRYPTED	KVM_MAP_GPA_RANGE_ENC_STAT(1)
 #define KVM_MAP_GPA_RANGE_DECRYPTED	KVM_MAP_GPA_RANGE_ENC_STAT(0)
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b1a30a751db0..ee9bc36011de 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3895,10 +3895,11 @@ static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 
 static bool kvm_vcpu_is_private_gfn(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
-	/*
-	 * At this time private gfn has not been supported yet. Other patch
-	 * that enables it should change this.
-	 */
+	gpa_t priv_gfn_end = vcpu->priv_gfn + vcpu->priv_pages;
+
+	if ((gfn >= vcpu->priv_gfn) && (gfn < priv_gfn_end))
+		return true;
+
 	return false;
 }
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 11a949928a85..3b17fa7f2192 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9186,8 +9186,20 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 		if (!(vcpu->kvm->arch.hypercall_exit_enabled & (1 << KVM_HC_MAP_GPA_RANGE)))
 			break;
 
-		if (!PAGE_ALIGNED(gpa) || !npages ||
-		    gpa_to_gfn(gpa) + npages <= gpa_to_gfn(gpa)) {
+		if (!PAGE_ALIGNED(gpa) ||
+		    gpa_to_gfn(gpa) + npages < gpa_to_gfn(gpa)) {
+			ret = -KVM_EINVAL;
+			break;
+		}
+
+		if (attrs & KVM_MARK_GPA_RANGE_ENC_ACCESS) {
+			vcpu->priv_gfn = gpa_to_gfn(gpa);
+			vcpu->priv_pages = npages;
+			ret = 0;
+			break;
+		}
+
+		if (!npages) {
 			ret = -KVM_EINVAL;
 			break;
 		}
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 0150e952a131..7c12a0bdb495 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -311,6 +311,9 @@ struct kvm_vcpu {
 	u64 requests;
 	unsigned long guest_debug;
 
+	uint64_t priv_gfn;
+	uint64_t priv_pages;
+
 	struct mutex mutex;
 	struct kvm_run *run;
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index df5311755a40..a31a58aa1b79 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1487,7 +1487,7 @@ static void kvm_replace_memslot(struct kvm *kvm,
 
 bool __weak kvm_arch_private_memory_supported(struct kvm *kvm)
 {
-	return false;
+	return true;
 }
 
 static int check_memory_region_flags(struct kvm *kvm,
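As a rough sketch of the guest-side usage this enables (illustrative only, not part of the posted patch): kvm_hypercall() is the selftest helper fixed up in patch 2/5, mark_gpa_range_private_access() is a hypothetical wrapper name, and the constants mirror the UAPI and selftest definitions used above. With KVM_MARK_GPA_RANGE_ENC_ACCESS set, the hypercall is consumed inside KVM (it records vcpu->priv_gfn and vcpu->priv_pages) and does not exit to the userspace VMM, provided the VMM has enabled KVM_CAP_EXIT_HYPERCALL for KVM_HC_MAP_GPA_RANGE.

#include <stdint.h>

#define KVM_HC_MAP_GPA_RANGE		12	/* include/uapi/linux/kvm_para.h */
#define KVM_MARK_GPA_RANGE_ENC_ACCESS	(1 << 8)	/* added by this patch */
#define MIN_PAGE_SHIFT			12	/* selftest 4 KiB base page size */

/* Selftest helper: issues vmcall with nr in rax and args in rbx/rcx/rdx/rsi. */
uint64_t kvm_hypercall(uint64_t nr, uint64_t a0, uint64_t a1, uint64_t a2,
		       uint64_t a3);

/* Hypothetical guest-side wrapper: mark [gpa, gpa + size) as private access. */
static inline int mark_gpa_range_private_access(uint64_t gpa, uint64_t size)
{
	uint64_t npages = size >> MIN_PAGE_SHIFT;

	return kvm_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, npages,
			     KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
}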
From patchwork Fri Apr 8 21:05:42 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 559026
Date: Fri, 8 Apr 2022 21:05:42 +0000
In-Reply-To: <20220408210545.3915712-1-vannapurve@google.com>
Message-Id: <20220408210545.3915712-3-vannapurve@google.com>
References: <20220408210545.3915712-1-vannapurve@google.com>
Subject: [RFC V1 PATCH 2/5] selftests: kvm: Fix inline assembly for hypercall
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shauh@kernel.org, yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com, aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com, corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org, akpm@linux-foundation.org, chao.p.peng@linux.intel.com, yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com, michael.roth@amd.com, qperret@google.com, steven.price@arm.com, ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz, marcorr@google.com, erdemaktas@google.com, pgonda@google.com, seanjc@google.com, diviness@google.com, Vishal Annapurve

Fix inline assembly for hypercall to explicitly set eax with hypercall
number to allow the implementation to work even in cases where compiler
would inline the function.
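For reference, the full helper after this change reads as follows. This is a sketch of the post-patch function from the diff below, with an explanatory comment added; the only functional change is the new "a"(nr) input constraint.

#include <stdint.h>

/*
 * Post-patch form of the selftest hypercall helper. Listing nr as an "a"
 * input explicitly loads the hypercall number into rax before the vmcall.
 * Previously nothing in the asm statement did this, so correct behaviour
 * depended on whatever value the compiler happened to leave in rax, which
 * breaks once the function gets inlined into its caller.
 */
uint64_t kvm_hypercall(uint64_t nr, uint64_t a0, uint64_t a1, uint64_t a2,
		       uint64_t a3)
{
	uint64_t r;

	asm volatile("vmcall"
		     : "=a"(r)
		     : "a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3));
	return r;
}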
Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/lib/x86_64/processor.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 9f000dfb5594..4d88e1a553bf 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -1461,7 +1461,7 @@ uint64_t kvm_hypercall(uint64_t nr, uint64_t a0, uint64_t a1, uint64_t a2,
 
 	asm volatile("vmcall"
 		     : "=a"(r)
-		     : "b"(a0), "c"(a1), "d"(a2), "S"(a3));
+		     : "a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3));
 	return r;
 }

From patchwork Fri Apr 8 21:05:43 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 561091
Date: Fri, 8 Apr 2022 21:05:43 +0000
In-Reply-To: <20220408210545.3915712-1-vannapurve@google.com>
Message-Id: <20220408210545.3915712-4-vannapurve@google.com> Mime-Version: 1.0 References: <20220408210545.3915712-1-vannapurve@google.com> X-Mailer: git-send-email 2.35.1.1178.g4f1659d476-goog Subject: [RFC V1 PATCH 3/5] selftests: kvm: Add a basic selftest to test private memory From: Vishal Annapurve To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shauh@kernel.org, yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com, aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com, corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org, akpm@linux-foundation.org, chao.p.peng@linux.intel.com, yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com, michael.roth@amd.com, qperret@google.com, steven.price@arm.com, ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz, marcorr@google.com, erdemaktas@google.com, pgonda@google.com, seanjc@google.com, diviness@google.com, Vishal Annapurve Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Add KVM selftest to access private memory privately from the guest to test that memory updates from guest and userspace vmm don't affect each other. Signed-off-by: Vishal Annapurve --- tools/testing/selftests/kvm/Makefile | 1 + tools/testing/selftests/kvm/priv_memfd_test.c | 257 ++++++++++++++++++ 2 files changed, 258 insertions(+) create mode 100644 tools/testing/selftests/kvm/priv_memfd_test.c diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 21c2dbd21a81..f2f9a8546c66 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -97,6 +97,7 @@ TEST_GEN_PROGS_x86_64 += max_guest_memory_test TEST_GEN_PROGS_x86_64 += memslot_modification_stress_test TEST_GEN_PROGS_x86_64 += memslot_perf_test TEST_GEN_PROGS_x86_64 += rseq_test +TEST_GEN_PROGS_x86_64 += priv_memfd_test TEST_GEN_PROGS_x86_64 += set_memory_region_test TEST_GEN_PROGS_x86_64 += steal_time TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c new file mode 100644 index 000000000000..11ccdb853a84 --- /dev/null +++ b/tools/testing/selftests/kvm/priv_memfd_test.c @@ -0,0 +1,257 @@ +// SPDX-License-Identifier: GPL-2.0 +#define _GNU_SOURCE /* for program_invocation_short_name */ +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include + +#include +#include +#include + +#define TEST_MEM_GPA 0xb0000000 +#define TEST_MEM_SIZE 0x2000 +#define TEST_MEM_END (TEST_MEM_GPA + TEST_MEM_SIZE) +#define SHARED_MEM_DATA_BYTE 0x66 +#define PRIV_MEM_DATA_BYTE 0x99 + +#define TEST_MEM_SLOT 10 + +#define VCPU_ID 0 + +#define VM_STAGE_PROCESSED(x) pr_info("Processed stage %s\n", #x) + +typedef bool (*vm_stage_handler_fn)(struct kvm_vm *, + void *, uint64_t); +typedef void (*guest_code_fn)(void); +struct test_run_helper { + char *test_desc; + vm_stage_handler_fn vmst_handler; + guest_code_fn guest_fn; + void *shared_mem; + int priv_memfd; +}; + +static bool verify_byte_pattern(void *mem, uint8_t byte, uint32_t size) +{ + uint8_t *buf = (uint8_t *)mem; + + for (uint32_t i = 0; i < size; i++) { + if (buf[i] != byte) + return false; + } + + return 
true; +} + +/* Test to verify guest private accesses on private memory with following steps: + * 1) Upon entry, guest signals VMM that it has started. + * 2) VMM populates the shared memory with known pattern and continues guest + * execution. + * 3) Guest writes a different pattern on the private memory and signals VMM + * that it has updated private memory. + * 4) VMM verifies its shared memory contents to be same as the data populated + * in step 2 and continues guest execution. + * 5) Guest verifies its private memory contents to be same as the data + * populated in step 3 and marks the end of the guest execution. + */ +#define PMPAT_ID 0 +#define PMPAT_DESC "PrivateMemoryPrivateAccessTest" + +/* Guest code execution stages for private mem access test */ +#define PMPAT_GUEST_STARTED 0ULL +#define PMPAT_GUEST_PRIV_MEM_UPDATED 1ULL + +static bool pmpat_handle_vm_stage(struct kvm_vm *vm, + void *test_info, + uint64_t stage) +{ + void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem; + + switch (stage) { + case PMPAT_GUEST_STARTED: { + /* Initialize the contents of shared memory */ + memset(shared_mem, SHARED_MEM_DATA_BYTE, TEST_MEM_SIZE); + VM_STAGE_PROCESSED(PMPAT_GUEST_STARTED); + break; + } + case PMPAT_GUEST_PRIV_MEM_UPDATED: { + /* verify host updated data is still intact */ + TEST_ASSERT(verify_byte_pattern(shared_mem, + SHARED_MEM_DATA_BYTE, TEST_MEM_SIZE), + "Shared memory view mismatch"); + VM_STAGE_PROCESSED(PMPAT_GUEST_PRIV_MEM_UPDATED); + break; + } + default: + printf("Unhandled VM stage %ld\n", stage); + return false; + } + + return true; +} + +static void pmpat_guest_code(void) +{ + void *priv_mem = (void *)TEST_MEM_GPA; + int ret; + + GUEST_SYNC(PMPAT_GUEST_STARTED); + + /* Mark the GPA range to be treated as always accessed privately */ + ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, + TEST_MEM_SIZE >> MIN_PAGE_SHIFT, + KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); + GUEST_ASSERT_1(ret == 0, ret); + + memset(priv_mem, PRIV_MEM_DATA_BYTE, TEST_MEM_SIZE); + GUEST_SYNC(PMPAT_GUEST_PRIV_MEM_UPDATED); + + GUEST_ASSERT(verify_byte_pattern(priv_mem, + PRIV_MEM_DATA_BYTE, TEST_MEM_SIZE)); + + GUEST_DONE(); +} + +static struct test_run_helper priv_memfd_testsuite[] = { + [PMPAT_ID] = { + .test_desc = PMPAT_DESC, + .vmst_handler = pmpat_handle_vm_stage, + .guest_fn = pmpat_guest_code, + }, +}; + +static void vcpu_work(struct kvm_vm *vm, uint32_t test_id) +{ + struct kvm_run *run; + struct ucall uc; + uint64_t cmd; + + /* + * Loop until the guest is done. 
+ */ + run = vcpu_state(vm, VCPU_ID); + + while (true) { + vcpu_run(vm, VCPU_ID); + + if (run->exit_reason == KVM_EXIT_IO) { + cmd = get_ucall(vm, VCPU_ID, &uc); + if (cmd != UCALL_SYNC) + break; + + if (!priv_memfd_testsuite[test_id].vmst_handler( + vm, &priv_memfd_testsuite[test_id], uc.args[1])) + break; + + continue; + } + + TEST_FAIL("Unhandled VCPU exit reason %d\n", run->exit_reason); + break; + } + + if (run->exit_reason == KVM_EXIT_IO && cmd == UCALL_ABORT) + TEST_FAIL("%s at %s:%ld, val = %lu", (const char *)uc.args[0], + __FILE__, uc.args[1], uc.args[2]); +} + +static void priv_memory_region_add(struct kvm_vm *vm, void *mem, uint32_t slot, + uint32_t size, uint64_t guest_addr, + uint32_t priv_fd, uint64_t priv_offset) +{ + struct kvm_userspace_memory_region_ext region_ext; + int ret; + + region_ext.region.slot = slot; + region_ext.region.flags = KVM_MEM_PRIVATE; + region_ext.region.guest_phys_addr = guest_addr; + region_ext.region.memory_size = size; + region_ext.region.userspace_addr = (uintptr_t) mem; + region_ext.private_fd = priv_fd; + region_ext.private_offset = priv_offset; + ret = ioctl(vm_get_fd(vm), KVM_SET_USER_MEMORY_REGION, ®ion_ext); + TEST_ASSERT(ret == 0, "Failed to register user region for gpa 0x%lx\n", + guest_addr); +} + +/* Do private access to the guest's private memory */ +static void setup_and_execute_test(uint32_t test_id) +{ + struct kvm_vm *vm; + int priv_memfd; + int ret; + void *shared_mem; + struct kvm_enable_cap cap; + + vm = vm_create_default(VCPU_ID, 0, + priv_memfd_testsuite[test_id].guest_fn); + + /* Allocate shared memory */ + shared_mem = mmap(NULL, TEST_MEM_SIZE, + PROT_READ | PROT_WRITE, + MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0); + TEST_ASSERT(shared_mem != MAP_FAILED, "Failed to mmap() host"); + + /* Allocate private memory */ + priv_memfd = memfd_create("vm_private_mem", MFD_INACCESSIBLE); + TEST_ASSERT(priv_memfd != -1, "Failed to create priv_memfd"); + ret = fallocate(priv_memfd, 0, 0, TEST_MEM_SIZE); + TEST_ASSERT(ret != -1, "fallocate failed"); + + priv_memory_region_add(vm, shared_mem, + TEST_MEM_SLOT, TEST_MEM_SIZE, + TEST_MEM_GPA, priv_memfd, 0); + + pr_info("Mapping test memory pages 0x%x page_size 0x%x\n", + TEST_MEM_SIZE/vm_get_page_size(vm), + vm_get_page_size(vm)); + virt_map(vm, TEST_MEM_GPA, TEST_MEM_GPA, + (TEST_MEM_SIZE/vm_get_page_size(vm))); + + /* Enable exit on KVM_HC_MAP_GPA_RANGE */ + pr_info("Enabling exit on map_gpa_range hypercall\n"); + ret = ioctl(vm_get_fd(vm), KVM_CHECK_EXTENSION, KVM_CAP_EXIT_HYPERCALL); + TEST_ASSERT(ret & (1 << KVM_HC_MAP_GPA_RANGE), + "VM exit on MAP_GPA_RANGE HC not supported"); + cap.cap = KVM_CAP_EXIT_HYPERCALL; + cap.flags = 0; + cap.args[0] = (1 << KVM_HC_MAP_GPA_RANGE); + ret = ioctl(vm_get_fd(vm), KVM_ENABLE_CAP, &cap); + TEST_ASSERT(ret == 0, + "Failed to enable exit on MAP_GPA_RANGE hypercall\n"); + + priv_memfd_testsuite[test_id].shared_mem = shared_mem; + priv_memfd_testsuite[test_id].priv_memfd = priv_memfd; + vcpu_work(vm, test_id); + + munmap(shared_mem, TEST_MEM_SIZE); + priv_memfd_testsuite[test_id].shared_mem = NULL; + close(priv_memfd); + priv_memfd_testsuite[test_id].priv_memfd = -1; + kvm_vm_free(vm); +} + +int main(int argc, char *argv[]) +{ + /* Tell stdout not to buffer its content */ + setbuf(stdout, NULL); + + for (uint32_t i = 0; i < ARRAY_SIZE(priv_memfd_testsuite); i++) { + pr_info("=== Starting test %s... 
===\n",
+			priv_memfd_testsuite[i].test_desc);
+		setup_and_execute_test(i);
+		pr_info("--- completed test %s ---\n\n",
+			priv_memfd_testsuite[i].test_desc);
+	}
+
+	return 0;
+}

From patchwork Fri Apr 8 21:05:44 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 559025
Date: Fri, 8 Apr 2022 21:05:44 +0000
In-Reply-To: <20220408210545.3915712-1-vannapurve@google.com>
Message-Id: <20220408210545.3915712-5-vannapurve@google.com>
References: <20220408210545.3915712-1-vannapurve@google.com>
Subject: [RFC V1 PATCH 4/5] selftests: kvm: priv_memfd_test: Add support for memory conversion
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com,
 wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shauh@kernel.org, yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com, aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com, corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org, akpm@linux-foundation.org, chao.p.peng@linux.intel.com, yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com, michael.roth@amd.com, qperret@google.com, steven.price@arm.com, ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz, marcorr@google.com, erdemaktas@google.com, pgonda@google.com, seanjc@google.com, diviness@google.com, Vishal Annapurve

Add handling of explicit private/shared memory conversion using
KVM_HC_MAP_GPA_RANGE and implicit memory conversion by handling
KVM_EXIT_MEMORY_ERROR.

Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/priv_memfd_test.c | 87 +++++++++++++++++++
 1 file changed, 87 insertions(+)

diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c
index 11ccdb853a84..0e6c19501f27 100644
--- a/tools/testing/selftests/kvm/priv_memfd_test.c
+++ b/tools/testing/selftests/kvm/priv_memfd_test.c
@@ -129,6 +129,83 @@ static struct test_run_helper priv_memfd_testsuite[] = {
 	},
 };
 
+static void handle_vm_exit_hypercall(struct kvm_run *run,
+				uint32_t test_id)
+{
+	uint64_t gpa, npages, attrs;
+	int priv_memfd =
+		priv_memfd_testsuite[test_id].priv_memfd;
+	int ret;
+	int fallocate_mode;
+
+	if (run->hypercall.nr != KVM_HC_MAP_GPA_RANGE) {
+		TEST_FAIL("Unhandled Hypercall %lld\n",
+			run->hypercall.nr);
+	}
+
+	gpa = run->hypercall.args[0];
+	npages = run->hypercall.args[1];
+	attrs = run->hypercall.args[2];
+
+	if ((gpa < TEST_MEM_GPA) || ((gpa +
+		(npages << MIN_PAGE_SHIFT)) > TEST_MEM_END)) {
+		TEST_FAIL("Unhandled gpa 0x%lx npages %ld\n",
+			gpa, npages);
+	}
+
+	if (attrs & KVM_MAP_GPA_RANGE_ENCRYPTED)
+		fallocate_mode = 0;
+	else {
+		fallocate_mode = (FALLOC_FL_PUNCH_HOLE |
+			FALLOC_FL_KEEP_SIZE);
+	}
+	pr_info("Converting off 0x%lx pages 0x%lx to %s\n",
+		(gpa - TEST_MEM_GPA), npages,
+		fallocate_mode ?
+			"shared" : "private");
+	ret = fallocate(priv_memfd, fallocate_mode,
+		(gpa - TEST_MEM_GPA),
+		npages << MIN_PAGE_SHIFT);
+	TEST_ASSERT(ret != -1,
+		"fallocate failed in hc handling");
+	run->hypercall.ret = 0;
+}
+
+static void handle_vm_exit_memory_error(struct kvm_run *run,
+				uint32_t test_id)
+{
+	uint64_t gpa, size, flags;
+	int ret;
+	int priv_memfd =
+		priv_memfd_testsuite[test_id].priv_memfd;
+	int fallocate_mode;
+
+	gpa = run->memory.gpa;
+	size = run->memory.size;
+	flags = run->memory.flags;
+
+	if ((gpa < TEST_MEM_GPA) || ((gpa + size)
+					> TEST_MEM_END)) {
+		TEST_FAIL("Unhandled gpa 0x%lx size 0x%lx\n",
+			gpa, size);
+	}
+
+	if (flags & KVM_MEMORY_EXIT_FLAG_PRIVATE)
+		fallocate_mode = 0;
+	else {
+		fallocate_mode = (FALLOC_FL_PUNCH_HOLE |
+			FALLOC_FL_KEEP_SIZE);
+	}
+	pr_info("Converting off 0x%lx size 0x%lx to %s\n",
+		(gpa - TEST_MEM_GPA), size,
+		fallocate_mode ?
+			"shared" : "private");
+	ret = fallocate(priv_memfd, fallocate_mode,
+		(gpa - TEST_MEM_GPA), size);
+	TEST_ASSERT(ret != -1,
+		"fallocate failed in memory error handling");
+}
+
 static void vcpu_work(struct kvm_vm *vm, uint32_t test_id)
 {
 	struct kvm_run *run;
@@ -155,6 +232,16 @@ static void vcpu_work(struct kvm_vm *vm, uint32_t test_id)
 			continue;
 		}
 
+		if (run->exit_reason == KVM_EXIT_HYPERCALL) {
+			handle_vm_exit_hypercall(run, test_id);
+			continue;
+		}
+
+		if (run->exit_reason == KVM_EXIT_MEMORY_ERROR) {
+			handle_vm_exit_memory_error(run, test_id);
+			continue;
+		}
+
 		TEST_FAIL("Unhandled VCPU exit reason %d\n", run->exit_reason);
 		break;
 	}

From patchwork Fri Apr 8 21:05:45 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 561090
Date: Fri, 8 Apr 2022 21:05:45 +0000
In-Reply-To: <20220408210545.3915712-1-vannapurve@google.com>
Message-Id: <20220408210545.3915712-6-vannapurve@google.com>
References: <20220408210545.3915712-1-vannapurve@google.com>
Subject: [RFC V1 PATCH 5/5] selftests: kvm: priv_memfd_test: Add shared access test
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shauh@kernel.org, yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com, aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com, corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org, akpm@linux-foundation.org, chao.p.peng@linux.intel.com, yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com, michael.roth@amd.com, qperret@google.com, steven.price@arm.com, ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz, marcorr@google.com, erdemaktas@google.com, pgonda@google.com, seanjc@google.com, diviness@google.com, Vishal Annapurve

Add a test to access private memory in shared fashion which should
exercise implicit memory conversion path using KVM_EXIT_MEMORY_ERROR.

Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/priv_memfd_test.c | 66 +++++++++++++++++++
 1 file changed, 66 insertions(+)

diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c
index 0e6c19501f27..607fdc149c7d 100644
--- a/tools/testing/selftests/kvm/priv_memfd_test.c
+++ b/tools/testing/selftests/kvm/priv_memfd_test.c
@@ -121,12 +121,78 @@ static void pmpat_guest_code(void)
 	GUEST_DONE();
 }
 
+/* Test to verify guest shared accesses on private memory with following steps:
+ * 1) Upon entry, guest signals VMM that it has started.
+ * 2) VMM populates the shared memory with known pattern and continues guest
+ *    execution.
+ * 3) Guest reads private gpa range in a shared fashion and verifies that it
+ *    reads what VMM has written in step 2.
+ * 4) Guest writes a different pattern on the shared memory and signals VMM
+ *    that it has updated the shared memory.
+ * 5) VMM verifies shared memory contents to be same as the data populated
+ *    in step 4 and continues guest execution.
+ */
+#define PMSAT_ID 1
+#define PMSAT_DESC "PrivateMemorySharedAccessTest"
+
+/* Guest code execution stages for shared mem access test */
+#define PMSAT_GUEST_STARTED 0ULL
+#define PMSAT_GUEST_TEST_MEM_UPDATED 1ULL
+
+static bool pmsat_handle_vm_stage(struct kvm_vm *vm,
+				void *test_info,
+				uint64_t stage)
+{
+	void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
+
+	switch (stage) {
+	case PMSAT_GUEST_STARTED: {
+		/* Initialize the contents of shared memory */
+		memset(shared_mem, SHARED_MEM_DATA_BYTE, TEST_MEM_SIZE);
+		VM_STAGE_PROCESSED(PMSAT_GUEST_STARTED);
+		break;
+	}
+	case PMSAT_GUEST_TEST_MEM_UPDATED: {
+		/* verify data to be same as what guest wrote */
+		TEST_ASSERT(verify_byte_pattern(shared_mem,
+			PRIV_MEM_DATA_BYTE, TEST_MEM_SIZE),
+			"Shared memory view mismatch");
+		VM_STAGE_PROCESSED(PMSAT_GUEST_TEST_MEM_UPDATED);
+		break;
+	}
+	default:
+		printf("Unhandled VM stage %ld\n", stage);
+		return false;
+	}
+
+	return true;
+}
+
+static void pmsat_guest_code(void)
+{
+	void *shared_mem = (void *)TEST_MEM_GPA;
+
+	GUEST_SYNC(PMSAT_GUEST_STARTED);
+	GUEST_ASSERT(verify_byte_pattern(shared_mem,
+			SHARED_MEM_DATA_BYTE, TEST_MEM_SIZE));
+
+	memset(shared_mem, PRIV_MEM_DATA_BYTE, TEST_MEM_SIZE);
+	GUEST_SYNC(PMSAT_GUEST_TEST_MEM_UPDATED);
+
+	GUEST_DONE();
+}
+
 static struct test_run_helper priv_memfd_testsuite[] = {
 	[PMPAT_ID] = {
 		.test_desc = PMPAT_DESC,
 		.vmst_handler = pmpat_handle_vm_stage,
 		.guest_fn = pmpat_guest_code,
 	},
+	[PMSAT_ID] = {
+		.test_desc = PMSAT_DESC,
+		.vmst_handler = pmsat_handle_vm_stage,
+		.guest_fn = pmsat_guest_code,
+	},
 };
 
 static void handle_vm_exit_hypercall(struct kvm_run *run,
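The exit handlers added in patch 4/5 reduce each conversion request to a single fallocate() call on the inaccessible memfd, and the shared access test added here exercises the same rule through the implicit KVM_EXIT_MEMORY_ERROR path. Restated compactly (an illustrative sketch only, not part of the patches; convert_range() is a made-up helper name):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdbool.h>

/*
 * Conversion rule used by handle_vm_exit_hypercall() and
 * handle_vm_exit_memory_error(): allocating backing in the
 * MFD_INACCESSIBLE memfd makes a range private, while punching a hole
 * releases that backing and converts the range back to shared.
 * offset is the gpa relative to TEST_MEM_GPA.
 */
static int convert_range(int priv_memfd, off_t offset, off_t len,
			 bool to_private)
{
	int mode = to_private ?
		0 : (FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE);

	return fallocate(priv_memfd, mode, offset, len);
}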