From patchwork Fri Dec 23 00:13:45 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 636648
Date: Fri, 23 Dec 2022 00:13:45 +0000
In-Reply-To: <20221223001352.3873203-1-vannapurve@google.com>
References: <20221223001352.3873203-1-vannapurve@google.com>
X-Mailer: git-send-email 2.39.0.314.g84b9a713c41-goog
Message-ID: <20221223001352.3873203-2-vannapurve@google.com>
Subject: [V3 PATCH 1/8] KVM: selftests: private_mem: Use native hypercall
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com,
 jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org,
 yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com,
 aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com,
 corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org,
 akpm@linux-foundation.org, chao.p.peng@linux.intel.com,
 yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com,
 michael.roth@amd.com, qperret@google.com, steven.price@arm.com,
 ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz,
 marcorr@google.com, erdemaktas@google.com, pgonda@google.com, nikunj@amd.com,
 seanjc@google.com, diviness@google.com, maz@kernel.org, dmatlack@google.com,
 axelrasmussen@google.com, maciej.szmigiero@oracle.com, mizhang@google.com,
 bgardon@google.com, ackerleytng@google.com, Vishal Annapurve
X-Mailing-List: linux-kselftest@vger.kernel.org

CVMs need to execute hypercalls with the instruction that matches the CPU
type, without relying on KVM to emulate or patch the hypercall instruction.
Execute hypercalls from the guest using native instructions.

Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/lib/x86_64/private_mem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/private_mem.c b/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
index 2b97fc34ec4a..5a8fd8c3bc04 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
@@ -24,7 +24,7 @@ static inline uint64_t __kvm_hypercall_map_gpa_range(uint64_t gpa, uint64_t size,
 			uint64_t flags)
 {
-	return kvm_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, size >> PAGE_SHIFT, flags, 0);
+	return kvm_native_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, size >> PAGE_SHIFT, flags, 0);
 }
 
 static inline void kvm_hypercall_map_gpa_range(uint64_t gpa, uint64_t size,
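The kvm_native_hypercall() helper referenced by this hunk is not shown in the
capture above. A minimal sketch of what such a helper could look like, issuing
the hypercall instruction that matches the CPU vendor; the is_amd_cpu() check
and the exact register constraints are assumptions for illustration, not code
taken from this series:

	/*
	 * Illustrative sketch only: use VMMCALL on AMD and VMCALL on Intel so the
	 * instruction does not have to be emulated or patched by KVM.
	 * is_amd_cpu() is an assumed helper based on the CPUID vendor string.
	 */
	static inline uint64_t kvm_native_hypercall(uint64_t nr, uint64_t a0, uint64_t a1,
						    uint64_t a2, uint64_t a3)
	{
		uint64_t r;

		if (is_amd_cpu())
			asm volatile("vmmcall"
				     : "=a"(r)
				     : "a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3)
				     : "memory");
		else
			asm volatile("vmcall"
				     : "=a"(r)
				     : "a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3)
				     : "memory");
		return r;
	}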
From patchwork Fri Dec 23 00:13:46 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 636517
Date: Fri, 23 Dec 2022 00:13:46 +0000
In-Reply-To: <20221223001352.3873203-1-vannapurve@google.com>
References: <20221223001352.3873203-1-vannapurve@google.com>
X-Mailer: git-send-email 2.39.0.314.g84b9a713c41-goog
Message-ID: <20221223001352.3873203-3-vannapurve@google.com>
Subject: [V3 PATCH 2/8] KVM: selftests: Support mapping pagetables to guest
 virtual memory
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com,
 jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org,
 yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com,
 aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com,
 corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org,
 akpm@linux-foundation.org, chao.p.peng@linux.intel.com,
 yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com,
 michael.roth@amd.com, qperret@google.com, steven.price@arm.com,
 ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz,
 marcorr@google.com, erdemaktas@google.com, pgonda@google.com, nikunj@amd.com,
 seanjc@google.com, diviness@google.com, maz@kernel.org, dmatlack@google.com,
 axelrasmussen@google.com, maciej.szmigiero@oracle.com, mizhang@google.com,
 bgardon@google.com, ackerleytng@google.com, Vishal Annapurve
X-Mailing-List: linux-kselftest@vger.kernel.org

Add support for mapping guest pagetables into guest VM regions so that
guests can modify their page table entries at runtime.

Add the following APIs:
1) vm_set_pgt_alloc_tracking: track allocations of guest page table pages
   so that such pages can later be mapped back into the guest.
2) vm_map_page_table: map the tracked page table pages into the guest and
   pass the guest the information needed to translate page table physical
   addresses to virtual addresses.

This framework is useful for guests whose pagetables can't be manipulated
from userspace, e.g. confidential VM selftests. Confidential VMs need to
change the memory access type by modifying the gpa values in their page
table entries.
Signed-off-by: Vishal Annapurve
---
 .../selftests/kvm/include/kvm_util_base.h     | 88 +++++++++++++++++++
 tools/testing/selftests/kvm/lib/kvm_util.c    | 88 ++++++++++++++++++-
 .../selftests/kvm/lib/x86_64/processor.c      | 41 +++++++++
 3 files changed, 216 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index a736d6d18fa5..6c286686ec7c 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -78,6 +78,11 @@ struct protected_vm {
 	int8_t protected_bit;
 };
 
+struct pgt_page {
+	vm_paddr_t paddr;
+	struct list_head list;
+};
+
 struct kvm_vm {
 	int mode;
 	unsigned long type;
@@ -108,6 +113,10 @@ struct kvm_vm {
 	/* VM protection enabled: SEV, etc*/
 	bool protected;
 
+	struct list_head pgt_pages;
+	bool track_pgt_pages;
+	uint32_t num_pgt_pages;
+	vm_vaddr_t pgt_vaddr_start;
 	/* Cache of information for binary stats interface */
 	int stats_fd;
@@ -196,6 +205,25 @@ struct vm_guest_mode_params {
 	unsigned int page_size;
 	unsigned int page_shift;
 };
+
+/*
+ * Structure shared with the guest containing information about:
+ * - Starting virtual address for num_pgt_pages physical pagetable
+ *   page addresses tracked via paddrs array
+ * - page size of the guest
+ *
+ * Guest can walk through its pagetables using this information to
+ * read/modify pagetable attributes.
+ */
+struct guest_pgt_info {
+	uint64_t num_pgt_pages;
+	uint64_t pgt_vaddr_start;
+	uint64_t page_size;
+	uint64_t enc_mask;
+	uint64_t shared_mask;
+	uint64_t paddrs[];
+};
+
 extern const struct vm_guest_mode_params vm_guest_mode_params[];
 
 int open_path_or_exit(const char *path, int flags);
@@ -411,6 +439,48 @@ void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
 struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
 vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
+
+/*
+ * function called by guest code to translate physical address of a pagetable
+ * page to guest virtual address.
+ *
+ * input args:
+ *   gpgt_info - pointer to the guest_pgt_info structure containing info
+ *               about guest virtual address mappings for guest physical
+ *               addresses of page table pages.
+ *   pgt_pa - physical address of guest page table page to be translated
+ *            to a virtual address.
+ *
+ * output args: none
+ *
+ * return:
+ *   pointer to the pagetable page, null in case physical address is not
+ *   tracked via given guest_pgt_info structure.
+ */
+void *guest_code_get_pgt_vaddr(struct guest_pgt_info *gpgt_info, uint64_t pgt_pa);
+
+/*
+ * 1) Map page table physical pages to the guest virtual address range
+ * 2) Allocate and setup a page to be shared with guest containing guest_pgt_info
+ *    structure.
+ *
+ * Note:
+ * 1) vm_set_pgt_alloc_tracking function should be used to start tracking
+ *    of physical page table page allocation.
+ * 2) This function should be invoked after needed pagetable pages are
+ *    mapped to the VM using virt_pg_map.
+ *
+ * input args:
+ *   vm - virtual machine
+ *   vaddr_min - Minimum guest virtual address to start mapping the
+ *               guest pagetable pages and guest_pgt_info structure page(s).
+ *
+ * output args: none
+ *
+ * return: none
+ */
+void vm_map_page_table(struct kvm_vm *vm, vm_vaddr_t vaddr_min);
+
 vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
 vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
@@ -673,10 +743,28 @@ void kvm_gsi_routing_write(struct kvm_vm *vm, struct kvm_irq_routing *routing);
 
 const char *exit_reason_str(unsigned int exit_reason);
 
+void sync_vm_gpgt_info(struct kvm_vm *vm, vm_vaddr_t pgt_info);
+
 vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 			     uint32_t memslot);
 vm_paddr_t _vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, vm_paddr_t paddr_min,
 			       uint32_t memslot, bool protected);
+
+/*
+ * Enable tracking of physical guest pagetable pages for the given vm.
+ * This function should be called right after vm creation before any pages are
+ * mapped into the VM using vm_alloc_* / vm_vaddr_alloc* functions.
+ *
+ * input args:
+ *   vm - virtual machine
+ *
+ * output args: none
+ *
+ * return:
+ *   None
+ */
+void vm_set_pgt_alloc_tracking(struct kvm_vm *vm);
+
 vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
 
 static inline vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 37f342a17350..b56be997216a 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -202,6 +202,7 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages)
 	TEST_ASSERT(vm != NULL, "Insufficient Memory");
 
 	INIT_LIST_HEAD(&vm->vcpus);
+	INIT_LIST_HEAD(&vm->pgt_pages);
 	vm->regions.gpa_tree = RB_ROOT;
 	vm->regions.hva_tree = RB_ROOT;
 	hash_init(vm->regions.slot_hash);
@@ -695,6 +696,7 @@ void kvm_vm_free(struct kvm_vm *vmp)
 {
 	int ctr;
 	struct hlist_node *node;
+	struct pgt_page *entry, *nentry;
 	struct userspace_mem_region *region;
 
 	if (vmp == NULL)
@@ -710,6 +712,9 @@ void kvm_vm_free(struct kvm_vm *vmp)
 	hash_for_each_safe(vmp->regions.slot_hash, ctr, node, region, slot_node)
 		__vm_mem_region_delete(vmp, region, false);
 
+	list_for_each_entry_safe(entry, nentry, &vmp->pgt_pages, list)
+		free(entry);
+
 	/* Free sparsebit arrays. */
 	sparsebit_free(&vmp->vpages_valid);
 	sparsebit_free(&vmp->vpages_mapped);
@@ -1330,6 +1335,72 @@ vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
 	return pgidx_start * vm->page_size;
 }
 
+void __weak sync_vm_gpgt_info(struct kvm_vm *vm, vm_vaddr_t pgt_info)
+{
+}
+
+void *guest_code_get_pgt_vaddr(struct guest_pgt_info *gpgt_info,
+	uint64_t pgt_pa)
+{
+	uint64_t num_pgt_pages = gpgt_info->num_pgt_pages;
+	uint64_t pgt_vaddr_start = gpgt_info->pgt_vaddr_start;
+	uint64_t page_size = gpgt_info->page_size;
+
+	for (uint32_t i = 0; i < num_pgt_pages; i++) {
+		if (gpgt_info->paddrs[i] == pgt_pa)
+			return (void *)(pgt_vaddr_start + i * page_size);
+	}
+	return NULL;
+}
+
+static void vm_setup_pgt_info_buf(struct kvm_vm *vm, vm_vaddr_t vaddr_min)
+{
+	struct pgt_page *pgt_page_entry;
+	struct guest_pgt_info *gpgt_info;
+	uint64_t info_size = sizeof(*gpgt_info) + (sizeof(uint64_t) * vm->num_pgt_pages);
+	uint64_t num_pages = align_up(info_size, vm->page_size);
+	vm_vaddr_t buf_start = vm_vaddr_alloc(vm, num_pages, vaddr_min);
+	uint32_t i = 0;
+
+	gpgt_info = (struct guest_pgt_info *)addr_gva2hva(vm, buf_start);
+	gpgt_info->num_pgt_pages = vm->num_pgt_pages;
+	gpgt_info->pgt_vaddr_start = vm->pgt_vaddr_start;
+	gpgt_info->page_size = vm->page_size;
+	if (vm->protected) {
+		gpgt_info->enc_mask = vm->arch.c_bit;
+		gpgt_info->shared_mask = vm->arch.s_bit;
+	}
+	list_for_each_entry(pgt_page_entry, &vm->pgt_pages, list) {
+		gpgt_info->paddrs[i] = pgt_page_entry->paddr;
+		i++;
+	}
+	TEST_ASSERT((i == vm->num_pgt_pages), "pgt entries mismatch with the counter");
+	sync_vm_gpgt_info(vm, buf_start);
+}
+
+void vm_map_page_table(struct kvm_vm *vm, vm_vaddr_t vaddr_min)
+{
+	struct pgt_page *pgt_page_entry;
+	vm_vaddr_t vaddr;
+
+	/* Stop tracking further pgt pages, mapping pagetable may itself need
+	 * new pages.
+	 */
+	vm->track_pgt_pages = false;
+	vm_vaddr_t vaddr_start = vm_vaddr_unused_gap(vm,
+		vm->num_pgt_pages * vm->page_size, vaddr_min);
+	vaddr = vaddr_start;
+	list_for_each_entry(pgt_page_entry, &vm->pgt_pages, list) {
+		/* Map the virtual page. */
+		virt_pg_map(vm, vaddr, pgt_page_entry->paddr & ~vm->arch.c_bit);
+		sparsebit_set(vm->vpages_mapped, vaddr >> vm->page_shift);
+		vaddr += vm->page_size;
+	}
+	vm->pgt_vaddr_start = vaddr_start;
+
+	vm_setup_pgt_info_buf(vm, vaddr_min);
+}
+
 /*
  * VM Virtual Address Allocate Shared/Encrypted
  *
@@ -1981,9 +2052,24 @@ vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 /* Arbitrary minimum physical address used for virtual translation tables. */
 #define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
 
+void vm_set_pgt_alloc_tracking(struct kvm_vm *vm)
+{
+	vm->track_pgt_pages = true;
+}
+
 vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm)
 {
-	return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
+	struct pgt_page *pgt;
+	vm_paddr_t paddr = vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
+
+	if (vm->track_pgt_pages) {
+		pgt = calloc(1, sizeof(*pgt));
+		TEST_ASSERT(pgt != NULL, "Insufficient memory");
+		pgt->paddr = (paddr | vm->arch.c_bit);
+		list_add(&pgt->list, &vm->pgt_pages);
+		vm->num_pgt_pages++;
+	}
+	return paddr;
 }
 
 /*
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 429e55f2609f..ab7d4cc4b848 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -19,6 +19,7 @@
 #define MAX_NR_CPUID_ENTRIES 100
 
 vm_vaddr_t exception_handlers;
+static struct guest_pgt_info *gpgt_info;
 static bool is_cpu_vendor_intel;
 static bool is_cpu_vendor_amd;
 
@@ -241,6 +242,46 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 	__virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K);
 }
 
+static uint64_t *guest_code_get_pte(uint64_t vaddr)
+{
+	uint16_t index[4];
+	uint64_t *pml4e, *pdpe, *pde, *pte;
+	uint64_t pgt_paddr = get_cr3();
+
+	GUEST_ASSERT(gpgt_info != NULL);
+	uint64_t page_size = gpgt_info->page_size;
+
+	index[0] = (vaddr >> 12) & 0x1ffu;
+	index[1] = (vaddr >> 21) & 0x1ffu;
+	index[2] = (vaddr >> 30) & 0x1ffu;
+	index[3] = (vaddr >> 39) & 0x1ffu;
+
+	pml4e = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr);
+	GUEST_ASSERT(pml4e && (pml4e[index[3]] & PTE_PRESENT_MASK));
+
+	pgt_paddr = (PTE_GET_PFN(pml4e[index[3]]) * page_size);
+	pdpe = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr);
+	GUEST_ASSERT(pdpe && (pdpe[index[2]] & PTE_PRESENT_MASK) &&
+		!(pdpe[index[2]] & PTE_LARGE_MASK));
+
+	pgt_paddr = (PTE_GET_PFN(pdpe[index[2]]) * page_size);
+	pde = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr);
+	GUEST_ASSERT(pde && (pde[index[1]] & PTE_PRESENT_MASK) &&
+		!(pde[index[1]] & PTE_LARGE_MASK));
+
+	pgt_paddr = (PTE_GET_PFN(pde[index[1]]) * page_size);
+	pte = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr);
+	GUEST_ASSERT(pte && (pte[index[0]] & PTE_PRESENT_MASK));
+
+	return (uint64_t *)&pte[index[0]];
+}
+
+void sync_vm_gpgt_info(struct kvm_vm *vm, vm_vaddr_t pgt_info)
+{
+	gpgt_info = (struct guest_pgt_info *)pgt_info;
+	sync_global_to_guest(vm, gpgt_info);
+}
+
 void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 		    uint64_t nr_bytes, int level)
 {
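Putting the new APIs together, the intended call order looks roughly like the
sketch below. This is a usage illustration assembled from the comments and
hunks above, not code from the series itself; the vaddr_min value is an
arbitrary example and gpgt_info refers to the pointer published to the guest
via sync_vm_gpgt_info():

	struct kvm_vm *vm;

	/* Host: enable tracking before any page table pages are allocated. */
	vm = ____vm_create(VM_MODE_PXXV48_4K, nr_pages);
	vm_set_pgt_alloc_tracking(vm);

	/* ... add vCPUs, load the guest ELF, create memslots, virt_pg_map() ... */

	/* Map the tracked page table pages into the guest and publish guest_pgt_info. */
	vm_map_page_table(vm, 0x10000 /* example vaddr_min */);

	/* Guest: translate a page table page's physical address (e.g. CR3) to a pointer. */
	uint64_t *pml4 = guest_code_get_pgt_vaddr(gpgt_info, get_cr3());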
From patchwork Fri Dec 23 00:13:47 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 636647
Date: Fri, 23 Dec 2022 00:13:47 +0000
In-Reply-To: <20221223001352.3873203-1-vannapurve@google.com>
References: <20221223001352.3873203-1-vannapurve@google.com>
X-Mailer: git-send-email 2.39.0.314.g84b9a713c41-goog
Message-ID: <20221223001352.3873203-4-vannapurve@google.com>
Subject: [V3 PATCH 3/8] KVM: selftests: x86: Support changing gpa encryption masks
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com,
 jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org,
 yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com,
 aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com,
 corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org,
 akpm@linux-foundation.org, chao.p.peng@linux.intel.com,
 yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com,
 michael.roth@amd.com, qperret@google.com, steven.price@arm.com,
 ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz,
 marcorr@google.com, erdemaktas@google.com, pgonda@google.com, nikunj@amd.com,
 seanjc@google.com, diviness@google.com, maz@kernel.org, dmatlack@google.com,
 axelrasmussen@google.com, maciej.szmigiero@oracle.com, mizhang@google.com,
 bgardon@google.com, ackerleytng@google.com, Vishal Annapurve
X-Mailing-List: linux-kselftest@vger.kernel.org

Add guest-side functionality to modify the encryption/shared masks in page
table entries, so that the guest can access GPA ranges as private or shared.

Signed-off-by: Vishal Annapurve
---
 .../selftests/kvm/include/x86_64/processor.h |  4 ++
 .../selftests/kvm/lib/x86_64/processor.c     | 39 +++++++++++++++++++
 2 files changed, 43 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 3617f83bb2e5..c8c55f54c14f 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -945,6 +945,10 @@ void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu);
 void vm_install_exception_handler(struct kvm_vm *vm, int vector,
 			void (*handler)(struct ex_regs *));
 
+void guest_set_region_shared(void *vaddr, uint64_t size);
+
+void guest_set_region_private(void *vaddr, uint64_t size);
+
 /* If a toddler were to say "abracadabra". */
 #define KVM_EXCEPTION_MAGIC 0xabacadabaULL
 
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index ab7d4cc4b848..42d1e4074f32 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -276,6 +276,45 @@ static uint64_t *guest_code_get_pte(uint64_t vaddr)
 	return (uint64_t *)&pte[index[0]];
 }
 
+static void guest_code_change_region_prot(void *vaddr_start, uint64_t mem_size,
+	bool private)
+{
+	uint64_t vaddr = (uint64_t)vaddr_start;
+	uint32_t num_pages;
+
+	GUEST_ASSERT(gpgt_info != NULL);
+	uint32_t guest_page_size = gpgt_info->page_size;
+
+	GUEST_ASSERT(!(mem_size % guest_page_size) && !(vaddr % guest_page_size));
+	GUEST_ASSERT(gpgt_info->enc_mask | gpgt_info->shared_mask);
+
+	num_pages = mem_size / guest_page_size;
+	for (uint32_t i = 0; i < num_pages; i++) {
+		uint64_t *pte = guest_code_get_pte(vaddr);
+
+		GUEST_ASSERT(pte);
+		if (private) {
+			*pte &= ~(gpgt_info->shared_mask);
+			*pte |= gpgt_info->enc_mask;
+		} else {
+			*pte &= ~(gpgt_info->enc_mask);
+			*pte |= gpgt_info->shared_mask;
+		}
+		asm volatile("invlpg (%0)" :: "r"(vaddr) : "memory");
+		vaddr += guest_page_size;
+	}
+}
+
+void guest_set_region_shared(void *vaddr, uint64_t size)
+{
+	guest_code_change_region_prot(vaddr, size, /* shared */ false);
+}
+
+void guest_set_region_private(void *vaddr, uint64_t size)
+{
+	guest_code_change_region_prot(vaddr, size, /* private */ true);
+}
+
 void sync_vm_gpgt_info(struct kvm_vm *vm, vm_vaddr_t pgt_info)
 {
 	gpgt_info = (struct guest_pgt_info *)pgt_info;
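A guest-side usage sketch for the two helpers added here, paired with the
map-gpa-range hypercalls the way the later test patch in this series does.
This is illustrative only; buf and size are placeholders:

	/* Flip a buffer to shared before exposing it to the host... */
	guest_set_region_shared(buf, size);               /* clear enc_mask, set shared_mask */
	kvm_hypercall_map_shared((uint64_t)buf, size);    /* let userspace back it as shared */
	memset(buf, 0x66, size);                          /* contents now visible to the host */

	/* ...and convert it back to private afterwards. */
	guest_set_region_private(buf, size);              /* set enc_mask, clear shared_mask */
	kvm_hypercall_map_private((uint64_t)buf, size);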
From patchwork Fri Dec 23 00:13:48 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 636516
Date: Fri, 23 Dec 2022 00:13:48 +0000
In-Reply-To: <20221223001352.3873203-1-vannapurve@google.com>
References: <20221223001352.3873203-1-vannapurve@google.com>
X-Mailer: git-send-email 2.39.0.314.g84b9a713c41-goog
Message-ID: <20221223001352.3873203-5-vannapurve@google.com>
Subject: [V3 PATCH 4/8] KVM: selftests: Split SEV VM creation logic
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com,
 jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org,
 yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com,
 aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com,
 corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org,
 akpm@linux-foundation.org, chao.p.peng@linux.intel.com,
 yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com,
 michael.roth@amd.com, qperret@google.com, steven.price@arm.com,
 ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz,
 marcorr@google.com, erdemaktas@google.com, pgonda@google.com, nikunj@amd.com,
 seanjc@google.com, diviness@google.com, maz@kernel.org, dmatlack@google.com,
 axelrasmussen@google.com, maciej.szmigiero@oracle.com, mizhang@google.com,
 bgardon@google.com, ackerleytng@google.com, Vishal Annapurve
X-Mailing-List: linux-kselftest@vger.kernel.org

Split the SEV VM creation logic to allow additional modifications to the
SEV VM configuration, e.g. adding memslots, before the VM is launched and
measured.

Signed-off-by: Vishal Annapurve
---
 .../selftests/kvm/include/x86_64/sev.h       |  4 ++++
 tools/testing/selftests/kvm/lib/x86_64/sev.c | 20 ++++++++++++++++---
 2 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/sev.h b/tools/testing/selftests/kvm/include/x86_64/sev.h
index 1148db928d0b..6bf2015fff7a 100644
--- a/tools/testing/selftests/kvm/include/x86_64/sev.h
+++ b/tools/testing/selftests/kvm/include/x86_64/sev.h
@@ -19,4 +19,8 @@ bool is_kvm_sev_supported(void);
 struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t policy, void *guest_code,
 					   struct kvm_vcpu **cpu);
 
+struct kvm_vm *sev_vm_init_with_one_vcpu(uint32_t policy, void *guest_code,
+					 struct kvm_vcpu **cpu);
+
+void sev_vm_finalize(struct kvm_vm *vm, uint32_t policy);
 #endif /* SELFTEST_KVM_SEV_H */
diff --git a/tools/testing/selftests/kvm/lib/x86_64/sev.c b/tools/testing/selftests/kvm/lib/x86_64/sev.c
index 49c62f25363e..96d3dbc2ba74 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/sev.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/sev.c
@@ -215,7 +215,7 @@ static void sev_vm_measure(struct kvm_vm *vm)
 	pr_debug("\n");
 }
 
-struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t policy, void *guest_code,
+struct kvm_vm *sev_vm_init_with_one_vcpu(uint32_t policy, void *guest_code,
 					   struct kvm_vcpu **cpu)
 {
 	enum vm_guest_mode mode = VM_MODE_PXXV48_4K;
@@ -231,14 +231,28 @@ struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t policy, void *guest_code,
 	*cpu = vm_vcpu_add(vm, 0, guest_code);
 	kvm_vm_elf_load(vm, program_invocation_name);
 
+	pr_info("SEV guest created, policy: 0x%x, size: %lu KB\n", policy,
+		nr_pages * vm->page_size / 1024);
+	return vm;
+}
+
+void sev_vm_finalize(struct kvm_vm *vm, uint32_t policy)
+{
 	sev_vm_launch(vm, policy);
 
 	sev_vm_measure(vm);
 
 	sev_vm_launch_finish(vm);
+}
 
-	pr_info("SEV guest created, policy: 0x%x, size: %lu KB\n", policy,
-		nr_pages * vm->page_size / 1024);
+struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t policy, void *guest_code,
+					   struct kvm_vcpu **cpu)
+{
+	struct kvm_vm *vm;
+
+	vm = sev_vm_init_with_one_vcpu(policy, guest_code, cpu);
+
+	sev_vm_finalize(vm, policy);
 
 	return vm;
 }
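A sketch of how the split helpers are intended to be used (illustrative; the
memslot arguments are placeholders, and vm_userspace_mem_region_add() with
KVM_MEM_PRIVATE is used the same way the private_mem tests in this series use
it):

	struct kvm_vcpu *vcpu;
	struct kvm_vm *vm;

	vm = sev_vm_init_with_one_vcpu(policy, guest_code, &vcpu);

	/* Extra configuration that must happen before launch/measurement. */
	vm_userspace_mem_region_add(vm, src_type, test_gpa, test_slot,
				    test_npages, KVM_MEM_PRIVATE);

	/* Launch, measure and finish only once the VM layout is complete. */
	sev_vm_finalize(vm, policy);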
From patchwork Fri Dec 23 00:13:49 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 636646
Date: Fri, 23 Dec 2022 00:13:49 +0000
In-Reply-To: <20221223001352.3873203-1-vannapurve@google.com>
References: <20221223001352.3873203-1-vannapurve@google.com>
X-Mailer: git-send-email 2.39.0.314.g84b9a713c41-goog
Message-ID: <20221223001352.3873203-6-vannapurve@google.com>
Subject: [V3 PATCH 5/8] KVM: selftests: Enable pagetable mapping for SEV VMs
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com,
 jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org,
 yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com,
 aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com,
 corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org,
 akpm@linux-foundation.org, chao.p.peng@linux.intel.com,
 yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com,
 michael.roth@amd.com, qperret@google.com, steven.price@arm.com,
 ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz,
 marcorr@google.com, erdemaktas@google.com, pgonda@google.com, nikunj@amd.com,
 seanjc@google.com, diviness@google.com, maz@kernel.org, dmatlack@google.com,
 axelrasmussen@google.com, maciej.szmigiero@oracle.com, mizhang@google.com,
 bgardon@google.com, ackerleytng@google.com, Vishal Annapurve
X-Mailing-List: linux-kselftest@vger.kernel.org

Enable pagetable tracking and mapping for SEV VMs so that guest code can use
the guest_set_region_shared/private APIs.

Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/lib/x86_64/sev.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/sev.c b/tools/testing/selftests/kvm/lib/x86_64/sev.c
index 96d3dbc2ba74..0dfffdc224d6 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/sev.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/sev.c
@@ -215,6 +215,8 @@ static void sev_vm_measure(struct kvm_vm *vm)
 	pr_debug("\n");
 }
 
+#define GUEST_PGT_MIN_VADDR 0x10000
+
 struct kvm_vm *sev_vm_init_with_one_vcpu(uint32_t policy, void *guest_code,
 					 struct kvm_vcpu **cpu)
 {
@@ -224,6 +226,7 @@ struct kvm_vm *sev_vm_init_with_one_vcpu(uint32_t policy, void *guest_code,
 
 	vm = ____vm_create(mode, nr_pages);
 
+	vm_set_pgt_alloc_tracking(vm);
 	kvm_sev_ioctl(vm, KVM_SEV_INIT, NULL);
 
 	configure_sev_pte_masks(vm);
@@ -238,6 +241,8 @@ struct kvm_vm *sev_vm_init_with_one_vcpu(uint32_t policy, void *guest_code,
 
 void sev_vm_finalize(struct kvm_vm *vm, uint32_t policy)
 {
+	vm_map_page_table(vm, GUEST_PGT_MIN_VADDR);
+
 	sev_vm_launch(vm, policy);
 
 	sev_vm_measure(vm);
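Spelled out, the ordering this patch depends on is roughly the following.
This is an annotated summary of the two hunks above; the trailing comments
are editorial, not part of the patch:

	/* sev_vm_init_with_one_vcpu() */
	vm = ____vm_create(mode, nr_pages);
	vm_set_pgt_alloc_tracking(vm);        /* before any page table page is allocated   */
	kvm_sev_ioctl(vm, KVM_SEV_INIT, NULL);
	configure_sev_pte_masks(vm);          /* c_bit/s_bit known before mappings are set */

	/* sev_vm_finalize() */
	vm_map_page_table(vm, GUEST_PGT_MIN_VADDR);  /* after all guest mappings exist...  */
	sev_vm_launch(vm, policy);                   /* ...but before guest memory is      */
	sev_vm_measure(vm);                          /* encrypted and measured, so the     */
	sev_vm_launch_finish(vm);                    /* guest sees the mappings at boot.   */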
From patchwork Fri Dec 23 00:13:50 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 636515
Date: Fri, 23 Dec 2022 00:13:50 +0000
In-Reply-To: <20221223001352.3873203-1-vannapurve@google.com>
References: <20221223001352.3873203-1-vannapurve@google.com>
X-Mailer: git-send-email 2.39.0.314.g84b9a713c41-goog
Message-ID: <20221223001352.3873203-7-vannapurve@google.com>
Subject: [V3 PATCH 6/8] KVM: selftests: Refactor private_mem_test
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com,
 jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org,
 yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com,
 aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com,
 corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org,
 akpm@linux-foundation.org, chao.p.peng@linux.intel.com,
 yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com,
 michael.roth@amd.com, qperret@google.com, steven.price@arm.com,
 ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz,
 marcorr@google.com, erdemaktas@google.com, pgonda@google.com, nikunj@amd.com,
 seanjc@google.com, diviness@google.com, maz@kernel.org, dmatlack@google.com,
 axelrasmussen@google.com, maciej.szmigiero@oracle.com, mizhang@google.com,
 bgardon@google.com, ackerleytng@google.com, Vishal Annapurve
X-Mailing-List: linux-kselftest@vger.kernel.org

Move most of the logic from the private mem test into a library so that the
private_mem_test logic can be shared between non-confidential and
confidential VM selftests.
Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../include/x86_64/private_mem_test_helper.h  |  15 ++
 .../kvm/lib/x86_64/private_mem_test_helper.c  | 197 ++++++++++++++++++
 .../selftests/kvm/x86_64/private_mem_test.c   | 187 +----------------
 4 files changed, 214 insertions(+), 186 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index ee8c3aebee80..83c649c9de23 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -56,6 +56,7 @@ LIBKVM_x86_64 += lib/x86_64/handlers.S
 LIBKVM_x86_64 += lib/x86_64/hyperv.c
 LIBKVM_x86_64 += lib/x86_64/memstress.c
 LIBKVM_x86_64 += lib/x86_64/private_mem.c
+LIBKVM_x86_64 += lib/x86_64/private_mem_test_helper.c
 LIBKVM_x86_64 += lib/x86_64/processor.c
 LIBKVM_x86_64 += lib/x86_64/svm.c
 LIBKVM_x86_64 += lib/x86_64/ucall.c
diff --git a/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h b/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h
new file mode 100644
index 000000000000..4d32c025876c
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2022, Google LLC.
+ */
+
+#ifndef SELFTEST_KVM_PRIVATE_MEM_TEST_HELPER_H
+#define SELFTEST_KVM_PRIVATE_MEM_TEST_HELPER_H
+
+#include
+#include
+
+void execute_vm_with_private_test_mem(
+	enum vm_mem_backing_src_type test_mem_src);
+
+#endif /* SELFTEST_KVM_PRIVATE_MEM_TEST_HELPER_H */
diff --git a/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c b/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c
new file mode 100644
index 000000000000..600bd21d1bb8
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c
@@ -0,0 +1,197 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022, Google LLC.
+ */
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+
+#define TEST_AREA_SLOT 10
+#define TEST_AREA_GPA 0xC0000000
+#define TEST_AREA_SIZE (2 * 1024 * 1024)
+#define GUEST_TEST_MEM_OFFSET (1 * 1024 * 1024)
+#define GUEST_TEST_MEM_SIZE (10 * 4096)
+
+#define VM_STAGE_PROCESSED(x) pr_info("Processed stage %s\n", #x)
+
+#define TEST_MEM_DATA_PATTERN1 0x66
+#define TEST_MEM_DATA_PATTERN2 0x99
+#define TEST_MEM_DATA_PATTERN3 0x33
+#define TEST_MEM_DATA_PATTERN4 0xaa
+#define TEST_MEM_DATA_PATTERN5 0x12
+
+static bool verify_mem_contents(void *mem, uint32_t size, uint8_t pattern)
+{
+	uint8_t *buf = (uint8_t *)mem;
+
+	for (uint32_t i = 0; i < size; i++) {
+		if (buf[i] != pattern)
+			return false;
+	}
+
+	return true;
+}
+
+static void populate_test_area(void *test_area_base, uint64_t pattern)
+{
+	memset(test_area_base, pattern, TEST_AREA_SIZE);
+}
+
+static void populate_guest_test_mem(void *guest_test_mem, uint64_t pattern)
+{
+	memset(guest_test_mem, pattern, GUEST_TEST_MEM_SIZE);
+}
+
+static bool verify_test_area(void *test_area_base, uint64_t area_pattern,
+	uint64_t guest_pattern)
+{
+	void *guest_test_mem = test_area_base + GUEST_TEST_MEM_OFFSET;
+	void *test_area2_base = guest_test_mem + GUEST_TEST_MEM_SIZE;
+	uint64_t test_area2_size = (TEST_AREA_SIZE - (GUEST_TEST_MEM_OFFSET +
+			GUEST_TEST_MEM_SIZE));
+
+	return (verify_mem_contents(test_area_base, GUEST_TEST_MEM_OFFSET, area_pattern) &&
+		verify_mem_contents(guest_test_mem, GUEST_TEST_MEM_SIZE, guest_pattern) &&
+		verify_mem_contents(test_area2_base, test_area2_size, area_pattern));
+}
+
+#define GUEST_STARTED 0
+#define GUEST_PRIVATE_MEM_POPULATED 1
+#define GUEST_SHARED_MEM_POPULATED 2
+#define GUEST_PRIVATE_MEM_POPULATED2 3
+
+/*
+ * Run memory conversion tests with explicit conversion:
+ * Execute KVM hypercall to map/unmap gpa range which will cause userspace exit
+ * to back/unback private memory. Subsequent accesses by guest to the gpa range
+ * will not cause exit to userspace.
+ *
+ * Test memory conversion scenarios with following steps:
+ * 1) Access private memory using private access and verify that memory contents
+ *    are not visible to userspace.
+ * 2) Convert memory to shared using explicit conversions and ensure that
+ *    userspace is able to access the shared regions.
+ * 3) Convert memory back to private using explicit conversions and ensure that
+ *    userspace is again not able to access converted private regions.
+ */
+static void guest_conv_test_fn(void)
+{
+	void *test_area_base = (void *)TEST_AREA_GPA;
+	void *guest_test_mem = (void *)(TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET);
+	uint64_t guest_test_size = GUEST_TEST_MEM_SIZE;
+
+	GUEST_SYNC(GUEST_STARTED);
+
+	populate_test_area(test_area_base, TEST_MEM_DATA_PATTERN1);
+	GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED);
+	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
+			TEST_MEM_DATA_PATTERN1));
+
+	kvm_hypercall_map_shared((uint64_t)guest_test_mem, guest_test_size);
+
+	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PATTERN2);
+
+	GUEST_SYNC(GUEST_SHARED_MEM_POPULATED);
+	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
+			TEST_MEM_DATA_PATTERN5));
+
+	kvm_hypercall_map_private((uint64_t)guest_test_mem, guest_test_size);
+
+	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PATTERN3);
+	GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED2);
+
+	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
+			TEST_MEM_DATA_PATTERN3));
+	GUEST_DONE();
+}
+
+#define ASSERT_CONV_TEST_EXIT_IO(vcpu, stage)				\
+	{								\
+		struct ucall uc;					\
+		ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_IO);		\
+		ASSERT_EQ(get_ucall(vcpu, &uc), UCALL_SYNC);		\
+		ASSERT_EQ(uc.args[1], stage);				\
+	}
+
+#define ASSERT_GUEST_DONE(vcpu)						\
+	{								\
+		struct ucall uc;					\
+		ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_IO);		\
+		ASSERT_EQ(get_ucall(vcpu, &uc), UCALL_DONE);		\
+	}
+
+static void host_conv_test_fn(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
+{
+	void *test_area_hva = addr_gpa2hva(vm, TEST_AREA_GPA);
+	void *guest_test_mem_hva = (test_area_hva + GUEST_TEST_MEM_OFFSET);
+
+	vcpu_run_and_handle_mapgpa(vm, vcpu);
+	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_STARTED);
+	populate_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4);
+	VM_STAGE_PROCESSED(GUEST_STARTED);
+
+	vcpu_run_and_handle_mapgpa(vm, vcpu);
+	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_PRIVATE_MEM_POPULATED);
+	TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4,
+			TEST_MEM_DATA_PATTERN4), "failed");
+	VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED);
+
+	vcpu_run_and_handle_mapgpa(vm, vcpu);
+	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_SHARED_MEM_POPULATED);
+	TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4,
+			TEST_MEM_DATA_PATTERN2), "failed");
+	populate_guest_test_mem(guest_test_mem_hva, TEST_MEM_DATA_PATTERN5);
+	VM_STAGE_PROCESSED(GUEST_SHARED_MEM_POPULATED);
+
+	vcpu_run_and_handle_mapgpa(vm, vcpu);
+	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_PRIVATE_MEM_POPULATED2);
+	TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4,
+			TEST_MEM_DATA_PATTERN5), "failed");
+	VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED2);
+
+	vcpu_run_and_handle_mapgpa(vm, vcpu);
+	ASSERT_GUEST_DONE(vcpu);
+}
+
+void execute_vm_with_private_test_mem(
+	enum vm_mem_backing_src_type test_mem_src)
+{
+	struct kvm_vm *vm;
+	struct kvm_enable_cap cap;
+	struct kvm_vcpu *vcpu;
+
+	vm = vm_create_with_one_vcpu(&vcpu, guest_conv_test_fn);
+
+	vm_check_cap(vm, KVM_CAP_EXIT_HYPERCALL);
+	cap.cap = KVM_CAP_EXIT_HYPERCALL;
+	cap.flags = 0;
+	cap.args[0] = (1 << KVM_HC_MAP_GPA_RANGE);
+	vm_ioctl(vm, KVM_ENABLE_CAP, &cap);
+
+	vm_userspace_mem_region_add(vm, test_mem_src, TEST_AREA_GPA,
+		TEST_AREA_SLOT, TEST_AREA_SIZE / vm->page_size, KVM_MEM_PRIVATE);
+	vm_allocate_private_mem(vm, TEST_AREA_GPA, TEST_AREA_SIZE);
+
+	virt_map(vm, TEST_AREA_GPA, TEST_AREA_GPA, TEST_AREA_SIZE/vm->page_size);
+
+	host_conv_test_fn(vm, vcpu);
+
+	kvm_vm_free(vm);
+}
diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_test.c
index 015ada2e3d54..72c2f913ee92 100644
--- a/tools/testing/selftests/kvm/x86_64/private_mem_test.c
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_test.c
@@ -3,197 +3,12 @@
  * Copyright (C) 2022, Google LLC.
  */
 #define _GNU_SOURCE /* for program_invocation_short_name */
-#include
-#include
-#include
-#include
 #include
 #include
 #include
-#include
-#include
-#include
-#include
-#include
-
-#include
 #include
-#include
-#include
-
-#define TEST_AREA_SLOT 10
-#define TEST_AREA_GPA 0xC0000000
-#define TEST_AREA_SIZE (2 * 1024 * 1024)
-#define GUEST_TEST_MEM_OFFSET (1 * 1024 * 1024)
-#define GUEST_TEST_MEM_SIZE (10 * 4096)
-
-#define VM_STAGE_PROCESSED(x) pr_info("Processed stage %s\n", #x)
-
-#define TEST_MEM_DATA_PATTERN1 0x66
-#define TEST_MEM_DATA_PATTERN2 0x99
-#define TEST_MEM_DATA_PATTERN3 0x33
-#define TEST_MEM_DATA_PATTERN4 0xaa
-#define TEST_MEM_DATA_PATTERN5 0x12
-
-static bool verify_mem_contents(void *mem, uint32_t size, uint8_t pattern)
-{
-	uint8_t *buf = (uint8_t *)mem;
-
-	for (uint32_t i = 0; i < size; i++) {
-		if (buf[i] != pattern)
-			return false;
-	}
-
-	return true;
-}
-
-static void populate_test_area(void *test_area_base, uint64_t pattern)
-{
-	memset(test_area_base, pattern, TEST_AREA_SIZE);
-}
-
-static void populate_guest_test_mem(void *guest_test_mem, uint64_t pattern)
-{
-	memset(guest_test_mem, pattern, GUEST_TEST_MEM_SIZE);
-}
-
-static bool verify_test_area(void *test_area_base, uint64_t area_pattern,
-	uint64_t guest_pattern)
-{
-	void *guest_test_mem = test_area_base + GUEST_TEST_MEM_OFFSET;
-	void *test_area2_base = guest_test_mem + GUEST_TEST_MEM_SIZE;
-	uint64_t test_area2_size = (TEST_AREA_SIZE - (GUEST_TEST_MEM_OFFSET +
-			GUEST_TEST_MEM_SIZE));
-
-	return (verify_mem_contents(test_area_base, GUEST_TEST_MEM_OFFSET, area_pattern) &&
-		verify_mem_contents(guest_test_mem, GUEST_TEST_MEM_SIZE, guest_pattern) &&
-		verify_mem_contents(test_area2_base, test_area2_size, area_pattern));
-}
-
-#define GUEST_STARTED 0
-#define GUEST_PRIVATE_MEM_POPULATED 1
-#define GUEST_SHARED_MEM_POPULATED 2
-#define GUEST_PRIVATE_MEM_POPULATED2 3
-
-/*
- * Run memory conversion tests with explicit conversion:
- * Execute KVM hypercall to map/unmap gpa range which will cause userspace exit
- * to back/unback private memory. Subsequent accesses by guest to the gpa range
- * will not cause exit to userspace.
- *
- * Test memory conversion scenarios with following steps:
- * 1) Access private memory using private access and verify that memory contents
- *    are not visible to userspace.
- * 2) Convert memory to shared using explicit conversions and ensure that
- *    userspace is able to access the shared regions.
- * 3) Convert memory back to private using explicit conversions and ensure that
- *    userspace is again not able to access converted private regions.
- */
-static void guest_conv_test_fn(void)
-{
-	void *test_area_base = (void *)TEST_AREA_GPA;
-	void *guest_test_mem = (void *)(TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET);
-	uint64_t guest_test_size = GUEST_TEST_MEM_SIZE;
-
-	GUEST_SYNC(GUEST_STARTED);
-
-	populate_test_area(test_area_base, TEST_MEM_DATA_PATTERN1);
-	GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED);
-	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
-			TEST_MEM_DATA_PATTERN1));
-
-	kvm_hypercall_map_shared((uint64_t)guest_test_mem, guest_test_size);
-
-	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PATTERN2);
-
-	GUEST_SYNC(GUEST_SHARED_MEM_POPULATED);
-	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
-			TEST_MEM_DATA_PATTERN5));
-
-	kvm_hypercall_map_private((uint64_t)guest_test_mem, guest_test_size);
-
-	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PATTERN3);
-	GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED2);
-
-	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
-			TEST_MEM_DATA_PATTERN3));
-	GUEST_DONE();
-}
-
-#define ASSERT_CONV_TEST_EXIT_IO(vcpu, stage)				\
-	{								\
-		struct ucall uc;					\
-		ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_IO);		\
-		ASSERT_EQ(get_ucall(vcpu, &uc), UCALL_SYNC);		\
-		ASSERT_EQ(uc.args[1], stage);				\
-	}
-
-#define ASSERT_GUEST_DONE(vcpu)						\
-	{								\
-		struct ucall uc;					\
-		ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_IO);		\
-		ASSERT_EQ(get_ucall(vcpu, &uc), UCALL_DONE);		\
-	}
-
-static void host_conv_test_fn(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
-{
-	void *test_area_hva = addr_gpa2hva(vm, TEST_AREA_GPA);
-	void *guest_test_mem_hva = (test_area_hva + GUEST_TEST_MEM_OFFSET);
-
-	vcpu_run_and_handle_mapgpa(vm, vcpu);
-	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_STARTED);
-	populate_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4);
-	VM_STAGE_PROCESSED(GUEST_STARTED);
-
-	vcpu_run_and_handle_mapgpa(vm, vcpu);
-	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_PRIVATE_MEM_POPULATED);
-	TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4,
-			TEST_MEM_DATA_PATTERN4), "failed");
-	VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED);
-
-	vcpu_run_and_handle_mapgpa(vm, vcpu);
-	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_SHARED_MEM_POPULATED);
-	TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4,
-			TEST_MEM_DATA_PATTERN2), "failed");
-	populate_guest_test_mem(guest_test_mem_hva, TEST_MEM_DATA_PATTERN5);
-	VM_STAGE_PROCESSED(GUEST_SHARED_MEM_POPULATED);
-
-	vcpu_run_and_handle_mapgpa(vm, vcpu);
-	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_PRIVATE_MEM_POPULATED2);
-	TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4,
-			TEST_MEM_DATA_PATTERN5), "failed");
-	VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED2);
-
-	vcpu_run_and_handle_mapgpa(vm, vcpu);
-	ASSERT_GUEST_DONE(vcpu);
-}
-
-static void execute_vm_with_private_test_mem(
-	enum vm_mem_backing_src_type test_mem_src)
-{
-	struct kvm_vm *vm;
-	struct kvm_enable_cap cap;
-	struct kvm_vcpu *vcpu;
-
-	vm = vm_create_with_one_vcpu(&vcpu, guest_conv_test_fn);
-
-	vm_check_cap(vm, KVM_CAP_EXIT_HYPERCALL);
-	cap.cap = KVM_CAP_EXIT_HYPERCALL;
-	cap.flags = 0;
-	cap.args[0] = (1 << KVM_HC_MAP_GPA_RANGE);
-	vm_ioctl(vm, KVM_ENABLE_CAP, &cap);
-
-	vm_userspace_mem_region_add(vm, test_mem_src, TEST_AREA_GPA,
-		TEST_AREA_SLOT, TEST_AREA_SIZE / vm->page_size, KVM_MEM_PRIVATE);
-	vm_allocate_private_mem(vm, TEST_AREA_GPA, TEST_AREA_SIZE);
-
-	virt_map(vm, TEST_AREA_GPA, TEST_AREA_GPA, TEST_AREA_SIZE/vm->page_size);
-
-	host_conv_test_fn(vm, vcpu);
-
-	kvm_vm_free(vm);
-}
+#include
 
 int main(int argc, char *argv[])
 {
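The rest of this hunk, including the body of the refactored main(), is
truncated in this capture. After the refactor, main() presumably only
dispatches to the new library helper; a purely hypothetical sketch of such a
driver (the backing-source value is a placeholder, not taken from the patch):

	int main(int argc, char *argv[])
	{
		/* Hypothetical driver; the real main() is cut off above. */
		enum vm_mem_backing_src_type src = VM_MEM_SRC_ANONYMOUS; /* placeholder */

		execute_vm_with_private_test_mem(src);

		return 0;
	}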
From patchwork Fri Dec 23 00:13:51 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 636645
Date: Fri, 23 Dec 2022 00:13:51 +0000
In-Reply-To: <20221223001352.3873203-1-vannapurve@google.com>
References: <20221223001352.3873203-1-vannapurve@google.com>
Message-ID: <20221223001352.3873203-8-vannapurve@google.com>
Subject: [V3 PATCH 7/8] KVM: selftests: private_mem_test: Add support for SEV VMs
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com,
bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org, yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com, aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com, corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org, akpm@linux-foundation.org, chao.p.peng@linux.intel.com, yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com, michael.roth@amd.com, qperret@google.com, steven.price@arm.com, ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz, marcorr@google.com, erdemaktas@google.com, pgonda@google.com, nikunj@amd.com, seanjc@google.com, diviness@google.com, maz@kernel.org, dmatlack@google.com, axelrasmussen@google.com, maciej.szmigiero@oracle.com, mizhang@google.com, bgardon@google.com, ackerleytng@google.com, Vishal Annapurve

Add support for executing the private mem test with SEV VMs: allow the
test to create SEV VMs and have the guest code perform the needed page
table updates when it runs in an SEV VM context.

Signed-off-by: Vishal Annapurve
---
 .../include/x86_64/private_mem_test_helper.h |  3 ++
 .../kvm/lib/x86_64/private_mem_test_helper.c | 37 +++++++++++++++++--
 2 files changed, 37 insertions(+), 3 deletions(-)
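Illustrative note on the guest-side conversion: guest_set_region_shared()/
guest_set_region_private() are assumed to update the SEV encryption
attribute (the C-bit) in the guest page tables for the converted range
before the KVM_HC_MAP_GPA_RANGE hypercall is issued. The sketch below is
only a rough illustration of that idea; guest_pte_set_enc_bit() and the
PAGE_SIZE-granular walk are assumptions for illustration, not helpers
provided by this series:

/*
 * Sketch only: walk the converted range one page at a time, set or clear
 * the encryption (C) bit in the guest PTE, and flush the stale TLB entry.
 * guest_pte_set_enc_bit() is a hypothetical helper used for illustration.
 */
static void guest_toggle_region_enc_bit(uint64_t gva, uint64_t size, bool enc)
{
	uint64_t page;

	for (page = gva; page < gva + size; page += PAGE_SIZE) {
		guest_pte_set_enc_bit(page, enc);
		asm volatile("invlpg (%0)" :: "r" (page) : "memory");
	}
}

In the diff below, the guest does this page table update immediately
before the corresponding kvm_hypercall_map_shared()/
kvm_hypercall_map_private() call.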
diff --git a/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h b/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h
index 4d32c025876c..e54870b72369 100644
--- a/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h
+++ b/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h
@@ -12,4 +12,7 @@
 void execute_vm_with_private_test_mem(
 	enum vm_mem_backing_src_type test_mem_src);
 
+void execute_sev_vm_with_private_test_mem(
+	enum vm_mem_backing_src_type test_mem_src);
+
 #endif /* SELFTEST_KVM_PRIVATE_MEM_TEST_HELPER_H */
diff --git a/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c b/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c
index 600bd21d1bb8..36a8b1ab1c74 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c
@@ -22,6 +22,9 @@
 #include
 #include
 #include
+#include
+
+static bool is_guest_sev_vm;
 
 #define TEST_AREA_SLOT		10
 #define TEST_AREA_GPA		0xC0000000
@@ -104,6 +107,8 @@ static void guest_conv_test_fn(void)
 	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
 		TEST_MEM_DATA_PATTERN1));
 
+	if (is_guest_sev_vm)
+		guest_set_region_shared(guest_test_mem, guest_test_size);
 	kvm_hypercall_map_shared((uint64_t)guest_test_mem, guest_test_size);
 
 	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PATTERN2);
@@ -112,6 +117,9 @@ static void guest_conv_test_fn(void)
 	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
 		TEST_MEM_DATA_PATTERN5));
 
+	if (is_guest_sev_vm)
+		guest_set_region_private(guest_test_mem, guest_test_size);
+
 	kvm_hypercall_map_private((uint64_t)guest_test_mem, guest_test_size);
 
 	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PATTERN3);
@@ -170,14 +178,19 @@ static void host_conv_test_fn(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
 	ASSERT_GUEST_DONE(vcpu);
 }
 
-void execute_vm_with_private_test_mem(
-	enum vm_mem_backing_src_type test_mem_src)
+static void execute_private_mem_test(enum vm_mem_backing_src_type test_mem_src,
+	bool is_sev_vm)
 {
 	struct kvm_vm *vm;
 	struct kvm_enable_cap cap;
 	struct kvm_vcpu *vcpu;
 
-	vm = vm_create_with_one_vcpu(&vcpu, guest_conv_test_fn);
+	if (is_sev_vm)
+		vm = sev_vm_init_with_one_vcpu(SEV_POLICY_NO_DBG,
+			guest_conv_test_fn, &vcpu);
+	else
+		vm = vm_create_with_one_vcpu(&vcpu, guest_conv_test_fn);
+	TEST_ASSERT(vm, "VM creation failed\n");
 
 	vm_check_cap(vm, KVM_CAP_EXIT_HYPERCALL);
 	cap.cap = KVM_CAP_EXIT_HYPERCALL;
@@ -191,7 +204,25 @@ void execute_vm_with_private_test_mem(
 
 	virt_map(vm, TEST_AREA_GPA, TEST_AREA_GPA, TEST_AREA_SIZE/vm->page_size);
 
+	if (is_sev_vm) {
+		is_guest_sev_vm = true;
+		sync_global_to_guest(vm, is_guest_sev_vm);
+		sev_vm_finalize(vm, SEV_POLICY_NO_DBG);
+	}
+
 	host_conv_test_fn(vm, vcpu);
 
 	kvm_vm_free(vm);
 }
+
+void execute_vm_with_private_test_mem(
+	enum vm_mem_backing_src_type test_mem_src)
+{
+	execute_private_mem_test(test_mem_src, false);
+}
+
+void execute_sev_vm_with_private_test_mem(
+	enum vm_mem_backing_src_type test_mem_src)
+{
+	execute_private_mem_test(test_mem_src, true);
+}
From patchwork Fri Dec 23 00:13:52 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 636514
Date: Fri, 23 Dec 2022 00:13:52 +0000
In-Reply-To: <20221223001352.3873203-1-vannapurve@google.com>
References: <20221223001352.3873203-1-vannapurve@google.com>
Message-ID: <20221223001352.3873203-9-vannapurve@google.com>
Subject: [V3 PATCH 8/8] KVM: selftests: Add private mem test for SEV VMs
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org, yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com, aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com, corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org, akpm@linux-foundation.org, chao.p.peng@linux.intel.com, yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com, michael.roth@amd.com, qperret@google.com, steven.price@arm.com, ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz, marcorr@google.com, erdemaktas@google.com, pgonda@google.com, nikunj@amd.com, seanjc@google.com, diviness@google.com, maz@kernel.org, dmatlack@google.com, axelrasmussen@google.com, maciej.szmigiero@oracle.com, mizhang@google.com, bgardon@google.com, ackerleytng@google.com, Vishal Annapurve

Add an SEV VM specific private mem test that invokes selftest logic
similar to what is executed for non-confidential VMs.
Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/.gitignore        |  1 +
 tools/testing/selftests/kvm/Makefile          |  1 +
 .../kvm/x86_64/sev_private_mem_test.c         | 26 +++++++++++++++++++
 3 files changed, 28 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/sev_private_mem_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index f73639dcbebb..e5c82a1cd733 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -40,6 +40,7 @@
 /x86_64/set_sregs_test
 /x86_64/sev_all_boot_test
 /x86_64/sev_migrate_tests
+/x86_64/sev_private_mem_test
 /x86_64/smaller_maxphyaddr_emulation_test
 /x86_64/smm_test
 /x86_64/state_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 83c649c9de23..a8ee7c473644 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -104,6 +104,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/pmu_event_filter_test
 TEST_GEN_PROGS_x86_64 += x86_64/private_mem_test
 TEST_GEN_PROGS_x86_64 += x86_64/set_boot_cpu_id
 TEST_GEN_PROGS_x86_64 += x86_64/set_sregs_test
+TEST_GEN_PROGS_x86_64 += x86_64/sev_private_mem_test
 TEST_GEN_PROGS_x86_64 += x86_64/smaller_maxphyaddr_emulation_test
 TEST_GEN_PROGS_x86_64 += x86_64/smm_test
 TEST_GEN_PROGS_x86_64 += x86_64/state_test
diff --git a/tools/testing/selftests/kvm/x86_64/sev_private_mem_test.c b/tools/testing/selftests/kvm/x86_64/sev_private_mem_test.c
new file mode 100644
index 000000000000..943fdfbe41d9
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/sev_private_mem_test.c
@@ -0,0 +1,26 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022, Google LLC.
+ */
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include
+#include
+#include
+
+#include
+
+int main(int argc, char *argv[])
+{
+	execute_sev_vm_with_private_test_mem(
+		VM_MEM_SRC_ANONYMOUS_AND_RESTRICTED_MEMFD);
+
+	/* Needs 2MB Hugepages */
+	if (get_free_huge_2mb_pages() >= 1) {
+		printf("Running SEV VM private mem test with 2M pages\n");
+		execute_sev_vm_with_private_test_mem(
+			VM_MEM_SRC_ANON_HTLB2M_AND_RESTRICTED_MEMFD);
+	} else
+		printf("Skipping SEV VM private mem test with 2M pages\n");
+
+	return 0;
+}
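A usage note (assumptions, not part of this patch): the new test is
presumably built together with the rest of the KVM selftests, e.g. via
make -C tools/testing/selftests/kvm on a tree that carries this series
and the restricted memfd support it depends on, producing the
x86_64/sev_private_mem_test binary. The 2M-backed variant is skipped
unless free 2MB hugepages are available on the host, which can typically
be arranged by reserving hugepages through /proc/sys/vm/nr_hugepages
before running the test.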