From patchwork Tue Aug 30 22:42:52 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 601364
Date: Tue, 30 Aug 2022 22:42:52 +0000
In-Reply-To: <20220830224259.412342-1-vannapurve@google.com>
References: <20220830224259.412342-1-vannapurve@google.com>
Message-ID: <20220830224259.412342-2-vannapurve@google.com>
Subject: [RFC V2 PATCH 1/8] selftests: kvm: x86_64: Add support for pagetable tracking
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com,
 jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com,
 shuah@kernel.org, yang.zhong@intel.com, drjones@redhat.com,
 ricarkol@google.com, aaronlewis@google.com, wei.w.wang@intel.com,
 kirill.shutemov@linux.intel.com, corbet@lwn.net, hughd@google.com,
 jlayton@kernel.org, bfields@fieldses.org, akpm@linux-foundation.org,
 chao.p.peng@linux.intel.com, yu.c.zhang@linux.intel.com,
 jun.nakajima@intel.com, dave.hansen@intel.com, michael.roth@amd.com,
 qperret@google.com, steven.price@arm.com, ak@linux.intel.com,
 david@redhat.com, luto@kernel.org, vbabka@suse.cz, marcorr@google.com,
 erdemaktas@google.com, pgonda@google.com, nikunj@amd.com, seanjc@google.com,
 diviness@google.com, maz@kernel.org, dmatlack@google.com,
 axelrasmussen@google.com, maciej.szmigiero@oracle.com, mizhang@google.com,
 bgardon@google.com, Vishal Annapurve

Add support for mapping guest pagetable pages to a contiguous guest
virtual address range and sharing the physical-to-virtual mappings with
the guest in a pre-defined format. This functionality will allow guests
to modify their own page table entries. One such use case for CC VMs is
toggling the encryption bit in their PTEs to switch memory between
encrypted and shared, and vice versa.

Signed-off-by: Vishal Annapurve
---
 .../selftests/kvm/include/kvm_util_base.h     | 105 ++++++++++++++++++
 tools/testing/selftests/kvm/lib/kvm_util.c    |  78 ++++++++++++-
 .../selftests/kvm/lib/x86_64/processor.c      |  32 ++++++
 3 files changed, 214 insertions(+), 1 deletion(-)
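[Note: distilled from the header comments in the diff below, the intended
host-side call order is: enable tracking right after VM creation, create the
guest mappings, then map the tracked pagetable pages and publish the
guest_pgt_info buffer. A minimal sketch, not part of the patch;
setup_pgt_sharing() and PGT_BUF_MIN_VADDR are illustrative names.]

#define PGT_BUF_MIN_VADDR 0x10000

static vm_vaddr_t setup_pgt_sharing(struct kvm_vm *vm)
{
	/* Must run right after VM creation, before any guest mappings. */
	vm_set_pgt_alloc_tracking(vm);

	/* Create guest mappings; pagetable pages allocated for them get tracked. */
	vm_vaddr_alloc_page(vm);

	/* Map the tracked pagetable pages to a contiguous guest virtual range. */
	vm_map_page_table(vm, PGT_BUF_MIN_VADDR);

	/* Share the phys-to-virt mappings with the guest; returns the buffer GVA. */
	return vm_setup_pgt_info_buf(vm, PGT_BUF_MIN_VADDR);
}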
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index dfe454f228e7..f57ced56da1b 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -74,6 +74,11 @@ struct vm_memcrypt {
 	int8_t enc_bit;
 };
 
+struct pgt_page {
+	vm_paddr_t paddr;
+	struct list_head list;
+};
+
 struct kvm_vm {
 	int mode;
 	unsigned long type;
@@ -98,6 +103,10 @@ struct kvm_vm {
 	vm_vaddr_t handlers;
 	uint32_t dirty_ring_size;
 	struct vm_memcrypt memcrypt;
+	struct list_head pgt_pages;
+	bool track_pgt_pages;
+	uint32_t num_pgt_pages;
+	vm_vaddr_t pgt_vaddr_start;
 
 	/* Cache of information for binary stats interface */
 	int stats_fd;
@@ -184,6 +193,23 @@ struct vm_guest_mode_params {
 	unsigned int page_size;
 	unsigned int page_shift;
 };
+
+/*
+ * Structure shared with the guest containing information about:
+ * - the starting virtual address for the num_pgt_pages physical pagetable
+ *   page addresses tracked via the paddrs array
+ * - the page size of the guest
+ *
+ * The guest can walk its pagetables using this information to
+ * read/modify pagetable attributes.
+ */
+struct guest_pgt_info {
+	uint64_t num_pgt_pages;
+	uint64_t pgt_vaddr_start;
+	uint64_t page_size;
+	uint64_t paddrs[];
+};
+
 extern const struct vm_guest_mode_params vm_guest_mode_params[];
 
 int open_path_or_exit(const char *path, int flags);
@@ -394,6 +420,49 @@ void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
 struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
 vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 
+void vm_map_page_table(struct kvm_vm *vm, vm_vaddr_t vaddr_min);
+
+/*
+ * Function called by guest code to translate the physical address of a
+ * pagetable page to a guest virtual address.
+ *
+ * input args:
+ *   gpgt_info - pointer to the guest_pgt_info structure containing info
+ *               about guest virtual address mappings for guest physical
+ *               addresses of page table pages.
+ *   pgt_pa - physical address of the guest page table page to be translated
+ *            to a virtual address.
+ *
+ * output args: none
+ *
+ * return:
+ *   pointer to the pagetable page, NULL in case the physical address is not
+ *   tracked via the given guest_pgt_info structure.
+ */
+void *guest_code_get_pgt_vaddr(struct guest_pgt_info *gpgt_info, uint64_t pgt_pa);
+
+/*
+ * Allocate and set up a page to be shared with the guest containing the
+ * guest_pgt_info structure.
+ *
+ * Note:
+ * 1) The vm_set_pgt_alloc_tracking function should be used to start tracking
+ *    physical page table page allocation.
+ * 2) This function should be invoked after the needed pagetable pages are
+ *    mapped to the VM using virt_pg_map.
+ *
+ * input args:
+ *   vm - virtual machine
+ *   vaddr_min - minimum guest virtual address to start mapping the
+ *               guest_pgt_info structure page(s).
+ *
+ * output args: none
+ *
+ * return:
+ *   virtual address mapping the guest_pgt_info structure.
+ */
+vm_vaddr_t vm_setup_pgt_info_buf(struct kvm_vm *vm, vm_vaddr_t vaddr_min);
+
 vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
 vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
@@ -647,10 +716,46 @@ void kvm_gsi_routing_write(struct kvm_vm *vm, struct kvm_irq_routing *routing);
 
 const char *exit_reason_str(unsigned int exit_reason);
 
+#ifdef __x86_64__
+/*
+ * Guest-called function to get a pointer to the pte corresponding to a given
+ * guest virtual address and pointer to the guest_pgt_info structure.
+ *
+ * input args:
+ *   gpgt_info - pointer to the guest_pgt_info structure containing information
+ *               about guest virtual addresses mapped to pagetable physical
+ *               addresses.
+ *   vaddr - guest virtual address
+ *
+ * output args: none
+ *
+ * return:
+ *   pointer to the pte corresponding to the guest virtual address,
+ *   NULL if the pte is not found.
+ */
+uint64_t *guest_code_get_pte(struct guest_pgt_info *gpgt_info, uint64_t vaddr);
+#endif
+
 vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 			     uint32_t memslot);
 vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
 			      vm_paddr_t paddr_min, uint32_t memslot);
+
+/*
+ * Enable tracking of physical guest pagetable pages for the given VM.
+ * This function should be called right after VM creation, before any pages
+ * are mapped into the VM using the vm_alloc_* / vm_vaddr_alloc* functions.
+ *
+ * input args:
+ *   vm - virtual machine
+ *
+ * output args: none
+ *
+ * return: none
+ */
+void vm_set_pgt_alloc_tracking(struct kvm_vm *vm);
+
 vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
 
 /*
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index f153c71d6988..243d04a3d4b6 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -155,6 +155,7 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages)
 	TEST_ASSERT(vm != NULL, "Insufficient Memory");
 
 	INIT_LIST_HEAD(&vm->vcpus);
+	INIT_LIST_HEAD(&vm->pgt_pages);
 	vm->regions.gpa_tree = RB_ROOT;
 	vm->regions.hva_tree = RB_ROOT;
 	hash_init(vm->regions.slot_hash);
@@ -573,6 +574,7 @@ void kvm_vm_free(struct kvm_vm *vmp)
 {
 	int ctr;
 	struct hlist_node *node;
+	struct pgt_page *entry, *nentry;
 	struct userspace_mem_region *region;
 
 	if (vmp == NULL)
@@ -588,6 +590,9 @@ void kvm_vm_free(struct kvm_vm *vmp)
 	hash_for_each_safe(vmp->regions.slot_hash, ctr, node, region, slot_node)
 		__vm_mem_region_delete(vmp, region, false);
 
+	list_for_each_entry_safe(entry, nentry, &vmp->pgt_pages, list)
+		free(entry);
+
 	/* Free sparsebit arrays. */
 	sparsebit_free(&vmp->vpages_valid);
 	sparsebit_free(&vmp->vpages_mapped);
@@ -1195,9 +1200,24 @@ vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 /* Arbitrary minimum physical address used for virtual translation tables. */
 #define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
 
+void vm_set_pgt_alloc_tracking(struct kvm_vm *vm)
+{
+	vm->track_pgt_pages = true;
+}
+
 vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm)
 {
-	return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
+	struct pgt_page *pgt;
+	vm_paddr_t paddr = vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
+
+	if (vm->track_pgt_pages) {
+		pgt = calloc(1, sizeof(*pgt));
+		TEST_ASSERT(pgt != NULL, "Insufficient memory");
+		pgt->paddr = addr_gpa2raw(vm, paddr);
+		list_add(&pgt->list, &vm->pgt_pages);
+		vm->num_pgt_pages++;
+	}
+	return paddr;
 }
 
 /*
@@ -1286,6 +1306,27 @@ static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
 	return pgidx_start * vm->page_size;
 }
 
+void vm_map_page_table(struct kvm_vm *vm, vm_vaddr_t vaddr_min)
+{
+	struct pgt_page *pgt_page_entry;
+	vm_vaddr_t vaddr;
+
+	/*
+	 * Stop tracking further pgt pages; mapping the pagetable may itself
+	 * need new pages.
+	 */
+	vm->track_pgt_pages = false;
+	vm_vaddr_t vaddr_start = vm_vaddr_unused_gap(vm,
+			vm->num_pgt_pages * vm->page_size, vaddr_min);
+	vaddr = vaddr_start;
+	list_for_each_entry(pgt_page_entry, &vm->pgt_pages, list) {
+		/* Map the virtual page. */
+		virt_pg_map(vm, vaddr, addr_raw2gpa(vm, pgt_page_entry->paddr));
+		sparsebit_set(vm->vpages_mapped, vaddr >> vm->page_shift);
+		vaddr += vm->page_size;
+	}
+	vm->pgt_vaddr_start = vaddr_start;
+}
+
 /*
  * VM Virtual Address Allocate Shared/Encrypted
  *
@@ -1345,6 +1386,41 @@ vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_
 	return _vm_vaddr_alloc(vm, sz, vaddr_min, false);
 }
 
+void *guest_code_get_pgt_vaddr(struct guest_pgt_info *gpgt_info,
+			       uint64_t pgt_pa)
+{
+	uint64_t num_pgt_pages = gpgt_info->num_pgt_pages;
+	uint64_t pgt_vaddr_start = gpgt_info->pgt_vaddr_start;
+	uint64_t page_size = gpgt_info->page_size;
+
+	for (uint32_t i = 0; i < num_pgt_pages; i++) {
+		if (gpgt_info->paddrs[i] == pgt_pa)
+			return (void *)(pgt_vaddr_start + i * page_size);
+	}
+	return NULL;
+}
+
+vm_vaddr_t vm_setup_pgt_info_buf(struct kvm_vm *vm, vm_vaddr_t vaddr_min)
+{
+	struct pgt_page *pgt_page_entry;
+	struct guest_pgt_info *gpgt_info;
+	uint64_t info_size = sizeof(*gpgt_info) + (sizeof(uint64_t) * vm->num_pgt_pages);
+	uint64_t num_pages = align_up(info_size, vm->page_size);
+	vm_vaddr_t buf_start = vm_vaddr_alloc(vm, num_pages, vaddr_min);
+	uint32_t i = 0;
+
+	gpgt_info = (struct guest_pgt_info *)addr_gva2hva(vm, buf_start);
+	gpgt_info->num_pgt_pages = vm->num_pgt_pages;
+	gpgt_info->pgt_vaddr_start = vm->pgt_vaddr_start;
+	gpgt_info->page_size = vm->page_size;
+	list_for_each_entry(pgt_page_entry, &vm->pgt_pages, list) {
+		gpgt_info->paddrs[i] = pgt_page_entry->paddr;
+		i++;
+	}
+	TEST_ASSERT((i == vm->num_pgt_pages), "pgt entries mismatch with the counter");
+	return buf_start;
+}
+
 /*
  * VM Virtual Address Allocate Pages
  *
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 09d757a0b148..02252cabf9ec 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -217,6 +217,38 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 	__virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K);
 }
 
+uint64_t *guest_code_get_pte(struct guest_pgt_info *gpgt_info, uint64_t vaddr)
+{
+	uint16_t index[4];
+	uint64_t *pml4e, *pdpe, *pde, *pte;
+	uint64_t pgt_paddr = get_cr3();
+	uint64_t page_size = gpgt_info->page_size;
+
+	index[0] = (vaddr >> 12) & 0x1ffu;
+	index[1] = (vaddr >> 21) & 0x1ffu;
+	index[2] = (vaddr >> 30) & 0x1ffu;
+	index[3] = (vaddr >> 39) & 0x1ffu;
+
+	pml4e = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr);
+	GUEST_ASSERT(pml4e && (pml4e[index[3]] & PTE_PRESENT_MASK));
+
+	pgt_paddr = (PTE_GET_PFN(pml4e[index[3]]) * page_size);
+	pdpe = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr);
+	GUEST_ASSERT(pdpe && (pdpe[index[2]] & PTE_PRESENT_MASK) &&
+		!(pdpe[index[2]] & PTE_LARGE_MASK));
+
+	pgt_paddr = (PTE_GET_PFN(pdpe[index[2]]) * page_size);
+	pde = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr);
+	GUEST_ASSERT(pde && (pde[index[1]] & PTE_PRESENT_MASK) &&
+		!(pde[index[1]] & PTE_LARGE_MASK));
+
+	pgt_paddr = (PTE_GET_PFN(pde[index[1]]) * page_size);
+	pte = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr);
+	GUEST_ASSERT(pte && (pte[index[0]] & PTE_PRESENT_MASK));
+
+	return (uint64_t *)&pte[index[0]];
+}
+
 static uint64_t *_vm_get_page_table_entry(struct kvm_vm *vm,
 					  struct kvm_vcpu *vcpu,
 					  uint64_t vaddr)
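[Note: taken together, the guest-side API above lets guest code locate and
edit its own PTEs: look the PTE up through the shared guest_pgt_info, flip a
bit, and flush the stale TLB entry. A hedged guest-code sketch;
guest_toggle_pte_bit() is an illustrative name, and the same pattern appears
later in this series as sev_guest_set_clr_pte_bit() in patch 5/8.]

static void guest_toggle_pte_bit(struct guest_pgt_info *gpgt_info,
				 uint64_t vaddr, uint8_t bit, bool set)
{
	/* Walk this guest's pagetables via the shared phys-to-virt mappings. */
	uint64_t *pte = guest_code_get_pte(gpgt_info, vaddr);

	GUEST_ASSERT(pte);
	if (set)
		*pte |= (1ULL << bit);
	else
		*pte &= ~(1ULL << bit);
	/* Flush the stale translation for this page. */
	asm volatile("invlpg (%0)" :: "r"(vaddr) : "memory");
}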
From patchwork Tue Aug 30 22:42:55 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 601362
Date: Tue, 30 Aug 2022 22:42:55 +0000
In-Reply-To: <20220830224259.412342-1-vannapurve@google.com>
References: <20220830224259.412342-1-vannapurve@google.com>
Message-ID: <20220830224259.412342-5-vannapurve@google.com>
Subject: [RFC V2 PATCH 4/8] selftests: kvm: sev: Support memslots with private memory
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com,
 jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org,
 yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com,
 aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com,
 corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org,
 akpm@linux-foundation.org, chao.p.peng@linux.intel.com,
 yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com,
 michael.roth@amd.com, qperret@google.com, steven.price@arm.com,
 ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz,
 marcorr@google.com, erdemaktas@google.com, pgonda@google.com,
 nikunj@amd.com, seanjc@google.com, diviness@google.com, maz@kernel.org,
 dmatlack@google.com, axelrasmussen@google.com, maciej.szmigiero@oracle.com,
 mizhang@google.com, bgardon@google.com, Vishal Annapurve
Introduce an additional helper API to create a SEV VM with private-memory
memslots.

Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/include/x86_64/sev.h |  2 ++
 tools/testing/selftests/kvm/lib/x86_64/sev.c     | 15 ++++++++++++---
 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/sev.h b/tools/testing/selftests/kvm/include/x86_64/sev.h
index b6552ea1c716..628801707917 100644
--- a/tools/testing/selftests/kvm/include/x86_64/sev.h
+++ b/tools/testing/selftests/kvm/include/x86_64/sev.h
@@ -38,6 +38,8 @@ void kvm_sev_ioctl(struct sev_vm *sev, int cmd, void *data);
 struct kvm_vm *sev_get_vm(struct sev_vm *sev);
 uint8_t sev_get_enc_bit(struct sev_vm *sev);
 
+struct sev_vm *sev_vm_create_with_flags(uint32_t policy, uint64_t npages,
+					uint32_t memslot_flags);
 struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages);
 void sev_vm_free(struct sev_vm *sev);
 void sev_vm_launch(struct sev_vm *sev);
diff --git a/tools/testing/selftests/kvm/lib/x86_64/sev.c b/tools/testing/selftests/kvm/lib/x86_64/sev.c
index 44b5ce5cd8db..6a329ea17f9f 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/sev.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/sev.c
@@ -171,7 +171,8 @@ void sev_vm_free(struct sev_vm *sev)
 	free(sev);
 }
 
-struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages)
+struct sev_vm *sev_vm_create_with_flags(uint32_t policy, uint64_t npages,
+					uint32_t memslot_flags)
 {
 	struct sev_vm *sev;
 	struct kvm_vm *vm;
@@ -188,9 +189,12 @@ struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages)
 	vm->vpages_mapped = sparsebit_alloc();
 	vm_set_memory_encryption(vm, true, true, sev->enc_bit);
 	pr_info("SEV cbit: %d\n", sev->enc_bit);
-	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, 0, npages, 0);
-	sev_register_user_region(sev, addr_gpa2hva(vm, 0),
+	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, 0, npages,
+				    memslot_flags);
+	if (!(memslot_flags & KVM_MEM_PRIVATE)) {
+		sev_register_user_region(sev, addr_gpa2hva(vm, 0),
 				 npages * vm->page_size);
+	}
 
 	pr_info("SEV guest created, policy: 0x%x, size: %lu KB\n",
 		sev->sev_policy, npages * vm->page_size / 1024);
@@ -198,6 +202,11 @@ struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages)
 	return sev;
 }
 
+struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages)
+{
+	return sev_vm_create_with_flags(policy, npages, 0);
+}
+
 void sev_vm_launch(struct sev_vm *sev)
 {
 	struct kvm_sev_launch_start ksev_launch_start = {0};
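[Note: a short usage sketch for the new helper; create_private_sev_vm() is an
illustrative wrapper, while SEV_POLICY_NO_DBG and KVM_MEM_PRIVATE are used
elsewhere in this series. Passing KVM_MEM_PRIVATE defers SEV user-region
registration to the private-memory path; a zero flags value preserves the old
sev_vm_create() behavior.]

static struct sev_vm *create_private_sev_vm(uint64_t npages)
{
	/*
	 * Memslot 0 is created with KVM_MEM_PRIVATE, so the region is not
	 * registered via sev_register_user_region() at create time.
	 */
	return sev_vm_create_with_flags(SEV_POLICY_NO_DBG, npages,
					KVM_MEM_PRIVATE);
}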
From patchwork Tue Aug 30 22:42:56 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 601363
Date: Tue, 30 Aug 2022 22:42:56 +0000
In-Reply-To: <20220830224259.412342-1-vannapurve@google.com>
References: <20220830224259.412342-1-vannapurve@google.com>
Message-ID: <20220830224259.412342-6-vannapurve@google.com>
Subject: [RFC V2 PATCH 5/8] selftests: kvm: Update usage of private mem lib for SEV VMs
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com,
 jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org,
 yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com,
 aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com,
 corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org,
 akpm@linux-foundation.org, chao.p.peng@linux.intel.com,
 yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com,
 michael.roth@amd.com, qperret@google.com, steven.price@arm.com,
 ak@linux.intel.com,
 david@redhat.com, luto@kernel.org, vbabka@suse.cz, marcorr@google.com,
 erdemaktas@google.com, pgonda@google.com, nikunj@amd.com, seanjc@google.com,
 diviness@google.com, maz@kernel.org, dmatlack@google.com,
 axelrasmussen@google.com, maciej.szmigiero@oracle.com, mizhang@google.com,
 bgardon@google.com, Vishal Annapurve

Add/update APIs to allow reusing the private mem lib for SEV VMs. Memory
conversion for SEV VMs includes updating the guest pagetables, based on
virtual addresses, to toggle the C-bit.

Signed-off-by: Vishal Annapurve
---
 .../kvm/include/x86_64/private_mem.h          |   9 +-
 .../selftests/kvm/lib/x86_64/private_mem.c    | 103 +++++++++++++-----
 2 files changed, 83 insertions(+), 29 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/private_mem.h b/tools/testing/selftests/kvm/include/x86_64/private_mem.h
index 645bf3f61d1e..183b53b8c486 100644
--- a/tools/testing/selftests/kvm/include/x86_64/private_mem.h
+++ b/tools/testing/selftests/kvm/include/x86_64/private_mem.h
@@ -14,10 +14,10 @@ enum mem_conversion_type {
 	TO_SHARED
 };
 
-void guest_update_mem_access(enum mem_conversion_type type, uint64_t gpa,
-	uint64_t size);
-void guest_update_mem_map(enum mem_conversion_type type, uint64_t gpa,
-	uint64_t size);
+void guest_update_mem_access(enum mem_conversion_type type, uint64_t gva,
+	uint64_t gpa, uint64_t size);
+void guest_update_mem_map(enum mem_conversion_type type, uint64_t gva,
+	uint64_t gpa, uint64_t size);
 
 void guest_map_ucall_page_shared(void);
 
@@ -45,6 +45,7 @@ struct vm_setup_info {
 	struct test_setup_info test_info;
 	guest_code_fn guest_fn;
 	io_exit_handler ioexit_cb;
+	uint32_t policy;	/* Used for SEV VMs */
 };
 
 void execute_vm_with_private_mem(struct vm_setup_info *info);
diff --git a/tools/testing/selftests/kvm/lib/x86_64/private_mem.c b/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
index f6dcfa4d353f..28d93754e1f2 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
@@ -22,12 +22,45 @@
 #include
 #include
 #include
+#include
+
+#define GUEST_PGT_MIN_VADDR 0x10000
+
+/* Variables populated by userspace logic and consumed by guest code */
+static bool is_sev_vm;
+static struct guest_pgt_info *sev_gpgt_info;
+static uint8_t sev_enc_bit;
+
+static void sev_guest_set_clr_pte_bit(uint64_t vaddr_start, uint64_t mem_size,
+	bool set)
+{
+	uint64_t vaddr = vaddr_start;
+	uint32_t guest_page_size = sev_gpgt_info->page_size;
+	uint32_t num_pages;
+
+	GUEST_ASSERT(!(mem_size % guest_page_size) &&
+		!(vaddr_start % guest_page_size));
+
+	num_pages = mem_size / guest_page_size;
+	for (uint32_t i = 0; i < num_pages; i++) {
+		uint64_t *pte = guest_code_get_pte(sev_gpgt_info, vaddr);
+
+		GUEST_ASSERT(pte);
+		if (set)
+			*pte |= (1ULL << sev_enc_bit);
+		else
+			*pte &= ~(1ULL << sev_enc_bit);
+		asm volatile("invlpg (%0)" :: "r"(vaddr) : "memory");
+		vaddr += guest_page_size;
+	}
+}
 
 /*
  * Execute KVM hypercall to change memory access type for a given gpa range.
  *
  * Input Args:
  *   type - memory conversion type TO_SHARED/TO_PRIVATE
+ *   gva - starting gva address
  *   gpa - starting gpa address
  *   size - size of the range starting from gpa for which memory access needs
  *          to be changed
@@ -40,9 +73,12 @@
  * for a given gpa range. This API is useful in exercising the implicit
  * conversion path.
 */
-void guest_update_mem_access(enum mem_conversion_type type, uint64_t gpa,
-	uint64_t size)
+void guest_update_mem_access(enum mem_conversion_type type, uint64_t gva,
+	uint64_t gpa, uint64_t size)
 {
+	if (is_sev_vm)
+		sev_guest_set_clr_pte_bit(gva, size, type == TO_PRIVATE ? true : false);
+
 	int ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, size >> MIN_PAGE_SHIFT,
 		type == TO_PRIVATE ? KVM_MARK_GPA_RANGE_ENC_ACCESS :
 		KVM_CLR_GPA_RANGE_ENC_ACCESS, 0);
@@ -54,6 +90,7 @@ void guest_update_mem_access(enum mem_conversion_type type, uint64_t gpa,
  *
  * Input Args:
  *   type - memory conversion type TO_SHARED/TO_PRIVATE
+ *   gva - starting gva address
  *   gpa - starting gpa address
  *   size - size of the range starting from gpa for which memory type needs
  *          to be changed
@@ -65,9 +102,12 @@ void guest_update_mem_access(enum mem_conversion_type type, uint64_t gpa,
  * Function called by guest logic in selftests to update the memory type for a
  * given gpa range. This API is useful in exercising the explicit conversion path.
  */
-void guest_update_mem_map(enum mem_conversion_type type, uint64_t gpa,
-	uint64_t size)
+void guest_update_mem_map(enum mem_conversion_type type, uint64_t gva,
+	uint64_t gpa, uint64_t size)
 {
+	if (is_sev_vm)
+		sev_guest_set_clr_pte_bit(gva, size, type == TO_PRIVATE ? true : false);
+
 	int ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, size >> MIN_PAGE_SHIFT,
 		type == TO_PRIVATE ? KVM_MAP_GPA_RANGE_ENCRYPTED :
 		KVM_MAP_GPA_RANGE_DECRYPTED, 0);
@@ -90,30 +130,15 @@ void guest_update_mem_map(enum mem_conversion_type type, uint64_t gpa,
 void guest_map_ucall_page_shared(void)
 {
 	vm_paddr_t ucall_paddr = get_ucall_pool_paddr();
+
 	GUEST_ASSERT(ucall_paddr);
 
-	guest_update_mem_access(TO_SHARED, ucall_paddr, 1 << MIN_PAGE_SHIFT);
+	int ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, ucall_paddr, 1,
+		KVM_MAP_GPA_RANGE_DECRYPTED, 0);
+	GUEST_ASSERT_1(!ret, ret);
 }
 
-/*
- * Execute KVM ioctl to back/unback private memory for given gpa range.
- *
- * Input Args:
- * vm - kvm_vm handle
- * gpa - starting gpa address
- * size - size of the gpa range
- * op - mem_op indicating whether private memory needs to be allocated or
- *      unbacked
- *
- * Output Args: None
- *
- * Return: None
- *
- * Function called by host userspace logic in selftests to back/unback private
- * memory for gpa ranges. This function is useful to setup initial boot private
- * memory and then convert memory during runtime.
- */
-void vm_update_private_mem(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
-	enum mem_op op)
+static void vm_update_private_mem_internal(struct kvm_vm *vm, uint64_t gpa,
+	uint64_t size, enum mem_op op, bool encrypt)
 {
 	int priv_memfd;
 	uint64_t priv_offset, guest_phys_base, fd_offset;
@@ -142,6 +167,10 @@ void vm_update_private_mem(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
 	TEST_ASSERT(ret == 0, "fallocate failed\n");
 	enc_region.addr = gpa;
 	enc_region.size = size;
+
+	if (!encrypt)
+		return;
+
 	if (op == ALLOCATE_MEM) {
 		printf("doing encryption for gpa 0x%lx size 0x%lx\n", gpa, size);
 		vm_ioctl(vm, KVM_MEMORY_ENCRYPT_REG_REGION, &enc_region);
@@ -151,6 +180,30 @@ void vm_update_private_mem(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
 	}
 }
 
+/*
+ * Execute KVM ioctl to back/unback private memory for given gpa range.
+ *
+ * Input Args:
+ * vm - kvm_vm handle
+ * gpa - starting gpa address
+ * size - size of the gpa range
+ * op - mem_op indicating whether private memory needs to be allocated or
+ *      unbacked
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Function called by host userspace logic in selftests to back/unback private
+ * memory for gpa ranges. This function is useful to setup initial boot private
+ * memory and then convert memory during runtime.
+ */
+void vm_update_private_mem(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
+	enum mem_op op)
+{
+	vm_update_private_mem_internal(vm, gpa, size, op, true /* encrypt */);
+}
+
 static void handle_vm_exit_map_gpa_hypercall(struct kvm_vm *vm,
 				volatile struct kvm_run *run)
 {
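[Note: with the new gva argument, a single guest call now updates both the
C-bit in the guest pagetables (for SEV) and the KVM mapping. A sketch of an
explicit conversion to shared memory as these tests would issue it;
guest_share_range() is an illustrative name, and since the tests identity-map
the test region, gva and gpa carry the same value.]

static void guest_share_range(uint64_t gva, uint64_t size)
{
	/* Clears the C-bit in the PTEs on SEV, then issues the map hypercall. */
	guest_update_mem_map(TO_SHARED, gva, /* gpa = */ gva, size);
}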
From patchwork Tue Aug 30 22:42:58 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 601361
Date: Tue, 30 Aug 2022 22:42:58 +0000
In-Reply-To: <20220830224259.412342-1-vannapurve@google.com>
References: <20220830224259.412342-1-vannapurve@google.com>
Message-ID: <20220830224259.412342-8-vannapurve@google.com>
Subject: [RFC V2 PATCH 7/8] selftests: kvm: Refactor testing logic for private memory
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com,
 jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org,
 yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com,
 aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com,
 corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org,
 akpm@linux-foundation.org, chao.p.peng@linux.intel.com,
 yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com,
 michael.roth@amd.com, qperret@google.com, steven.price@arm.com,
 ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz,
 marcorr@google.com, erdemaktas@google.com, pgonda@google.com,
 nikunj@amd.com, seanjc@google.com, diviness@google.com, maz@kernel.org,
 dmatlack@google.com, axelrasmussen@google.com, maciej.szmigiero@oracle.com,
 mizhang@google.com, bgardon@google.com, Vishal Annapurve

Move all of the logic to execute memory conversion tests into a library
to allow sharing it between normal non-confidential VMs and SEV VMs.

Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../include/x86_64/private_mem_test_helper.h  |  13 +
 .../kvm/lib/x86_64/private_mem_test_helper.c  | 273 ++++++++++++++++++
 .../selftests/kvm/x86_64/private_mem_test.c   | 246 +---------------
 4 files changed, 289 insertions(+), 244 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index c5fc8ea2c843..36874fedff4a 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -52,6 +52,7 @@ LIBKVM_x86_64 += lib/x86_64/apic.c
 LIBKVM_x86_64 += lib/x86_64/handlers.S
 LIBKVM_x86_64 += lib/x86_64/perf_test_util.c
 LIBKVM_x86_64 += lib/x86_64/private_mem.c
+LIBKVM_x86_64 += lib/x86_64/private_mem_test_helper.c
 LIBKVM_x86_64 += lib/x86_64/processor.c
 LIBKVM_x86_64 += lib/x86_64/svm.c
 LIBKVM_x86_64 += lib/x86_64/ucall.c
diff --git a/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h b/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h
new file mode 100644
index 000000000000..31bc559cd813
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2022, Google LLC.
+ */ + +#ifndef SELFTEST_KVM_PRIVATE_MEM_TEST_HELPER_H +#define SELFTEST_KVM_PRIVATE_MEM_TEST_HELPER_H + +void execute_memory_conversion_tests(void); + +void execute_sev_memory_conversion_tests(void); + +#endif // SELFTEST_KVM_PRIVATE_MEM_TEST_HELPER_H diff --git a/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c b/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c new file mode 100644 index 000000000000..ce53bef7896e --- /dev/null +++ b/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c @@ -0,0 +1,273 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2022, Google LLC. + */ +#define _GNU_SOURCE /* for program_invocation_short_name */ +#include +#include +#include + +#include +#include +#include +#include +#include +#include + +#define VM_MEMSLOT0_PAGES (512 * 10) + +#define TEST_AREA_SLOT 10 +#define TEST_AREA_GPA 0xC0000000 +#define TEST_AREA_SIZE (2 * 1024 * 1024) +#define GUEST_TEST_MEM_OFFSET (1 * 1024 * 1024) +#define GUEST_TEST_MEM_SIZE (10 * 4096) + +#define VM_STAGE_PROCESSED(x) pr_info("Processed stage %s\n", #x) + +#define TEST_MEM_DATA_PAT1 0x66 +#define TEST_MEM_DATA_PAT2 0x99 +#define TEST_MEM_DATA_PAT3 0x33 +#define TEST_MEM_DATA_PAT4 0xaa +#define TEST_MEM_DATA_PAT5 0x12 + +static bool verify_mem_contents(void *mem, uint32_t size, uint8_t pat) +{ + uint8_t *buf = (uint8_t *)mem; + + for (uint32_t i = 0; i < size; i++) { + if (buf[i] != pat) + return false; + } + + return true; +} + +/* + * Add custom implementation for memset to avoid using standard/builtin memset + * which may use features like SSE/GOT that don't work with guest vm execution + * within selftests. + */ +void *memset(void *mem, int byte, size_t size) +{ + uint8_t *buf = (uint8_t *)mem; + + for (uint32_t i = 0; i < size; i++) + buf[i] = byte; + + return buf; +} + +static void populate_test_area(void *test_area_base, uint64_t pat) +{ + memset(test_area_base, pat, TEST_AREA_SIZE); +} + +static void populate_guest_test_mem(void *guest_test_mem, uint64_t pat) +{ + memset(guest_test_mem, pat, GUEST_TEST_MEM_SIZE); +} + +static bool verify_test_area(void *test_area_base, uint64_t area_pat, + uint64_t guest_pat) +{ + void *test_area1_base = test_area_base; + uint64_t test_area1_size = GUEST_TEST_MEM_OFFSET; + void *guest_test_mem = test_area_base + test_area1_size; + uint64_t guest_test_size = GUEST_TEST_MEM_SIZE; + void *test_area2_base = guest_test_mem + guest_test_size; + uint64_t test_area2_size = (TEST_AREA_SIZE - (GUEST_TEST_MEM_OFFSET + + GUEST_TEST_MEM_SIZE)); + + return (verify_mem_contents(test_area1_base, test_area1_size, area_pat) && + verify_mem_contents(guest_test_mem, guest_test_size, guest_pat) && + verify_mem_contents(test_area2_base, test_area2_size, area_pat)); +} + +#define GUEST_STARTED 0 +#define GUEST_PRIVATE_MEM_POPULATED 1 +#define GUEST_SHARED_MEM_POPULATED 2 +#define GUEST_PRIVATE_MEM_POPULATED2 3 +#define GUEST_IMPLICIT_MEM_CONV1 4 +#define GUEST_IMPLICIT_MEM_CONV2 5 + +/* + * Run memory conversion tests supporting two types of conversion: + * 1) Explicit: Execute KVM hypercall to map/unmap gpa range which will cause + * userspace exit to back/unback private memory. Subsequent accesses by guest + * to the gpa range will not cause exit to userspace. + * 2) Implicit: Execute KVM hypercall to update memory access to a gpa range as + * private/shared without exiting to userspace. Subsequent accesses by guest + * to the gpa range will result in KVM EPT/NPT faults and then exit to + * userspace for each page. 
+ * + * Test memory conversion scenarios with following steps: + * 1) Access private memory using private access and verify that memory contents + * are not visible to userspace. + * 2) Convert memory to shared using explicit/implicit conversions and ensure + * that userspace is able to access the shared regions. + * 3) Convert memory back to private using explicit/implicit conversions and + * ensure that userspace is again not able to access converted private + * regions. + */ +static void guest_conv_test_fn(bool test_explicit_conv) +{ + void *test_area_base = (void *)TEST_AREA_GPA; + void *guest_test_mem = (void *)(TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET); + uint64_t guest_test_size = GUEST_TEST_MEM_SIZE; + + guest_map_ucall_page_shared(); + GUEST_SYNC(GUEST_STARTED); + + populate_test_area(test_area_base, TEST_MEM_DATA_PAT1); + GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED); + GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1, + TEST_MEM_DATA_PAT1)); + + if (test_explicit_conv) + guest_update_mem_map(TO_SHARED, (uint64_t)guest_test_mem, + (uint64_t)guest_test_mem, guest_test_size); + else { + guest_update_mem_access(TO_SHARED, (uint64_t)guest_test_mem, + (uint64_t)guest_test_mem, guest_test_size); + GUEST_SYNC(GUEST_IMPLICIT_MEM_CONV1); + } + + populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PAT2); + + GUEST_SYNC(GUEST_SHARED_MEM_POPULATED); + GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1, + TEST_MEM_DATA_PAT5)); + + if (test_explicit_conv) + guest_update_mem_map(TO_PRIVATE, (uint64_t)guest_test_mem, + (uint64_t)guest_test_mem, guest_test_size); + else { + guest_update_mem_access(TO_PRIVATE, (uint64_t)guest_test_mem, + (uint64_t)guest_test_mem, guest_test_size); + GUEST_SYNC(GUEST_IMPLICIT_MEM_CONV2); + } + + populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PAT3); + GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED2); + + GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1, + TEST_MEM_DATA_PAT3)); + GUEST_DONE(); +} + +static void conv_test_ioexit_fn(struct kvm_vm *vm, uint32_t uc_arg1) +{ + void *test_area_hva = addr_gpa2hva(vm, TEST_AREA_GPA); + void *guest_test_mem_hva = (test_area_hva + GUEST_TEST_MEM_OFFSET); + uint64_t guest_mem_gpa = (TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET); + uint64_t guest_test_size = GUEST_TEST_MEM_SIZE; + + switch (uc_arg1) { + case GUEST_STARTED: + populate_test_area(test_area_hva, TEST_MEM_DATA_PAT4); + VM_STAGE_PROCESSED(GUEST_STARTED); + break; + case GUEST_PRIVATE_MEM_POPULATED: + TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4, + TEST_MEM_DATA_PAT4), "failed"); + VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED); + break; + case GUEST_SHARED_MEM_POPULATED: + TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4, + TEST_MEM_DATA_PAT2), "failed"); + populate_guest_test_mem(guest_test_mem_hva, TEST_MEM_DATA_PAT5); + VM_STAGE_PROCESSED(GUEST_SHARED_MEM_POPULATED); + break; + case GUEST_PRIVATE_MEM_POPULATED2: + TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4, + TEST_MEM_DATA_PAT5), "failed"); + VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED2); + break; + case GUEST_IMPLICIT_MEM_CONV1: + /* + * For first implicit conversion, memory is already private so + * mark it private again just to zap the pte entries for the gpa + * range, so that subsequent accesses from the guest will + * generate ept/npt fault and memory conversion path will be + * exercised by KVM. 
+ */ + vm_update_private_mem(vm, guest_mem_gpa, guest_test_size, + ALLOCATE_MEM); + VM_STAGE_PROCESSED(GUEST_IMPLICIT_MEM_CONV1); + break; + case GUEST_IMPLICIT_MEM_CONV2: + /* + * For second implicit conversion, memory is already shared so + * mark it shared again just to zap the pte entries for the gpa + * range, so that subsequent accesses from the guest will + * generate ept/npt fault and memory conversion path will be + * exercised by KVM. + */ + vm_update_private_mem(vm, guest_mem_gpa, guest_test_size, + UNBACK_MEM); + VM_STAGE_PROCESSED(GUEST_IMPLICIT_MEM_CONV2); + break; + default: + TEST_FAIL("Unknown stage %d\n", uc_arg1); + break; + } +} + +static void guest_explicit_conv_test_fn(void) +{ + guest_conv_test_fn(true); +} + +static void guest_implicit_conv_test_fn(void) +{ + guest_conv_test_fn(false); +} + +/* + * Execute implicit and explicit memory conversion tests with non-confidential + * VMs using memslots with private memory. + */ +void execute_memory_conversion_tests(void) +{ + struct vm_setup_info info; + struct test_setup_info *test_info = &info.test_info; + + info.vm_mem_src = VM_MEM_SRC_ANONYMOUS; + info.memslot0_pages = VM_MEMSLOT0_PAGES; + test_info->test_area_gpa = TEST_AREA_GPA; + test_info->test_area_size = TEST_AREA_SIZE; + test_info->test_area_slot = TEST_AREA_SLOT; + test_info->test_area_mem_src = VM_MEM_SRC_ANONYMOUS; + info.ioexit_cb = conv_test_ioexit_fn; + + info.guest_fn = guest_explicit_conv_test_fn; + execute_vm_with_private_mem(&info); + + info.guest_fn = guest_implicit_conv_test_fn; + execute_vm_with_private_mem(&info); +} + +/* + * Execute implicit and explicit memory conversion tests with SEV VMs using + * memslots with private memory. + */ +void execute_sev_memory_conversion_tests(void) +{ + struct vm_setup_info info; + struct test_setup_info *test_info = &info.test_info; + + info.vm_mem_src = VM_MEM_SRC_ANONYMOUS; + info.memslot0_pages = VM_MEMSLOT0_PAGES; + test_info->test_area_gpa = TEST_AREA_GPA; + test_info->test_area_size = TEST_AREA_SIZE; + test_info->test_area_slot = TEST_AREA_SLOT; + test_info->test_area_mem_src = VM_MEM_SRC_ANONYMOUS; + info.ioexit_cb = conv_test_ioexit_fn; + + info.policy = SEV_POLICY_NO_DBG; + info.guest_fn = guest_explicit_conv_test_fn; + execute_sev_vm_with_private_mem(&info); + + info.guest_fn = guest_implicit_conv_test_fn; + execute_sev_vm_with_private_mem(&info); +} diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_test.c index 52430b97bd0b..49da626e5807 100644 --- a/tools/testing/selftests/kvm/x86_64/private_mem_test.c +++ b/tools/testing/selftests/kvm/x86_64/private_mem_test.c @@ -1,263 +1,21 @@ // SPDX-License-Identifier: GPL-2.0 /* - * tools/testing/selftests/kvm/lib/kvm_util.c - * * Copyright (C) 2022, Google LLC. 
*/ #define _GNU_SOURCE /* for program_invocation_short_name */ -#include -#include -#include -#include #include #include #include -#include - -#include -#include -#include -#include #include #include -#include -#include - -#define VM_MEMSLOT0_PAGES (512 * 10) - -#define TEST_AREA_SLOT 10 -#define TEST_AREA_GPA 0xC0000000 -#define TEST_AREA_SIZE (2 * 1024 * 1024) -#define GUEST_TEST_MEM_OFFSET (1 * 1024 * 1024) -#define GUEST_TEST_MEM_SIZE (10 * 4096) - -#define VM_STAGE_PROCESSED(x) pr_info("Processed stage %s\n", #x) - -#define TEST_MEM_DATA_PAT1 0x66 -#define TEST_MEM_DATA_PAT2 0x99 -#define TEST_MEM_DATA_PAT3 0x33 -#define TEST_MEM_DATA_PAT4 0xaa -#define TEST_MEM_DATA_PAT5 0x12 - -static bool verify_mem_contents(void *mem, uint32_t size, uint8_t pat) -{ - uint8_t *buf = (uint8_t *)mem; - - for (uint32_t i = 0; i < size; i++) { - if (buf[i] != pat) - return false; - } - - return true; -} - -/* - * Add custom implementation for memset to avoid using standard/builtin memset - * which may use features like SSE/GOT that don't work with guest vm execution - * within selftests. - */ -void *memset(void *mem, int byte, size_t size) -{ - uint8_t *buf = (uint8_t *)mem; - - for (uint32_t i = 0; i < size; i++) - buf[i] = byte; - - return buf; -} - -static void populate_test_area(void *test_area_base, uint64_t pat) -{ - memset(test_area_base, pat, TEST_AREA_SIZE); -} - -static void populate_guest_test_mem(void *guest_test_mem, uint64_t pat) -{ - memset(guest_test_mem, pat, GUEST_TEST_MEM_SIZE); -} - -static bool verify_test_area(void *test_area_base, uint64_t area_pat, - uint64_t guest_pat) -{ - void *test_area1_base = test_area_base; - uint64_t test_area1_size = GUEST_TEST_MEM_OFFSET; - void *guest_test_mem = test_area_base + test_area1_size; - uint64_t guest_test_size = GUEST_TEST_MEM_SIZE; - void *test_area2_base = guest_test_mem + guest_test_size; - uint64_t test_area2_size = (TEST_AREA_SIZE - (GUEST_TEST_MEM_OFFSET + - GUEST_TEST_MEM_SIZE)); - - return (verify_mem_contents(test_area1_base, test_area1_size, area_pat) && - verify_mem_contents(guest_test_mem, guest_test_size, guest_pat) && - verify_mem_contents(test_area2_base, test_area2_size, area_pat)); -} - -#define GUEST_STARTED 0 -#define GUEST_PRIVATE_MEM_POPULATED 1 -#define GUEST_SHARED_MEM_POPULATED 2 -#define GUEST_PRIVATE_MEM_POPULATED2 3 -#define GUEST_IMPLICIT_MEM_CONV1 4 -#define GUEST_IMPLICIT_MEM_CONV2 5 - -/* - * Run memory conversion tests supporting two types of conversion: - * 1) Explicit: Execute KVM hypercall to map/unmap gpa range which will cause - * userspace exit to back/unback private memory. Subsequent accesses by guest - * to the gpa range will not cause exit to userspace. - * 2) Implicit: Execute KVM hypercall to update memory access to a gpa range as - * private/shared without exiting to userspace. Subsequent accesses by guest - * to the gpa range will result in KVM EPT/NPT faults and then exit to - * userspace for each page. - * - * Test memory conversion scenarios with following steps: - * 1) Access private memory using private access and verify that memory contents - * are not visible to userspace. - * 2) Convert memory to shared using explicit/implicit conversions and ensure - * that userspace is able to access the shared regions. - * 3) Convert memory back to private using explicit/implicit conversions and - * ensure that userspace is again not able to access converted private - * regions. 
- */ -static void guest_conv_test_fn(bool test_explicit_conv) -{ - void *test_area_base = (void *)TEST_AREA_GPA; - void *guest_test_mem = (void *)(TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET); - uint64_t guest_test_size = GUEST_TEST_MEM_SIZE; - - guest_map_ucall_page_shared(); - GUEST_SYNC(GUEST_STARTED); - - populate_test_area(test_area_base, TEST_MEM_DATA_PAT1); - GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED); - GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1, - TEST_MEM_DATA_PAT1)); - - if (test_explicit_conv) - guest_update_mem_map(TO_SHARED, (uint64_t)guest_test_mem, - guest_test_size); - else { - guest_update_mem_access(TO_SHARED, (uint64_t)guest_test_mem, - guest_test_size); - GUEST_SYNC(GUEST_IMPLICIT_MEM_CONV1); - } - - populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PAT2); - - GUEST_SYNC(GUEST_SHARED_MEM_POPULATED); - GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1, - TEST_MEM_DATA_PAT5)); - - if (test_explicit_conv) - guest_update_mem_map(TO_PRIVATE, (uint64_t)guest_test_mem, - guest_test_size); - else { - guest_update_mem_access(TO_PRIVATE, (uint64_t)guest_test_mem, - guest_test_size); - GUEST_SYNC(GUEST_IMPLICIT_MEM_CONV2); - } - - populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PAT3); - GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED2); - - GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1, - TEST_MEM_DATA_PAT3)); - GUEST_DONE(); -} - -static void conv_test_ioexit_fn(struct kvm_vm *vm, uint32_t uc_arg1) -{ - void *test_area_hva = addr_gpa2hva(vm, TEST_AREA_GPA); - void *guest_test_mem_hva = (test_area_hva + GUEST_TEST_MEM_OFFSET); - uint64_t guest_mem_gpa = (TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET); - uint64_t guest_test_size = GUEST_TEST_MEM_SIZE; - - switch (uc_arg1) { - case GUEST_STARTED: - populate_test_area(test_area_hva, TEST_MEM_DATA_PAT4); - VM_STAGE_PROCESSED(GUEST_STARTED); - break; - case GUEST_PRIVATE_MEM_POPULATED: - TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4, - TEST_MEM_DATA_PAT4), "failed"); - VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED); - break; - case GUEST_SHARED_MEM_POPULATED: - TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4, - TEST_MEM_DATA_PAT2), "failed"); - populate_guest_test_mem(guest_test_mem_hva, TEST_MEM_DATA_PAT5); - VM_STAGE_PROCESSED(GUEST_SHARED_MEM_POPULATED); - break; - case GUEST_PRIVATE_MEM_POPULATED2: - TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4, - TEST_MEM_DATA_PAT5), "failed"); - VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED2); - break; - case GUEST_IMPLICIT_MEM_CONV1: - /* - * For first implicit conversion, memory is already private so - * mark it private again just to zap the pte entries for the gpa - * range, so that subsequent accesses from the guest will - * generate ept/npt fault and memory conversion path will be - * exercised by KVM. - */ - vm_update_private_mem(vm, guest_mem_gpa, guest_test_size, - ALLOCATE_MEM); - VM_STAGE_PROCESSED(GUEST_IMPLICIT_MEM_CONV1); - break; - case GUEST_IMPLICIT_MEM_CONV2: - /* - * For second implicit conversion, memory is already shared so - * mark it shared again just to zap the pte entries for the gpa - * range, so that subsequent accesses from the guest will - * generate ept/npt fault and memory conversion path will be - * exercised by KVM. 
- */ - vm_update_private_mem(vm, guest_mem_gpa, guest_test_size, - UNBACK_MEM); - VM_STAGE_PROCESSED(GUEST_IMPLICIT_MEM_CONV2); - break; - default: - TEST_FAIL("Unknown stage %d\n", uc_arg1); - break; - } -} - -static void guest_explicit_conv_test_fn(void) -{ - guest_conv_test_fn(true); -} - -static void guest_implicit_conv_test_fn(void) -{ - guest_conv_test_fn(false); -} - -static void execute_memory_conversion_test(void) -{ - struct vm_setup_info info; - struct test_setup_info *test_info = &info.test_info; - - info.vm_mem_src = VM_MEM_SRC_ANONYMOUS; - info.memslot0_pages = VM_MEMSLOT0_PAGES; - test_info->test_area_gpa = TEST_AREA_GPA; - test_info->test_area_size = TEST_AREA_SIZE; - test_info->test_area_slot = TEST_AREA_SLOT; - test_info->test_area_mem_src = VM_MEM_SRC_ANONYMOUS; - info.ioexit_cb = conv_test_ioexit_fn; - - info.guest_fn = guest_explicit_conv_test_fn; - execute_vm_with_private_mem(&info); - - info.guest_fn = guest_implicit_conv_test_fn; - execute_vm_with_private_mem(&info); -} +#include int main(int argc, char *argv[]) { /* Tell stdout not to buffer its content */ setbuf(stdout, NULL); - execute_memory_conversion_test(); + execute_memory_conversion_tests(); return 0; }
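[Note: patch 8/8 of this series is not part of this excerpt. Given the
execute_sev_memory_conversion_tests() helper added above, its SEV test entry
point could plausibly reduce to the following; the file name and main() body
are assumptions, not quoted from the series.]

// SPDX-License-Identifier: GPL-2.0
/* Hypothetical sev_private_mem_test.c, mirroring private_mem_test.c above. */
#define _GNU_SOURCE /* for program_invocation_short_name */
#include <stdio.h>

#include "private_mem_test_helper.h"

int main(int argc, char *argv[])
{
	/* Tell stdout not to buffer its content */
	setbuf(stdout, NULL);

	execute_sev_memory_conversion_tests();
	return 0;
}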