From patchwork Tue Dec 12 20:46:37 2023
X-Patchwork-Submitter: Sagi Shahar
X-Patchwork-Id: 754137
Date: Tue, 12 Dec 2023 12:46:37 -0800
In-Reply-To: <20231212204647.2170650-1-sagis@google.com>
References: <20231212204647.2170650-1-sagis@google.com>
Message-ID: <20231212204647.2170650-23-sagis@google.com>
Subject: [RFC PATCH v5 22/29] KVM: selftests: Add functions to allow mapping as shared
From: Sagi Shahar
To: linux-kselftest@vger.kernel.org, Ackerley Tng, Ryan Afranji,
 Erdem Aktas, Sagi Shahar, Isaku Yamahata
Cc: Sean Christopherson, Paolo Bonzini, Shuah Khan, Peter Gonda,
 Haibo Xu, Chao Peng, Vishal Annapurve, Roger Wang, Vipin Sharma,
 jmattson@google.com, dmatlack@google.com, linux-kernel@vger.kernel.org,
 kvm@vger.kernel.org, linux-mm@kvack.org

From: Ackerley Tng

Signed-off-by: Ackerley Tng
Signed-off-by: Ryan Afranji
Signed-off-by: Sagi Shahar
---
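A minimal usage sketch for context, assuming a test that has already set
up guest memory backing the example GPA range; the constants and the
helper function name below are illustrative placeholders, not something
added by this patch:

    #include "kvm_util.h"

    /* Hypothetical example values a test might pick; not defined by this patch. */
    #define EXAMPLE_SHARED_GVA   0xc0000000ul
    #define EXAMPLE_SHARED_GPA   0xc0000000ul
    #define EXAMPLE_NR_PAGES     4

    static void map_example_shared_buffer(struct kvm_vm *vm)
    {
            /*
             * Like virt_map(), but forces the shared (unencrypted) encoding
             * in the guest page tables so the pages stay host-accessible.
             */
            virt_map_shared(vm, EXAMPLE_SHARED_GVA, EXAMPLE_SHARED_GPA,
                            EXAMPLE_NR_PAGES);
    }
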
 .../selftests/kvm/include/kvm_util_base.h    | 24 ++++++++++++++
 tools/testing/selftests/kvm/lib/kvm_util.c   | 32 +++++++++++++++++++
 .../selftests/kvm/lib/x86_64/processor.c     | 15 +++++++--
 3 files changed, 69 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index b353617fcdd1..efd7ae8abb20 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -574,6 +574,8 @@ vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
 
 void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 	      unsigned int npages);
+void virt_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
+		     unsigned int npages);
 void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa);
 void *addr_gva2hva(struct kvm_vm *vm, vm_vaddr_t gva);
 vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva);
@@ -1034,6 +1036,28 @@ static inline void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr
 	virt_arch_pg_map(vm, vaddr, paddr);
 }
 
+/*
+ * VM Virtual Page Map as Shared
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   vaddr - VM Virtual Address
+ *   paddr - VM Physical Address
+ *   memslot - Memory region slot for new virtual translation tables
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Within @vm, creates a virtual translation for the page starting
+ * at @vaddr to the page starting at @paddr.
+ */
+void virt_arch_pg_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr);
+
+static inline void virt_pg_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
+{
+	virt_arch_pg_map_shared(vm, vaddr, paddr);
+}
 
 /*
  * Address Guest Virtual to Guest Physical
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 4f1ae0f1eef0..28780fa1f0f2 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1573,6 +1573,38 @@ void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 	}
 }
 
+/*
+ * Map a range of VM virtual address to the VM's physical address as shared
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   vaddr - Virtual address to map
+ *   paddr - VM Physical Address
+ *   npages - The number of pages to map
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Within the VM given by @vm, creates a virtual translation for
+ * @npages starting at @vaddr to the page range starting at @paddr.
+ */
+void virt_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
+		     unsigned int npages)
+{
+	size_t page_size = vm->page_size;
+	size_t size = npages * page_size;
+
+	TEST_ASSERT(vaddr + size > vaddr, "Vaddr overflow");
+	TEST_ASSERT(paddr + size > paddr, "Paddr overflow");
+
+	while (npages--) {
+		virt_pg_map_shared(vm, vaddr, paddr);
+		vaddr += page_size;
+		paddr += page_size;
+	}
+}
+
 /*
  * Address VM Physical to Host Virtual
  *
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 566d82829da4..aa2a57ddb8d3 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -190,7 +190,8 @@ static uint64_t *virt_create_upper_pte(struct kvm_vm *vm,
 	return pte;
 }
 
-void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
+static void ___virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
+			   int level, bool protected)
 {
 	const uint64_t pg_size = PG_LEVEL_SIZE(level);
 	uint64_t *pml4e, *pdpe, *pde;
@@ -235,17 +236,27 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
 		    "PTE already present for 4k page at vaddr: 0x%lx\n", vaddr);
 	*pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK | (paddr & PHYSICAL_PAGE_MASK);
 
-	if (vm_is_gpa_protected(vm, paddr))
+	if (protected)
 		*pte |= vm->arch.c_bit;
 	else
 		*pte |= vm->arch.s_bit;
 }
 
+void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
+{
+	___virt_pg_map(vm, vaddr, paddr, level, vm_is_gpa_protected(vm, paddr));
+}
+
 void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 {
 	__virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K);
 }
 
+void virt_arch_pg_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
+{
+	___virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K, false);
+}
+
 void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 		    uint64_t nr_bytes, int level)
 {
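
After this change both x86_64 mapping paths funnel into ___virt_pg_map():
virt_pg_map() derives the encrypted/shared encoding from
vm_is_gpa_protected(), while the new *_shared variants pin it to shared so
the PTE carries vm->arch.s_bit instead of vm->arch.c_bit. A small
illustrative comparison (the addresses are arbitrary example values, not
taken from this series):

    #include "kvm_util.h"

    /* Arbitrary example addresses, for illustration only. */
    static void example_single_page_mappings(struct kvm_vm *vm)
    {
            /* Encoding (c_bit vs. s_bit) chosen via vm_is_gpa_protected(). */
            virt_pg_map(vm, 0x400000, 0x400000);

            /*
             * Always mapped as shared: on x86_64, ___virt_pg_map() runs with
             * protected == false, so the PTE gets vm->arch.s_bit.
             */
            virt_pg_map_shared(vm, 0x401000, 0x401000);
    }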