From patchwork Wed May 12 21:44:58 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 435954
Date: Wed, 12 May 2021 14:44:58 -0700
In-Reply-To: <20210512214502.2047008-1-axelrasmussen@google.com>
Message-Id: <20210512214502.2047008-2-axelrasmussen@google.com>
References: <20210512214502.2047008-1-axelrasmussen@google.com>
Subject: [PATCH 1/5] KVM: selftests: allow different backing memory types for demand paging
From: Axel Rasmussen
To: Aaron Lewis, Alexander Graf, Andrew Jones, Andrew Morton, Ben Gardon,
 Emanuele Giuseppe Esposito, Eric Auger, Jacob Xu, Makarand Sonare,
 Oliver Upton, Paolo Bonzini, Peter Xu, Shuah Khan, Yanan Wang
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Axel Rasmussen
X-Mailing-List: linux-kselftest@vger.kernel.org

Add an argument which lets us specify a different backing memory type
for the test. The default is just to use anonymous, matching existing
behavior (if the argument is omitted).

This is in preparation for testing UFFD minor faults. For that, we need
to use a new backing memory type which is setup with MAP_SHARED.

This notably requires one other change. Perhaps counter-intuitively,
perf_test_args.host_page_size is the host's *native* page size, not the
size of the pages the host is using to back the guest. This means, if
we try to run the test with e.g. VM_MEM_SRC_ANONYMOUS_HUGETLB, we'll
try to do demand paging with 4k pages instead of 2M hugepages. So,
convert everything to use a new demand_paging_size, computed based on
the backing memory type.

Signed-off-by: Axel Rasmussen
---
 .../selftests/kvm/demand_paging_test.c | 24 +++++++++++++------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 5f7a229c3af1..10c7ba76a9c6 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -38,6 +38,7 @@
 static int nr_vcpus = 1;
 static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE;
 
+static size_t demand_paging_size;
 static char *guest_data_prototype;
 
 static void *vcpu_worker(void *data)
@@ -83,7 +84,7 @@ static int handle_uffd_page_request(int uffd, uint64_t addr)
 
 	copy.src = (uint64_t)guest_data_prototype;
 	copy.dst = addr;
-	copy.len = perf_test_args.host_page_size;
+	copy.len = demand_paging_size;
 	copy.mode = 0;
 
 	clock_gettime(CLOCK_MONOTONIC, &start);
@@ -100,7 +101,7 @@ static int handle_uffd_page_request(int uffd, uint64_t addr)
 	PER_PAGE_DEBUG("UFFDIO_COPY %d \t%ld ns\n", tid,
 		       timespec_to_ns(ts_diff));
 	PER_PAGE_DEBUG("Paged in %ld bytes at 0x%lx from thread %d\n",
-		       perf_test_args.host_page_size, addr, tid);
+		       demand_paging_size, addr, tid);
 
 	return 0;
 }
@@ -250,6 +251,7 @@ static int setup_demand_paging(struct kvm_vm *vm,
 struct test_params {
 	bool use_uffd;
 	useconds_t uffd_delay;
+	enum vm_mem_backing_src_type src_type;
 	bool partition_vcpu_memory_access;
 };
 
@@ -267,14 +269,16 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	int r;
 
 	vm = perf_test_create_vm(mode, nr_vcpus, guest_percpu_mem_size,
-				 VM_MEM_SRC_ANONYMOUS);
+				 p->src_type);
 
 	perf_test_args.wr_fract = 1;
 
-	guest_data_prototype = malloc(perf_test_args.host_page_size);
+	demand_paging_size = get_backing_src_pagesz(p->src_type);
+
+	guest_data_prototype = malloc(demand_paging_size);
 	TEST_ASSERT(guest_data_prototype,
 		    "Failed to allocate buffer for guest data pattern");
-	memset(guest_data_prototype, 0xAB, perf_test_args.host_page_size);
+	memset(guest_data_prototype, 0xAB, demand_paging_size);
 
 	vcpu_threads = malloc(nr_vcpus * sizeof(*vcpu_threads));
 	TEST_ASSERT(vcpu_threads, "Memory allocation failed");
@@ -388,7 +392,7 @@ static void help(char *name)
 {
 	puts("");
 	printf("usage: %s [-h] [-m mode] [-u] [-d uffd_delay_usec]\n"
-	       "       [-b memory] [-v vcpus] [-o]\n", name);
+	       "       [-b memory] [-t type] [-v vcpus] [-o]\n", name);
 	guest_modes_help();
 	printf(" -u: use User Fault FD to handle vCPU page\n"
 	       "     faults.\n");
@@ -398,6 +402,8 @@ static void help(char *name)
 	printf(" -b: specify the size of the memory region which should be\n"
 	       "     demand paged by each vCPU. e.g. 10M or 3G.\n"
 	       "     Default: 1G\n");
+	printf(" -t: The type of backing memory to use. Default: anonymous\n");
+	backing_src_help();
 	printf(" -v: specify the number of vCPUs to run.\n");
 	printf(" -o: Overlap guest memory accesses instead of partitioning\n"
 	       "     them into a separate region of memory for each vCPU.\n");
@@ -409,13 +415,14 @@ int main(int argc, char *argv[])
 {
 	int max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS);
 	struct test_params p = {
+		.src_type = VM_MEM_SRC_ANONYMOUS,
 		.partition_vcpu_memory_access = true,
 	};
 	int opt;
 
 	guest_modes_append_default();
 
-	while ((opt = getopt(argc, argv, "hm:ud:b:v:o")) != -1) {
+	while ((opt = getopt(argc, argv, "hm:ud:b:t:v:o")) != -1) {
 		switch (opt) {
 		case 'm':
 			guest_modes_cmdline(optarg);
@@ -430,6 +437,9 @@ int main(int argc, char *argv[])
 		case 'b':
 			guest_percpu_mem_size = parse_size(optarg);
 			break;
+		case 't':
+			p.src_type = parse_backing_src_type(optarg);
+			break;
 		case 'v':
 			nr_vcpus = atoi(optarg);
 			TEST_ASSERT(nr_vcpus > 0 && nr_vcpus <= max_vcpus,

From patchwork Wed May 12 21:45:01 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 435952
Date: Wed, 12 May 2021 14:45:01 -0700
In-Reply-To: <20210512214502.2047008-1-axelrasmussen@google.com>
Message-Id: <20210512214502.2047008-5-axelrasmussen@google.com>
References: <20210512214502.2047008-1-axelrasmussen@google.com>
Subject: [PATCH 4/5] KVM: selftests: allow using UFFD minor faults for demand paging
From: Axel Rasmussen
To: Aaron Lewis, Alexander Graf, Andrew Jones, Andrew Morton, Ben Gardon,
 Emanuele Giuseppe Esposito, Eric Auger, Jacob Xu, Makarand Sonare,
 Oliver Upton, Paolo Bonzini, Peter Xu, Shuah Khan, Yanan Wang
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Axel Rasmussen
X-Mailing-List: linux-kselftest@vger.kernel.org

UFFD handling of MINOR faults is a new feature whose use case is to
speed up demand paging (compared to MISSING faults). So, it's
interesting to let this selftest exercise this new mode.

Modify the demand paging test to have the option of using UFFD minor
faults, as opposed to missing faults. Now, when turning on userfaultfd
with '-u', the desired mode has to be specified ("MISSING" or "MINOR").

If we're in minor mode, before registering, prefault via the *alias*.
This way, the guest will trigger minor faults, instead of missing
faults, and we can UFFDIO_CONTINUE to resolve them.

Modify the page fault handler function to use the right ioctl depending
on the mode we're running in. In MINOR mode, use UFFDIO_CONTINUE.

Signed-off-by: Axel Rasmussen
---
 .../selftests/kvm/demand_paging_test.c | 124 ++++++++++++------
 1 file changed, 87 insertions(+), 37 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 10c7ba76a9c6..ff29aaea3120 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -72,33 +72,57 @@ static void *vcpu_worker(void *data)
 	return NULL;
 }
 
-static int handle_uffd_page_request(int uffd, uint64_t addr)
+static int handle_uffd_page_request(int uffd_mode, int uffd, uint64_t addr)
 {
-	pid_t tid;
+	const char *ioctl_name;
+	pid_t tid = syscall(__NR_gettid);
 	struct timespec start;
 	struct timespec ts_diff;
-	struct uffdio_copy copy;
 	int r;
 
-	tid = syscall(__NR_gettid);
+	if (uffd_mode == UFFDIO_REGISTER_MODE_MISSING) {
+		struct uffdio_copy copy;
 
-	copy.src = (uint64_t)guest_data_prototype;
-	copy.dst = addr;
-	copy.len = demand_paging_size;
-	copy.mode = 0;
+		ioctl_name = "UFFDIO_COPY";
 
-	clock_gettime(CLOCK_MONOTONIC, &start);
+		copy.src = (uint64_t)guest_data_prototype;
+		copy.dst = addr;
+		copy.len = demand_paging_size;
+		copy.mode = 0;
 
-	r = ioctl(uffd, UFFDIO_COPY, &copy);
-	if (r == -1) {
-		pr_info("Failed Paged in 0x%lx from thread %d with errno: %d\n",
-			addr, tid, errno);
-		return r;
-	}
+		clock_gettime(CLOCK_MONOTONIC, &start);
 
-	ts_diff = timespec_elapsed(start);
+		r = ioctl(uffd, UFFDIO_COPY, &copy);
+		if (r == -1) {
+			pr_info("Failed UFFDIO_COPY in 0x%lx from thread %d with errno: %d\n",
+				addr, tid, errno);
+			return r;
+		}
+
+		ts_diff = timespec_elapsed(start);
+	} else if (uffd_mode == UFFDIO_REGISTER_MODE_MINOR) {
+		struct uffdio_continue cont = {0};
+
+		ioctl_name = "UFFDIO_CONTINUE";
+
+		cont.range.start = addr;
+		cont.range.len = demand_paging_size;
+
+		clock_gettime(CLOCK_MONOTONIC, &start);
+
+		r = ioctl(uffd, UFFDIO_CONTINUE, &cont);
+		if (r == -1) {
+			pr_info("Failed UFFDIO_CONTINUE in 0x%lx from thread %d with errno: %d\n",
+				addr, tid, errno);
+			return r;
+		}
 
-	PER_PAGE_DEBUG("UFFDIO_COPY %d \t%ld ns\n", tid,
+		ts_diff = timespec_elapsed(start);
+	} else {
+		TEST_FAIL("Invalid uffd mode %d", uffd_mode);
+	}
+
+	PER_PAGE_DEBUG("%s %d \t%ld ns\n", ioctl_name, tid,
 		       timespec_to_ns(ts_diff));
 	PER_PAGE_DEBUG("Paged in %ld bytes at 0x%lx from thread %d\n",
 		       demand_paging_size, addr, tid);
@@ -109,6 +133,7 @@ static int handle_uffd_page_request(int uffd, uint64_t addr)
 bool quit_uffd_thread;
 
 struct uffd_handler_args {
+	int uffd_mode;
 	int uffd;
 	int pipefd;
 	useconds_t delay;
@@ -170,7 +195,7 @@ static void *uffd_handler_thread_fn(void *arg)
 			if (r == -1) {
 				if (errno == EAGAIN)
 					continue;
-				pr_info("Read of uffd gor errno %d", errno);
+				pr_info("Read of uffd got errno %d\n", errno);
 				return NULL;
 			}
 
@@ -185,7 +210,7 @@ static void *uffd_handler_thread_fn(void *arg)
 		if (delay)
 			usleep(delay);
 		addr = msg.arg.pagefault.address;
-		r = handle_uffd_page_request(uffd, addr);
+		r = handle_uffd_page_request(uffd_args->uffd_mode, uffd, addr);
 		if (r < 0)
 			return NULL;
 		pages++;
@@ -201,17 +226,32 @@ static void *uffd_handler_thread_fn(void *arg)
 static int setup_demand_paging(struct kvm_vm *vm,
 			       pthread_t *uffd_handler_thread, int pipefd,
+			       int uffd_mode,
 			       useconds_t uffd_delay,
 			       struct uffd_handler_args *uffd_args,
-			       void *hva, uint64_t len)
+			       void *hva, void *alias, uint64_t len)
 {
 	int uffd;
 	struct uffdio_api uffdio_api;
 	struct uffdio_register uffdio_register;
+	uint64_t expected_ioctls = ((uint64_t) 1) << _UFFDIO_COPY;
+
+	/* In order to get minor faults, prefault via the alias. */
+	if (uffd_mode == UFFDIO_REGISTER_MODE_MINOR) {
+		size_t p;
+
+		expected_ioctls = ((uint64_t) 1) << _UFFDIO_CONTINUE;
+
+		TEST_ASSERT(alias != NULL, "Alias required for minor faults");
+		for (p = 0; p < (len / demand_paging_size); ++p) {
+			memcpy(alias + (p * demand_paging_size),
+			       guest_data_prototype, demand_paging_size);
+		}
+	}
 
 	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
 	if (uffd == -1) {
-		pr_info("uffd creation failed\n");
+		pr_info("uffd creation failed, errno: %d\n", errno);
 		return -1;
 	}
 
@@ -224,18 +264,18 @@ static int setup_demand_paging(struct kvm_vm *vm,
 
 	uffdio_register.range.start = (uint64_t)hva;
 	uffdio_register.range.len = len;
-	uffdio_register.mode = UFFDIO_REGISTER_MODE_MISSING;
+	uffdio_register.mode = uffd_mode;
 	if (ioctl(uffd, UFFDIO_REGISTER, &uffdio_register) == -1) {
 		pr_info("ioctl uffdio_register failed\n");
 		return -1;
 	}
 
-	if ((uffdio_register.ioctls & UFFD_API_RANGE_IOCTLS) !=
-	    UFFD_API_RANGE_IOCTLS) {
-		pr_info("unexpected userfaultfd ioctl set\n");
+	if ((uffdio_register.ioctls & expected_ioctls) != expected_ioctls) {
+		pr_info("missing userfaultfd ioctls\n");
 		return -1;
 	}
 
+	uffd_args->uffd_mode = uffd_mode;
 	uffd_args->uffd = uffd;
 	uffd_args->pipefd = pipefd;
 	uffd_args->delay = uffd_delay;
@@ -249,7 +289,7 @@ static int setup_demand_paging(struct kvm_vm *vm,
 }
 
 struct test_params {
-	bool use_uffd;
+	int uffd_mode;
 	useconds_t uffd_delay;
 	enum vm_mem_backing_src_type src_type;
 	bool partition_vcpu_memory_access;
@@ -286,7 +326,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	perf_test_setup_vcpus(vm, nr_vcpus, guest_percpu_mem_size,
 			      p->partition_vcpu_memory_access);
 
-	if (p->use_uffd) {
+	if (p->uffd_mode) {
 		uffd_handler_threads =
 			malloc(nr_vcpus * sizeof(*uffd_handler_threads));
 		TEST_ASSERT(uffd_handler_threads, "Memory allocation failed");
@@ -300,6 +340,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 		for (vcpu_id = 0; vcpu_id < nr_vcpus; vcpu_id++) {
 			vm_paddr_t vcpu_gpa;
 			void *vcpu_hva;
+			void *vcpu_alias;
 			uint64_t vcpu_mem_size;
 
@@ -314,8 +355,9 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 			PER_VCPU_DEBUG("Added VCPU %d with test mem gpa [%lx, %lx)\n",
 				       vcpu_id, vcpu_gpa, vcpu_gpa + vcpu_mem_size);
 
-			/* Cache the HVA pointer of the region */
+			/* Cache the host addresses of the region */
 			vcpu_hva = addr_gpa2hva(vm, vcpu_gpa);
+			vcpu_alias = addr_gpa2alias(vm, vcpu_gpa);
 
 			/*
 			 * Set up user fault fd to handle demand paging
@@ -327,9 +369,10 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 			r = setup_demand_paging(vm,
 						&uffd_handler_threads[vcpu_id],
-						pipefds[vcpu_id * 2],
+						pipefds[vcpu_id * 2], p->uffd_mode,
 						p->uffd_delay, &uffd_args[vcpu_id],
-						vcpu_hva, vcpu_mem_size);
+						vcpu_hva, vcpu_alias,
+						vcpu_mem_size);
 			if (r < 0)
 				exit(-r);
 		}
@@ -359,7 +402,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	pr_info("All vCPU threads joined\n");
 
-	if (p->use_uffd) {
+	if (p->uffd_mode) {
 		char c;
 
 		/* Tell the user fault fd handler threads to quit */
@@ -381,7 +424,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	free(guest_data_prototype);
 	free(vcpu_threads);
-	if (p->use_uffd) {
+	if (p->uffd_mode) {
 		free(uffd_handler_threads);
 		free(uffd_args);
 		free(pipefds);
@@ -391,11 +434,11 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 static void help(char *name)
 {
 	puts("");
-	printf("usage: %s [-h] [-m mode] [-u] [-d uffd_delay_usec]\n"
+	printf("usage: %s [-h] [-m mode] [-u mode] [-d uffd_delay_usec]\n"
 	       "       [-b memory] [-t type] [-v vcpus] [-o]\n", name);
 	guest_modes_help();
-	printf(" -u: use User Fault FD to handle vCPU page\n"
-	       "     faults.\n");
+	printf(" -u: use userfaultfd to handle vCPU page faults. Mode is a\n"
+	       "     UFFD registration mode: 'MISSING' or 'MINOR'.\n");
 	printf(" -d: add a delay in usec to the User Fault\n"
 	       "     FD handler to simulate demand paging\n"
 	       "     overheads. Ignored without -u.\n");
@@ -422,13 +465,17 @@ int main(int argc, char *argv[])
 
 	guest_modes_append_default();
 
-	while ((opt = getopt(argc, argv, "hm:ud:b:t:v:o")) != -1) {
+	while ((opt = getopt(argc, argv, "hm:u:d:b:t:v:o")) != -1) {
 		switch (opt) {
 		case 'm':
 			guest_modes_cmdline(optarg);
 			break;
 		case 'u':
-			p.use_uffd = true;
+			if (!strcmp("MISSING", optarg))
+				p.uffd_mode = UFFDIO_REGISTER_MODE_MISSING;
+			else if (!strcmp("MINOR", optarg))
+				p.uffd_mode = UFFDIO_REGISTER_MODE_MINOR;
+			TEST_ASSERT(p.uffd_mode, "UFFD mode must be 'MISSING' or 'MINOR'.");
 			break;
 		case 'd':
 			p.uffd_delay = strtoul(optarg, NULL, 0);
@@ -455,6 +502,9 @@ int main(int argc, char *argv[])
 		}
 	}
 
+	TEST_ASSERT(p.uffd_mode != UFFDIO_REGISTER_MODE_MINOR || p.src_type == VM_MEM_SRC_SHMEM,
+		    "userfaultfd MINOR mode requires shared memory; pick a different -t");
+
 	for_each_guest_mode(run_test, &p);
 
 	return 0;

From patchwork Wed May 12 21:45:02 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 435953
Date: Wed, 12 May 2021 14:45:02 -0700
In-Reply-To: <20210512214502.2047008-1-axelrasmussen@google.com>
Message-Id: <20210512214502.2047008-6-axelrasmussen@google.com>
References: <20210512214502.2047008-1-axelrasmussen@google.com>
Subject: [PATCH 5/5] KVM: selftests: add shared hugetlbfs backing source type
From: Axel Rasmussen
To: Aaron Lewis, Alexander Graf, Andrew Jones, Andrew Morton, Ben Gardon,
 Emanuele Giuseppe Esposito, Eric Auger, Jacob Xu, Makarand Sonare,
 Oliver Upton, Paolo Bonzini, Peter Xu, Shuah Khan, Yanan Wang
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Axel Rasmussen
X-Mailing-List: linux-kselftest@vger.kernel.org

This lets us run the demand paging test on top of a shared
hugetlbfs-backed area. The "shared" is key, as this allows us to
exercise userfaultfd minor faults on hugetlbfs.

Signed-off-by: Axel Rasmussen
---
 tools/testing/selftests/kvm/demand_paging_test.c | 6 ++++--
 tools/testing/selftests/kvm/include/test_util.h  | 10 ++++++++++
 tools/testing/selftests/kvm/lib/kvm_util.c       | 9 +++++++--
 tools/testing/selftests/kvm/lib/test_util.c      | 6 ++++++
 4 files changed, 27 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index ff29aaea3120..32942c9e0376 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -502,8 +502,10 @@ int main(int argc, char *argv[])
 		}
 	}
 
-	TEST_ASSERT(p.uffd_mode != UFFDIO_REGISTER_MODE_MINOR || p.src_type == VM_MEM_SRC_SHMEM,
-		    "userfaultfd MINOR mode requires shared memory; pick a different -t");
+	if (p.uffd_mode == UFFDIO_REGISTER_MODE_MINOR &&
+	    !backing_src_is_shared(p.src_type)) {
+		TEST_FAIL("userfaultfd MINOR mode requires shared memory; pick a different -t");
+	}
 
 	for_each_guest_mode(run_test, &p);
 
diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index 7377f00469ef..852d6d2cc285 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -85,9 +85,19 @@ enum vm_mem_backing_src_type {
 	VM_MEM_SRC_ANONYMOUS_HUGETLB_2GB,
 	VM_MEM_SRC_ANONYMOUS_HUGETLB_16GB,
 	VM_MEM_SRC_SHMEM,
+	VM_MEM_SRC_SHARED_HUGETLB,
 	NUM_SRC_TYPES,
 };
 
+/*
+ * Whether or not the given source type is shared memory (as opposed to
+ * anonymous).
+ */
+static inline bool backing_src_is_shared(enum vm_mem_backing_src_type t)
+{
+	return t == VM_MEM_SRC_SHMEM || t == VM_MEM_SRC_SHARED_HUGETLB;
+}
+
 struct vm_mem_backing_src_alias {
 	const char *name;
 	uint32_t flag;
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 838d58633f7e..fed02153c919 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -756,8 +756,13 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 		region->mmap_size += alignment;
 
 	region->fd = -1;
-	if (src_type == VM_MEM_SRC_SHMEM) {
-		region->fd = memfd_create("kvm_selftest", MFD_CLOEXEC);
+	if (backing_src_is_shared(src_type)) {
+		int memfd_flags = MFD_CLOEXEC;
+
+		if (src_type == VM_MEM_SRC_SHARED_HUGETLB)
+			memfd_flags |= MFD_HUGETLB;
+
+		region->fd = memfd_create("kvm_selftest", memfd_flags);
 		TEST_ASSERT(region->fd != -1,
 			    "memfd_create failed, errno: %i", errno);
diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
index c7a265da5090..65fb8b43782c 100644
--- a/tools/testing/selftests/kvm/lib/test_util.c
+++ b/tools/testing/selftests/kvm/lib/test_util.c
@@ -240,6 +240,11 @@ const struct vm_mem_backing_src_alias *vm_mem_backing_src_alias(uint32_t i)
 			.name = "shmem",
 			.flag = MAP_SHARED,
 		},
+		[VM_MEM_SRC_SHARED_HUGETLB] = {
+			.name = "shared_hugetlb",
+			/* No MAP_HUGETLB, we use MFD_HUGETLB instead. */
+			.flag = MAP_SHARED,
+		},
 	};
 	_Static_assert(ARRAY_SIZE(aliases) == NUM_SRC_TYPES,
 		       "Missing new backing src types?");
@@ -262,6 +267,7 @@ size_t get_backing_src_pagesz(uint32_t i)
 	case VM_MEM_SRC_ANONYMOUS_THP:
 		return get_trans_hugepagesz();
 	case VM_MEM_SRC_ANONYMOUS_HUGETLB:
+	case VM_MEM_SRC_SHARED_HUGETLB:
 		return get_def_hugetlb_pagesz();
 	default:
 		return MAP_HUGE_PAGE_SIZE(flag);