From patchwork Wed Mar 14 18:20:05 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 131718
From: julien.grall@arm.com
To: xen-devel@lists.xen.org
Date: Wed, 14 Mar 2018 18:20:05 +0000
Message-Id: <20180314182009.14274-13-julien.grall@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20180314182009.14274-1-julien.grall@arm.com>
References: <20180314182009.14274-1-julien.grall@arm.com>
Cc: Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Jan Beulich
Subject: [Xen-devel] [PATCH v5 12/16] xen/mm: Switch common/memory.c to use typesafe MFN
List-Id: Xen developer discussion

From: Julien Grall

A new helper, copy_mfn_to_guest(), is introduced to easily copy an MFN
to guest memory.

No functional change intended.

Signed-off-by: Julien Grall
---
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Ian Jackson
Cc: Jan Beulich
Cc: Konrad Rzeszutek Wilk
Cc: Stefano Stabellini
Cc: Tim Deegan
Cc: Wei Liu

Changes in v5:
    - Restrict the scope of some mfn variables.

Changes in v4:
    - Patch added
---
 xen/common/memory.c | 75 ++++++++++++++++++++++++++++++++---------------------
 1 file changed, 46 insertions(+), 29 deletions(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index 3ed71f8f74..01f1d2dbc3 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -33,6 +33,12 @@
 #include
 #endif
 
+/* Override macros from asm/page.h to make them work with mfn_t */
+#undef page_to_mfn
+#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
+#undef mfn_to_page
+#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
+
 struct memop_args {
     /* INPUT */
     struct domain *domain;     /* Domain to be affected. */
@@ -95,11 +101,17 @@ static unsigned int max_order(const struct domain *d)
     return min(order, MAX_ORDER + 0U);
 }
 
+/* Helper to copy a typesafe MFN to guest */
+#define copy_mfn_to_guest(hnd, off, mfn)            \
+    ({                                              \
+        xen_pfn_t mfn_ = mfn_x(mfn);                \
+        __copy_to_guest_offset(hnd, off, &mfn_, 1); \
+    })
+
 static void increase_reservation(struct memop_args *a)
 {
     struct page_info *page;
     unsigned long i;
-    xen_pfn_t mfn;
     struct domain *d = a->domain;
 
     if ( !guest_handle_is_null(a->extent_list) &&
@@ -132,8 +144,9 @@ static void increase_reservation(struct memop_args *a)
         if ( !paging_mode_translate(d) &&
              !guest_handle_is_null(a->extent_list) )
         {
-            mfn = page_to_mfn(page);
-            if ( unlikely(__copy_to_guest_offset(a->extent_list, i, &mfn, 1)) )
+            mfn_t mfn = page_to_mfn(page);
+
+            if ( unlikely(copy_mfn_to_guest(a->extent_list, i, mfn)) )
                 goto out;
         }
     }
@@ -146,7 +159,7 @@ static void populate_physmap(struct memop_args *a)
 {
     struct page_info *page;
     unsigned int i, j;
-    xen_pfn_t gpfn, mfn;
+    xen_pfn_t gpfn;
     struct domain *d = a->domain, *curr_d = current->domain;
     bool need_tlbflush = false;
     uint32_t tlbflush_timestamp = 0;
@@ -182,6 +195,8 @@ static void populate_physmap(struct memop_args *a)
 
     for ( i = a->nr_done; i < a->nr_extents; i++ )
     {
+        mfn_t mfn;
+
         if ( i != a->nr_done && hypercall_preempt_check() )
         {
             a->preempted = 1;
@@ -205,14 +220,15 @@ static void populate_physmap(struct memop_args *a)
         {
             if ( is_domain_direct_mapped(d) )
             {
-                mfn = gpfn;
+                mfn = _mfn(gpfn);
 
-                for ( j = 0; j < (1U << a->extent_order); j++, mfn++ )
+                for ( j = 0; j < (1U << a->extent_order); j++,
+                      mfn = mfn_add(mfn, 1) )
                 {
-                    if ( !mfn_valid(_mfn(mfn)) )
+                    if ( !mfn_valid(mfn) )
                     {
-                        gdprintk(XENLOG_INFO, "Invalid mfn %#"PRI_xen_pfn"\n",
-                                 mfn);
+                        gdprintk(XENLOG_INFO, "Invalid mfn %#"PRI_mfn"\n",
+                                 mfn_x(mfn));
                         goto out;
                     }
 
@@ -220,14 +236,14 @@ static void populate_physmap(struct memop_args *a)
                     if ( !get_page(page, d) )
                     {
                         gdprintk(XENLOG_INFO,
-                                 "mfn %#"PRI_xen_pfn" doesn't belong to d%d\n",
-                                 mfn, d->domain_id);
+                                 "mfn %#"PRI_mfn" doesn't belong to d%d\n",
+                                 mfn_x(mfn), d->domain_id);
                         goto out;
                     }
 
                     put_page(page);
                 }
 
-                mfn = gpfn;
+                mfn = _mfn(gpfn);
             }
             else
             {
@@ -253,15 +269,15 @@ static void populate_physmap(struct memop_args *a)
                 mfn = page_to_mfn(page);
             }
 
-            guest_physmap_add_page(d, _gfn(gpfn), _mfn(mfn), a->extent_order);
+            guest_physmap_add_page(d, _gfn(gpfn), mfn, a->extent_order);
 
             if ( !paging_mode_translate(d) )
             {
                 for ( j = 0; j < (1U << a->extent_order); j++ )
-                    set_gpfn_from_mfn(mfn + j, gpfn + j);
+                    set_gpfn_from_mfn(mfn_x(mfn_add(mfn, j)), gpfn + j);
 
                 /* Inform the domain of the new page's machine address. */
-                if ( unlikely(__copy_to_guest_offset(a->extent_list, i, &mfn, 1)) )
+                if ( unlikely(copy_mfn_to_guest(a->extent_list, i, mfn)) )
                     goto out;
             }
         }
@@ -304,7 +320,7 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
     if ( p2mt == p2m_ram_paging_out )
     {
         ASSERT(mfn_valid(mfn));
-        page = mfn_to_page(mfn_x(mfn));
+        page = mfn_to_page(mfn);
         if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
             put_page(page);
     }
@@ -349,7 +365,7 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
     }
 #endif /* CONFIG_X86 */
 
-    page = mfn_to_page(mfn_x(mfn));
+    page = mfn_to_page(mfn);
     if ( unlikely(!get_page(page, d)) )
     {
         put_gfn(d, gmfn);
@@ -485,7 +501,8 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
     PAGE_LIST_HEAD(in_chunk_list);
     PAGE_LIST_HEAD(out_chunk_list);
     unsigned long in_chunk_order, out_chunk_order;
-    xen_pfn_t gpfn, gmfn, mfn;
+    xen_pfn_t gpfn, gmfn;
+    mfn_t mfn;
     unsigned long i, j, k;
     unsigned int memflags = 0;
     long rc = 0;
@@ -607,7 +624,7 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
             p2m_type_t p2mt;
 
             /* Shared pages cannot be exchanged */
-            mfn = mfn_x(get_gfn_unshare(d, gmfn + k, &p2mt));
+            mfn = get_gfn_unshare(d, gmfn + k, &p2mt);
             if ( p2m_is_shared(p2mt) )
             {
                 put_gfn(d, gmfn + k);
@@ -615,9 +632,9 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
                 goto fail;
             }
 #else /* !CONFIG_X86 */
-            mfn = mfn_x(gfn_to_mfn(d, _gfn(gmfn + k)));
+            mfn = gfn_to_mfn(d, _gfn(gmfn + k));
 #endif
-            if ( unlikely(!mfn_valid(_mfn(mfn))) )
+            if ( unlikely(!mfn_valid(mfn)) )
             {
                 put_gfn(d, gmfn + k);
                 rc = -EINVAL;
@@ -664,10 +681,10 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
                 if ( !test_and_clear_bit(_PGC_allocated, &page->count_info) )
                     BUG();
                 mfn = page_to_mfn(page);
-                gfn = mfn_to_gmfn(d, mfn);
+                gfn = mfn_to_gmfn(d, mfn_x(mfn));
                 /* Pages were unshared above */
                 BUG_ON(SHARED_M2P(gfn));
-                if ( guest_physmap_remove_page(d, _gfn(gfn), _mfn(mfn), 0) )
+                if ( guest_physmap_remove_page(d, _gfn(gfn), mfn, 0) )
                     domain_crash(d);
                 put_page(page);
             }
@@ -712,16 +729,16 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
             }
 
             mfn = page_to_mfn(page);
-            guest_physmap_add_page(d, _gfn(gpfn), _mfn(mfn),
+            guest_physmap_add_page(d, _gfn(gpfn), mfn,
                                    exch.out.extent_order);
 
             if ( !paging_mode_translate(d) )
             {
                 for ( k = 0; k < (1UL << exch.out.extent_order); k++ )
-                    set_gpfn_from_mfn(mfn + k, gpfn + k);
-                if ( __copy_to_guest_offset(exch.out.extent_start,
-                                            (i << out_chunk_order) + j,
-                                            &mfn, 1) )
+                    set_gpfn_from_mfn(mfn_x(mfn_add(mfn, k)), gpfn + k);
+                if ( copy_mfn_to_guest(exch.out.extent_start,
+                                       (i << out_chunk_order) + j,
+                                       mfn) )
                     rc = -EFAULT;
             }
         }
@@ -1216,7 +1233,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( page )
         {
             rc = guest_physmap_remove_page(d, _gfn(xrfp.gpfn),
                                            page_to_mfn(page), 0);
+                                           page_to_mfn(page), 0);
             put_page(page);
         }
         else