From patchwork Mon Oct 2 12:59:30 2017
X-Patchwork-Submitter: Julien Grall <julien.grall@arm.com>
X-Patchwork-Id: 114609
From: Julien Grall <julien.grall@arm.com>
To: xen-devel@lists.xen.org
Cc: George Dunlap <george.dunlap@citrix.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Julien Grall <julien.grall@arm.com>,
 Jan Beulich <jbeulich@suse.com>
Date: Mon, 2 Oct 2017 13:59:30 +0100
Message-Id: <20171002125941.11274-5-julien.grall@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20171002125941.11274-1-julien.grall@arm.com>
References: <20171002125941.11274-1-julien.grall@arm.com>
Subject: [Xen-devel] [PATCH v3 04/15] xen/x86: p2m-pod: Fix coding style
List-Id: Xen developer discussion <xen-devel.lists.xen.org>

Also take the opportunity to:
    - move from 1 << * to 1UL << * (see the sketch after this list)
    - use unsigned when possible
    - move from unsigned int -> unsigned long for some induction variables
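As a quick illustration of the first bullet, here is a minimal standalone
sketch (not part of the patch, and plain host C rather than Xen code; all
names are illustrative only). With a plain int constant the shift is
performed in 32-bit arithmetic, so once the shift count can reach 31 the
result is undefined and, on a typical LP64 target, sign-extends when
widened to unsigned long, corrupting mask computations such as
((1 << order) - 1):

    #include <stdio.h>

    int main(void)
    {
        unsigned int order = 31;

        /* Shift computed in int: undefined behaviour at order == 31; the
         * usual INT_MIN result then sign-extends on conversion. */
        unsigned long bad = 1 << order;

        /* Shift computed in unsigned long: well-defined for order < 64. */
        unsigned long good = 1UL << order;

        printf("bad  = %#lx\n", bad);   /* typically 0xffffffff80000000 */
        printf("good = %#lx\n", good);  /* 0x80000000 */

        return 0;
    }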
Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>

---
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

Changes in v3:
    - Move some 1 << * to 1UL << * changes from previous to this patch
    - Add George's and Wei's reviewed-by

Changes in v2:
    - Add Andrew's acked-by
---
 xen/arch/x86/mm/p2m-pod.c | 106 +++++++++++++++++++++++-----------------------
 1 file changed, 54 insertions(+), 52 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 6beb26b00a..f04d6e03e2 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -60,7 +60,7 @@ p2m_pod_cache_add(struct p2m_domain *p2m,
                   struct page_info *page,
                   unsigned int order)
 {
-    int i;
+    unsigned long i;
     struct page_info *p;
     struct domain *d = p2m->domain;
 
@@ -70,23 +70,24 @@ p2m_pod_cache_add(struct p2m_domain *p2m,
     mfn = page_to_mfn(page);
 
     /* Check to make sure this is a contiguous region */
-    if( mfn_x(mfn) & ((1 << order) - 1) )
+    if ( mfn_x(mfn) & ((1UL << order) - 1) )
     {
         printk("%s: mfn %lx not aligned order %u! (mask %lx)\n",
                __func__, mfn_x(mfn), order, ((1UL << order) - 1));
         return -1;
     }
 
-    for(i=0; i < 1 << order ; i++) {
+    for ( i = 0; i < 1UL << order ; i++)
+    {
         struct domain * od;
 
         p = mfn_to_page(_mfn(mfn_x(mfn) + i));
         od = page_get_owner(p);
-        if(od != d)
+        if ( od != d )
         {
             printk("%s: mfn %lx expected owner d%d, got owner d%d!\n",
                    __func__, mfn_x(mfn), d->domain_id,
-                   od?od->domain_id:-1);
+                   od ? od->domain_id : -1);
             return -1;
         }
     }
@@ -99,12 +100,12 @@ p2m_pod_cache_add(struct p2m_domain *p2m,
      * guaranteed to be zero; but by reclaiming zero pages, we implicitly
      * promise to provide zero pages. So we scrub pages before using.
      */
-    for ( i = 0; i < (1 << order); i++ )
+    for ( i = 0; i < (1UL << order); i++ )
         clear_domain_page(_mfn(mfn_x(page_to_mfn(page)) + i));
 
     /* First, take all pages off the domain list */
     lock_page_alloc(p2m);
-    for(i=0; i < 1 << order ; i++)
+    for ( i = 0; i < 1UL << order ; i++ )
     {
         p = page + i;
         page_list_del(p, &d->page_list);
 
@@ -128,7 +129,7 @@ p2m_pod_cache_add(struct p2m_domain *p2m,
     default:
         BUG();
     }
 
-    p2m->pod.count += 1L << order;
+    p2m->pod.count += 1UL << order;
 
     return 0;
 }
@@ -140,7 +141,7 @@ static struct page_info * p2m_pod_cache_get(struct p2m_domain *p2m,
                                             unsigned int order)
 {
     struct page_info *p = NULL;
-    int i;
+    unsigned long i;
 
     ASSERT(pod_locked_by_me(p2m));
 
@@ -162,7 +163,7 @@ static struct page_info * p2m_pod_cache_get(struct p2m_domain *p2m,
         p = page_list_remove_head(&p2m->pod.super);
         mfn = mfn_x(page_to_mfn(p));
 
-        for ( i=0; i<SUPERPAGE_PAGES; i++ )
+        for ( i = 0; i < SUPERPAGE_PAGES; i++ )
         {
             q = mfn_to_page(_mfn(mfn + i));
             page_list_add_tail(q, &p2m->pod.single);
@@ -174,12 +175,12 @@ static struct page_info * p2m_pod_cache_get(struct p2m_domain *p2m,
     case PAGE_ORDER_2M:
         BUG_ON( page_list_empty(&p2m->pod.super) );
         p = page_list_remove_head(&p2m->pod.super);
-        p2m->pod.count -= 1 << order;
+        p2m->pod.count -= 1UL << order;
         break;
     case PAGE_ORDER_4K:
         BUG_ON( page_list_empty(&p2m->pod.single) );
         p = page_list_remove_head(&p2m->pod.single);
-        p2m->pod.count -= 1;
+        p2m->pod.count -= 1UL;
         break;
     default:
         BUG();
@@ -187,7 +188,7 @@ static struct page_info * p2m_pod_cache_get(struct p2m_domain *p2m,
 
     /* Put the pages back on the domain page_list */
     lock_page_alloc(p2m);
-    for ( i = 0 ; i < (1 << order); i++ )
+    for ( i = 0 ; i < (1UL << order); i++ )
     {
         BUG_ON(page_get_owner(p + i) != p2m->domain);
         page_list_add_tail(p + i, &p2m->domain->page_list);
@@ -251,7 +252,8 @@ p2m_pod_set_cache_target(struct p2m_domain *p2m, unsigned long pod_target, int p
     while ( pod_target < p2m->pod.count )
     {
         struct page_info * page;
-        int order, i;
+        unsigned int order;
+        unsigned long i;
 
         if ( (p2m->pod.count - pod_target) > SUPERPAGE_PAGES
              && !page_list_empty(&p2m->pod.super) )
@@ -264,10 +266,10 @@ p2m_pod_set_cache_target(struct p2m_domain *p2m, unsigned long pod_target, int p
 
         ASSERT(page != NULL);
 
         /* Then free them */
-        for ( i = 0 ; i < (1 << order) ; i++ )
+        for ( i = 0 ; i < (1UL << order) ; i++ )
         {
             /* Copied from common/memory.c:guest_remove_page() */
-            if ( unlikely(!get_page(page+i, d)) )
+            if ( unlikely(!get_page(page + i, d)) )
             {
                 gdprintk(XENLOG_INFO, "Bad page free for domain %u\n", d->domain_id);
                 ret = -EINVAL;
@@ -275,12 +277,12 @@ p2m_pod_set_cache_target(struct p2m_domain *p2m, unsigned long pod_target, int p
             }
 
             if ( test_and_clear_bit(_PGT_pinned, &(page+i)->u.inuse.type_info) )
-                put_page_and_type(page+i);
+                put_page_and_type(page + i);
 
             if ( test_and_clear_bit(_PGC_allocated, &(page+i)->count_info) )
-                put_page(page+i);
+                put_page(page + i);
 
-            put_page(page+i);
+            put_page(page + i);
 
             if ( preemptible && pod_target != p2m->pod.count &&
                  hypercall_preempt_check() )
@@ -513,7 +515,7 @@ p2m_pod_decrease_reservation(struct domain *d,
                              xen_pfn_t gpfn,
                              unsigned int order)
 {
-    int ret=0;
+    int ret = 0;
     unsigned long i, n;
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     bool_t steal_for_cache;
@@ -556,7 +558,7 @@ p2m_pod_decrease_reservation(struct domain *d,
     }
 
     /* No populate-on-demand?  Don't need to steal anything?  Then we're done!*/
-    if(!pod && !steal_for_cache)
+    if ( !pod && !steal_for_cache )
         goto out_unlock;
 
     if ( !nonpod )
@@ -567,7 +569,7 @@ p2m_pod_decrease_reservation(struct domain *d,
          */
         p2m_set_entry(p2m, gpfn, INVALID_MFN, order, p2m_invalid,
                       p2m->default_access);
-        p2m->pod.entry_count-=(1<<order);
+        p2m->pod.entry_count -= 1UL << order;
         BUG_ON(p2m->pod.entry_count < 0);
         ret = 1;
         goto out_entry_check;
@@ -581,10 +583,10 @@ p2m_pod_decrease_reservation(struct domain *d,
      * - order >= SUPERPAGE_ORDER (the loop below will take care of this)
      * - not all of the pages were RAM (now knowing order < SUPERPAGE_ORDER)
      */
-    if ( steal_for_cache && order < SUPERPAGE_ORDER && ram == (1 << order) &&
+    if ( steal_for_cache && order < SUPERPAGE_ORDER && ram == (1UL << order) &&
          p2m_pod_zero_check_superpage(p2m, gpfn & ~(SUPERPAGE_PAGES - 1)) )
     {
-        pod = 1 << order;
+        pod = 1UL << order;
         ram = nonpod = 0;
         ASSERT(steal_for_cache == (p2m->pod.entry_count > p2m->pod.count));
     }
@@ -625,7 +627,7 @@ p2m_pod_decrease_reservation(struct domain *d,
              * avoid breaking up superpages.
              */
             struct page_info *page;
-            unsigned int j;
+            unsigned long j;
 
             ASSERT(mfn_valid(mfn));
@@ -753,13 +755,13 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
     }
 
     /* Now, do a quick check to see if it may be zero before unmapping. */
-    for ( i=0; i<SUPERPAGE_PAGES; i++ )
+    for ( i = 0; i < SUPERPAGE_PAGES; i++ )
     {
         /* Quick zero-check */
         map = map_domain_page(_mfn(mfn_x(mfn0) + i));
 
-        for ( j=0; j<16; j++ )
-            if( *(map+j) != 0 )
+        for ( j = 0; j < 16; j++ )
+            if ( *(map + j) != 0 )
                 break;
 
         unmap_domain_page(map);
@@ -775,7 +777,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
      * via the grant table interface, or by qemu.  Allow one refcount for
      * being allocated to the domain.
      */
-    for ( i=0; i < SUPERPAGE_PAGES; i++ )
+    for ( i = 0; i < SUPERPAGE_PAGES; i++ )
     {
         mfn = _mfn(mfn_x(mfn0) + i);
         if ( (mfn_to_page(mfn)->count_info & PGC_count_mask) > 1 )
@@ -790,12 +792,12 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
     }
 
     /* Finally, do a full zero-check */
-    for ( i=0; i < SUPERPAGE_PAGES; i++ )
+    for ( i = 0; i < SUPERPAGE_PAGES; i++ )
     {
         map = map_domain_page(_mfn(mfn_x(mfn0) + i));
 
-        for ( j=0; j<PAGE_SIZE/sizeof(*map); j++ )
-            if( *(map+j) != 0 )
+        for ( j = 0; j < (PAGE_SIZE / sizeof(*map)); j++ )
+            if ( *(map + j) != 0 )
             {
                 reset = 1;
                 break;
@@ -845,7 +847,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
 {
     mfn_t mfns[count];
     p2m_type_t types[count];
-    unsigned long * map[count];
+    unsigned long *map[count];
     struct domain *d = p2m->domain;
 
     int i, j;
@@ -856,7 +858,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
         max_ref++;
 
     /* First, get the gfn list, translate to mfns, and map the pages. */
-    for ( i=0; i<count; i++ )
+    for ( i = 0; i < count; i++ )
    {
         p2m_access_t a;
         mfns[i] = p2m->get_entry(p2m, gfns[i], types + i, &a, 0, NULL, NULL);
@@ -877,14 +879,14 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
      * Then, go through and check for zeroed pages, removing write permission
      * for those with zeroes.
      */
-    for ( i=0; i<count; i++ )
+    for ( i = 0; i < count; i++ )
     {
         if ( !map[i] )
             continue;
 
         /* Quick zero-check */
-        for ( j=0; j<16; j++ )
-            if( *(map[i]+j) != 0 )
+        for ( j = 0; j < 16; j++ )
+            if ( *(map[i] + j) != 0 )
                 break;
 
         if ( j < 16 )
@@ -904,7 +906,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
         if ( j < PAGE_SIZE/sizeof(*map[i]) )
         {
             p2m_set_entry(p2m, gfns[i], mfns[i], PAGE_ORDER_4K,
-                types[i], p2m->default_access);
+                          types[i], p2m->default_access);
         }
         else
         {
@@ -968,7 +970,7 @@ static void
 p2m_pod_emergency_sweep(struct p2m_domain *p2m)
 {
     unsigned long gfns[POD_SWEEP_STRIDE];
-    unsigned long i, j=0, start, limit;
+    unsigned long i, j = 0, start, limit;
     p2m_type_t t;
 
@@ -985,7 +987,7 @@ p2m_pod_emergency_sweep(struct p2m_domain *p2m)
      * careful about spinlock recursion limits and POD_SWEEP_STRIDE.
      */
     p2m_lock(p2m);
-    for ( i=p2m->pod.reclaim_single; i > 0 ; i-- )
+    for ( i = p2m->pod.reclaim_single; i > 0 ; i-- )
     {
         p2m_access_t a;
         (void)p2m->get_entry(p2m, i, &t, &a, 0, NULL, NULL);
@@ -1079,7 +1081,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
     struct page_info *p = NULL; /* Compiler warnings */
     unsigned long gfn_aligned;
     mfn_t mfn;
-    int i;
+    unsigned long i;
 
     ASSERT(gfn_locked_by_me(p2m, gfn));
     pod_lock(p2m);
@@ -1143,7 +1145,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
 
     mfn = page_to_mfn(p);
 
-    BUG_ON((mfn_x(mfn) & ((1 << order)-1)) != 0);
+    BUG_ON((mfn_x(mfn) & ((1UL << order) - 1)) != 0);
 
     gfn_aligned = (gfn >> order) << order;
 
@@ -1156,7 +1158,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
             paging_mark_dirty(d, mfn_add(mfn, i));
     }
 
-    p2m->pod.entry_count -= (1 << order);
+    p2m->pod.entry_count -= (1UL << order);
     BUG_ON(p2m->pod.entry_count < 0);
 
     pod_eager_record(p2m, gfn_aligned, order);
@@ -1198,8 +1200,8 @@ remap_and_retry:
      * NOTE: In a p2m fine-grained lock scenario this might
      * need promoting the gfn lock from gfn->2M superpage.
      */
-    gfn_aligned = (gfn>>order)<<order;
-    for ( i = 0; i < (1 << order); i++ )
+    gfn_aligned = (gfn >> order) << order;
+    for ( i = 0; i < (1UL << order); i++ )
         p2m_set_entry(p2m, gfn_aligned + i, INVALID_MFN, PAGE_ORDER_4K,
                       p2m_populate_on_demand, p2m->default_access);
     if ( tb_init_done )
@@ -1262,7 +1264,7 @@ guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
     if ( rc == 0 )
     {
         pod_lock(p2m);
-        p2m->pod.entry_count += 1 << order;
+        p2m->pod.entry_count += 1UL << order;
         p2m->pod.entry_count -= pod_count;
         BUG_ON(p2m->pod.entry_count < 0);
         pod_unlock(p2m);
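A companion sketch for the induction-variable bullets (again a standalone
host-C illustration, not Xen code; count_pages is a hypothetical helper,
not a function from p2m-pod.c). A signed int counter measured against an
unsigned long bound cannot represent every value of (1UL << order), so for
large orders it would overflow (undefined behaviour) before the bound is
reached; widening the counter to unsigned long, as the patch does for the
loops above, removes that hazard along with the signed/unsigned comparison:

    #include <stdio.h>

    /* Walk a 2^order-page region with a counter wide enough to hold
     * every value of the unsigned long bound. */
    static unsigned long count_pages(unsigned int order)
    {
        unsigned long i, n = 0;

        /* With "int i" this loop would overflow i once order >= 31. */
        for ( i = 0; i < (1UL << order); i++ )
            n++;

        return n;
    }

    int main(void)
    {
        printf("%lu\n", count_pages(9)); /* 512 4K pages in a 2M superpage */
        return 0;
    }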