From patchwork Thu Sep 15 11:28:36 2016
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 76290
From: Julien Grall
To: xen-devel@lists.xen.org
Date: Thu, 15 Sep 2016 12:28:36 +0100
Message-Id: <1473938919-31976-21-git-send-email-julien.grall@arm.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1473938919-31976-1-git-send-email-julien.grall@arm.com>
References: <1473938919-31976-1-git-send-email-julien.grall@arm.com>
Cc: proskurin@sec.in.tum.de, Julien Grall, sstabellini@kernel.org,
 steve.capper@arm.com, wei.chen@linaro.org
Subject: [Xen-devel] [for-4.8][PATCH v2 20/23] xen/arm: p2m: Re-implement p2m_insert_mapping using p2m_set_entry
List-Id: Xen developer discussion
The function p2m_insert_mapping can be re-implemented using the generic
function p2m_set_entry.

Note that the mapping is no longer reverted if Xen fails to insert it.
The revert was originally added to ensure MMIO regions are not left
half-mapped in case of failure, and to follow the x86 counterpart. It
was removed on the x86 side by commit c3c756bd "x86/p2m: use large
pages for MMIO mappings", and I think we should let the caller take
care of it.

Finally, drop the operation INSERT in apply_* as nobody is using it
anymore.

Note that the functions could have been dropped in one go at the end;
however, I find it easier to drop the operations one by one, avoiding a
big deletion in the patch that converts the last operation.

Signed-off-by: Julien Grall
Reviewed-by: Stefano Stabellini

---
Changes in v2:
    - Drop the todo about safety checks (similar to x86) as we are not
      mandated to protect a guest from its own dumbness as long as it
      does not impact Xen's internal reference counting (e.g. foreign).
    - Add Stefano's Reviewed-by
    - Fix typo
---
 xen/arch/arm/p2m.c | 143 +++--------------------------------------------------
 1 file changed, 8 insertions(+), 135 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 6c9a6b2..734923b 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -752,7 +752,6 @@ static int p2m_mem_access_radix_set(struct p2m_domain *p2m, gfn_t gfn,
 }
 
 enum p2m_operation {
-    INSERT,
     MEMACCESS,
 };
 
@@ -1155,41 +1154,6 @@ int p2m_set_entry(struct p2m_domain *p2m,
     return rc;
 }
 
-/*
- * Returns true if start_gpaddr..end_gpaddr contains at least one
- * suitably aligned level_size mappping of maddr.
- *
- * So long as the range is large enough the end_gpaddr need not be
- * aligned (callers should create one superpage mapping based on this
- * result and then call this again on the new range, eventually the
- * slop at the end will cause this function to return false).
- */
-static bool_t is_mapping_aligned(const paddr_t start_gpaddr,
-                                 const paddr_t end_gpaddr,
-                                 const paddr_t maddr,
-                                 const paddr_t level_size)
-{
-    const paddr_t level_mask = level_size - 1;
-
-    /* No hardware superpages at level 0 */
-    if ( level_size == ZEROETH_SIZE )
-        return false;
-
-    /*
-     * A range smaller than the size of a superpage at this level
-     * cannot be superpage aligned.
-     */
-    if ( ( end_gpaddr - start_gpaddr ) < level_size - 1 )
-        return false;
-
-    /* Both the gpaddr and maddr must be aligned */
-    if ( start_gpaddr & level_mask )
-        return false;
-    if ( maddr & level_mask )
-        return false;
-    return true;
-}
-
 #define P2M_ONE_DESCEND 0
 #define P2M_ONE_PROGRESS_NOP 0x1
 #define P2M_ONE_PROGRESS 0x10
@@ -1241,80 +1205,6 @@ static int apply_one_level(struct domain *d,
 
     switch ( op )
     {
-    case INSERT:
-        if ( is_mapping_aligned(*addr, end_gpaddr, *maddr, level_size) &&
-           /*
-            * We do not handle replacing an existing table with a superpage
-            * or when mem_access is in use.
-            */
-             (level == 3 || (!p2m_table(orig_pte) && !p2m->mem_access_enabled)) )
-        {
-            rc = p2m_mem_access_radix_set(p2m, _gfn(paddr_to_pfn(*addr)), a);
-            if ( rc < 0 )
-                return rc;
-
-            /* New mapping is superpage aligned, make it */
-            pte = mfn_to_p2m_entry(_mfn(*maddr >> PAGE_SHIFT), t, a);
-            if ( level < 3 )
-                pte.p2m.table = 0; /* Superpage entry */
-
-            p2m_write_pte(entry, pte, p2m->clean_pte);
-
-            *flush |= p2m_valid(orig_pte);
-
-            *addr += level_size;
-            *maddr += level_size;
-
-            if ( p2m_valid(orig_pte) )
-            {
-                /*
-                 * We can't currently get here for an existing table
-                 * mapping, since we don't handle replacing an
-                 * existing table with a superpage. If we did we would
-                 * need to handle freeing (and accounting) for the bit
-                 * of the p2m tree which we would be about to lop off.
-                 */
-                BUG_ON(level < 3 && p2m_table(orig_pte));
-                if ( level == 3 )
-                    p2m_put_l3_page(orig_pte);
-            }
-            else /* New mapping */
-                p2m->stats.mappings[level]++;
-
-            return P2M_ONE_PROGRESS;
-        }
-        else
-        {
-            /* New mapping is not superpage aligned, create a new table entry */
-
-            /* L3 is always suitably aligned for mapping (handled, above) */
-            BUG_ON(level == 3);
-
-            /* Not present -> create table entry and descend */
-            if ( !p2m_valid(orig_pte) )
-            {
-                rc = p2m_create_table(p2m, entry, 0);
-                if ( rc < 0 )
-                    return rc;
-                return P2M_ONE_DESCEND;
-            }
-
-            /* Existing superpage mapping -> shatter and descend */
-            if ( p2m_mapping(orig_pte) )
-            {
-                *flush = true;
-                rc = p2m_shatter_page(p2m, entry, level);
-                if ( rc < 0 )
-                    return rc;
-            } /* else: an existing table mapping -> descend */
-
-            BUG_ON(!p2m_table(*entry));
-
-            return P2M_ONE_DESCEND;
-        }
-
-        break;
-
     case MEMACCESS:
         if ( level < 3 )
         {
@@ -1528,13 +1418,6 @@ static int apply_p2m_changes(struct domain *d,
             BUG_ON(level > 3);
         }
 
-        if ( op == INSERT )
-        {
-            p2m->max_mapped_gfn = gfn_max(p2m->max_mapped_gfn,
-                                          gfn_add(sgfn, nr));
-            p2m->lowest_mapped_gfn = gfn_min(p2m->lowest_mapped_gfn, sgfn);
-        }
-
         rc = 0;
 
 out:
@@ -1557,22 +1440,6 @@ out:
 
     p2m_write_unlock(p2m);
 
-    if ( rc < 0 && ( op == INSERT ) &&
-         addr != start_gpaddr )
-    {
-        unsigned long gfn = paddr_to_pfn(addr);
-
-        BUG_ON(addr == end_gpaddr);
-        /*
-         * addr keeps the address of the end of the last successfully-inserted
-         * mapping.
-         */
-        p2m_write_lock(p2m);
-        p2m_set_entry(p2m, sgfn, gfn - gfn_x(sgfn), INVALID_MFN,
-                      p2m_invalid, p2m_access_rwx);
-        p2m_write_unlock(p2m);
-    }
-
     return rc;
 }
 
@@ -1582,8 +1449,14 @@ static inline int p2m_insert_mapping(struct domain *d,
                                      mfn_t mfn,
                                      p2m_type_t t)
 {
-    return apply_p2m_changes(d, INSERT, start_gfn, nr, mfn,
-                             0, t, d->arch.p2m.default_access);
+    struct p2m_domain *p2m = &d->arch.p2m;
+    int rc;
+
+    p2m_write_lock(p2m);
+    rc = p2m_set_entry(p2m, start_gfn, nr, mfn, t, p2m->default_access);
+    p2m_write_unlock(p2m);
+
+    return rc;
 }
 
 static inline int p2m_remove_mapping(struct domain *d,