From patchwork Thu Sep 15 11:28:30 2016
X-Patchwork-Submitter: Julien Grall <julien.grall@arm.com>
X-Patchwork-Id: 76274
From: Julien Grall <julien.grall@arm.com>
To: xen-devel@lists.xen.org
Date: Thu, 15 Sep 2016 12:28:30 +0100
Message-Id: <1473938919-31976-15-git-send-email-julien.grall@arm.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1473938919-31976-1-git-send-email-julien.grall@arm.com>
References: <1473938919-31976-1-git-send-email-julien.grall@arm.com>
Cc: proskurin@sec.in.tum.de, Julien Grall <julien.grall@arm.com>, sstabellini@kernel.org, steve.capper@arm.com, wei.chen@linaro.org
Subject: [Xen-devel] [for-4.8][PATCH v2 14/23] xen/arm: p2m: Re-implement p2m_cache_flush using p2m_get_entry
List-Id: Xen developer discussion
The function p2m_cache_flush can be re-implemented using the generic
function p2m_get_entry, by iterating over the range and using the
mapping order given by the callee.

As in the current implementation, no preemption is implemented,
although a comment in the current code claims it. As the function is
called by a DOMCTL with a region of 1GB maximum, I think preemption
can be left unimplemented for now.

Finally, drop the operation CACHEFLUSH in apply_one_level, as nobody
is using it anymore. Note that the function could have been dropped in
one go at the end, however I find it easier to drop the operations one
by one, avoiding a big deletion in the patch that converts the last
operation.

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    The loop pattern will be very similar for the relinquish function.
    It might be possible to extract it in a separate function.

    Changes in v2:
        - Introduce and use gfn_next_boundary
        - Flush all the mappings in a superpage rather than page by page
        - Update doc
---
 xen/arch/arm/p2m.c | 83 ++++++++++++++++++++++++++++++++----------------------
 1 file changed, 50 insertions(+), 33 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ddee258..fa58f1a 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -62,6 +62,22 @@ static inline void p2m_write_lock(struct p2m_domain *p2m)
     write_lock(&p2m->lock);
 }
 
+/*
+ * Return the start of the next mapping based on the order of the
+ * current one.
+ */
+static inline gfn_t gfn_next_boundary(gfn_t gfn, unsigned int order)
+{
+    /*
+     * The order corresponds to the order of the mapping (or invalid
+     * range) in the page table. So we need to align the GFN before
+     * incrementing.
+     */
+    gfn = _gfn(gfn_x(gfn) & ~((1UL << order) - 1));
+
+    return gfn_add(gfn, 1UL << order);
+}
+
 static void p2m_flush_tlb(struct p2m_domain *p2m);
 
 static inline void p2m_write_unlock(struct p2m_domain *p2m)
@@ -734,7 +750,6 @@ enum p2m_operation {
     INSERT,
     REMOVE,
     RELINQUISH,
-    CACHEFLUSH,
     MEMACCESS,
 };
 
@@ -993,36 +1008,6 @@ static int apply_one_level(struct domain *d,
          */
         return P2M_ONE_PROGRESS;
 
-    case CACHEFLUSH:
-        if ( !p2m_valid(orig_pte) )
-        {
-            *addr = (*addr + level_size) & level_mask;
-            return P2M_ONE_PROGRESS_NOP;
-        }
-
-        if ( level < 3 && p2m_table(orig_pte) )
-            return P2M_ONE_DESCEND;
-
-        /*
-         * could flush up to the next superpage boundary, but would
-         * need to be careful about preemption, so just do one 4K page
-         * now and return P2M_ONE_PROGRESS{,_NOP} so that the caller will
-         * continue to loop over the rest of the range.
-         */
-        if ( p2m_is_ram(orig_pte.p2m.type) )
-        {
-            unsigned long offset = paddr_to_pfn(*addr & ~level_mask);
-            flush_page_to_ram(orig_pte.p2m.base + offset);
-
-            *addr += PAGE_SIZE;
-            return P2M_ONE_PROGRESS;
-        }
-        else
-        {
-            *addr += PAGE_SIZE;
-            return P2M_ONE_PROGRESS_NOP;
-        }
-
     case MEMACCESS:
         if ( level < 3 )
         {
@@ -1571,12 +1556,44 @@ int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
     gfn_t end = gfn_add(start, nr);
+    gfn_t next_gfn;
+    p2m_type_t t;
+    unsigned int order;
 
     start = gfn_max(start, p2m->lowest_mapped_gfn);
     end = gfn_min(end, p2m->max_mapped_gfn);
 
-    return apply_p2m_changes(d, CACHEFLUSH, start, nr, INVALID_MFN,
-                             0, p2m_invalid, d->arch.p2m.default_access);
+    /*
+     * The operation cache flush will invalidate the RAM assigned to the
+     * guest in a given range. It will not modify the page table and
+     * flushing the cache whilst the page is used by another CPU is
+     * fine. So using read-lock is fine here.
+     */
+    p2m_read_lock(p2m);
+
+    for ( ; gfn_x(start) < gfn_x(end); start = next_gfn )
+    {
+        mfn_t mfn = p2m_get_entry(p2m, start, &t, NULL, &order);
+
+        next_gfn = gfn_next_boundary(start, order);
+
+        /* Skip hole and non-RAM page */
+        if ( mfn_eq(mfn, INVALID_MFN) || !p2m_is_ram(t) )
+            continue;
+
+        /* XXX: Implement preemption */
+        while ( gfn_x(start) < gfn_x(next_gfn) )
+        {
+            flush_page_to_ram(mfn_x(mfn));
+
+            start = gfn_add(start, 1);
+            mfn = mfn_add(mfn, 1);
+        }
+    }
+
+    p2m_read_unlock(p2m);
+
+    return 0;
 }
 
 mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)