From patchwork Tue Jan 14 16:55:13 2014
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Tue, 14 Jan 2014 16:55:13 +0000
Message-ID: <1389718513-1638-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
Cc: julien.grall@linaro.org, tim@xen.org, Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH] xen: arm: flush TLB on all CPUs when setting or clearing fixmaps
These mappings are global and therefore need flushing on all processors.
Add flush_all_xen_data_tlb_range_va which accomplishes this.

Also update the comments in the other flush_xen_*_tlb functions to mention
that they operate on the local processor only.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/mm.c                |    4 ++--
 xen/include/asm-arm/arm32/page.h |   36 ++++++++++++++++++++++++++++++------
 xen/include/asm-arm/arm64/page.h |   35 +++++++++++++++++++++++++++++------
 3 files changed, 61 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 35af1ad..cddb174 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -234,7 +234,7 @@ void set_fixmap(unsigned map, unsigned long mfn, unsigned attributes)
     pte.pt.ai = attributes;
     pte.pt.xn = 1;
     write_pte(xen_fixmap + third_table_offset(FIXMAP_ADDR(map)), pte);
-    flush_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
+    flush_all_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
 }
 
 /* Remove a mapping from a fixmap entry */
@@ -242,7 +242,7 @@ void clear_fixmap(unsigned map)
 {
     lpae_t pte = {0};
     write_pte(xen_fixmap + third_table_offset(FIXMAP_ADDR(map)), pte);
-    flush_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
+    flush_all_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
 }
 
 #ifdef CONFIG_DOMAIN_PAGE
diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index cf12a89..533b253 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -23,7 +23,9 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
 #define __flush_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
 
 /*
- * Flush all hypervisor mappings from the TLB and branch predictor.
+ * Flush all hypervisor mappings from the TLB and branch predictor of
+ * the local processor.
+ *
  * This is needed after changing Xen code mappings.
  *
  * The caller needs to issue the necessary DSB and D-cache flushes
@@ -43,8 +45,9 @@ static inline void flush_xen_text_tlb(void)
 }
 
 /*
- * Flush all hypervisor mappings from the data TLB. This is not
- * sufficient when changing code mappings or for self modifying code.
+ * Flush all hypervisor mappings from the data TLB of the local
+ * processor. This is not sufficient when changing code mappings or
+ * for self modifying code.
  */
 static inline void flush_xen_data_tlb(void)
 {
@@ -57,10 +60,12 @@ static inline void flush_xen_data_tlb(void)
 }
 
 /*
- * Flush a range of VA's hypervisor mappings from the data TLB. This is not
- * sufficient when changing code mappings or for self modifying code.
+ * Flush a range of VA's hypervisor mappings from the data TLB of the
+ * local processor. This is not sufficient when changing code mappings
+ * or for self modifying code.
  */
-static inline void flush_xen_data_tlb_range_va(unsigned long va, unsigned long size)
+static inline void flush_xen_data_tlb_range_va(unsigned long va,
+                                               unsigned long size)
 {
     unsigned long end = va + size;
     dsb(); /* Ensure preceding are visible */
@@ -73,6 +78,25 @@ static inline void flush_xen_data_tlb_range_va(unsigned long va, unsigned long s
     isb();
 }
 
+/*
+ * Flush a range of VA's hypervisor mappings from the data TLB on all
+ * processors in the inner-shareable domain. This is not sufficient
+ * when changing code mappings or for self modifying code.
+ */
+static inline void flush_all_xen_data_tlb_range_va(unsigned long va,
+                                                   unsigned long size)
+{
+    unsigned long end = va + size;
+    dsb(); /* Ensure preceding are visible */
+    while ( va < end ) {
+        asm volatile(STORE_CP32(0, TLBIMVAHIS)
+                     : : "r" (va) : "memory");
+        va += PAGE_SIZE;
+    }
+    dsb(); /* Ensure completion of the TLB flush */
+    isb();
+}
+
 /* Ask the MMU to translate a VA for us */
 static inline uint64_t __va_to_par(vaddr_t va)
 {
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index 9551f90..42023cc 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -18,7 +18,8 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
 #define __flush_xen_dcache_one(R) "dc cvac, %" #R ";"
 
 /*
- * Flush all hypervisor mappings from the TLB
+ * Flush all hypervisor mappings from the TLB of the local processor.
+ *
  * This is needed after changing Xen code mappings.
 *
  * The caller needs to issue the necessary DSB and D-cache flushes
@@ -36,8 +37,9 @@ static inline void flush_xen_text_tlb(void)
 }
 
 /*
- * Flush all hypervisor mappings from the data TLB. This is not
- * sufficient when changing code mappings or for self modifying code.
+ * Flush all hypervisor mappings from the data TLB of the local
+ * processor. This is not sufficient when changing code mappings or
+ * for self modifying code.
  */
 static inline void flush_xen_data_tlb(void)
 {
@@ -50,10 +52,12 @@ static inline void flush_xen_data_tlb(void)
 }
 
 /*
- * Flush a range of VA's hypervisor mappings from the data TLB. This is not
- * sufficient when changing code mappings or for self modifying code.
+ * Flush a range of VA's hypervisor mappings from the data TLB of the
+ * local processor. This is not sufficient when changing code mappings
+ * or for self modifying code.
  */
-static inline void flush_xen_data_tlb_range_va(unsigned long va, unsigned long size)
+static inline void flush_xen_data_tlb_range_va(unsigned long va,
+                                               unsigned long size)
 {
     unsigned long end = va + size;
     dsb(); /* Ensure preceding are visible */
@@ -66,6 +70,25 @@ static inline void flush_xen_data_tlb_range_va(unsigned long va, unsigned long s
     isb();
 }
 
+/*
+ * Flush a range of VA's hypervisor mappings from the data TLB of all
+ * processors in the inner-shareable domain. This is not sufficient
+ * when changing code mappings or for self modifying code.
+ */
+static inline void flush_all_xen_data_tlb_range_va(unsigned long va,
+                                                   unsigned long size)
+{
+    unsigned long end = va + size;
+    dsb(); /* Ensure preceding are visible */
+    while ( va < end ) {
+        asm volatile("tlbi vae2is, %0;"
+                     : : "r" (va>>PAGE_SHIFT) : "memory");
+        va += PAGE_SIZE;
+    }
+    dsb(); /* Ensure completion of the TLB flush */
+    isb();
+}
+
 /* Ask the MMU to translate a VA for us */
 static inline uint64_t __va_to_par(vaddr_t va)
 {