From patchwork Mon Mar 16 12:55:00 2015
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 45846
From: Christoffer Dall <christoffer.dall@linaro.org>
To: Paolo Bonzini, Marcelo Tosatti
Cc: linux-arm-kernel@lists.infradead.org, Marc Zyngier, Christoffer Dall,
	kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
Subject: [GIT PULL 2/5] arm64: KVM: Do not use pgd_index to index stage-2 pgd
Date: Mon, 16 Mar 2015 13:55:00 +0100
Message-Id: <1426510503-27995-3-git-send-email-christoffer.dall@linaro.org>
In-Reply-To: <1426510503-27995-1-git-send-email-christoffer.dall@linaro.org>
References: <1426510503-27995-1-git-send-email-christoffer.dall@linaro.org>
X-Mailer: git-send-email 2.1.2.330.g565301e.dirty

From: Marc Zyngier

The kernel's pgd_index macro is designed to index a normal, page-sized
array. KVM is a bit different, as we can use concatenated pages to have
a bigger address space (for example, a 40bit IPA with 4kB pages gives
us an 8kB PGD).
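(Not part of the patch: a minimal userspace sketch of the arithmetic,
assuming arm64 with 4kB pages and a 39-bit host VA space, i.e.
PGDIR_SHIFT == 30 and a 512-entry host PGD, while the 40bit IPA needs
1024 stage-2 entries:

/*
 * Illustrative sketch only, not part of this patch.  The values assume
 * arm64 with 4kB pages and a 39-bit host VA space, so PGDIR_SHIFT is 30
 * and the host PGD holds 512 entries (one 4kB page), while a 40bit IPA
 * needs 1024 stage-2 entries (8kB, two concatenated pages).
 */
#include <stdio.h>
#include <stdint.h>

#define PGDIR_SHIFT		30
#define PTRS_PER_PGD		512	/* host PGD: one 4kB page */
#define PTRS_PER_S2_PGD		1024	/* 40bit IPA: two concatenated pages */

#define pgd_index(addr)		(((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
#define kvm_pgd_index(addr)	(((addr) >> PGDIR_SHIFT) & (PTRS_PER_S2_PGD - 1))

int main(void)
{
	uint64_t ipa = 0x8000000000ULL;	/* first IPA past the lower 512 entries */

	/* pgd_index() wraps back to entry 0; kvm_pgd_index() reaches entry 512 */
	printf("pgd_index(0x%llx)     = %llu\n",
	       (unsigned long long)ipa, (unsigned long long)pgd_index(ipa));
	printf("kvm_pgd_index(0x%llx) = %llu\n",
	       (unsigned long long)ipa, (unsigned long long)kvm_pgd_index(ipa));
	return 0;
}

Built with a plain cc, it prints index 0 for pgd_index() and 512 for
kvm_pgd_index() at IPA 0x8000000000, which is the wrap-around described
below.)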
In the above case, the use of pgd_index will always return an index
inside the first 4kB, which makes a guest that has memory above
0x8000000000 rather unhappy, as it spins forever in a page fault,
whilst the host happily corrupts the lower pgd.

The obvious fix is to get our own kvm_pgd_index that does the right
thing(tm).

Tested on X-Gene with a hacked kvmtool that put memory at a stupidly
high address.

Reviewed-by: Christoffer Dall
Signed-off-by: Marc Zyngier
Signed-off-by: Christoffer Dall
---
 arch/arm/include/asm/kvm_mmu.h   | 3 ++-
 arch/arm/kvm/mmu.c               | 8 ++++----
 arch/arm64/include/asm/kvm_mmu.h | 2 ++
 3 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index c57c41dc..4cf48c3 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -149,13 +149,14 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
 	(__boundary - 1 < (end) - 1)? __boundary: (end);		\
 })
 
+#define kvm_pgd_index(addr)		pgd_index(addr)
+
 static inline bool kvm_page_empty(void *ptr)
 {
 	struct page *ptr_page = virt_to_page(ptr);
 	return page_count(ptr_page) == 1;
 }
 
-
 #define kvm_pte_table_empty(kvm, ptep) kvm_page_empty(ptep)
 #define kvm_pmd_table_empty(kvm, pmdp) kvm_page_empty(pmdp)
 #define kvm_pud_table_empty(kvm, pudp) (0)
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index a48a73c..5656d79 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -290,7 +290,7 @@ static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
 	phys_addr_t addr = start, end = start + size;
 	phys_addr_t next;
 
-	pgd = pgdp + pgd_index(addr);
+	pgd = pgdp + kvm_pgd_index(addr);
 	do {
 		next = kvm_pgd_addr_end(addr, end);
 		if (!pgd_none(*pgd))
@@ -355,7 +355,7 @@ static void stage2_flush_memslot(struct kvm *kvm,
 	phys_addr_t next;
 	pgd_t *pgd;
 
-	pgd = kvm->arch.pgd + pgd_index(addr);
+	pgd = kvm->arch.pgd + kvm_pgd_index(addr);
 	do {
 		next = kvm_pgd_addr_end(addr, end);
 		stage2_flush_puds(kvm, pgd, addr, next);
@@ -830,7 +830,7 @@ static pud_t *stage2_get_pud(struct kvm *kvm, struct kvm_mmu_memory_cache *cache
 	pgd_t *pgd;
 	pud_t *pud;
 
-	pgd = kvm->arch.pgd + pgd_index(addr);
+	pgd = kvm->arch.pgd + kvm_pgd_index(addr);
 	if (WARN_ON(pgd_none(*pgd))) {
 		if (!cache)
 			return NULL;
@@ -1120,7 +1120,7 @@ static void stage2_wp_range(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
 	pgd_t *pgd;
 	phys_addr_t next;
 
-	pgd = kvm->arch.pgd + pgd_index(addr);
+	pgd = kvm->arch.pgd + kvm_pgd_index(addr);
 	do {
 		/*
 		 * Release kvm_mmu_lock periodically if the memory region is
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index a099cd9..bbfb600 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -158,6 +158,8 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
 #define PTRS_PER_S2_PGD		(1 << PTRS_PER_S2_PGD_SHIFT)
 #define S2_PGD_ORDER		get_order(PTRS_PER_S2_PGD * sizeof(pgd_t))
 
+#define kvm_pgd_index(addr)	(((addr) >> PGDIR_SHIFT) & (PTRS_PER_S2_PGD - 1))
+
 /*
  * If we are concatenating first level stage-2 page tables, we would have less
  * than or equal to 16 pointers in the fake PGD, because that's what the