Date: Wed, 11 Mar 2015 13:16:35 +0100
From: Christoffer Dall
To: Marc Zyngier
Cc: Mark Rutland, kvmarm@lists.cs.columbia.edu,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v2 1/3] arm64: KVM: Fix stage-2 PGD allocation to have per-page refcounting
Message-ID: <20150311121635.GA12888@cbox>
References: <1426014421-5579-1-git-send-email-marc.zyngier@arm.com>
 <1426014421-5579-2-git-send-email-marc.zyngier@arm.com>
In-Reply-To: <1426014421-5579-2-git-send-email-marc.zyngier@arm.com>
User-Agent: Mutt/1.5.21 (2010-09-15)

Hi Marc,

On Tue, Mar 10, 2015 at 07:06:59PM +0000, Marc Zyngier wrote:
> We're using __get_free_pages to allocate the guest's stage-2
> PGD. The standard behaviour of this function is to return a set of
> pages where only the head page has a valid refcount.
>
> This behaviour gets us into trouble when we're trying to increment
> the refcount on a non-head page:
>
> page:ffff7c00cfb693c0 count:0 mapcount:0 mapping: (null) index:0x0
> flags: 0x4000000000000000()
> page dumped because: VM_BUG_ON_PAGE((*({ __attribute__((unused)) typeof((&page->_count)->counter) __var = ( typeof((&page->_count)->counter)) 0; (volatile typeof((&page->_count)->counter) *)&((&page->_count)->counter); })) <= 0)
> BUG: failure at include/linux/mm.h:548/get_page()!
> Kernel panic - not syncing: BUG!
> CPU: 1 PID: 1695 Comm: kvm-vcpu-0 Not tainted 4.0.0-rc1+ #3825
> Hardware name: APM X-Gene Mustang board (DT)
> Call trace:
> [] dump_backtrace+0x0/0x13c
> [] show_stack+0x10/0x1c
> [] dump_stack+0x74/0x94
> [] panic+0x100/0x240
> [] stage2_get_pmd+0x17c/0x2bc
> [] kvm_handle_guest_abort+0x4b4/0x6b0
> [] handle_exit+0x58/0x180
> [] kvm_arch_vcpu_ioctl_run+0x114/0x45c
> [] kvm_vcpu_ioctl+0x2e0/0x754
> [] do_vfs_ioctl+0x424/0x5c8
> [] SyS_ioctl+0x40/0x78
> CPU0: stopping
>
> A possible approach for this is to split the compound page using
> split_page() at allocation time, and change the teardown path to
> free one page at a time.
>
> While we're at it, the PGD allocation code is reworked to reduce
> duplication.
>
> This has been tested on an X-Gene platform with a 4kB/48bit-VA host
> kernel, and kvmtool hacked to place memory in the second page of
> the hardware PGD (PUD for the host kernel). Also regression-tested
> on a Cubietruck (Cortex-A7).
>
> Reported-by: Mark Rutland
> Signed-off-by: Marc Zyngier
> ---
>  arch/arm/include/asm/kvm_mmu.h   |  9 ++---
>  arch/arm/kvm/mmu.c               | 77 +++++++++++++++++++++++++++++++---------
>  arch/arm64/include/asm/kvm_mmu.h | 46 +++---------------------
>  3 files changed, 66 insertions(+), 66 deletions(-)
>
> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
> index 0187606..ff56f91 100644
> --- a/arch/arm/include/asm/kvm_mmu.h
> +++ b/arch/arm/include/asm/kvm_mmu.h
> @@ -162,18 +162,13 @@ static inline bool kvm_page_empty(void *ptr)
>
>  #define KVM_PREALLOC_LEVEL	0
>
> -static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd)
> -{
> -	return 0;
> -}
> -
> -static inline void kvm_free_hwpgd(struct kvm *kvm) { }
> -
>  static inline void *kvm_get_hwpgd(struct kvm *kvm)
>  {
>  	return kvm->arch.pgd;
>  }
>
> +static inline unsigned int kvm_get_hwpgd_order(void) { return S2_PGD_ORDER; }
> +
>  struct kvm;
>
>  #define kvm_flush_dcache_to_poc(a,l)	__cpuc_flush_dcache_area((a), (l))
> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> index 69c2b4c..0a5457c 100644
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -634,6 +634,31 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t phys_addr)
>  				 __phys_to_pfn(phys_addr), PAGE_HYP_DEVICE);
>  }
>
> +/* Free the HW pgd, one page at a time */
> +static void kvm_free_hwpgd(unsigned long hwpgd)
> +{
> +	int i;
> +
> +	for (i = 0; i < (1 << kvm_get_hwpgd_order()); i += PAGE_SIZE)
> +		free_page(hwpgd + i);
> +}
> +
> +/* Allocate the HW PGD, making sure that each page gets its own refcount */
> +static int kvm_alloc_hwpgd(unsigned long *hwpgdp)

I think this can be simplified somewhat by just returning an unsigned
long that can be 0 in the error case, which will make the caller look a
little nicer too.
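
Something like the below, maybe (an untested sketch of the shape I have
in mind, not part of the patch; it reuses your kvm_get_hwpgd_order()
helper as-is):

	/* sketch only: return the PGD address directly, 0 meaning failure */
	static unsigned long kvm_alloc_hwpgd(void)
	{
		unsigned long hwpgd;
		unsigned int order = kvm_get_hwpgd_order();

		hwpgd = __get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
		if (hwpgd)
			split_page(virt_to_page((void *)hwpgd), order);

		return hwpgd;
	}

so that the caller simply becomes:

	hwpgd = kvm_alloc_hwpgd();
	if (!hwpgd)
		return -ENOMEM;
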
> +{
> +	unsigned long hwpgd;
> +	unsigned int order = kvm_get_hwpgd_order();
> +
> +	hwpgd = __get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
> +	if (!hwpgd)
> +		return -ENOMEM;
> +
> +	split_page(virt_to_page((void *)hwpgd), order);

nit: alloc_pages_exact() and free_pages_exact() seem to do this for us;
is it worth using those instead?

It would look something like this on top of your changes:

 arch/arm/include/asm/kvm_mmu.h   |  5 ++++-
 arch/arm/kvm/mmu.c               | 32 ++++++++++----------------------
 arch/arm64/include/asm/kvm_mmu.h |  6 +++---
 3 files changed, 17 insertions(+), 26 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index fc05ba8..4cf48c3 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -168,7 +168,10 @@ static inline void *kvm_get_hwpgd(struct kvm *kvm)
 	return kvm->arch.pgd;
 }
 
-static inline unsigned int kvm_get_hwpgd_order(void) { return S2_PGD_ORDER; }
+static inline unsigned int kvm_get_hwpgd_size(void)
+{
+	return PTRS_PER_S2_PGD * sizeof(pgd_t);
+}
 
 struct kvm;
 
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 8e91bea3..5656d79 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -633,28 +633,17 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t phys_addr)
 }
 
 /* Free the HW pgd, one page at a time */
-static void kvm_free_hwpgd(unsigned long hwpgd)
+static void kvm_free_hwpgd(void *hwpgd)
 {
-	int i;
-
-	for (i = 0; i < (1 << kvm_get_hwpgd_order()); i += PAGE_SIZE)
-		free_page(hwpgd + i);
+	free_pages_exact(hwpgd, kvm_get_hwpgd_size());
 }
 
 /* Allocate the HW PGD, making sure that each page gets its own refcount */
-static int kvm_alloc_hwpgd(unsigned long *hwpgdp)
+static void *kvm_alloc_hwpgd(void)
 {
-	unsigned long hwpgd;
-	unsigned int order = kvm_get_hwpgd_order();
-
-	hwpgd = __get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
-	if (!hwpgd)
-		return -ENOMEM;
-
-	split_page(virt_to_page((void *)hwpgd), order);
+	unsigned int size = kvm_get_hwpgd_size();
 
-	*hwpgdp = hwpgd;
-	return 0;
+	return alloc_pages_exact(size, GFP_KERNEL | __GFP_ZERO);
 }
 
 /**
@@ -670,18 +659,17 @@ static int kvm_alloc_hwpgd(unsigned long *hwpgdp)
  */
 int kvm_alloc_stage2_pgd(struct kvm *kvm)
 {
-	int ret;
 	pgd_t *pgd;
-	unsigned long hwpgd;
+	void *hwpgd;
 
 	if (kvm->arch.pgd != NULL) {
 		kvm_err("kvm_arch already initialized?\n");
 		return -EINVAL;
 	}
 
-	ret = kvm_alloc_hwpgd(&hwpgd);
-	if (ret)
-		return ret;
+	hwpgd = kvm_alloc_hwpgd();
+	if (!hwpgd)
+		return -ENOMEM;
 
 	/* When the kernel uses more levels of page tables than the
 	 * guest, we allocate a fake PGD and pre-populate it to point
@@ -829,7 +817,7 @@ void kvm_free_stage2_pgd(struct kvm *kvm)
 		return;
 
 	unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE);
-	kvm_free_hwpgd((unsigned long)kvm_get_hwpgd(kvm));
+	kvm_free_hwpgd(kvm_get_hwpgd(kvm));
 	if (KVM_PREALLOC_LEVEL > 0)
 		kfree(kvm->arch.pgd);
 
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 3668110..bbfb600 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -189,11 +189,11 @@ static inline void *kvm_get_hwpgd(struct kvm *kvm)
 	return pmd_offset(pud, 0);
 }
 
-static inline unsigned int kvm_get_hwpgd_order(void)
+static inline unsigned int kvm_get_hwpgd_size(void)
 {
 	if (KVM_PREALLOC_LEVEL > 0)
-		return PTRS_PER_S2_PGD_SHIFT;
-	return S2_PGD_ORDER;
+		return PTRS_PER_S2_PGD * PAGE_SIZE;
+	return PTRS_PER_S2_PGD * sizeof(pgd_t);
 }
 
 static inline bool kvm_page_empty(void *ptr)
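
For reference, the reason alloc_pages_exact() is sufficient here is that
it already does the __get_free_pages() + split_page() dance internally,
so every page in the returned range carries its own refcount. Roughly
like this (a simplified sketch from my reading of mm/page_alloc.c, not
the literal code):

	void *alloc_pages_exact(size_t size, gfp_t gfp_mask)
	{
		unsigned int order = get_order(size);
		unsigned long addr = __get_free_pages(gfp_mask, order);

		if (addr) {
			unsigned long alloc_end = addr + (PAGE_SIZE << order);
			unsigned long used = addr + PAGE_ALIGN(size);

			/* give each page in the block its own refcount... */
			split_page(virt_to_page((void *)addr), order);

			/* ...and hand back the pages beyond the requested size */
			while (used < alloc_end) {
				free_page(used);
				used += PAGE_SIZE;
			}
		}
		return (void *)addr;
	}

free_pages_exact() is then just a free_page() loop over the same range,
which matches the teardown your kvm_free_hwpgd() open-codes above.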