From patchwork Tue May 13 09:28:02 2014
X-Patchwork-Submitter: Jiri Slaby
X-Patchwork-Id: 30025
From: Jiri Slaby
To: stable@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Mark Salter, Christoffer Dall, Jiri Slaby
Subject: [PATCH 3.12 130/182] arm: KVM: fix possible misalignment of PGDs and bounce page
Date: Tue, 13 May 2014 11:28:02 +0200
Message-Id: <9de8b0ca7dfc5c14fb25a74d35a5892a5ff25e69.1399973152.git.jslaby@suse.cz>
X-Mailer: git-send-email 1.9.3
In-Reply-To: <7aa1bde8f6143b2db33e6567a8c3a4debaa246f4.1399973152.git.jslaby@suse.cz>
References: <7aa1bde8f6143b2db33e6567a8c3a4debaa246f4.1399973152.git.jslaby@suse.cz>

From: Mark Salter

3.12-stable review patch.  If anyone has any objections, please let me know.

===============

commit 5d4e08c45a6cf8f1ab3c7fa375007635ac569165 upstream.

The kvm/mmu code shared by arm and arm64 uses kmalloc() to allocate
a bounce page (if the hypervisor init code crosses a page boundary)
and the hypervisor PGDs. The problem is that kmalloc() does not
guarantee the required alignment. In the case of the bounce page,
the page-sized buffer allocated may itself cross a page boundary,
negating its purpose and leading to a hang during KVM initialization.
Likewise, the PGDs allocated may not meet the minimum alignment
requirements of the underlying MMU.

This patch uses __get_free_page() and __get_free_pages() to guarantee
the worst-case alignment needs of the bounce page and the PGDs on
both arm and arm64.
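The failure mode is easy to demonstrate outside the kernel. Below is a
minimal user-space sketch (an analogy only, not the patched code):
malloc() stands in for kmalloc() and aligned_alloc() for
__get_free_page(). A page-sized buffer with no alignment guarantee may
start mid-page and straddle a page boundary; a page-aligned one never
does.

/*
 * align_demo.c -- user-space analogy only. malloc() plays the role
 * of kmalloc(), aligned_alloc() the role of __get_free_page().
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);

	/* like kmalloc(PAGE_SIZE, GFP_KERNEL): alignment unspecified */
	char *bounce = malloc(page);
	/* like __get_free_page(): starts exactly on a page boundary */
	char *aligned = aligned_alloc(page, page);

	uintptr_t first = (uintptr_t)bounce / page;
	uintptr_t last = ((uintptr_t)bounce + page - 1) / page;

	printf("unaligned buffer spans two pages: %s\n",
	       first != last ? "yes" : "no");
	printf("aligned buffer offset within its page: %zu\n",
	       (size_t)((uintptr_t)aligned % page));

	free(bounce);
	free(aligned);
	return 0;
}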
Signed-off-by: Mark Salter
Acked-by: Marc Zyngier
Signed-off-by: Christoffer Dall
Signed-off-by: Jiri Slaby
---
 arch/arm/kvm/mmu.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index cb79a5dd6d96..fe59e4a19022 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -41,6 +41,8 @@ static unsigned long hyp_idmap_start;
 static unsigned long hyp_idmap_end;
 static phys_addr_t hyp_idmap_vector;
 
+#define pgd_order get_order(PTRS_PER_PGD * sizeof(pgd_t))
+
 static void kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
 {
 	/*
@@ -172,14 +174,14 @@ void free_boot_hyp_pgd(void)
 	if (boot_hyp_pgd) {
 		unmap_range(NULL, boot_hyp_pgd, hyp_idmap_start, PAGE_SIZE);
 		unmap_range(NULL, boot_hyp_pgd, TRAMPOLINE_VA, PAGE_SIZE);
-		kfree(boot_hyp_pgd);
+		free_pages((unsigned long)boot_hyp_pgd, pgd_order);
 		boot_hyp_pgd = NULL;
 	}
 
 	if (hyp_pgd)
 		unmap_range(NULL, hyp_pgd, TRAMPOLINE_VA, PAGE_SIZE);
 
-	kfree(init_bounce_page);
+	free_page((unsigned long)init_bounce_page);
 	init_bounce_page = NULL;
 
 	mutex_unlock(&kvm_hyp_pgd_mutex);
@@ -209,7 +211,7 @@ void free_hyp_pgds(void)
 		for (addr = VMALLOC_START; is_vmalloc_addr((void*)addr); addr += PGDIR_SIZE)
 			unmap_range(NULL, hyp_pgd, KERN_TO_HYP(addr), PGDIR_SIZE);
 
-		kfree(hyp_pgd);
+		free_pages((unsigned long)hyp_pgd, pgd_order);
 		hyp_pgd = NULL;
 	}
 
@@ -781,7 +783,7 @@ int kvm_mmu_init(void)
 		size_t len = __hyp_idmap_text_end - __hyp_idmap_text_start;
 		phys_addr_t phys_base;
 
-		init_bounce_page = kmalloc(PAGE_SIZE, GFP_KERNEL);
+		init_bounce_page = (void *)__get_free_page(GFP_KERNEL);
 		if (!init_bounce_page) {
 			kvm_err("Couldn't allocate HYP init bounce page\n");
 			err = -ENOMEM;
@@ -807,8 +809,9 @@ int kvm_mmu_init(void)
 			 (unsigned long)phys_base);
 	}
 
-	hyp_pgd = kzalloc(PTRS_PER_PGD * sizeof(pgd_t), GFP_KERNEL);
-	boot_hyp_pgd = kzalloc(PTRS_PER_PGD * sizeof(pgd_t), GFP_KERNEL);
+	hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, pgd_order);
+	boot_hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, pgd_order);
+
 	if (!hyp_pgd || !boot_hyp_pgd) {
 		kvm_err("Hyp mode PGD not allocated\n");
 		err = -ENOMEM;
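For reference, the pgd_order macro introduced above expands to
get_order(PTRS_PER_PGD * sizeof(pgd_t)): the smallest order n such
that (PAGE_SIZE << n) covers the PGD. __get_free_pages() returns 2^n
contiguous pages naturally aligned to their own size, which is what
satisfies the MMU's alignment requirement. Below is a user-space
sketch of that rounding; it is an illustrative re-implementation, not
the kernel macro (which lives in include/asm-generic/getorder.h), and
it assumes 4 KiB pages.

#include <stdio.h>

#define PAGE_SHIFT 12			/* assume 4 KiB pages */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Smallest n such that (PAGE_SIZE << n) >= size; size must be > 0. */
static unsigned int order_of(unsigned long size)
{
	unsigned int n = 0;

	size = (size - 1) >> PAGE_SHIFT;	/* whole pages needed, minus one */
	while (size) {				/* round page count up to 2^n */
		n++;
		size >>= 1;
	}
	return n;
}

int main(void)
{
	/*
	 * Example: a classic (non-LPAE) 32-bit ARM first-level table is
	 * 2048 entries of 8 bytes = 16 KiB, so order 2: four contiguous
	 * pages whose base address is 16 KiB aligned.
	 */
	printf("order(16 KiB PGD) = %u\n", order_of(2048 * 8UL));
	printf("order(one page)   = %u\n", order_of(PAGE_SIZE));
	return 0;
}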