From patchwork Tue Mar 10 18:52:56 2015
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 45608
From: Ard Biesheuvel
To: will.deacon@arm.com, marc.zyngier@arm.com, linux-arm-kernel@lists.infradead.org,
	catalin.marinas@arm.com, linux@arm.linux.org.uk
Cc: arnd@arndb.de, Ard Biesheuvel
Subject: [PATCH v4 3/4] ARM, arm64: kvm: get rid of the bounce page
Date: Tue, 10 Mar 2015 19:52:56 +0100
Message-Id: <1426013577-32081-4-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1426013577-32081-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1426013577-32081-1-git-send-email-ard.biesheuvel@linaro.org>

The HYP init bounce page is a runtime construct that ensures that the
HYP init code does not cross a page boundary. However, this is something
we can do perfectly well at build time, by aligning the code appropriately.
For arm64, we just align to 4 KB, and enforce that the code size is less
than 4 KB, regardless of the chosen page size.

For ARM, the whole code is less than 256 bytes, so we tweak the linker
script to align at a power of 2 upper bound of the code size.

Note that this also fixes a benign off-by-one error in the original bounce
page code, where a bounce page would be allocated unnecessarily if the
code was exactly 1 page in size.

Tested-by: Marc Zyngier
Reviewed-by: Marc Zyngier
Signed-off-by: Ard Biesheuvel
---
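Note (not part of the patch): the ARM linker script change below relies on the
property that a blob placed at an address aligned to the next power of 2 above
its size can never straddle a page boundary. The small userspace sketch below
brute-forces that property; align_for_size() is a hypothetical helper that
loosely mirrors the HYP_IDMAP_ALIGN ternary chain added further down, ignoring
the empty-section case.

#include <stdio.h>

#define PAGE_SIZE	0x1000UL

/* hypothetical helper, loosely mirroring HYP_IDMAP_ALIGN in the linker script */
static unsigned long align_for_size(unsigned long size)
{
	unsigned long align = 0x100;

	while (align < size)
		align <<= 1;
	return align > PAGE_SIZE ? PAGE_SIZE : align;
}

int main(void)
{
	unsigned long size, start;

	for (size = 1; size <= PAGE_SIZE; size++) {
		unsigned long align = align_for_size(size);

		/* try every aligned placement within one page */
		for (start = 0; start < PAGE_SIZE; start += align) {
			if (start / PAGE_SIZE != (start + size - 1) / PAGE_SIZE) {
				printf("crossing: size %#lx start %#lx\n", size, start);
				return 1;
			}
		}
	}
	printf("no page crossing for any size <= PAGE_SIZE\n");
	return 0;
}

In the patch itself this guarantee is enforced at build time by the ASSERT()
in the ARM linker script.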

 arch/arm/kernel/vmlinux.lds.S   | 25 +++++++++++++++++++++---
 arch/arm/kvm/init.S             |  3 +++
 arch/arm/kvm/mmu.c              | 42 +++++------------------------------------
 arch/arm64/kernel/vmlinux.lds.S | 17 +++++++++++------
 4 files changed, 41 insertions(+), 46 deletions(-)

diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index 7029c8e5b1b5..960b807c8e46 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -26,12 +26,28 @@
 
 #define HYP_IDMAP_RODATA						\
 	__hyp_idmap_rodata : {						\
-		. = ALIGN(32);						\
+		. = ALIGN(HYP_IDMAP_ALIGN);				\
 		VMLINUX_SYMBOL(__hyp_idmap_text_start) = .;		\
 		*(.hyp.idmap.text)					\
 		VMLINUX_SYMBOL(__hyp_idmap_text_end) = .;		\
 	}
 
+/*
+ * If the HYP idmap .text section is populated, it needs to be positioned
+ * such that it will not cross a page boundary in the final output image.
+ * So align it to the section size rounded up to the next power of 2.
+ * If __hyp_idmap_size is undefined, the section will be empty so define
+ * it as 0 in that case.
+ */
+PROVIDE(__hyp_idmap_size = 0);
+
+#define HYP_IDMAP_ALIGN							\
+	__hyp_idmap_size == 0 ? 0 :					\
+	__hyp_idmap_size <= 0x100 ? 0x100 :				\
+	__hyp_idmap_size <= 0x200 ? 0x200 :				\
+	__hyp_idmap_size <= 0x400 ? 0x400 :				\
+	__hyp_idmap_size <= 0x800 ? 0x800 : 0x1000
+
 #ifdef CONFIG_HOTPLUG_CPU
 #define ARM_CPU_DISCARD(x)
 #define ARM_CPU_KEEP(x)	x
@@ -351,8 +367,11 @@ SECTIONS
  */
 ASSERT((__proc_info_end - __proc_info_begin), "missing CPU support")
 ASSERT((__arch_info_end - __arch_info_begin), "no machine record defined")
+
 /*
- * The HYP init code can't be more than a page long.
+ * The HYP init code can't be more than a page long,
+ * and should not cross a page boundary.
  * The above comment applies as well.
  */
-ASSERT(((__hyp_idmap_text_end - __hyp_idmap_text_start) <= PAGE_SIZE), "HYP init code too big")
+ASSERT((__hyp_idmap_text_start & ~PAGE_MASK) + __hyp_idmap_size <= PAGE_SIZE,
+	"HYP init code too big or misaligned")
diff --git a/arch/arm/kvm/init.S b/arch/arm/kvm/init.S
index 3988e72d16ff..11fb1d56f449 100644
--- a/arch/arm/kvm/init.S
+++ b/arch/arm/kvm/init.S
@@ -157,3 +157,6 @@ target:	@ We're now in the trampoline code, switch page tables
 __kvm_hyp_init_end:
 
 	.popsection
+
+	.global	__hyp_idmap_size
+	.set	__hyp_idmap_size, __kvm_hyp_init_end - __kvm_hyp_init
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 3e6859bc3e11..42a24d6b003b 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -37,7 +37,6 @@ static pgd_t *boot_hyp_pgd;
 static pgd_t *hyp_pgd;
 static DEFINE_MUTEX(kvm_hyp_pgd_mutex);
 
-static void *init_bounce_page;
 static unsigned long hyp_idmap_start;
 static unsigned long hyp_idmap_end;
 static phys_addr_t hyp_idmap_vector;
@@ -405,9 +404,6 @@ void free_boot_hyp_pgd(void)
 	if (hyp_pgd)
 		unmap_range(NULL, hyp_pgd, TRAMPOLINE_VA, PAGE_SIZE);
 
-	free_page((unsigned long)init_bounce_page);
-	init_bounce_page = NULL;
-
 	mutex_unlock(&kvm_hyp_pgd_mutex);
 }
 
@@ -1498,39 +1494,11 @@ int kvm_mmu_init(void)
 	hyp_idmap_end = kvm_virt_to_phys(__hyp_idmap_text_end);
 	hyp_idmap_vector = kvm_virt_to_phys(__kvm_hyp_init);
 
-	if ((hyp_idmap_start ^ hyp_idmap_end) & PAGE_MASK) {
-		/*
-		 * Our init code is crossing a page boundary. Allocate
-		 * a bounce page, copy the code over and use that.
-		 */
-		size_t len = __hyp_idmap_text_end - __hyp_idmap_text_start;
-		phys_addr_t phys_base;
-
-		init_bounce_page = (void *)__get_free_page(GFP_KERNEL);
-		if (!init_bounce_page) {
-			kvm_err("Couldn't allocate HYP init bounce page\n");
-			err = -ENOMEM;
-			goto out;
-		}
-
-		memcpy(init_bounce_page, __hyp_idmap_text_start, len);
-		/*
-		 * Warning: the code we just copied to the bounce page
-		 * must be flushed to the point of coherency.
-		 * Otherwise, the data may be sitting in L2, and HYP
-		 * mode won't be able to observe it as it runs with
-		 * caches off at that point.
-		 */
-		kvm_flush_dcache_to_poc(init_bounce_page, len);
-
-		phys_base = kvm_virt_to_phys(init_bounce_page);
-		hyp_idmap_vector += phys_base - hyp_idmap_start;
-		hyp_idmap_start = phys_base;
-		hyp_idmap_end = phys_base + len;
-
-		kvm_info("Using HYP init bounce page @%lx\n",
-			 (unsigned long)phys_base);
-	}
+	/*
+	 * We rely on the linker script to ensure at build time that the HYP
+	 * init code does not cross a page boundary.
+	 */
+	BUG_ON((hyp_idmap_start ^ (hyp_idmap_end - 1)) & PAGE_MASK);
 
 	hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, hyp_pgd_order);
 	boot_hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
 						 hyp_pgd_order);
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 5d9d2dca530d..a2c29865c3fe 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -23,10 +23,14 @@ jiffies = jiffies_64;
 
 #define HYPERVISOR_TEXT					\
 	/*						\
-	 * Force the alignment to be compatible with	\
-	 * the vectors requirements			\
+	 * Align to 4 KB so that			\
+	 * a) the HYP vector table is at its minimum	\
+	 *    alignment of 2048 bytes			\
+	 * b) the HYP init code will not cross a page	\
+	 *    boundary if its size does not exceed	\
+	 *    4 KB (see related ASSERT() below)		\
 	 */						\
-	. = ALIGN(2048);				\
+	. = ALIGN(SZ_4K);				\
 	VMLINUX_SYMBOL(__hyp_idmap_text_start) = .;	\
 	*(.hyp.idmap.text)				\
 	VMLINUX_SYMBOL(__hyp_idmap_text_end) = .;	\
@@ -163,10 +167,11 @@ SECTIONS
 }
 
 /*
- * The HYP init code can't be more than a page long.
+ * The HYP init code can't be more than a page long,
+ * and should not cross a page boundary.
  */
-ASSERT(((__hyp_idmap_text_start + PAGE_SIZE) > __hyp_idmap_text_end),
-	"HYP init code too big")
+ASSERT(__hyp_idmap_text_end - (__hyp_idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
+	"HYP init code too big or misaligned")
 
 /*
  * If padding is applied before .head.text, virt<->phys conversions will fail.
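
Note (again not part of the patch): a minimal sketch of the off-by-one
mentioned in the commit message, using hypothetical page-aligned addresses.
The old runtime test compared the page of the first byte with the page of the
exclusive end address, so code that was exactly one page long and page aligned
was treated as crossing a boundary; the BUG_ON() in kvm_mmu_init() above uses
the last byte instead.

#include <stdio.h>

#define PAGE_SIZE	0x1000UL
#define PAGE_MASK	(~(PAGE_SIZE - 1))

/* old test: page of the first byte vs page of the (exclusive) end address */
static int crosses_old(unsigned long start, unsigned long end)
{
	return ((start ^ end) & PAGE_MASK) != 0;
}

/* new test: page of the first byte vs page of the last byte */
static int crosses_new(unsigned long start, unsigned long end)
{
	return ((start ^ (end - 1)) & PAGE_MASK) != 0;
}

int main(void)
{
	unsigned long start = 0x80008000UL;	/* hypothetical, page aligned */
	unsigned long end = start + PAGE_SIZE;	/* code exactly one page long */

	/* prints "old 1 new 0": the old test would have allocated a bounce page */
	printf("old %d new %d\n", crosses_old(start, end), crosses_new(start, end));
	return 0;
}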