From patchwork Wed Feb 15 21:47:19 2012
X-Patchwork-Submitter: Nicolas Pitre
X-Patchwork-Id: 6807
Date: Wed, 15 Feb 2012 16:47:19 -0500 (EST)
From: Nicolas Pitre <nicolas.pitre@linaro.org>
To: Dave Martin <dave.martin@linaro.org>
cc: Christoffer Dall, Marc Zyngier, Will Deacon, patches@linaro.org
Subject: Re: [PATCH 10/10] ARM: virt: Fix hypervisor stub installation work without RAM
In-Reply-To: <1329329763-31508-11-git-send-email-dave.martin@linaro.org>
References: <1329329763-31508-1-git-send-email-dave.martin@linaro.org>
 <1329329763-31508-11-git-send-email-dave.martin@linaro.org>
User-Agent: Alpine 2.02 (LFD 1266 2009-07-14)

On Wed, 15 Feb 2012, Dave Martin wrote:

> The zImage loader doesn't have any usable RAM until the kernel is
> relocated.  To support this, the hypervisor stub is abstracted to
> allow the boot CPU mode to be stored into an SPSR when building the
> zImage loader.
>
> When building the kernel proper, we store the value into .data
> (moved from .bss, because zeroing out of the .bss at boot time will
> likely clobber it).
>
> A helper function __get_boot_cpu_mode() is provided to read back
> the stored value.  This is mostly to help keep the zImage loader
> code clean, since in the kernel proper, the __boot_cpu_mode
> variable can straightforwardly be relied upon instead.  However,
> __get_boot_cpu_mode() will still work.
>
> The safe_svcmode_maskall macro is modified to transfer the old
> mode's SPSR to the new mode.  For the main kernel entry point this
> will result in a couple of redundant instructions, since the SPSR
> is not significant -- however, this allows us to continue to use
> the same macro in the kernel and in the zImage loader.

Bleh.... I don't like this.
This is becoming too convoluted, and the abstractions are stretched too
much IMHO.  And tying the abstractions to the zImage wrapper
restrictions is too cumbersome to remain manageable in the future.

For example, trying to reuse the same stub code as for the kernel proper
is problematic because the zImage code has to be relocatable.  Using a
.data variable because .bss is zeroed late is good, but then that
variable has to be referenced with a PLT entry.  That makes the assembly
code pretty hard to share as is.

I'd much prefer something straightforward and obvious, such as the diff
below.  Completely untested, but you get the idea.  This is
self-contained, independent from the main kernel code, and smaller as
well.

Yet, it occurs to me that this is all broken anyway, exactly because the
zImage code does get relocated.  That means that the HVBAR will be wrong
as soon as zImage copies itself out of the way of the decompressed
kernel image, and attempting a hvc will branch to a random point in the
final kernel image.  Trying to update the HVBAR would require yet more
code, making the whole thing even more fragile.

So I'm starting to think that simply adding LPAE support to the zImage
wrapper code and leaving the CPU in HYP mode when decompressing is
likely to be much simpler after all.  The LPAE additions in
kernel/head.S weren't very intrusive.


Nicolas

diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
index c5d60250d4..469326580d 100644
--- a/arch/arm/boot/compressed/head.S
+++ b/arch/arm/boot/compressed/head.S
@@ -160,6 +160,28 @@ not_angel:
 		 * be needed here - is there an Angel SWI call for this?
 		 */

+#ifdef CONFIG_ARM_VIRT
+		/*
+		 * If we were called in HYP mode, drop to SVC mode
+		 * temporarily to perform kernel decompression with
+		 * the non-LPAE page table code.
+		 */
+		mrs	r0, cpsr
+		and	r1, r0, #MODE_MASK
+		teq	r1, #HYP_MODE
+		bne	1f
+		adr	r1, __hyp_vectors
+		mcr	p15, 4, r1, c12, c0, 0	@ set HVBAR
+		/*
+		 * Use IRQ mode (same PL1 as SVC mode) to remember
+		 * it was HYP mode initially.
+		 */
+		eor	r0, r0, #(HYP_MODE ^ IRQ_MODE)
+		msr	cpsr_c, r0
+		isb
+1:
+#endif
+
 		/*
 		 * some architecture specific code can be inserted
 		 * by the linker here, but it should preserve r7, r8, and r9.
@@ -458,7 +480,29 @@ not_relocated:	mov	r0, #0
 		bl	decompress_kernel
 		bl	cache_clean_flush
 		bl	cache_off
-		mov	r0, #0			@ must be zero
+
+#ifdef CONFIG_ARM_VIRT
+		/*
+		 * If necessary, return to HYP mode before calling the kernel,
+		 * indicated by IRQ mode.
+		 */
+		mrs	r0, cpsr
+		and	r0, r0, #MODE_MASK
+		teq	r0, #IRQ_MODE
+		bne	1f
+		hvc	#0
+
+		.align 5
+__hyp_vectors:
+		W(b)	.			@ reset
+		W(b)	.			@ undef
+		W(b)	.			@ svc
+		W(b)	.			@ pabort
+		W(b)	.			@ dabort
+		@ hyp trap: fall through
+#endif
+
+1:		mov	r0, #0			@ must be zero
 		mov	r1, r7			@ restore architecture number
 		mov	r2, r8			@ restore atags pointer
 ARM(		mov	pc, r4	)		@ call kernel