From patchwork Tue Jul 15 11:00:26 2014
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 33653
Date: Tue, 15 Jul 2014 12:00:26 +0100
From: Mark Rutland
To: Mark Salter
Subject: Re: [PATCH] efi/arm64: efistub: don't abort if base of DRAM is occupied
Message-ID: <20140715110026.GW26465@leverpostej>
References: <1405351521-12010-1-git-send-email-ard.biesheuvel@linaro.org>
 <1405363248.25580.12.camel@deneb.redhat.com>
In-Reply-To: <1405363248.25580.12.camel@deneb.redhat.com>
Cc: "linux-efi@vger.kernel.org", Ard Biesheuvel, Catalin Marinas,
 "leif.lindholm@linaro.org", "roy.franz@linaro.org",
 "matt.fleming@intel.com", "linux-arm-kernel@lists.infradead.org"
On Mon, Jul 14, 2014 at 07:40:48PM +0100, Mark Salter wrote:
> On Mon, 2014-07-14 at 17:25 +0200, Ard Biesheuvel wrote:
> > If we fail to relocate the kernel Image to its preferred offset of
> > TEXT_OFFSET bytes above the base of DRAM, accept the lowest alternative
> > mapping available instead of aborting. We may lose a bit of memory at
> > the low end, but we can still proceed normally otherwise.
>
> This breaks APM Mustang, because the spin-table holding pen for secondary
> CPUs is marked as reserved memory in the TEXT_OFFSET area, and the kernel
> placement with your patch makes it unreachable by the kernel. Here is a
> patch I've been working with to solve the same problem:

I'm not sure that this is, strictly speaking, an issue with UEFI or the
relocation strategy (which sounds sane to me). I believe we could easily
hit similar issues with spin-table elsewhere, and I think we can fix this
more generally without complicating the EFI stub.

As I see it, we have two issues here:

1) The linear mapping starts at VA:PAGE_OFFSET+TEXT_OFFSET /
   PA:PHYS_OFFSET+TEXT_OFFSET, and we cannot access memory below this
   start address. This seems like a general issue we need to address, as
   it forces bootloader code to go through a tricky/impossible dance to
   get the kernel as close to the start of RAM as possible.

2) We cannot access a given cpu-release-addr if it is not in the linear
   mapping. This is the problem we're encountering now.

We can solve (2) now by using a temporary mapping to write to the
cpu-release-addr.
Does the below patch (untested) fix your issue with spin-table?

For (1) we need to rework the arm64 VA layout to decouple the kernel
text mapping from the linear map, but that's a lot more work.

Cheers,
Mark.

---->8----
From 73812b654a07f497f71bd38dfb4a6753fb0ad23e Mon Sep 17 00:00:00 2001
From: Mark Rutland
Date: Tue, 15 Jul 2014 11:32:53 +0100
Subject: [PATCH] arm64: spin-table: handle unmapped cpu-release-addrs

In certain cases the cpu-release-addr of a CPU may not fall in the
linear mapping (e.g. when the kernel is loaded above this address due
to the presence of other images in memory). This is problematic for the
spin-table code, as it assumes that it can trivially convert a
cpu-release-addr to a valid VA in the linear map.

This patch modifies the spin-table code to use a temporary cached
mapping to write to a given cpu-release-addr, enabling us to support
addresses regardless of whether they are covered by the linear mapping.

Signed-off-by: Mark Rutland
---
 arch/arm64/kernel/smp_spin_table.c | 21 ++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c
index 0347d38..70181c1 100644
--- a/arch/arm64/kernel/smp_spin_table.c
+++ b/arch/arm64/kernel/smp_spin_table.c
@@ -20,6 +20,7 @@
 #include <linux/init.h>
 #include <linux/of.h>
 #include <linux/smp.h>
+#include <linux/io.h>
 #include <linux/types.h>
 
 #include <asm/cacheflush.h>
@@ -65,12 +66,21 @@ static int smp_spin_table_cpu_init(struct device_node *dn, unsigned int cpu)
 
 static int smp_spin_table_cpu_prepare(unsigned int cpu)
 {
-	void **release_addr;
+	__le64 __iomem *release_addr;
 
 	if (!cpu_release_addr[cpu])
 		return -ENODEV;
 
-	release_addr = __va(cpu_release_addr[cpu]);
+	/*
+	 * The cpu-release-addr may or may not be inside the linear mapping.
+	 * As ioremap_cache will either give us a new mapping or reuse the
+	 * existing linear mapping, we can use it to cover both cases. In
+	 * either case the memory will be MT_NORMAL.
+	 */
+	release_addr = ioremap_cache(cpu_release_addr[cpu],
+				     sizeof(*release_addr));
+	if (!release_addr)
+		return -ENOMEM;
 
 	/*
 	 * We write the release address as LE regardless of the native
@@ -79,15 +89,16 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)
 	 * boot-loader's endianess before jumping. This is mandated by
 	 * the boot protocol.
 	 */
-	release_addr[0] = (void *) cpu_to_le64(__pa(secondary_holding_pen));
-
-	__flush_dcache_area(release_addr, sizeof(release_addr[0]));
+	writeq_relaxed(__pa(secondary_holding_pen), release_addr);
+	__flush_dcache_area(release_addr, sizeof(*release_addr));
 
 	/*
 	 * Send an event to wake up the secondary CPU.
 	 */
 	sev();
 
+	iounmap(release_addr);
+
 	return 0;
 }