From patchwork Mon Nov 16 11:23:18 2015
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 56584
Delivered-To: patch@linaro.org
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org, mark.rutland@arm.com,
 catalin.marinas@arm.com, will.deacon@arm.com
Subject: [PATCH v3 7/7] arm64: allow kernel Image to be loaded anywhere in
 physical memory
Date: Mon, 16 Nov 2015 12:23:18 +0100
Message-Id: <1447672998-20981-8-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1447672998-20981-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1447672998-20981-1-git-send-email-ard.biesheuvel@linaro.org>
Cc: Ard Biesheuvel, suzuki.poulose@arm.com, james.morse@arm.com,
 labbott@fedoraproject.org

This relaxes the kernel Image placement requirements, so that it may
be placed at any 2 MB aligned offset in physical memory.

This is accomplished by ignoring PHYS_OFFSET when installing
memblocks, and accounting for the apparent virtual offset of the
kernel Image (in addition to the 64 MB that it is moved below
PAGE_OFFSET). As a result, virtual address references below
PAGE_OFFSET are correctly mapped onto physical references into the
kernel Image regardless of where it sits in memory.
Signed-off-by: Ard Biesheuvel
---
 Documentation/arm64/booting.txt | 12 ++---
 arch/arm64/include/asm/memory.h |  9 ++--
 arch/arm64/mm/init.c            | 49 +++++++++++++++++++-
 arch/arm64/mm/mmu.c             | 29 ++++++++++--
 4 files changed, 83 insertions(+), 16 deletions(-)

-- 
1.9.1

diff --git a/Documentation/arm64/booting.txt b/Documentation/arm64/booting.txt
index 701d39d3171a..f190e708bb9b 100644
--- a/Documentation/arm64/booting.txt
+++ b/Documentation/arm64/booting.txt
@@ -117,14 +117,14 @@ Header notes:
   depending on selected features, and is effectively unbound.
 
 The Image must be placed text_offset bytes from a 2MB aligned base
-address near the start of usable system RAM and called there. Memory
-below that base address is currently unusable by Linux, and therefore it
-is strongly recommended that this location is the start of system RAM.
-The region between the 2 MB aligned base address and the start of the
-image has no special significance to the kernel, and may be used for
-other purposes.
+address anywhere in usable system RAM and called there. The region
+between the 2 MB aligned base address and the start of the image has no
+special significance to the kernel, and may be used for other purposes.
 At least image_size bytes from the start of the image must be free for
 use by the kernel.
+NOTE: versions prior to v4.5 cannot make use of memory below the
+physical offset of the Image so it is recommended that the Image be
+placed as close as possible to the start of system RAM.
 
 Any memory described to the kernel (even that below the start of the
 image) which is not marked as reserved from the kernel (e.g., with a
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 3148691bc80a..d6a237bda1f9 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -120,13 +120,10 @@ extern phys_addr_t memstart_addr;
 extern u64 kernel_va_offset;
 
 /*
- * The maximum physical address that the linear direct mapping
- * of system RAM can cover. (PAGE_OFFSET can be interpreted as
- * a 2's complement signed quantity and negated to derive the
- * maximum size of the linear mapping.)
+ * Allow all memory at the discovery stage. We will clip it later.
  */
-#define MAX_MEMBLOCK_ADDR	({ memstart_addr - PAGE_OFFSET - 1; })
-#define MIN_MEMBLOCK_ADDR	__pa(KIMAGE_VADDR)
+#define MIN_MEMBLOCK_ADDR	0
+#define MAX_MEMBLOCK_ADDR	U64_MAX
 
 /*
  * PFNs are used to describe any physical page; this means
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index b3b0175d7135..29a7dc5327b6 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -35,6 +35,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -158,9 +159,55 @@ static int __init early_mem(char *p)
 }
 early_param("mem", early_mem);
 
+static void __init enforce_memory_limit(void)
+{
+	const phys_addr_t kbase = round_down(__pa(_text), MIN_KIMG_ALIGN);
+	u64 to_remove = memblock_phys_mem_size() - memory_limit;
+	phys_addr_t max_addr = 0;
+	struct memblock_region *r;
+
+	if (memory_limit == (phys_addr_t)ULLONG_MAX)
+		return;
+
+	/*
+	 * The kernel may be high up in physical memory, so try to apply the
+	 * limit below the kernel first, and only let the generic handling
+	 * take over if it turns out we haven't clipped enough memory yet.
+	 */
+	for_each_memblock(memory, r) {
+		if (r->base + r->size > kbase) {
+			u64 rem = min(to_remove, kbase - r->base);
+
+			max_addr = r->base + rem;
+			to_remove -= rem;
+			break;
+		}
+		if (to_remove <= r->size) {
+			max_addr = r->base + to_remove;
+			to_remove = 0;
+			break;
+		}
+		to_remove -= r->size;
+	}
+
+	memblock_remove(0, max_addr);
+
+	if (to_remove)
+		memblock_enforce_memory_limit(memory_limit);
+}
+
 void __init arm64_memblock_init(void)
 {
-	memblock_enforce_memory_limit(memory_limit);
+	/*
+	 * Remove the memory that we will not be able to cover
+	 * with the linear mapping.
+	 */
+	const s64 linear_region_size = -(s64)PAGE_OFFSET;
+
+	memblock_remove(round_down(memblock_start_of_DRAM(), SZ_1G) +
+			linear_region_size, ULLONG_MAX);
+
+	enforce_memory_limit();
 
 	/*
 	 * Register the kernel text, kernel data, initrd, and initial
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 526eeb7e1e97..1b9d7e48ba1e 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -481,11 +482,33 @@ static void __init bootstrap_linear_mapping(unsigned long va_offset)
 static void __init map_mem(void)
 {
 	struct memblock_region *reg;
+	u64 new_memstart_addr;
+	u64 new_va_offset;
 
-	bootstrap_linear_mapping(KIMAGE_OFFSET);
+	/*
+	 * Select a suitable value for the base of physical memory.
+	 * This should be equal to or below the lowest usable physical
+	 * memory address, and aligned to PUD/PMD size so that we can map
+	 * it efficiently.
+	 */
+	new_memstart_addr = round_down(memblock_start_of_DRAM(), SZ_1G);
+
+	/*
+	 * Calculate the offset between the kernel text mapping that exists
+	 * outside of the linear mapping, and its mapping in the linear region.
+	 */
+	new_va_offset = memstart_addr - new_memstart_addr;
+
+	bootstrap_linear_mapping(new_va_offset);
 
-	kernel_va_offset = KIMAGE_OFFSET;
-	memstart_addr -= KIMAGE_OFFSET;
+	kernel_va_offset = new_va_offset;
+	memstart_addr = new_memstart_addr;
+
+	/* Recalculate virtual addresses of initrd region */
+	if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && initrd_start) {
+		initrd_start += new_va_offset;
+		initrd_end += new_va_offset;
+	}
 
 	/* map all the memory banks */
 	for_each_memblock(memory, reg) {