From patchwork Wed Apr 15 15:34:23 2015
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 47210
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: mark.rutland@arm.com, catalin.marinas@arm.com,
	linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH v4 12/13] arm64: allow kernel Image to be loaded anywhere in physical memory
Date: Wed, 15 Apr 2015 17:34:23 +0200
Message-Id: <1429112064-19952-13-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1429112064-19952-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1429112064-19952-1-git-send-email-ard.biesheuvel@linaro.org>

This relaxes the kernel Image placement requirements, so that it may
be placed at any 2 MB aligned offset in physical memory. This is
accomplished by ignoring PHYS_OFFSET when installing memblocks, and
accounting for the apparent virtual offset of the kernel Image (in
addition to the 64 MB that it is moved below PAGE_OFFSET).
As a result, virtual address references below PAGE_OFFSET are
correctly mapped onto physical references into the kernel Image
regardless of where it sits in memory.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 Documentation/arm64/booting.txt | 20 ++++++++++----------
 arch/arm64/mm/Makefile          |  1 +
 arch/arm64/mm/init.c            | 38 +++++++++++++++++++++++++++++++++++---
 arch/arm64/mm/mmu.c             | 24 ++++++++++++++++++++++--
 4 files changed, 68 insertions(+), 15 deletions(-)

diff --git a/Documentation/arm64/booting.txt b/Documentation/arm64/booting.txt
index 53f18e13d51c..7bd9feedb6f9 100644
--- a/Documentation/arm64/booting.txt
+++ b/Documentation/arm64/booting.txt
@@ -113,16 +113,16 @@ Header notes:
   depending on selected features, and is effectively unbound.
 
 The Image must be placed text_offset bytes from a 2MB aligned base
-address near the start of usable system RAM and called there. Memory
-below that base address is currently unusable by Linux, and therefore it
-is strongly recommended that this location is the start of system RAM.
-At least image_size bytes from the start of the image must be free for
-use by the kernel.
-
-Any memory described to the kernel (even that below the 2MB aligned base
-address) which is not marked as reserved from the kernel e.g. with a
-memreserve region in the device tree) will be considered as available to
-the kernel.
+address anywhere in usable system RAM and called there. At least
+image_size bytes from the start of the image must be free for use
+by the kernel.
+NOTE: versions prior to v4.2 cannot make use of memory below the
+physical offset of the Image so it is recommended that the Image be
+placed as close as possible to the start of system RAM.
+
+Any memory described to the kernel which is not marked as reserved from
+the kernel (e.g., with a memreserve region in the device tree) will be
+considered as available to the kernel.
 
 Before jumping into the kernel, the following conditions must be met:
 
diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
index 9d84feb41a16..49e90bab4d57 100644
--- a/arch/arm64/mm/Makefile
+++ b/arch/arm64/mm/Makefile
@@ -6,3 +6,4 @@ obj-$(CONFIG_HUGETLB_PAGE)	+= hugetlbpage.o
 obj-$(CONFIG_ARM64_PTDUMP)	+= dump.o
 
 CFLAGS_mmu.o := -I$(srctree)/scripts/dtc/libfdt/
+CFLAGS_init.o := -DTEXT_OFFSET=$(TEXT_OFFSET)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 0e7d9a2aad39..98a009885229 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -157,6 +157,38 @@ static int __init early_mem(char *p)
 }
 early_param("mem", early_mem);
 
+static void enforce_memory_limit(void)
+{
+	const phys_addr_t kstart = __pa(_text) - TEXT_OFFSET;
+	const phys_addr_t kend = round_up(__pa(_end), SZ_2M);
+	const u64 ksize = kend - kstart;
+	struct memblock_region *reg;
+
+	if (likely(memory_limit == (phys_addr_t)ULLONG_MAX))
+		return;
+
+	if (WARN(memory_limit < ksize, "mem= limit is unreasonably low"))
+		return;
+
+	/*
+	 * We have to make sure that the kernel image is still covered by
+	 * memblock after we apply the memory limit, even if the kernel image
+	 * is high up in physical memory. So if the kernel image becomes
+	 * inaccessible after the limit is applied, we will lower the limit
+	 * so that it compensates for the kernel image and reapply it. That way,
+	 * we can add back the kernel image region and still honor the limit.
+	 */
+	memblock_enforce_memory_limit(memory_limit);
+
+	for_each_memblock(memory, reg)
+		if (reg->base <= kstart && reg->base + reg->size >= kend)
+			/* kernel image still accessible -> we're done */
+			return;
+
+	memblock_enforce_memory_limit(memory_limit - ksize);
+	memblock_add(kstart, ksize);
+}
+
 void __init arm64_memblock_init(void)
 {
 	/*
@@ -165,10 +197,10 @@ void __init arm64_memblock_init(void)
 	 */
 	const s64 linear_region_size = -(s64)PAGE_OFFSET;
 
-	memblock_remove(0, memstart_addr);
-	memblock_remove(memstart_addr + linear_region_size, ULLONG_MAX);
+	memblock_remove(round_down(memblock_start_of_DRAM(), SZ_1G) +
+			linear_region_size, ULLONG_MAX);
 
-	memblock_enforce_memory_limit(memory_limit);
+	enforce_memory_limit();
 
 	/*
 	 * Register the kernel text, kernel data, initrd, and initial
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index c07ba8bdd8ed..1487824c5896 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -409,10 +409,30 @@ static void __init bootstrap_linear_mapping(unsigned long va_offset)
 static void __init map_mem(void)
 {
 	struct memblock_region *reg;
+	u64 new_memstart_addr = memblock_start_of_DRAM();
+	u64 new_va_offset;
 
-	bootstrap_linear_mapping(KIMAGE_OFFSET);
+	/*
+	 * Select a suitable value for the base of physical memory.
+	 * This should be below the lowest usable physical memory
+	 * address, and aligned to PUD/PMD size so that we can map
+	 * it efficiently.
+	 */
+	if (IS_ENABLED(CONFIG_ARM64_64K_PAGES))
+		new_memstart_addr &= PMD_MASK;
+	else
+		new_memstart_addr &= PUD_MASK;
+
+	/*
+	 * Calculate the offset between the kernel text mapping that exists
+	 * outside of the linear mapping, and its mapping in the linear region.
+	 */
+	new_va_offset = memstart_addr - new_memstart_addr + phys_offset_bias;
+
+	bootstrap_linear_mapping(new_va_offset);
 
-	kernel_va_offset = KIMAGE_OFFSET;
+	memstart_addr = new_memstart_addr;
+	kernel_va_offset = new_va_offset;
 	phys_offset_bias = 0;
 
 	/* map all the memory banks */
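
P.S. For anyone who wants to poke at the limit-compensation logic in
enforce_memory_limit() without booting a kernel, here is a minimal
standalone sketch of the same idea. The region array, apply_limit()
and covers() helpers, and all the example addresses are simplified
stand-ins invented for illustration; they are not the kernel's
memblock API.

#include <stdio.h>
#include <stdint.h>

#define SZ_2M		0x200000ULL
#define MAX_REGIONS	4

struct region {
	uint64_t base;
	uint64_t size;
};

/* example layout: two 1 GB banks, with the kernel Image high in bank 1 */
static struct region mem[MAX_REGIONS] = {
	{ 0x80000000ULL,  0x40000000ULL },
	{ 0x880000000ULL, 0x40000000ULL },
};
static int nr_mem = 2;

/* stand-in for memblock_enforce_memory_limit(): keep 'limit' bytes, bottom up */
static void apply_limit(uint64_t limit)
{
	for (int i = 0; i < nr_mem; i++) {
		if (limit >= mem[i].size) {
			limit -= mem[i].size;
			continue;
		}
		mem[i].size = limit;		/* truncate this bank */
		nr_mem = limit ? i + 1 : i;	/* drop everything above it */
		return;
	}
}

/* true if a single region covers [start, end) in full */
static int covers(uint64_t start, uint64_t end)
{
	for (int i = 0; i < nr_mem; i++)
		if (mem[i].base <= start && mem[i].base + mem[i].size >= end)
			return 1;
	return 0;
}

int main(void)
{
	uint64_t kstart = 0x8a0000000ULL;	/* Image loaded high in bank 1 */
	uint64_t kend = kstart + 16 * SZ_2M;
	uint64_t ksize = kend - kstart;
	uint64_t limit = 0x40000000ULL;		/* mem=1G */

	apply_limit(limit);
	if (!covers(kstart, kend)) {
		/*
		 * The Image fell outside the clamped regions: lower the
		 * limit by the Image size and add the Image region back,
		 * mirroring the fallback in enforce_memory_limit() above.
		 */
		apply_limit(limit - ksize);
		mem[nr_mem].base = kstart;
		mem[nr_mem].size = ksize;
		nr_mem++;
	}

	for (int i = 0; i < nr_mem; i++)
		printf("region %d: [%#llx-%#llx)\n", i,
		       (unsigned long long)mem[i].base,
		       (unsigned long long)(mem[i].base + mem[i].size));
	return 0;
}

Compiled and run, this prints the first bank truncated by the Image
size plus the Image region added back, so the total still honors the
mem= limit.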
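Similarly, the base-of-RAM selection in map_mem() boils down to one
mask operation plus a fixed offset from PAGE_OFFSET. A trivial sketch,
using made-up example values for the DRAM start and assuming 4k pages
(where a PUD covers 1 GB):

#include <stdio.h>
#include <stdint.h>

/* example values only: with 4k pages, PUD_MASK aligns to 1 GB */
#define PUD_MASK	(~((1ULL << 30) - 1))
#define PAGE_OFFSET	0xffffffc000000000ULL

int main(void)
{
	uint64_t dram_start = 0x880080000ULL;	/* hypothetical start of DRAM */

	/* round the base of the linear region down to a PUD boundary ... */
	uint64_t memstart_addr = dram_start & PUD_MASK;

	/* ... so the linear mapping stays a fixed offset from PAGE_OFFSET */
	printf("memstart_addr = %#llx\n",
	       (unsigned long long)memstart_addr);
	printf("DRAM start maps to VA %#llx\n",
	       (unsigned long long)(PAGE_OFFSET + (dram_start - memstart_addr)));
	return 0;
}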