From patchwork Wed Jan 22 11:25:16 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Wang Nan <wangnan0@huawei.com>
X-Patchwork-Id: 23503
From: Wang Nan <wangnan0@huawei.com>
Cc: Eric Biederman, Russell King, Andrew Morton, Geng Hui, Wang Nan
Subject: [PATCH 3/3] ARM: allow kernel to be loaded in middle of phymem
Date: Wed, 22 Jan 2014 19:25:16 +0800
Message-ID: <1390389916-8711-4-git-send-email-wangnan0@huawei.com>
In-Reply-To: <1390389916-8711-1-git-send-email-wangnan0@huawei.com>
References: <1390389916-8711-1-git-send-email-wangnan0@huawei.com>
X-Mailer: git-send-email 1.8.4
X-Mailing-List: linux-kernel@vger.kernel.org

This patch allows the kernel to be loaded in the middle of the physical
memory the kernel is aware of. Before this patch, users had to use mem= or
the device tree to trick the kernel about the start address of physical
memory. This feature is useful in some special cases, for example, building
a crash dump kernel. Without it, the kernel command line, ATAGs and device
tree must be adjusted carefully, which is sometimes impossible.
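To illustrate the reservation arithmetic the arch/arm/mm/init.c hunk below
performs, here is a minimal stand-alone sketch (not part of the patch; the
PHYS_OFFSET value and the bank layout are made-up example values):

#include <stdio.h>

#define PHYS_OFFSET 0x60000000UL	/* hypothetical kernel load base */

int main(void)
{
	/* One hypothetical 1 GiB bank starting at 1 GiB. */
	unsigned long bank_start = 0x40000000UL;
	unsigned long bank_size  = 0x40000000UL;

	if (bank_start < PHYS_OFFSET) {
		unsigned long reserv_size = PHYS_OFFSET - bank_start;

		if (reserv_size > bank_size)
			reserv_size = bank_size;
		/*
		 * The patch would memblock_reserve() this range,
		 * [0x40000000, 0x60000000): everything in the bank below
		 * PHYS_OFFSET, leaving it accessible only via ioremap.
		 */
		printf("reserve %#lx bytes at %#lx\n",
		       reserv_size, bank_start);
	}
	return 0;
}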
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: # 3.4+
Cc: Eric Biederman
Cc: Russell King
Cc: Andrew Morton
Cc: Geng Hui
---
 arch/arm/mm/init.c | 21 ++++++++++++++++++++-
 arch/arm/mm/mmu.c  | 13 +++++++++++++
 mm/page_alloc.c    |  7 +++++--
 3 files changed, 38 insertions(+), 3 deletions(-)

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 3e8f106..4952726 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -334,9 +334,28 @@ void __init arm_memblock_init(struct meminfo *mi,
 {
 	int i;
 
-	for (i = 0; i < mi->nr_banks; i++)
+	for (i = 0; i < mi->nr_banks; i++) {
 		memblock_add(mi->bank[i].start, mi->bank[i].size);
+
+		/*
+		 * In some special cases, for example, building a crashdump
+		 * kernel, we want the kernel to be loaded in the middle of
+		 * physical memory. In such cases, the physical memory before
+		 * PHYS_OFFSET is awkward: it can't be directly mapped
+		 * (because its virtual address would fall below PAGE_OFFSET,
+		 * disturbing the user address space), and it can't be mapped
+		 * as HighMem either. We reserve such pages here; the only
+		 * way to access them is ioremap.
+		 */
+		if (mi->bank[i].start < PHYS_OFFSET) {
+			unsigned long reserv_size = PHYS_OFFSET -
+				mi->bank[i].start;
+			if (reserv_size > mi->bank[i].size)
+				reserv_size = mi->bank[i].size;
+			memblock_reserve(mi->bank[i].start, reserv_size);
+		}
+	}
 
 	/* Register the kernel text, kernel data and initrd with memblock. */
 #ifdef CONFIG_XIP_KERNEL
 	memblock_reserve(__pa(_sdata), _end - _sdata);
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 580ef2d..2a17c24 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1308,6 +1308,19 @@ static void __init map_lowmem(void)
 		if (start >= end)
 			break;
 
+		/*
+		 * If this memblock contains memory mapping below PAGE_OFFSET,
+		 * that part shouldn't be directly mapped; see the code in
+		 * create_mapping(). Memory mapping at or above PAGE_OFFSET is
+		 * occupied by the kernel and still needs to be mapped.
+		 */
+		if (__phys_to_virt(start) < PAGE_OFFSET) {
+			if (__phys_to_virt(end) > PAGE_OFFSET)
+				start = __virt_to_phys(PAGE_OFFSET);
+			else
+				break;
+		}
+
 		map.pfn = __phys_to_pfn(start);
 		map.virtual = __phys_to_virt(start);
 		map.length = end - start;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5248fe0..d2959e3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4840,10 +4840,13 @@ static void __init_refok alloc_node_mem_map(struct pglist_data *pgdat)
 	 */
 	if (pgdat == NODE_DATA(0)) {
 		mem_map = NODE_DATA(0)->node_mem_map;
-#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
+		/*
+		 * In case of CONFIG_HAVE_MEMBLOCK_NODE_MAP, or when the
+		 * kernel is loaded in the middle of physical memory,
+		 * mem_map should be adjusted.
+		 */
 		if (page_to_pfn(mem_map) != pgdat->node_start_pfn)
 			mem_map -= (pgdat->node_start_pfn - ARCH_PFN_OFFSET);
-#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
 	}
 #endif
 #endif /* CONFIG_FLAT_NODE_MEM_MAP */
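
For reference, the clipping decision added to map_lowmem() can be exercised
stand-alone like this (a sketch only; the PAGE_OFFSET/PHYS_OFFSET values are
hypothetical and the linear conversions stand in for the kernel's
__phys_to_virt()/__virt_to_phys() macros):

#include <stdio.h>

#define PHYS_OFFSET	0x60000000UL	/* hypothetical */
#define PAGE_OFFSET	0xc0000000UL	/* hypothetical */
#define phys_to_virt(p)	((p) + (PAGE_OFFSET - PHYS_OFFSET))
#define virt_to_phys(v)	((v) - (PAGE_OFFSET - PHYS_OFFSET))

/*
 * Mirror of the patch's logic: returns the (possibly clipped) start of the
 * directly-mappable part of [start, end), or end if none of it may be
 * mapped (the "break" case in map_lowmem()).
 */
static unsigned long clip_lowmem_start(unsigned long start, unsigned long end)
{
	if (phys_to_virt(start) < PAGE_OFFSET) {
		if (phys_to_virt(end) > PAGE_OFFSET)
			return virt_to_phys(PAGE_OFFSET); /* map the tail */
		return end;	/* wholly below PAGE_OFFSET: skip it */
	}
	return start;		/* already mappable as-is */
}

int main(void)
{
	/* Block straddling PHYS_OFFSET: only [0x60000000, end) is mapped. */
	printf("%#lx\n", clip_lowmem_start(0x40000000UL, 0x80000000UL));
	/* Block entirely below PHYS_OFFSET: skipped. */
	printf("%#lx\n", clip_lowmem_start(0x40000000UL, 0x50000000UL));
	return 0;
}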
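And a short worked example of the mem_map adjustment in the mm/page_alloc.c
hunk (again a sketch with hypothetical pfn values): in FLATMEM,
pfn_to_page(pfn) is essentially mem_map + (pfn - ARCH_PFN_OFFSET), but
node_mem_map[0] describes node_start_pfn, so when node_start_pfn is larger
than ARCH_PFN_OFFSET the global mem_map pointer must be stepped back for the
indexing to line up:

#include <stdio.h>

int main(void)
{
	unsigned long arch_pfn_offset = 0x40000; /* PHYS_OFFSET >> PAGE_SHIFT */
	unsigned long node_start_pfn  = 0x60000; /* kernel loaded higher up */
	unsigned long pfn             = 0x60005; /* some page in the node */

	/* Same adjustment as the hunk:
	 * mem_map -= node_start_pfn - ARCH_PFN_OFFSET */
	unsigned long backstep = node_start_pfn - arch_pfn_offset;

	/* pfn_to_page(pfn) = mem_map + (pfn - arch_pfn_offset); after the
	 * backstep this indexes node_mem_map[pfn - node_start_pfn]. */
	printf("pfn %#lx -> node_mem_map[%lu]\n", pfn,
	       (pfn - arch_pfn_offset) - backstep);	/* prints index 5 */
	return 0;
}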