From patchwork Wed Apr 15 15:34:17 2015
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 47204
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: mark.rutland@arm.com, catalin.marinas@arm.com,
	linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH v4 06/13] arm64: implement our own early_init_dt_add_memory_arch()
Date: Wed, 15 Apr 2015 17:34:17 +0200
Message-Id: <1429112064-19952-7-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1429112064-19952-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1429112064-19952-1-git-send-email-ard.biesheuvel@linaro.org>

Override the __weak early_init_dt_add_memory_arch() with our own version.
This allows us to relax the imposed restrictions at memory discovery time,
which is needed if we want to defer the assignment of PHYS_OFFSET and make
it independent of where the kernel Image is placed in physical memory.
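Background, not part of the patch: the generic early_init_dt_add_memory_arch()
provided by the device tree core (drivers/of/fdt.c) is declared __weak, so a
strong definition supplied by architecture code simply replaces it at link
time. The stand-alone C sketch below shows that weak/strong pairing with a
plain GCC/Clang toolchain on an ELF target; the file names are made up and
printf() stands in for the real memblock_add() call.

/* generic.c -- plays the role of the weakly bound default in drivers/of/fdt.c */
#include <stdio.h>

__attribute__((weak))
void early_init_dt_add_memory_arch(unsigned long long base,
                                   unsigned long long size)
{
        printf("generic: memblock_add(0x%llx, 0x%llx)\n", base, size);
}

int main(void)
{
        early_init_dt_add_memory_arch(0x80000000ULL, 0x40000000ULL);
        return 0;
}

/* arch.c -- plays the role of the strong override this patch adds to
 * arch/arm64/mm/init.c. Linking it in supersedes the weak default:
 *
 *   cc -o demo generic.c          # prints the "generic" line
 *   cc -o demo generic.c arch.c   # prints the "arch override" line
 */
#include <stdio.h>

void early_init_dt_add_memory_arch(unsigned long long base,
                                   unsigned long long size)
{
        printf("arch override: memblock_add(0x%llx, 0x%llx)\n", base, size);
}

This is the same mechanism that lets the arm64 definition added below take
precedence without touching the generic code or any headers.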
So copy the generic original, but only retain the check against regions
whose sizes become zero when clipped to page alignment. For now, we will
remove the range below PHYS_OFFSET explicitly until we rework that logic
in a subsequent patch. Any memory that we will not be able to map due to
insufficient size of the linear region is also removed.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/mm/init.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index ae85da6307bb..1599a5c5e94a 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -158,6 +158,15 @@ early_param("mem", early_mem);
 
 void __init arm64_memblock_init(void)
 {
+	/*
+	 * Remove the memory that we will not be able to cover
+	 * with the linear mapping.
+	 */
+	const s64 linear_region_size = -(s64)PAGE_OFFSET;
+
+	memblock_remove(0, memstart_addr);
+	memblock_remove(memstart_addr + linear_region_size, ULLONG_MAX);
+
 	memblock_enforce_memory_limit(memory_limit);
 
 	/*
@@ -372,3 +381,19 @@ static int __init keepinitrd_setup(char *__unused)
 
 __setup("keepinitrd", keepinitrd_setup);
 #endif
+
+void __init early_init_dt_add_memory_arch(u64 base, u64 size)
+{
+	if (!PAGE_ALIGNED(base)) {
+		if (size < PAGE_SIZE - (base & ~PAGE_MASK)) {
+			pr_warn("Ignoring memory block 0x%llx - 0x%llx\n",
+				base, base + size);
+			return;
+		}
+		size -= PAGE_SIZE - (base & ~PAGE_MASK);
+		base = PAGE_ALIGN(base);
+	}
+	size &= PAGE_MASK;
+
+	memblock_add(base, size);
+}
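Illustration, not part of the patch: the user-space sketch below reproduces
the page-clipping arithmetic of the new early_init_dt_add_memory_arch(),
assuming 4 KiB pages. The PAGE_* macros and the clip() helper are local
stand-ins defined only for this demo, and printf() takes the place of
memblock_add() and pr_warn().

/* clip_demo.c */
#include <stdio.h>

#define PAGE_SHIFT      12
#define PAGE_SIZE       (1ULL << PAGE_SHIFT)
#define PAGE_MASK       (~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x)   (((x) + PAGE_SIZE - 1) & PAGE_MASK)
#define PAGE_ALIGNED(x) (((x) & ~PAGE_MASK) == 0)

static void clip(unsigned long long base, unsigned long long size)
{
        if (!PAGE_ALIGNED(base)) {
                if (size < PAGE_SIZE - (base & ~PAGE_MASK)) {
                        /* the region never reaches a page boundary */
                        printf("Ignoring memory block 0x%llx - 0x%llx\n",
                               base, base + size);
                        return;
                }
                /* drop the partial page at the start */
                size -= PAGE_SIZE - (base & ~PAGE_MASK);
                base = PAGE_ALIGN(base);
        }
        /* drop the partial page at the end */
        size &= PAGE_MASK;

        printf("memblock_add(0x%llx, 0x%llx)\n", base, size);
}

int main(void)
{
        clip(0x80000800ULL, 0x2000ULL); /* memblock_add(0x80001000, 0x1000) */
        clip(0x80000800ULL, 0x400ULL);  /* ignored: never spans a full page */
        return 0;
}

The first call keeps only the single fully covered page at 0x80001000; the
second is dropped because the region fits entirely inside one partial page,
which is the "size becomes zero when clipped" case the commit message refers to.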