From patchwork Mon Sep 5 09:29:35 2016
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 75366
From: Kefeng Wang
To: Catalin Marinas, Will Deacon
Subject: [PATCH] arm64: mm: drop fixup_init() and mm.h
Date: Mon, 5 Sep 2016 17:29:35 +0800
Message-ID: <1473067775-26284-1-git-send-email-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 1.7.12.4
Cc: Mark Rutland, Kefeng Wang, Ard Biesheuvel, guohanjun@huawei.com,
 linux-arm-kernel@lists.infradead.org, Laura Abbott

There is only fixup_init() in mm.h, and it is only called from
free_initmem(), so move the code from fixup_init() into free_initmem(),
then drop fixup_init() and mm.h.

Signed-off-by: Kefeng Wang
Acked-by: Mark Rutland
---
 arch/arm64/mm/flush.c |  2 --
 arch/arm64/mm/init.c  |  9 ++++++---
 arch/arm64/mm/mm.h    |  2 --
 arch/arm64/mm/mmu.c   | 12 ------------
 arch/arm64/mm/pgd.c   |  2 --
 5 files changed, 6 insertions(+), 21 deletions(-)
 delete mode 100644 arch/arm64/mm/mm.h

diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 43a76b0..8377329 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -25,8 +25,6 @@
 #include 
 #include 
 
-#include "mm.h"
-
 void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
 		       unsigned long end)
 {
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index bbb7ee7..c9937fb 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -47,8 +47,6 @@
 #include 
 #include 
 
-#include "mm.h"
-
 /*
  * We need to be able to catch inadvertent references to memstart_addr
  * that occur (potentially in generic code) before arm64_memblock_init()
@@ -485,7 +483,12 @@ void free_initmem(void)
 {
 	free_reserved_area(__va(__pa(__init_begin)), __va(__pa(__init_end)),
 			   0, "unused kernel");
-	fixup_init();
+	/*
+	 * Unmap the __init region but leave the VM area in place. This
+	 * prevents the region from being reused for kernel modules, which
+	 * is not supported by kallsyms.
+	 */
+	unmap_kernel_range((u64)__init_begin, (u64)(__init_end - __init_begin));
 }
 
 #ifdef CONFIG_BLK_DEV_INITRD
diff --git a/arch/arm64/mm/mm.h b/arch/arm64/mm/mm.h
deleted file mode 100644
index 71fe989..0000000
--- a/arch/arm64/mm/mm.h
+++ /dev/null
@@ -1,2 +0,0 @@
-
-void fixup_init(void);
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 4989948..88f15c8 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -42,8 +42,6 @@
 #include 
 #include 
 
-#include "mm.h"
-
 u64 idmap_t0sz = TCR_T0SZ(VA_BITS);
 u64 kimage_voffset __read_mostly;
 
@@ -399,16 +397,6 @@ void mark_rodata_ro(void)
 			    section_size, PAGE_KERNEL_RO);
 }
 
-void fixup_init(void)
-{
-	/*
-	 * Unmap the __init region but leave the VM area in place. This
-	 * prevents the region from being reused for kernel modules, which
-	 * is not supported by kallsyms.
-	 */
-	unmap_kernel_range((u64)__init_begin, (u64)(__init_end - __init_begin));
-}
-
 static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
 				      pgprot_t prot, struct vm_struct *vma)
 {
diff --git a/arch/arm64/mm/pgd.c b/arch/arm64/mm/pgd.c
index ae11d4e..371c5f0 100644
--- a/arch/arm64/mm/pgd.c
+++ b/arch/arm64/mm/pgd.c
@@ -26,8 +26,6 @@
 #include 
 #include 
 
-#include "mm.h"
-
 static struct kmem_cache *pgd_cache;
 
 pgd_t *pgd_alloc(struct mm_struct *mm)
-- 
1.7.12.4
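
For reference, the reason the inlined unmap_kernel_range() call is sufficient
on its own is that the __init segment is registered as a permanent vm_struct
when the kernel image is mapped, so tearing down the page-table entries does
not release the virtual range back to the vmalloc allocator. Below is a
simplified sketch of the two pieces involved, based on the arm64 code of this
era; map_kernel_segment() is abbreviated here and is not the literal source.

/*
 * Sketch only: at boot, map_kernel_segment() (arch/arm64/mm/mmu.c) maps a
 * kernel segment and registers a vm_struct for it, so the VA range stays
 * reserved even if its mappings are later removed.
 */
static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
				      pgprot_t prot, struct vm_struct *vma)
{
	/* ... create the page-table entries covering [va_start, va_end) ... */

	vma->addr  = va_start;
	vma->size  = va_end - va_start;
	vma->flags = VM_MAP;
	vm_area_add_early(vma);		/* VA range stays owned by the image */
}

/*
 * After this patch, free_initmem() frees the backing pages and then unmaps
 * the region; because the vm_struct registered above is never removed, the
 * range cannot be reused for modules or vmalloc allocations.
 */
void free_initmem(void)
{
	free_reserved_area(__va(__pa(__init_begin)), __va(__pa(__init_end)),
			   0, "unused kernel");
	unmap_kernel_range((u64)__init_begin, (u64)(__init_end - __init_begin));
}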