From patchwork Wed Jul 14 23:44:38 2021
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 477369
Date: Wed, 14 Jul 2021 16:44:38 -0700
From: akpm@linux-foundation.org
To: aneesh.kumar@linux.ibm.com, anshuman.khandual@arm.com, anton@ozlabs.org,
    ardb@kernel.org, bauerman@linux.ibm.com, benh@kernel.crashing.org,
    bhe@redhat.com, borntraeger@de.ibm.com, bp@alien8.de,
    catalin.marinas@arm.com, cheloha@linux.ibm.com, christophe.leroy@c-s.fr,
    dalias@libc.org, dan.j.williams@intel.com, dave.hansen@linux.intel.com,
    dave.jiang@intel.com, david@redhat.com, gor@linux.ibm.com,
    hca@linux.ibm.com, hpa@zytor.com, jasowang@redhat.com, joe@perches.com,
    justin.he@arm.com, ldufour@linux.ibm.com, lenb@kernel.org,
    luto@kernel.org, mhocko@kernel.org, michel@lespinasse.org,
    mingo@redhat.com, mm-commits@vger.kernel.org, mpe@ellerman.id.au,
    mst@redhat.com, nathanl@linux.ibm.com, npiggin@gmail.com,
    osalvador@suse.de, pankaj.gupta.linux@gmail.com, pankaj.gupta@ionos.com,
    pasha.tatashin@soleen.com, paulus@samba.org, peterz@infradead.org,
    pmorel@linux.ibm.com, rafael.j.wysocki@intel.com,
    richard.weiyang@linux.alibaba.com, rjw@rjwysocki.net, rppt@kernel.org,
    slyfox@gentoo.org, stable@vger.kernel.org, tglx@linutronix.de,
    vbabka@suse.cz, vishal.l.verma@intel.com, vkuznets@redhat.com,
    wangkefeng.wang@huawei.com, will@kernel.org, ysato@users.sourceforge.jp
Subject: + mm-memory_hotplug-use-unsigned-long-for-pfn-in-zone_for_pfn_range.patch added to -mm tree
Message-ID: <20210714234438.0SGsYeFgx%akpm@linux-foundation.org>
User-Agent: s-nail v14.8.16
X-Mailing-List: stable@vger.kernel.org

The patch titled
     Subject: mm/memory_hotplug: use "unsigned long" for PFN in zone_for_pfn_range()
has been added to the -mm tree.
Its filename is
     mm-memory_hotplug-use-unsigned-long-for-pfn-in-zone_for_pfn_range.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-memory_hotplug-use-unsigned-long-for-pfn-in-zone_for_pfn_range.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-memory_hotplug-use-unsigned-long-for-pfn-in-zone_for_pfn_range.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: David Hildenbrand
Subject: mm/memory_hotplug: use "unsigned long" for PFN in zone_for_pfn_range()

Patch series "mm/memory_hotplug: preparatory patches for new online policy
and memory".

These are all cleanups and one fix previously sent as part of [1]:
[PATCH v1 00/12] mm/memory_hotplug: "auto-movable" online policy and
memory groups.

These patches make sense even without the other series, therefore I pulled
them out to make the other series easier to digest.

[1] https://lkml.kernel.org/r/20210607195430.48228-1-david@redhat.com


This patch (of 4):

Checkpatch complained on a follow-up patch that we are using "unsigned"
here, which defaults to "unsigned int" and checkpatch is correct.

Use "unsigned long" instead, just as we do in other places when handling
PFNs.  This can bite us once we have physical addresses in the range of
multiple TB.

Link: https://lkml.kernel.org/r/20210712124052.26491-2-david@redhat.com
Fixes: e5e689302633 ("mm, memory_hotplug: display allowed zones in the preferred ordering")
Signed-off-by: David Hildenbrand
Reviewed-by: Pankaj Gupta
Cc: David Hildenbrand
Cc: Vitaly Kuznetsov
Cc: "Michael S. Tsirkin"
Cc: Jason Wang
Cc: Pankaj Gupta
Cc: Wei Yang
Cc: Oscar Salvador
Cc: Michal Hocko
Cc: Dan Williams
Cc: Anshuman Khandual
Cc: Dave Hansen
Cc: Vlastimil Babka
Cc: Mike Rapoport
Cc: "Rafael J. Wysocki"
Cc: Len Brown
Cc: Pavel Tatashin
Cc: Heiko Carstens
Cc: Michael Ellerman
Cc: Catalin Marinas
Cc: virtualization@lists.linux-foundation.org
Cc: Andy Lutomirski
Cc: "Aneesh Kumar K.V"
Cc: Anton Blanchard
Cc: Ard Biesheuvel
Cc: Baoquan He
Cc: Benjamin Herrenschmidt
Cc: Borislav Petkov
Cc: Christian Borntraeger
Cc: Christophe Leroy
Cc: Dave Jiang
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: Jia He
Cc: Joe Perches
Cc: Kefeng Wang
Cc: Laurent Dufour
Cc: Michel Lespinasse
Cc: Nathan Lynch
Cc: Nicholas Piggin
Cc: Paul Mackerras
Cc: Peter Zijlstra
Cc: Pierre Morel
Wysocki" Cc: Rich Felker Cc: Scott Cheloha Cc: Sergei Trofimovich Cc: Thiago Jung Bauermann Cc: Thomas Gleixner Cc: Vasily Gorbik Cc: Vishal Verma Cc: Will Deacon Cc: Yoshinori Sato Cc: Signed-off-by: Andrew Morton --- include/linux/memory_hotplug.h | 4 ++-- mm/memory_hotplug.c | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) --- a/include/linux/memory_hotplug.h~mm-memory_hotplug-use-unsigned-long-for-pfn-in-zone_for_pfn_range +++ a/include/linux/memory_hotplug.h @@ -339,8 +339,8 @@ extern void sparse_remove_section(struct unsigned long map_offset, struct vmem_altmap *altmap); extern struct page *sparse_decode_mem_map(unsigned long coded_mem_map, unsigned long pnum); -extern struct zone *zone_for_pfn_range(int online_type, int nid, unsigned start_pfn, - unsigned long nr_pages); +extern struct zone *zone_for_pfn_range(int online_type, int nid, + unsigned long start_pfn, unsigned long nr_pages); extern int arch_create_linear_mapping(int nid, u64 start, u64 size, struct mhp_params *params); void arch_remove_linear_mapping(u64 start, u64 size); --- a/mm/memory_hotplug.c~mm-memory_hotplug-use-unsigned-long-for-pfn-in-zone_for_pfn_range +++ a/mm/memory_hotplug.c @@ -708,8 +708,8 @@ static inline struct zone *default_zone_ return movable_node_enabled ? movable_zone : kernel_zone; } -struct zone *zone_for_pfn_range(int online_type, int nid, unsigned start_pfn, - unsigned long nr_pages) +struct zone *zone_for_pfn_range(int online_type, int nid, + unsigned long start_pfn, unsigned long nr_pages) { if (online_type == MMOP_ONLINE_KERNEL) return default_kernel_zone_for_pfn(nid, start_pfn, nr_pages);