diff mbox series

[v2,1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps

Message ID 20200619125923.22602-2-david@redhat.com
State Accepted
Commit 4a93025cbe4a0b19d1a25a2d763a3d2018bad0d9
Series [v2,1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps

Commit Message

David Hildenbrand June 19, 2020, 12:59 p.m. UTC
Especially with memory hotplug, we can have offline sections (with a
garbage memmap) and overlapping zones. We have to make sure to only
touch initialized memmaps (online sections managed by the buddy) and to
check that the zone matches, so we do not move pages between zones.

To test if this can actually happen, I added a simple
	BUG_ON(page_zone(page_i) != page_zone(page_j));
right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM and
onlining the first memory block "online_movable" and the second memory
block "online_kernel", it will trigger the BUG, as both zones (NORMAL
and MOVABLE) overlap.

This might result in all kinds of weird situations (e.g., double
allocations, list corruptions, unmovable allocations ending up in the
movable zone).

Fixes: e900a918b098 ("mm: shuffle initial free memory to improve memory-side-cache utilization")
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: stable@vger.kernel.org # v5.2+
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Wei Yang <richard.weiyang@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/shuffle.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

Comments

Andrew Morton July 24, 2020, 3:08 a.m. UTC | #1
On Tue, 23 Jun 2020 17:30:18 +0800 Wei Yang <richard.weiyang@linux.alibaba.com> wrote:

> On Tue, Jun 23, 2020 at 09:55:43AM +0200, David Hildenbrand wrote:
> >On 23.06.20 09:39, David Hildenbrand wrote:
> >>> Hmm.. I thought this is the behavior for early section, while it looks current
> >>> code doesn't work like this:
> >>>
> >>>        if (section_is_early && memmap)
> >>>                free_map_bootmem(memmap);
> >>>        else
> >>>                depopulate_section_memmap(pfn, nr_pages, altmap);
> >>>
> >>> section_is_early is always "true" for early section, while memmap is not-NULL
> >>> only when sub-section map is empty.
> >>>
> >>> If my understanding is correct, when we remove a sub-section in early section,
> >>> the code would call depopulate_section_memmap(), which in turn free related
> >>> memmap. By removing the memmap, the return value from pfn_to_online_page() is
> >>> not a valid one.
> >>
> >> I think you're right, and pfn_valid() would also return true, as it is
> >> an early section. This looks broken.
> >>
> >>> Maybe we want to write the code like this:
> >>>
> >>>        if (section_is_early)
> >>>                if (memmap)
> >>>                        free_map_bootmem(memmap);
> >>>        else
> >>>                depopulate_section_memmap(pfn, nr_pages, altmap);
> >>>
> >> I guess that should be the way to go
> >>
> >> @Dan, I think what Wei proposes here is correct, right? Or how does it
> >> work in the VMEMMAP case with early sections?
> >
> >Especially, if you would re-hot-add, section_activate() would assume
> >there is a memmap, it must not be removed.
>
> You are right here. I didn't notice it.
>
> >@Wei, can you send a patch?
>
> Sure, let me prepare for it.


Still awaiting this, and the v3 patch was identical to this v2 patch.

It's tagged for -stable, so there's some urgency.  Should we just go
ahead with the decently-tested v2?
Wei Yang July 24, 2020, 5:45 a.m. UTC | #2
On Thu, Jul 23, 2020 at 08:08:46PM -0700, Andrew Morton wrote:
>On Tue, 23 Jun 2020 17:30:18 +0800 Wei Yang <richard.weiyang@linux.alibaba.com> wrote:
>
>> On Tue, Jun 23, 2020 at 09:55:43AM +0200, David Hildenbrand wrote:
>
>[...]
>
>> >@Wei, can you send a patch?
>>
>> Sure, let me prepare for it.
>
>Still awaiting this, and the v3 patch was identical to this v2 patch.
>
>It's tagged for -stable, so there's some urgency.  Should we just go
>ahead with the decently-tested v2?


This message is addressed to me, right?

I thought the fix patch was already merged; the patch link may be
https://lkml.org/lkml/2020/6/23/380.

If I missed something, just let me know.



-- 
Wei Yang
Help you, Help me
David Hildenbrand July 24, 2020, 8:20 a.m. UTC | #3
On 24.07.20 05:08, Andrew Morton wrote:
> On Tue, 23 Jun 2020 17:30:18 +0800 Wei Yang <richard.weiyang@linux.alibaba.com> wrote:
>
>> On Tue, Jun 23, 2020 at 09:55:43AM +0200, David Hildenbrand wrote:
>
> [...]
>
>> Sure, let me prepare for it.
>
> Still awaiting this, and the v3 patch was identical to this v2 patch.
>
> It's tagged for -stable, so there's some urgency.  Should we just go
> ahead with the decently-tested v2?


This patch (mm/shuffle: don't move pages between zones and don't read
garbage memmaps) is good enough for upstream. While the issue reported
by Wei was valid (and needs to be fixed), the user in this patch is just
one of many affected users. Nothing special.

-- 
Thanks,

David / dhildenb

Patch

diff --git a/mm/shuffle.c b/mm/shuffle.c
index 44406d9977c77..dd13ab851b3ee 100644
--- a/mm/shuffle.c
+++ b/mm/shuffle.c
@@ -58,25 +58,25 @@  module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
  * For two pages to be swapped in the shuffle, they must be free (on a
  * 'free_area' lru), have the same order, and have the same migratetype.
  */
-static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
+static struct page * __meminit shuffle_valid_page(struct zone *zone,
+						  unsigned long pfn, int order)
 {
-	struct page *page;
+	struct page *page = pfn_to_online_page(pfn);
 
 	/*
 	 * Given we're dealing with randomly selected pfns in a zone we
 	 * need to ask questions like...
 	 */
 
-	/* ...is the pfn even in the memmap? */
-	if (!pfn_valid_within(pfn))
+	/* ... is the page managed by the buddy? */
+	if (!page)
 		return NULL;
 
-	/* ...is the pfn in a present section or a hole? */
-	if (!pfn_in_present_section(pfn))
+	/* ... is the page assigned to the same zone? */
+	if (page_zone(page) != zone)
 		return NULL;
 
 	/* ...is the page free and currently on a free_area list? */
-	page = pfn_to_page(pfn);
 	if (!PageBuddy(page))
 		return NULL;
 
@@ -123,7 +123,7 @@  void __meminit __shuffle_zone(struct zone *z)
 		 * page_j randomly selected in the span @zone_start_pfn to
 		 * @spanned_pages.
 		 */
-		page_i = shuffle_valid_page(i, order);
+		page_i = shuffle_valid_page(z, i, order);
 		if (!page_i)
 			continue;
 
@@ -137,7 +137,7 @@  void __meminit __shuffle_zone(struct zone *z)
 			j = z->zone_start_pfn +
 				ALIGN_DOWN(get_random_long() % z->spanned_pages,
 						order_pages);
-			page_j = shuffle_valid_page(j, order);
+			page_j = shuffle_valid_page(z, j, order);
 			if (page_j && page_j != page_i)
 				break;
 		}