| Message ID | 20200921180508.61905-1-jpitti@cisco.com |
|---|---|
| State | Superseded |
| Series | [v5.4] mm: memcg: fix memcg reclaim soft lockup |
On Mon, Sep 21, 2020 at 11:05:08AM -0700, Julius Hemanth Pitti wrote:
> From: Xunlei Pang <xlpang@linux.alibaba.com>
>
> commit e3336cab2579012b1e72b5265adf98e2d6e244ad upstream
>
> We've met softlockup with "CONFIG_PREEMPT_NONE=y", when the target memcg
> doesn't have any reclaimable memory.
>
> It can be easily reproduced as below:
>
>   watchdog: BUG: soft lockup - CPU#0 stuck for 111s! [memcg_test:2204]
>   CPU: 0 PID: 2204 Comm: memcg_test Not tainted 5.9.0-rc2+ #12
>   Call Trace:
>    shrink_lruvec+0x49f/0x640
>    shrink_node+0x2a6/0x6f0
>    do_try_to_free_pages+0xe9/0x3e0
>    try_to_free_mem_cgroup_pages+0xef/0x1f0
>    try_charge+0x2c1/0x750
>    mem_cgroup_charge+0xd7/0x240
>    __add_to_page_cache_locked+0x2fd/0x370
>    add_to_page_cache_lru+0x4a/0xc0
>    pagecache_get_page+0x10b/0x2f0
>    filemap_fault+0x661/0xad0
>    ext4_filemap_fault+0x2c/0x40
>    __do_fault+0x4d/0xf9
>    handle_mm_fault+0x1080/0x1790
>
> It only happens on our 1-vcpu instances, because there's no chance for
> the oom reaper to run to reclaim the to-be-killed process.
>
> Add a cond_resched() at the upper shrink_node_memcgs() to solve this
> issue; this will mean that we will get a scheduling point for each memcg
> in the reclaimed hierarchy without any dependency on the reclaimable
> memory in that memcg, thus making it more predictable.
>
> [jpitti@cisco.com:
>  - backported to v5.4.y
>  - Upstream patch applies fix in shrink_node_memcgs(), which
>    is not present in v5.4.y. Applied to shrink_node()]

Thanks for this, now queued up here and for 4.19

greg k-h
```diff
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7fde5f904c8d..6db9176d8c63 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2775,6 +2775,14 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 			unsigned long reclaimed;
 			unsigned long scanned;
 
+			/*
+			 * This loop can become CPU-bound when target memcgs
+			 * aren't eligible for reclaim - either because they
+			 * don't have any reclaimable pages, or because their
+			 * memory is explicitly protected. Avoid soft lockups.
+			 */
+			cond_resched();
+
 			switch (mem_cgroup_protected(root, memcg)) {
 			case MEMCG_PROT_MIN:
 				/*
```
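For readers less familiar with the reclaim path, the sketch below illustrates the loop shape the hunk above targets. It is a simplified illustration, not the actual v5.4 shrink_node() body: the LRU scanning, the protection checks, and the reclaim cookie passed to mem_cgroup_iter() are elided, and the helper name walk_memcg_hierarchy() is made up for the example.

```c
#include <linux/memcontrol.h>
#include <linux/sched.h>

/*
 * Simplified illustration (not the real v5.4 code): each iteration
 * visits one memcg in the hierarchy. With CONFIG_PREEMPT_NONE nothing
 * preempts this loop, so if no memcg has reclaimable pages the walk
 * spins without ever sleeping and the soft-lockup watchdog eventually
 * fires. The cond_resched() gives the scheduler one voluntary
 * preemption point per memcg, independent of whether that memcg
 * contributed any reclaim progress.
 */
static void walk_memcg_hierarchy(struct mem_cgroup *root)
{
	struct mem_cgroup *memcg;

	memcg = mem_cgroup_iter(root, NULL, NULL);
	do {
		cond_resched();	/* bounded stall even with nothing to reclaim */

		/* ... per-memcg protection checks and LRU scanning ... */

	} while ((memcg = mem_cgroup_iter(root, memcg, NULL)));
}
```

The design point, as the commit message says, is that the scheduling point is taken unconditionally at the top of each iteration rather than relying on the reclaim work itself to sleep, which is exactly what fails when a memcg has nothing reclaimable.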