From patchwork Fri Mar 4 23:53:49 2022
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 548504
Date: Fri, 04 Mar 2022 15:53:49 -0800
To: mm-commits@vger.kernel.org, stable@vger.kernel.org,
    roman.gushchin@linux.dev, mkoutny@suse.com, mhocko@suse.com,
    ivan@cloudflare.com, hannes@cmpxchg.org, fhofmann@cloudflare.com,
    dqminh@cloudflare.com, shakeelb@google.com,
    akpm@linux-foundation.org
From: Andrew Morton
Subject: + memcg-sync-flush-only-if-periodic-flush-is-delayed.patch added to -mm tree
Message-Id: <20220304235350.5043FC340E9@smtp.kernel.org>
X-Mailing-List: stable@vger.kernel.org

The patch titled
     Subject: memcg: sync flush only if periodic flush is delayed
has been added to the -mm tree.  Its filename is
     memcg-sync-flush-only-if-periodic-flush-is-delayed.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/memcg-sync-flush-only-if-periodic-flush-is-delayed.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/memcg-sync-flush-only-if-periodic-flush-is-delayed.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Shakeel Butt
Subject: memcg: sync flush only if periodic flush is delayed

Daniel Dao has reported [1] a regression on workloads that may trigger
a lot of refaults (anon and file).  The underlying issue is that
flushing rstat is expensive.
Although rstat flushes are batched with (nr_cpus * MEMCG_BATCH) stat
updates, it seems there are workloads which genuinely do stat updates
larger than the batch value within a short amount of time.  Since the
rstat flush can happen in performance-critical codepaths, such as page
faults, such workloads can suffer greatly.

This patch fixes the regression by making the rstat flushing
conditional in the performance-critical codepaths.  More specifically,
the kernel relies on the async periodic rstat flusher to flush the
stats, and only if the periodic flusher is delayed by more than twice
its normal time window does the kernel allow rstat flushing from the
performance-critical codepaths.

Now the question: what are the side-effects of this change?  The worst
that can happen is that the refault codepath will see 4-second-old
lruvec stats and may cause false (or missed) activations of the
refaulted page, which may under- or overestimate the workingset size.
That is not very concerning, though, as the kernel can already miss or
do false activations.

There are two more codepaths whose flushing behavior is not changed by
this patch, and we may need to come back to them in the future.  One is
the writeback stats used by dirty throttling, and the second is the
deactivation heuristic in reclaim.  For now we are keeping an eye on
them, and if regressions are reported against these codepaths, we will
reevaluate then.

Link: https://lore.kernel.org/all/CA+wXwBSyO87ZX5PVwdHm-=dBjZYECGmfnydUicUyrQqndgX2MQ@mail.gmail.com [1]
Link: https://lkml.kernel.org/r/20220304184040.1304781-1-shakeelb@google.com
Fixes: 1f828223b799 ("memcg: flush lruvec stats in the refault")
Signed-off-by: Shakeel Butt
Reported-by: Daniel Dao
Tested-by: Ivan Babrou
Cc: Michal Hocko
Cc: Roman Gushchin
Cc: Johannes Weiner
Cc: Michal Koutný
Cc: Frank Hofmann
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton
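
Editor's note: below is a minimal userspace C sketch of the heuristic
described above, not the kernel code (that follows in the diff).  All
names here -- FLUSH_INTERVAL_MS, now_ms(), stats_flush() and
stats_flush_if_delayed() -- are illustrative stand-ins for FLUSH_TIME,
jiffies_64, __mem_cgroup_flush_stats() and
mem_cgroup_flush_stats_delayed().  The shape of the check is the point:
every flush arms a deadline two windows in the future, and the hot path
flushes synchronously only once that deadline has passed.

	#include <stdint.h>
	#include <stdio.h>
	#include <time.h>

	#define FLUSH_INTERVAL_MS 2000	/* periodic flush window (~2s) */

	static uint64_t flush_next_ms;	/* deadline armed by each flush */

	static uint64_t now_ms(void)
	{
		struct timespec ts;

		clock_gettime(CLOCK_MONOTONIC, &ts);
		return (uint64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
	}

	static void stats_flush(void)
	{
		/* each flush pushes the "delayed" deadline 2 windows out */
		flush_next_ms = now_ms() + 2 * FLUSH_INTERVAL_MS;
		/* ... the expensive aggregation would happen here ... */
	}

	/* hot path: flush only if the periodic flusher looks overdue */
	static void stats_flush_if_delayed(void)
	{
		if (now_ms() > flush_next_ms)
			stats_flush();
	}

	int main(void)
	{
		stats_flush();		  /* stands in for the periodic worker */
		stats_flush_if_delayed(); /* no-op: the flush is still fresh */
		printf("next deadline: %llu ms\n",
		       (unsigned long long)flush_next_ms);
		return 0;
	}

With this shape, the worst-case staleness seen by the hot path is
bounded by two windows (4 seconds for a 2-second window), which is
where the "4-second-old lruvec stats" figure above comes from.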

---

 include/linux/memcontrol.h |    5 +++++
 mm/memcontrol.c            |   12 +++++++++++-
 mm/workingset.c            |    2 +-
 3 files changed, 17 insertions(+), 2 deletions(-)

--- a/include/linux/memcontrol.h~memcg-sync-flush-only-if-periodic-flush-is-delayed
+++ a/include/linux/memcontrol.h
@@ -999,6 +999,7 @@ static inline unsigned long lruvec_page_
 }
 
 void mem_cgroup_flush_stats(void);
+void mem_cgroup_flush_stats_delayed(void);
 
 void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 			      int val);
@@ -1442,6 +1443,10 @@ static inline void mem_cgroup_flush_stat
 {
 }
 
+static inline void mem_cgroup_flush_stats_delayed(void)
+{
+}
+
 static inline void __mod_memcg_lruvec_state(struct lruvec *lruvec,
 					    enum node_stat_item idx, int val)
 {
--- a/mm/memcontrol.c~memcg-sync-flush-only-if-periodic-flush-is-delayed
+++ a/mm/memcontrol.c
@@ -628,6 +628,9 @@ static DECLARE_DEFERRABLE_WORK(stats_flu
 static DEFINE_SPINLOCK(stats_flush_lock);
 static DEFINE_PER_CPU(unsigned int, stats_updates);
 static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
+static u64 flush_next_time;
+
+#define FLUSH_TIME (2UL*HZ)
 
 static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
 {
@@ -649,6 +652,7 @@ static void __mem_cgroup_flush_stats(voi
 	if (!spin_trylock_irqsave(&stats_flush_lock, flag))
 		return;
 
+	flush_next_time = jiffies_64 + 2*FLUSH_TIME;
 	cgroup_rstat_flush_irqsafe(root_mem_cgroup->css.cgroup);
 	atomic_set(&stats_flush_threshold, 0);
 	spin_unlock_irqrestore(&stats_flush_lock, flag);
@@ -660,10 +664,16 @@ void mem_cgroup_flush_stats(void)
 		__mem_cgroup_flush_stats();
 }
 
+void mem_cgroup_flush_stats_delayed(void)
+{
+	if (time_after64(jiffies_64, flush_next_time))
+		mem_cgroup_flush_stats();
+}
+
 static void flush_memcg_stats_dwork(struct work_struct *w)
 {
 	__mem_cgroup_flush_stats();
-	queue_delayed_work(system_unbound_wq, &stats_flush_dwork, 2UL*HZ);
+	queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
 }
 
 /**
--- a/mm/workingset.c~memcg-sync-flush-only-if-periodic-flush-is-delayed
+++ a/mm/workingset.c
@@ -354,7 +354,7 @@ void workingset_refault(struct folio *fo
 
 	mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);
 
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats_delayed();
 	/*
	 * Compare the distance to the existing workingset size. We
	 * don't activate pages that couldn't stay resident even if
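
Editor's note: the new check uses time_after64(jiffies_64,
flush_next_time) rather than a plain ">".  The kernel macro (from
include/linux/jiffies.h) compares by signed subtraction, which stays
correct even if the counter wraps around; a 64-bit jiffies counter will
not wrap in practice, but the helper is the idiomatic way to compare
kernel timestamps.  A small userspace model of the idea, where
my_time_after64() is a local stand-in and not the kernel macro:

	#include <stdint.h>
	#include <stdio.h>

	/* models time_after64() semantics: true if "a" is after "b" */
	#define my_time_after64(a, b)	((int64_t)((b) - (a)) < 0)

	int main(void)
	{
		uint64_t now      = UINT64_MAX - 5;	/* about to wrap */
		uint64_t deadline = now + 10;		/* wraps to 4 */

		/* plain ">" wrongly says the deadline already passed */
		printf("plain    : %d\n", now > deadline);		   /* 1 */
		/* the signed-difference compare gets it right */
		printf("wrap-safe: %d\n", my_time_after64(now, deadline)); /* 0 */
		return 0;
	}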