
[v2,0/3] Userspace controls soft-offline pages

Message ID 20240611215544.2105970-1-jiaqiyan@google.com

Message

Jiaqi Yan June 11, 2024, 9:55 p.m. UTC
Correctable memory errors are very common on servers with large
amounts of memory, and are corrected by ECC, but with two
pain points to users:
1. Correction usually happens on the fly and adds latency overhead.
2. A not-fully-proven theory states that excessive correctable memory
   errors can develop into uncorrectable memory errors.

Soft offline is the kernel's additional solution for memory pages
having (excessive) corrected memory errors. The impacted page is
migrated to a healthy page if it is in use; the original page is then
discarded from any future use.

The actual policy on whether (and when) to soft offline should be
maintained by userspace, especially in the case of a 1G HugeTLB page.
Soft-offline dissolves the HugeTLB page, whether in-use or free, into
chunks of 4K pages, reducing HugeTLB pool capacity by 1 hugepage.
If userspace has not acknowledged such behavior, it may be surprised
when a later mmap of hugepages fails with MAP_FAILED due to the lack
of hugepages. In the case of a transparent hugepage, it will be split
into 4K pages as well; userspace then stops enjoying the transparent
performance.
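The HugeTLB capacity loss described above can be observed from the
standard hugetlbfs sysfs counters; a minimal sketch (the 1048576kB
directory is only present on platforms that support 1G hugepages, and
the capacity drop assumes a kernel performing the soft offline):

```shell
# Standard hugetlbfs sysfs files; a soft offline of a 1G hugepage
# reduces nr_hugepages (pool capacity) by 1 without compensation.
POOL=/sys/kernel/mm/hugepages/hugepages-1048576kB
if [ -d "$POOL" ]; then
    cat "$POOL/nr_hugepages"    # total pool capacity
    cat "$POOL/free_hugepages"  # hugepages still available to mmap
fi
```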

In addition, discarding an entire 1G HugeTLB page only because of
corrected memory errors is very costly, and the kernel had better not
do it under the hood. But today there are at least 2 such cases:
1. The GHES driver sees both GHES_SEV_CORRECTED and
   CPER_SEC_ERROR_THRESHOLD_EXCEEDED after parsing CPER.
2. The RAS Correctable Errors Collector counts correctable errors per
   PFN and acts when the counter for a PFN reaches a threshold.
In both cases, userspace has no control over the soft offline performed
by the kernel's memory failure recovery.

This patch series gives userspace control of soft-offlining any page:
the kernel only soft offlines a raw page / transparent hugepage / HugeTLB
hugepage if userspace has agreed to it. The interface to userspace is a
new sysctl called enable_soft_offline under /proc/sys/vm. By default,
enable_soft_offline is 1 to preserve the kernel's existing behavior.
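The new knob can then be toggled like any other vm sysctl; a hedged
sketch, assuming a kernel with this series applied and root privileges
(the path and default come from the cover letter above):

```shell
# Opt out of kernel-initiated soft offline, if this kernel exposes
# the proposed sysctl.
SYSCTL=/proc/sys/vm/enable_soft_offline
if [ -f "$SYSCTL" ]; then
    cat "$SYSCTL"        # default is 1: kernel may soft offline pages
    echo 0 > "$SYSCTL"   # 0: soft-offline requests are ignored
fi
# Equivalent via sysctl(8):
#   sysctl -w vm.enable_soft_offline=0
```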

Changelog

v1 => v2:
* incorporate feedback from both Miaohe Lin <linmiaohe@huawei.com> and
  Jane Chu <jane.chu@oracle.com>.
* make the switch to control all pages, instead of HugeTLB specific.
* change the API from
  /sys/kernel/mm/hugepages/hugepages-${size}kB/softoffline_corrected_errors
  to /proc/sys/vm/enable_soft_offline.
* minor update to test code.
* update documentation of the user control API.
* v2 is based on commit 83a7eefedc9b ("Linux 6.10-rc3").

Jiaqi Yan (3):
  mm/memory-failure: userspace controls soft-offlining pages
  selftest/mm: test enable_soft_offline behaviors
  docs: mm: add enable_soft_offline sysctl

 Documentation/admin-guide/sysctl/vm.rst       |  15 +
 mm/memory-failure.c                           |  16 ++
 tools/testing/selftests/mm/.gitignore         |   1 +
 tools/testing/selftests/mm/Makefile           |   1 +
 .../selftests/mm/hugetlb-soft-offline.c       | 258 ++++++++++++++++++
 tools/testing/selftests/mm/run_vmtests.sh     |   4 +
 6 files changed, 295 insertions(+)
 create mode 100644 tools/testing/selftests/mm/hugetlb-soft-offline.c

Comments

David Rientjes June 12, 2024, 12:25 a.m. UTC | #1
On Tue, 11 Jun 2024, Jiaqi Yan wrote:

> @@ -267,6 +268,20 @@ used::
>  These are informational only.  They do not mean that anything is wrong
>  with your system.  To disable them, echo 4 (bit 2) into drop_caches.
>  
> +enable_soft_offline
> +===================
> +Control whether to soft offline memory pages that have (excessive) correctable
> +memory errors.  It is your call to choose between reliability (stay away from
> +fragile physical memory) vs performance (brought by HugeTLB or transparent
> +hugepages).
> +

Could you expand upon the relevance of HugeTLB or THP in this 
documentation?  I understand the need in some cases to soft offline memory 
after a number of correctable memory errors, but it's not clear how the 
performance implications plays into this.  The paragraph below goes into a 
difference in the splitting behavior, are hugepage users the only ones 
that should be concerned with this?

> +When setting to 1, kernel attempts to soft offline the page when it thinks
> +needed.  For in-use page, page content will be migrated to a new page.  If
> +the oringinal hugepage is a HugeTLB hugepage, regardless of in-use or free,

s/oringinal/original/

> +it will be dissolved into raw pages, and the capacity of the HugeTLB pool
> +will reduce by 1.  If the original hugepage is a transparent hugepage, it
> +will be split into raw pages.  When setting to 0, kernel won't attempt to
> +soft offline the page.  Its default value is 1.
>  

This behavior is the same for all architectures?
Jiaqi Yan June 14, 2024, 11:15 p.m. UTC | #2
Thanks for your questions, David!

On Tue, Jun 11, 2024 at 5:25 PM David Rientjes <rientjes@google.com> wrote:
>
> On Tue, 11 Jun 2024, Jiaqi Yan wrote:
>
> > @@ -267,6 +268,20 @@ used::
> >  These are informational only.  They do not mean that anything is wrong
> >  with your system.  To disable them, echo 4 (bit 2) into drop_caches.
> >
> > +enable_soft_offline
> > +===================
> > +Control whether to soft offline memory pages that have (excessive) correctable
> > +memory errors.  It is your call to choose between reliability (stay away from
> > +fragile physical memory) vs performance (brought by HugeTLB or transparent
> > +hugepages).
> > +
>
> Could you expand upon the relevance of HugeTLB or THP in this
> documentation?  I understand the need in some cases to soft offline memory
> after a number of correctable memory errors, but it's not clear how the
> performance implications plays into this.  The paragraph below goes into a

To be accurate, I should say soft offlining a transparent hugepage
impacts performance, and soft offlining a hugetlb hugepage impacts
capacity. It may be clearer to first explain soft-offline's behaviors
and implications, so that the user knows the cost of soft-offline, and
then talk about the behavior of enable_soft_offline:

  Correctable memory errors are very common on servers. Soft-offline is kernel's
  handling for memory pages having (excessive) corrected memory errors.

  For different types of page, soft-offline has different behaviors / costs.
  - For a raw error page, soft-offline migrates the in-use page's content to
    a new raw page.
  - For a page that is part of a transparent hugepage, soft-offline splits the
    transparent hugepage into raw pages, then migrates only the raw error page.
    As a result, user is transparently backed by 1 less hugepage, impacting
    memory access performance.
  - For a page that is part of a HugeTLB hugepage, soft-offline first migrates
    the entire HugeTLB hugepage, during which a free hugepage will be consumed
    as migration target. Then the original hugepage is dissolved into raw
    pages without compensation, reducing the capacity of the HugeTLB pool by 1.

  It is user's call to choose between reliability (staying away from fragile
  physical memory) vs performance / capacity implications in transparent and
  HugeTLB cases.

> difference in the splitting behavior, are hugepage users the only ones
> that should be concerned with this?

If the cost of migrating a raw page is negligible, then yes, only
hugepage users should be concerned and think about whether they should
disable soft offline.

>
> > +When setting to 1, kernel attempts to soft offline the page when it thinks
> > +needed.  For in-use page, page content will be migrated to a new page.  If
> > +the oringinal hugepage is a HugeTLB hugepage, regardless of in-use or free,
>
> s/oringinal/original/

To fix in v3.

>
> > +it will be dissolved into raw pages, and the capacity of the HugeTLB pool
> > +will reduce by 1.  If the original hugepage is a transparent hugepage, it
> > +will be split into raw pages.  When setting to 0, kernel won't attempt to
> > +soft offline the page.  Its default value is 1.
> >
>
> This behavior is the same for all architectures?
>

Yes, enable_soft_offline has the same behavior on all architectures,
and its default is 1.

It may be worth mentioning that setting enable_soft_offline to 0 means:
- If the RAS Correctable Errors Collector is running, its requests to
soft offline pages will be ignored.
- On ARM, requests to soft offline pages from the GHES driver will be
ignored.
- On PARISC, requests to soft offline pages from the Page Deallocation
Table will be ignored.

I can add these clarifications in v3 if they are valuable.