From patchwork Tue Apr 5 07:17:09 2022
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 557599
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Rik van Riel,
 Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Mel Gorman,
 Johannes Weiner, Andrew Morton, Linus Torvalds
Subject: [PATCH 5.16 0116/1017] mm,hwpoison: unmap poisoned page before invalidation
Date: Tue, 5 Apr 2022 09:17:09 +0200
Message-Id: <20220405070357.640814747@linuxfoundation.org>
In-Reply-To: <20220405070354.155796697@linuxfoundation.org>
References: <20220405070354.155796697@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Rik van Riel

commit 3149c79f3cb0e2e3bafb7cfadacec090cbd250d3 upstream.

In some cases it appears the invalidation of a hwpoisoned page fails
because the page is still mapped in another process.  This can cause a
program to be restarted over and over again, dying each time it page
faults on the page that was never invalidated.  Avoid that problem by
unmapping the hwpoisoned page when we find it.

Another issue is that we sometimes end up oopsing in finish_fault when
the code tries to do something with the now-NULL vmf->page.  I did not
hit this error when submitting the previous patch because alloc_set_pte
has several opportunities to bail out before accessing vmf->page, and
that apparently happened on those systems, and most of the time on
other systems, too.

However, across several million systems that error does occur a
handful of times a day.  It can be avoided by returning
VM_FAULT_NOPAGE, which causes do_read_fault to return before calling
finish_fault.
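Why VM_FAULT_NOPAGE works here: do_read_fault tests the return value of
__do_fault against an early-exit mask before it ever touches vmf->page
again, so any bit in that mask keeps finish_fault from running.  A
simplified sketch of that control flow, condensed from the 5.16
mm/memory.c read-fault path (the fault-around shortcut and final error
handling are trimmed for illustration):

	static vm_fault_t do_read_fault(struct vm_fault *vmf)
	{
		vm_fault_t ret;

		/* May clear vmf->page on hwpoison after this patch. */
		ret = __do_fault(vmf);
		if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE |
				    VM_FAULT_RETRY)))
			return ret;	/* finish_fault() never reached */

		/* Dereferences vmf->page; oopses if it is NULL. */
		ret |= finish_fault(vmf);
		unlock_page(vmf->page);
		return ret;
	}

Returning 0 from the hwpoison branch, as the previous fix did, sets no
bit in that mask and so falls through to finish_fault with
vmf->page == NULL.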
Link: https://lkml.kernel.org/r/20220325161428.5068d97e@imladris.surriel.com
Fixes: e53ac7374e64 ("mm: invalidate hwpoison page cache page in fault path")
Signed-off-by: Rik van Riel
Reviewed-by: Miaohe Lin
Tested-by: Naoya Horiguchi
Reviewed-by: Oscar Salvador
Cc: Mel Gorman
Cc: Johannes Weiner
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 mm/memory.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3888,14 +3888,18 @@ static vm_fault_t __do_fault(struct vm_f
 		return ret;
 
 	if (unlikely(PageHWPoison(vmf->page))) {
+		struct page *page = vmf->page;
 		vm_fault_t poisonret = VM_FAULT_HWPOISON;
 		if (ret & VM_FAULT_LOCKED) {
+			if (page_mapped(page))
+				unmap_mapping_pages(page_mapping(page),
+						    page->index, 1, false);
 			/* Retry if a clean page was removed from the cache. */
-			if (invalidate_inode_page(vmf->page))
-				poisonret = 0;
-			unlock_page(vmf->page);
+			if (invalidate_inode_page(page))
+				poisonret = VM_FAULT_NOPAGE;
+			unlock_page(page);
 		}
-		put_page(vmf->page);
+		put_page(page);
 		vmf->page = NULL;
 		return poisonret;
 	}
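A note on the helper used above: unmap_mapping_pages tears down the
page table entries mapping a range of a file's page cache in every
process, which is what lets the subsequent invalidate_inode_page call
succeed (invalidation bails out on pages that are still mapped).  Its
declaration, as found in include/linux/mm.h in this kernel series:

	void unmap_mapping_pages(struct address_space *mapping,
				 pgoff_t start, pgoff_t nr, bool even_cows);

The call in the hunk above unmaps exactly one page (nr == 1) at the
page's own index, and passes even_cows == false so that private
copy-on-write copies of the page are left alone.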