From patchwork Mon Aug 24 08:29:16 2020
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 264979
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Linus Torvalds, Xu Yu, Johannes Weiner, Catalin Marinas, Will Deacon, Yang Shi
Subject: [PATCH 5.7 022/124] mm/memory.c: skip spurious TLB flush for retried page fault
Date: Mon, 24 Aug 2020 10:29:16 +0200
Message-Id: <20200824082410.510847956@linuxfoundation.org>
In-Reply-To: <20200824082409.368269240@linuxfoundation.org>
References: <20200824082409.368269240@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Yang Shi

commit b7333b58f358f38d90d78e00c1ee5dec82df10ad upstream.

Recently we found a regression when running the will_it_scale/page_fault3
test on ARM64: over a 70% drop for the multi-process cases and over a 20%
drop for the multi-thread cases. It turns out the regression is caused by
commit 89b15332af7c ("mm: drop mmap_sem before calling
balance_dirty_pages() in write fault").

The test mmaps a memory-sized file and then writes to the mapping; this
makes all memory dirty and triggers dirty-page throttling, at which point
that upstream commit releases mmap_sem and retries the page fault. The
retried page fault sees correct PTEs already installed and just falls
through to the spurious TLB flush. The regression is caused by these
excessive spurious TLB flushes. It is fine on x86 since x86's spurious
TLB flush is a no-op.

We can simply skip the spurious TLB flush to mitigate the regression.
Suggested-by: Linus Torvalds
Reported-by: Xu Yu
Debugged-by: Xu Yu
Tested-by: Xu Yu
Cc: Johannes Weiner
Cc: Catalin Marinas
Cc: Will Deacon
Cc:
Signed-off-by: Yang Shi
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 mm/memory.c | 3 +++
 1 file changed, 3 insertions(+)

--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4237,6 +4237,9 @@ static vm_fault_t handle_pte_fault(struc
 				vmf->flags & FAULT_FLAG_WRITE)) {
 		update_mmu_cache(vmf->vma, vmf->address, vmf->pte);
 	} else {
+		/* Skip spurious TLB flush for retried page fault */
+		if (vmf->flags & FAULT_FLAG_TRIED)
+			goto unlock;
 		/*
 		 * This is needed only for protection faults but the arch code
 		 * is not yet telling us if this is a protection fault or not.