From patchwork Tue Oct 28 11:44:22 2014
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 39655
From: Will Deacon
To: torvalds@linux-foundation.org,
	peterz@infradead.org
Cc: linux-kernel@vger.kernel.org, linux@arm.linux.org.uk, benh@kernel.crashing.org, Will Deacon
Subject: [RFC PATCH 2/2] zap_pte_range: fix partial TLB flushing in response to a dirty pte
Date: Tue, 28 Oct 2014 11:44:22 +0000
Message-Id: <1414496662-25202-3-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.1
In-Reply-To: <1414496662-25202-1-git-send-email-will.deacon@arm.com>
References: <1414496662-25202-1-git-send-email-will.deacon@arm.com>

When we encounter a dirty page during unmap, we force a TLB invalidation to avoid a race between pte_mkclean and stale, dirty TLB entries in the CPU. This uses the same force_flush logic as the batch-failure code, but since we don't break out of the loop on finding a dirty pte, tlb->end can be < addr, because we only batch present ptes. This can result in a negative range being passed to subsequent TLB invalidation calls, potentially leading to massive over-invalidation of the TLB (observed in practice when running Firefox on arm64).

This patch fixes the issue by restricting the use of addr in the TLB range calculations. The first range then ends up covering tlb->start to min(tlb->end, addr), which corresponds to the currently batched range. The second range covers anything remaining, which may still lead to a (much reduced) over-invalidation of the TLB.
Signed-off-by: Will Deacon
---
 mm/memory.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 3e503831e042..ea41508d41f3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1194,11 +1194,10 @@ again:
			 * then update the range to be the remaining
			 * TLB range.
			 */
-			old_end = tlb->end;
-			tlb->end = addr;
+			tlb->end = old_end = min(tlb->end, addr);
			tlb_flush_mmu_tlbonly(tlb);
-			tlb->start = addr;
-			tlb->end = old_end;
+			tlb->start = old_end;
+			tlb->end = end;
		}
		pte_unmap_unlock(start_pte, ptl);