From patchwork Thu Oct 30 16:55:58 2014
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 39845
From: Will Deacon
To: linux-arm-kernel@lists.infradead.org
Cc: Will Deacon
Subject: [PATCH] ARM: mm: try to re-use old ASID assignments following a rollover
Date: Thu, 30 Oct 2014 16:55:58 +0000
Message-Id: <1414688158-12741-1-git-send-email-will.deacon@arm.com>

Rather than unconditionally allocating a fresh ASID to an mm from an
older generation, attempt to re-use the old assignment where possible.
This can bring performance benefits on systems where the ASID is used
to tag things other than the TLB (e.g. branch prediction resources).
Signed-off-by: Will Deacon
Reviewed-by: Catalin Marinas
---
 arch/arm/mm/context.c | 58 ++++++++++++++++++++++++++++++---------------------
 1 file changed, 34 insertions(+), 24 deletions(-)

diff --git a/arch/arm/mm/context.c b/arch/arm/mm/context.c
index 6eb97b3a7481..91892569710f 100644
--- a/arch/arm/mm/context.c
+++ b/arch/arm/mm/context.c
@@ -184,36 +184,46 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
 	u64 asid = atomic64_read(&mm->context.id);
 	u64 generation = atomic64_read(&asid_generation);
 
-	if (asid != 0 && is_reserved_asid(asid)) {
+	if (asid != 0) {
 		/*
-		 * Our current ASID was active during a rollover, we can
-		 * continue to use it and this was just a false alarm.
+		 * If our current ASID was active during a rollover, we
+		 * can continue to use it and this was just a false alarm.
 		 */
-		asid = generation | (asid & ~ASID_MASK);
-	} else {
+		if (is_reserved_asid(asid))
+			return generation | (asid & ~ASID_MASK);
+
 		/*
-		 * Allocate a free ASID. If we can't find one, take a
-		 * note of the currently active ASIDs and mark the TLBs
-		 * as requiring flushes. We always count from ASID #1,
-		 * as we reserve ASID #0 to switch via TTBR0 and to
-		 * avoid speculative page table walks from hitting in
-		 * any partial walk caches, which could be populated
-		 * from overlapping level-1 descriptors used to map both
-		 * the module area and the userspace stack.
+		 * We had a valid ASID in a previous life, so try to re-use
+		 * it if possible.
 		 */
-		asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, cur_idx);
-		if (asid == NUM_USER_ASIDS) {
-			generation = atomic64_add_return(ASID_FIRST_VERSION,
-							 &asid_generation);
-			flush_context(cpu);
-			asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
-		}
-		__set_bit(asid, asid_map);
-		cur_idx = asid;
-		asid |= generation;
-		cpumask_clear(mm_cpumask(mm));
+		asid &= ~ASID_MASK;
+		if (!__test_and_set_bit(asid, asid_map))
+			goto bump_gen;
 	}
 
+	/*
+	 * Allocate a free ASID. If we can't find one, take a note of the
+	 * currently active ASIDs and mark the TLBs as requiring flushes.
+	 * We always count from ASID #1, as we reserve ASID #0 to switch
+	 * via TTBR0 and to avoid speculative page table walks from hitting
+	 * in any partial walk caches, which could be populated from
+	 * overlapping level-1 descriptors used to map both the module
+	 * area and the userspace stack.
+	 */
+	asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, cur_idx);
+	if (asid == NUM_USER_ASIDS) {
+		generation = atomic64_add_return(ASID_FIRST_VERSION,
+						 &asid_generation);
+		flush_context(cpu);
+		asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
+	}
+
+	__set_bit(asid, asid_map);
+	cur_idx = asid;
+
+bump_gen:
+	asid |= generation;
+	cpumask_clear(mm_cpumask(mm));
 	return asid;
 }
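For readers unfamiliar with the allocator, here is a minimal userspace sketch of
the allocation flow the patch introduces. It is not kernel code: the names and
details (new_context_model, rollover, ASID_BITS = 8, a plain bool array instead
of the kernel's bitmap, cur_idx reset on rollover, no locking, no reserved-ASID
"false alarm" handling, no TLB flush) are simplified assumptions for
illustration only.

/* Hypothetical model of the reuse-after-rollover path; not the kernel code. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ASID_BITS          8
#define NUM_USER_ASIDS     (1u << ASID_BITS)
#define ASID_MASK          (~((uint64_t)NUM_USER_ASIDS - 1))
#define ASID_FIRST_VERSION ((uint64_t)NUM_USER_ASIDS)

static uint64_t asid_generation = ASID_FIRST_VERSION;
static bool asid_map[NUM_USER_ASIDS];   /* stand-in for the kernel bitmap */
static unsigned int cur_idx = 1;

/* Simulated rollover: forget all assignments and bump the generation. */
static void rollover(void)
{
	asid_generation += ASID_FIRST_VERSION;
	memset(asid_map, 0, sizeof(asid_map));
	asid_map[0] = true;     /* ASID #0 stays reserved */
	cur_idx = 1;            /* the kernel instead restarts its search at 1 */
}

/*
 * Simplified new_context(): if this mm held an ASID under an older
 * generation, try to grab the same slot again before falling back to
 * a fresh allocation.
 */
static uint64_t new_context_model(uint64_t old)
{
	uint64_t asid = old & ~ASID_MASK;
	unsigned int i;

	if (asid != 0 && !asid_map[asid]) {
		asid_map[asid] = true;          /* re-use the old assignment */
		return asid_generation | asid;
	}

	/* Fresh allocation, counting from ASID #1. */
	for (i = cur_idx; i < NUM_USER_ASIDS; i++) {
		if (!asid_map[i]) {
			asid_map[i] = true;
			cur_idx = i;
			return asid_generation | i;
		}
	}

	/* Out of ASIDs: roll over and retry (TLB flush omitted here). */
	rollover();
	return new_context_model(0);
}

int main(void)
{
	uint64_t a, b;

	asid_map[0] = true;             /* reserve ASID #0 */
	a = new_context_model(0);       /* first allocation */
	rollover();                     /* simulate a generation bump */
	b = new_context_model(a);       /* same slot, new generation */
	printf("before: %#llx  after rollover: %#llx\n",
	       (unsigned long long)a, (unsigned long long)b);
	return 0;
}

The point of the reuse branch is that, after a rollover, an mm whose old ASID
slot is still free keeps the same hardware ASID under the new generation, so
ASID-tagged state outside the TLB (e.g. branch predictor entries) can remain
useful, as the commit message notes.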