From patchwork Fri Nov 17 18:21:50 2017
X-Patchwork-Submitter: Will Deacon <will.deacon@arm.com>
X-Patchwork-Id: 119206
From: Will Deacon <will.deacon@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, catalin.marinas@arm.com, mark.rutland@arm.com,
    ard.biesheuvel@linaro.org, sboyd@codeaurora.org, dave.hansen@linux.intel.com,
    keescook@chromium.org, Will Deacon <will.deacon@arm.com>
Subject: [PATCH 07/18] arm64: mm: Allocate ASIDs in pairs
Date: Fri, 17 Nov 2017 18:21:50 +0000
Message-Id: <1510942921-12564-8-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1510942921-12564-1-git-send-email-will.deacon@arm.com>
References: <1510942921-12564-1-git-send-email-will.deacon@arm.com>

In preparation for separate kernel/user ASIDs, allocate them in pairs
for each mm_struct. The bottom bit distinguishes the two: if it is set,
then the ASID will map only userspace.
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/mmu.h |  1 +
 arch/arm64/mm/context.c      | 25 +++++++++++++++++--------
 2 files changed, 18 insertions(+), 8 deletions(-)

-- 
2.1.4

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 0d34bf0a89c7..01bfb184f2a8 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -17,6 +17,7 @@
 #define __ASM_MMU_H
 
 #define MMCF_AARCH32	0x1	/* mm context flag for AArch32 executables */
+#define USER_ASID_FLAG	(UL(1) << 48)
 
 typedef struct {
 	atomic64_t	id;
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 78816e476491..db28958d9e4f 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -39,7 +39,16 @@ static cpumask_t tlb_flush_pending;
 
 #define ASID_MASK		(~GENMASK(asid_bits - 1, 0))
 #define ASID_FIRST_VERSION	(1UL << asid_bits)
-#define NUM_USER_ASIDS		ASID_FIRST_VERSION
+
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+#define NUM_USER_ASIDS		(ASID_FIRST_VERSION >> 1)
+#define asid2idx(asid)		(((asid) & ~ASID_MASK) >> 1)
+#define idx2asid(idx)		(((idx) << 1) & ~ASID_MASK)
+#else
+#define NUM_USER_ASIDS		(ASID_FIRST_VERSION)
+#define asid2idx(asid)		((asid) & ~ASID_MASK)
+#define idx2asid(idx)		asid2idx(idx)
+#endif
 
 /* Get the ASIDBits supported by the current CPU */
 static u32 get_cpu_asid_bits(void)
@@ -104,7 +113,7 @@ static void flush_context(unsigned int cpu)
 		 */
 		if (asid == 0)
 			asid = per_cpu(reserved_asids, i);
-		__set_bit(asid & ~ASID_MASK, asid_map);
+		__set_bit(asid2idx(asid), asid_map);
 		per_cpu(reserved_asids, i) = asid;
 	}
 
@@ -156,16 +165,16 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
 		 * We had a valid ASID in a previous life, so try to re-use
 		 * it if possible.
 		 */
-		asid &= ~ASID_MASK;
-		if (!__test_and_set_bit(asid, asid_map))
+		if (!__test_and_set_bit(asid2idx(asid), asid_map))
 			return newasid;
 	}
 
 	/*
 	 * Allocate a free ASID. If we can't find one, take a note of the
-	 * currently active ASIDs and mark the TLBs as requiring flushes.
-	 * We always count from ASID #1, as we use ASID #0 when setting a
-	 * reserved TTBR0 for the init_mm.
+	 * currently active ASIDs and mark the TLBs as requiring flushes. We
+	 * always count from ASID #2 (index 1), as we use ASID #0 when setting
+	 * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
+	 * pairs.
	 */
 	asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, cur_idx);
 	if (asid != NUM_USER_ASIDS)
@@ -182,7 +191,7 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
 set_asid:
	__set_bit(asid, asid_map);
	cur_idx = asid;
-	return asid | generation;
+	return idx2asid(asid) | generation;
 }
 
 void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
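
A note on the pairing arithmetic, as a minimal standalone sketch rather than
kernel code: NUM_USER_ASIDS, asid2idx() and idx2asid() below are copied from
the CONFIG_UNMAP_KERNEL_AT_EL0 branch of the patch; ASID_BITS, the open-coded
GENMASK() (which assumes a 64-bit unsigned long), and the example index are
assumptions added so the program builds outside the kernel tree. Each
allocator index stands for an even/odd ASID pair: the even value is handed to
the mm, and setting the bottom bit selects the odd sibling that maps only
userspace. Since the ASID occupies bits 63:48 of the TTBR, that bottom bit is
exactly what USER_ASID_FLAG (UL(1) << 48) flips when written into the TTBR.

/*
 * Minimal standalone sketch (not part of the patch): even/odd ASID pairing
 * as used when CONFIG_UNMAP_KERNEL_AT_EL0=y.
 *
 * ASID_BITS and the open-coded GENMASK() are assumptions for illustration;
 * the pairing macros themselves come from arch/arm64/mm/context.c above.
 */
#include <stdio.h>

#define ASID_BITS		16
#define GENMASK(h, l)		(((~0UL) << (l)) & (~0UL >> (63 - (h))))
#define ASID_MASK		(~GENMASK(ASID_BITS - 1, 0))
#define ASID_FIRST_VERSION	(1UL << ASID_BITS)

/* Halved: each allocator index now stands for an even/odd pair of ASIDs. */
#define NUM_USER_ASIDS		(ASID_FIRST_VERSION >> 1)
#define asid2idx(asid)		(((asid) & ~ASID_MASK) >> 1)
#define idx2asid(idx)		(((idx) << 1) & ~ASID_MASK)

int main(void)
{
	/* Hypothetical index, as the allocator bitmap would hand out. */
	unsigned long idx = 42;

	unsigned long kernel_asid = idx2asid(idx);	/* even: bottom bit clear */
	unsigned long user_asid   = kernel_asid | 1;	/* odd: maps userspace only */

	printf("idx %lu -> ASID pair {%lu, %lu} out of %lu usable pairs\n",
	       idx, kernel_asid, user_asid, NUM_USER_ASIDS);

	/* Both members of the pair map back to the same allocator index. */
	printf("asid2idx(%lu) = %lu, asid2idx(%lu) = %lu\n",
	       kernel_asid, asid2idx(kernel_asid),
	       user_asid, asid2idx(user_asid));
	return 0;
}

Halving NUM_USER_ASIDS is the cost of the scheme: with 16-bit hardware ASIDs
there are now 32768 usable pairs rather than 65536 individual ASIDs, which is
also why the allocator comment in the patch changes to "count from ASID #2
(index 1)".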