From patchwork Fri Oct 21 11:22:57 2016
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 78644
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com
Subject: [PATCH v4 2/3] arm64: mm: replace 'block_mappings_allowed' with 'page_mappings_only'
Date: Fri, 21 Oct 2016 12:22:57 +0100
Message-Id: <1477048978-4140-3-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1477048978-4140-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1477048978-4140-1-git-send-email-ard.biesheuvel@linaro.org>
Cc: mark.rutland@arm.com, steve.capper@linaro.org, will.deacon@arm.com,
 jeremy.linton@arm.com, Ard Biesheuvel <ard.biesheuvel@linaro.org>

In preparation of adding support for contiguous PTE and PMD mappings,
let's replace 'block_mappings_allowed' with 'page_mappings_only', which
will be a more accurate description of the nature of the setting once we
add such contiguous mappings into the mix.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/mmu.h |  2 +-
 arch/arm64/kernel/efi.c      |  8 ++---
 arch/arm64/mm/mmu.c          | 32 ++++++++++----------
 3 files changed, 21 insertions(+), 21 deletions(-)

-- 
2.7.4

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 8d9fce037b2f..a81454ad5455 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -34,7 +34,7 @@ extern void __iomem *early_io_map(phys_addr_t phys, unsigned long virt);
 extern void init_mem_pgprot(void);
 extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
			       unsigned long virt, phys_addr_t size,
-			       pgprot_t prot, bool allow_block_mappings);
+			       pgprot_t prot, bool page_mappings_only);
 extern void *fixmap_remap_fdt(phys_addr_t dt_phys);
 
 #endif
diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
index ba9bee389fd5..5d17f377d905 100644
--- a/arch/arm64/kernel/efi.c
+++ b/arch/arm64/kernel/efi.c
@@ -62,8 +62,8 @@ struct screen_info screen_info __section(.data);
 int __init efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md)
 {
 	pteval_t prot_val = create_mapping_protection(md);
-	bool allow_block_mappings = (md->type != EFI_RUNTIME_SERVICES_CODE &&
-				     md->type != EFI_RUNTIME_SERVICES_DATA);
+	bool page_mappings_only = (md->type == EFI_RUNTIME_SERVICES_CODE ||
+				   md->type == EFI_RUNTIME_SERVICES_DATA);
 
 	if (!PAGE_ALIGNED(md->phys_addr) ||
 	    !PAGE_ALIGNED(md->num_pages << EFI_PAGE_SHIFT)) {
@@ -76,12 +76,12 @@ int __init efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md)
 		 * from the MMU routines. So avoid block mappings altogether in
 		 * that case.
 		 */
-		allow_block_mappings = false;
+		page_mappings_only = true;
 	}
 
 	create_pgd_mapping(mm, md->phys_addr, md->virt_addr,
			   md->num_pages << EFI_PAGE_SHIFT,
-			   __pgprot(prot_val | PTE_NG), allow_block_mappings);
+			   __pgprot(prot_val | PTE_NG), page_mappings_only);
 	return 0;
 }
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 27dc0e5012a8..7b0dd07212ae 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -143,7 +143,7 @@ static void alloc_init_pte(pmd_t *pmd, unsigned long addr,
 static void alloc_init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
			   phys_addr_t phys, pgprot_t prot,
			   phys_addr_t (*pgtable_alloc)(void),
-			   bool allow_block_mappings)
+			   bool page_mappings_only)
 {
 	pmd_t *pmd;
 	unsigned long next;
@@ -170,7 +170,7 @@ static void alloc_init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
 
 		/* try section mapping first */
 		if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
-		      allow_block_mappings) {
+		      !page_mappings_only) {
 			pmd_set_huge(pmd, phys, prot);
 
 			/*
@@ -207,7 +207,7 @@ static inline bool use_1G_block(unsigned long addr, unsigned long next,
 static void alloc_init_pud(pgd_t *pgd, unsigned long addr, unsigned long end,
			   phys_addr_t phys, pgprot_t prot,
			   phys_addr_t (*pgtable_alloc)(void),
-			   bool allow_block_mappings)
+			   bool page_mappings_only)
 {
 	pud_t *pud;
 	unsigned long next;
@@ -229,7 +229,7 @@ static void alloc_init_pud(pgd_t *pgd, unsigned long addr, unsigned long end,
 		/*
 		 * For 4K granule only, attempt to put down a 1GB block
 		 */
-		if (use_1G_block(addr, next, phys) && allow_block_mappings) {
+		if (use_1G_block(addr, next, phys) && !page_mappings_only) {
 			pud_set_huge(pud, phys, prot);
 
 			/*
@@ -240,7 +240,7 @@ static void alloc_init_pud(pgd_t *pgd, unsigned long addr, unsigned long end,
				      pud_val(*pud)));
 		} else {
 			alloc_init_pmd(pud, addr, next, phys, prot,
-				       pgtable_alloc, allow_block_mappings);
+				       pgtable_alloc, page_mappings_only);
 
 			BUG_ON(pud_val(old_pud) != 0 &&
			       pud_val(old_pud) != pud_val(*pud));
@@ -255,7 +255,7 @@ static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
				 unsigned long virt, phys_addr_t size,
				 pgprot_t prot,
				 phys_addr_t (*pgtable_alloc)(void),
-				 bool allow_block_mappings)
+				 bool page_mappings_only)
 {
 	unsigned long addr, length, end, next;
 	pgd_t *pgd = pgd_offset_raw(pgdir, virt);
@@ -275,7 +275,7 @@ static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
 	do {
 		next = pgd_addr_end(addr, end);
 		alloc_init_pud(pgd, addr, next, phys, prot, pgtable_alloc,
-			       allow_block_mappings);
+			       page_mappings_only);
 		phys += next - addr;
 	} while (pgd++, addr = next, addr != end);
 }
@@ -304,17 +304,17 @@ static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
			&phys, virt);
 		return;
 	}
-	__create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL, true);
+	__create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL, false);
 }
 
 void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
			       unsigned long virt, phys_addr_t size,
-			       pgprot_t prot, bool allow_block_mappings)
+			       pgprot_t prot, bool page_mappings_only)
 {
 	BUG_ON(mm == &init_mm);
 
 	__create_pgd_mapping(mm->pgd, phys, virt, size, prot,
-			     pgd_pgtable_alloc, allow_block_mappings);
+			     pgd_pgtable_alloc, page_mappings_only);
 }
 
 static void create_mapping_late(phys_addr_t phys, unsigned long virt,
@@ -327,7 +327,7 @@ static void create_mapping_late(phys_addr_t phys, unsigned long virt,
 	}
 
 	__create_pgd_mapping(init_mm.pgd, phys, virt, size, prot,
-			     NULL, !debug_pagealloc_enabled());
+			     NULL, debug_pagealloc_enabled());
 }
 
 static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end)
@@ -345,7 +345,7 @@ static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end
 		__create_pgd_mapping(pgd, start, __phys_to_virt(start),
				     end - start, PAGE_KERNEL,
				     early_pgtable_alloc,
-				     !debug_pagealloc_enabled());
+				     debug_pagealloc_enabled());
 		return;
 	}
@@ -358,13 +358,13 @@ static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end
				     __phys_to_virt(start),
				     kernel_start - start, PAGE_KERNEL,
				     early_pgtable_alloc,
-				     !debug_pagealloc_enabled());
+				     debug_pagealloc_enabled());
 		if (kernel_end < end)
 			__create_pgd_mapping(pgd, kernel_end,
					     __phys_to_virt(kernel_end),
					     end - kernel_end, PAGE_KERNEL,
					     early_pgtable_alloc,
-					     !debug_pagealloc_enabled());
+					     debug_pagealloc_enabled());
 
 	/*
 	 * Map the linear alias of the [_text, __init_begin) interval as
@@ -374,7 +374,7 @@ static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end
 	 */
 	__create_pgd_mapping(pgd, kernel_start, __phys_to_virt(kernel_start),
			     kernel_end - kernel_start, PAGE_KERNEL_RO,
-			     early_pgtable_alloc, !debug_pagealloc_enabled());
+			     early_pgtable_alloc, debug_pagealloc_enabled());
 }
 
 static void __init map_mem(pgd_t *pgd)
@@ -424,7 +424,7 @@ static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
 	BUG_ON(!PAGE_ALIGNED(size));
 
 	__create_pgd_mapping(pgd, pa_start, (unsigned long)va_start, size, prot,
-			     early_pgtable_alloc, !debug_pagealloc_enabled());
+			     early_pgtable_alloc, debug_pagealloc_enabled());
 
 	vma->addr	= va_start;
 	vma->phys_addr	= pa_start;
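
For readers tracking the polarity flip: call sites that previously answered
"may I use block mappings?" now answer "must this region be mapped at page
granularity?", which is why every '!debug_pagealloc_enabled()' above loses
its negation. The sketch below is not part of the patch; it is a standalone
user-space model in which SECTION_SIZE, SECTION_MASK and the map_range()
helper are simplified stand-ins for the kernel's section-mapping machinery,
meant only to illustrate how the new 'page_mappings_only' flag feeds the
"try section mapping first" decision and how an EFI-style caller derives it.

/*
 * Illustrative sketch only, not from the patch. SECTION_SIZE/SECTION_MASK
 * mimic the 2 MB section granularity of a 4K-granule arm64 kernel, and the
 * printf calls stand in for installing real page table entries.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SECTION_SHIFT	21			/* 2 MB sections (4K granule) */
#define SECTION_SIZE	(1UL << SECTION_SHIFT)
#define SECTION_MASK	(~(SECTION_SIZE - 1))

/* Stand-in for the PMD-level decision made in alloc_init_pmd(). */
static void map_range(uint64_t virt, uint64_t phys, uint64_t size,
		      bool page_mappings_only)
{
	uint64_t addr = virt, end = virt + size;

	while (addr < end) {
		uint64_t next = (addr & SECTION_MASK) + SECTION_SIZE;

		if (next > end)
			next = end;

		/* try a section (block) mapping first, as the patch does */
		if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
		    !page_mappings_only)
			printf("2M block: va 0x%llx -> pa 0x%llx\n",
			       (unsigned long long)addr,
			       (unsigned long long)phys);
		else
			printf("pages:    va 0x%llx..0x%llx -> pa 0x%llx\n",
			       (unsigned long long)addr,
			       (unsigned long long)next,
			       (unsigned long long)phys);

		phys += next - addr;
		addr = next;
	}
}

int main(void)
{
	/*
	 * An EFI-style caller: runtime services code/data must remain
	 * remappable at page granularity, so it passes true; ordinary
	 * memory passes false and gets block mappings where alignment
	 * allows.
	 */
	bool runtime_services_region = true;

	map_range(0x40000000, 0x80000000, 4 * SECTION_SIZE,
		  /* page_mappings_only = */ runtime_services_region);
	map_range(0x40000000, 0x80000000, 4 * SECTION_SIZE,
		  /* page_mappings_only = */ false);
	return 0;
}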