From patchwork Mon Nov 23 17:24:50 2015
X-Patchwork-Submitter: Catalin Marinas
X-Patchwork-Id: 57170
Date: Mon, 23 Nov 2015 17:24:50 +0000
From: Catalin Marinas
To: Jeremy Linton
Subject: Re: [PATCH] arm64: Boot failure on m400 with new cont PTEs
Message-ID: <20151123172450.GE32300@e104818-lin.cambridge.arm.com>
In-Reply-To: <20151123165214.GD32300@e104818-lin.cambridge.arm.com>
Cc: Mark Rutland, lauraa@codeaurora.org, ard.biesheuvel@linaro.org,
 suzuki.poulose@arm.com, will.deacon@arm.com, Jeremy Linton,
 linux-arm-kernel@lists.infradead.org
List-Id: linux-arm-kernel.lists.infradead.org

On Mon, Nov 23, 2015 at 04:52:15PM +0000, Catalin Marinas wrote:
> We have other cases where we go for smaller to larger block like the 1GB
> section.
> I think until MarkR finishes his code to go via a temporary
> TTBR1 + idmap, we should prevent all those. We can hope that going the
> other direction (from bigger to smaller block mapping) is fine but we
> don't have a clear answer yet.

This patch (just briefly tested) prevents going from a smaller block to a
bigger one, and the set_pte() sanity check no longer triggers. We still get
some contiguous entries, though I haven't checked whether they've been
reduced.

-- 
Catalin

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index abb66f84d4ac..b3f3f3e3d827 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -89,6 +89,21 @@ static void split_pmd(pmd_t *pmd, pte_t *pte)
 	} while (pte++, i++, i < PTRS_PER_PTE);
 }
 
+static bool __pte_range_none_or_cont(pte_t *pte)
+{
+	int i;
+
+	for (i = 0; i < CONT_PTES; i++) {
+		/* only empty or already-contiguous entries may be rewritten */
+		if (!pte_none(*pte) &&
+		    !pte_cont(*pte))
+			return false;
+		pte++;
+	}
+
+	return true;
+}
+
 /*
  * Given a PTE with the CONT bit set, determine where the CONT range
  * starts, and clear the entire range of PTE CONT bits.
@@ -143,7 +158,8 @@ static void alloc_init_pte(pmd_t *pmd, unsigned long addr,
 	pte = pte_offset_kernel(pmd, addr);
 	do {
 		next = min(end, (addr + CONT_SIZE) & CONT_MASK);
-		if (((addr | next | phys) & ~CONT_MASK) == 0) {
+		if (((addr | next | phys) & ~CONT_MASK) == 0 &&
+		    __pte_range_none_or_cont(pte)) {
 			/* a block of CONT_PTES */
 			__populate_init_pte(pte, addr, next, phys,
 					    __pgprot(pgprot_val(prot) | PTE_CONT));
@@ -206,25 +222,12 @@ static void alloc_init_pmd(struct mm_struct *mm, pud_t *pud,
 	do {
 		next = pmd_addr_end(addr, end);
 		/* try section mapping first */
-		if (((addr | next | phys) & ~SECTION_MASK) == 0) {
-			pmd_t old_pmd =*pmd;
+		if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
+		    (pmd_none(*pmd) || pmd_sect(*pmd)))
 			set_pmd(pmd, __pmd(phys |
 					   pgprot_val(mk_sect_prot(prot))));
-			/*
-			 * Check for previous table entries created during
-			 * boot (__create_page_tables) and flush them.
-			 */
-			if (!pmd_none(old_pmd)) {
-				flush_tlb_all();
-				if (pmd_table(old_pmd)) {
-					phys_addr_t table = __pa(pte_offset_map(&old_pmd, 0));
-					if (!WARN_ON_ONCE(slab_is_available()))
-						memblock_free(table, PAGE_SIZE);
-				}
-			}
-		} else {
+		else
 			alloc_init_pte(pmd, addr, next, phys, prot, alloc);
-		}
 		phys += next - addr;
 	} while (pmd++, addr = next, addr != end);
 }
@@ -262,29 +265,12 @@ static void alloc_init_pud(struct mm_struct *mm, pgd_t *pgd,
 		/*
 		 * For 4K granule only, attempt to put down a 1GB block
 		 */
-		if (use_1G_block(addr, next, phys)) {
-			pud_t old_pud = *pud;
+		if (use_1G_block(addr, next, phys) &&
+		    (pud_none(*pud) || pud_sect(*pud)))
 			set_pud(pud, __pud(phys |
 					   pgprot_val(mk_sect_prot(prot))));
-
-			/*
-			 * If we have an old value for a pud, it will
-			 * be pointing to a pmd table that we no longer
-			 * need (from swapper_pg_dir).
-			 *
-			 * Look up the old pmd table and free it.
-			 */
-			if (!pud_none(old_pud)) {
-				flush_tlb_all();
-				if (pud_table(old_pud)) {
-					phys_addr_t table = __pa(pmd_offset(&old_pud, 0));
-					if (!WARN_ON_ONCE(slab_is_available()))
-						memblock_free(table, PAGE_SIZE);
-				}
-			}
-		} else {
+		else
 			alloc_init_pmd(mm, pud, addr, next, phys, prot,
 				       alloc);
-		}
 		phys += next - addr;
 	} while (pud++, addr = next, addr != end);
 }