From patchwork Wed Oct 10 16:23:00 2018
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 148567
From: Will Deacon
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: cpandya@codeaurora.org, toshi.kani@hpe.com, tglx@linutronix.de,
    mhocko@suse.com, akpm@linux-foundation.org,
    sean.j.christopherson@intel.com, Will Deacon
Subject: [PATCH v3 1/5] ioremap: Rework pXd_free_pYd_page() API
Date: Wed, 10 Oct 2018 17:23:00 +0100
Message-Id: <1539188584-15819-2-git-send-email-will.deacon@arm.com>
In-Reply-To: <1539188584-15819-1-git-send-email-will.deacon@arm.com>
References: <1539188584-15819-1-git-send-email-will.deacon@arm.com>

The recently merged API for ensuring break-before-make on page-table
entries when installing huge mappings in the vmalloc/ioremap region is
fairly counter-intuitive, resulting in the arch freeing functions
(e.g. pmd_free_pte_page()) being called even on entries that aren't
present. This resulted in a minor bug in the arm64 implementation,
giving rise to spurious VM_WARN messages.

This patch moves the pXd_present() checks out into the core code,
refactoring the callsites at the same time so that we avoid the complex
conjunctions when determining whether or not we can put down a huge
mapping.
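As an illustration of the resulting contract (this sketch is not part of
the patch and is not copied from any architecture in the tree): because
the core code now calls pmd_free_pte_page() only on present entries, an
arch implementation no longer needs its own "not present" early-out and
can assume it is handed a present table entry, along the lines of:

	int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
	{
		pte_t *pte;

		/* Caller has already checked pmd_present(*pmd). */
		pte = (pte_t *)pmd_page_vaddr(*pmd);

		/* Break-before-make: clear and flush before freeing the table. */
		pmd_clear(pmd);
		flush_tlb_kernel_range(addr, addr + PMD_SIZE);
		pte_free_kernel(&init_mm, pte);

		return 1;	/* non-zero: safe to install a huge mapping here */
	}

Whether and where the TLB flush is required remains arch-specific; the
sketch only shows the guarantee the core code now provides to the
helper.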
Cc: Chintan Pandya
Cc: Toshi Kani
Cc: Thomas Gleixner
Cc: Michal Hocko
Cc: Andrew Morton
Suggested-by: Linus Torvalds
Reviewed-by: Toshi Kani
Signed-off-by: Will Deacon
---
 lib/ioremap.c | 56 ++++++++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 42 insertions(+), 14 deletions(-)

-- 
2.1.4

diff --git a/lib/ioremap.c b/lib/ioremap.c
index 517f5853ffed..6c72764af19c 100644
--- a/lib/ioremap.c
+++ b/lib/ioremap.c
@@ -76,6 +76,25 @@ static int ioremap_pte_range(pmd_t *pmd, unsigned long addr,
 	return 0;
 }
 
+static int ioremap_try_huge_pmd(pmd_t *pmd, unsigned long addr,
+				unsigned long end, phys_addr_t phys_addr,
+				pgprot_t prot)
+{
+	if (!ioremap_pmd_enabled())
+		return 0;
+
+	if ((end - addr) != PMD_SIZE)
+		return 0;
+
+	if (!IS_ALIGNED(phys_addr, PMD_SIZE))
+		return 0;
+
+	if (pmd_present(*pmd) && !pmd_free_pte_page(pmd, addr))
+		return 0;
+
+	return pmd_set_huge(pmd, phys_addr, prot);
+}
+
 static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
 		unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
 {
@@ -89,13 +108,8 @@ static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
 	do {
 		next = pmd_addr_end(addr, end);
 
-		if (ioremap_pmd_enabled() &&
-		    ((next - addr) == PMD_SIZE) &&
-		    IS_ALIGNED(phys_addr + addr, PMD_SIZE) &&
-		    pmd_free_pte_page(pmd, addr)) {
-			if (pmd_set_huge(pmd, phys_addr + addr, prot))
-				continue;
-		}
+		if (ioremap_try_huge_pmd(pmd, addr, next, phys_addr + addr, prot))
+			continue;
 
 		if (ioremap_pte_range(pmd, addr, next, phys_addr + addr, prot))
 			return -ENOMEM;
@@ -103,6 +117,25 @@ static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
 	return 0;
 }
 
+static int ioremap_try_huge_pud(pud_t *pud, unsigned long addr,
+				unsigned long end, phys_addr_t phys_addr,
+				pgprot_t prot)
+{
+	if (!ioremap_pud_enabled())
+		return 0;
+
+	if ((end - addr) != PUD_SIZE)
+		return 0;
+
+	if (!IS_ALIGNED(phys_addr, PUD_SIZE))
+		return 0;
+
+	if (pud_present(*pud) && !pud_free_pmd_page(pud, addr))
+		return 0;
+
+	return pud_set_huge(pud, phys_addr, prot);
+}
+
 static inline int ioremap_pud_range(p4d_t *p4d, unsigned long addr,
 		unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
 {
@@ -116,13 +149,8 @@ static inline int ioremap_pud_range(p4d_t *p4d, unsigned long addr,
 	do {
 		next = pud_addr_end(addr, end);
 
-		if (ioremap_pud_enabled() &&
-		    ((next - addr) == PUD_SIZE) &&
-		    IS_ALIGNED(phys_addr + addr, PUD_SIZE) &&
-		    pud_free_pmd_page(pud, addr)) {
-			if (pud_set_huge(pud, phys_addr + addr, prot))
-				continue;
-		}
+		if (ioremap_try_huge_pud(pud, addr, next, phys_addr + addr, prot))
+			continue;
 
 		if (ioremap_pmd_range(pud, addr, next, phys_addr + addr, prot))
 			return -ENOMEM;