From patchwork Thu May 23 17:07:50 2013
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 17166
From: Steve Capper <steve.capper@linaro.org>
To: linux-mm@kvack.org, x86@kernel.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Cc: Michal Hocko, Ken Chen, Mel Gorman, Catalin Marinas, Will Deacon,
	patches@linaro.org, Steve Capper
Subject: [PATCH 03/11] mm: hugetlb: Copy general hugetlb code from x86 to mm.
Date: Thu, 23 May 2013 18:07:50 +0100
Message-Id: <1369328878-11706-4-git-send-email-steve.capper@linaro.org>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1369328878-11706-1-git-send-email-steve.capper@linaro.org>
References: <1369328878-11706-1-git-send-email-steve.capper@linaro.org>

The huge_pte_alloc, huge_pte_offset and follow_huge_p[mu]d functions in
x86/mm/hugetlbpage.c do not rely on any architecture-specific knowledge
other than the fact that pmds and puds can be treated as huge ptes.

To allow other architectures to use this code (and reduce the need for
code duplication), this patch copies these functions into mm, replaces
the use of pud_large with pud_huge and provides a config flag to
activate them: CONFIG_ARCH_WANT_GENERAL_HUGETLB.

If CONFIG_ARCH_WANT_HUGE_PMD_SHARE is also active then the
huge_pmd_share code will be called by huge_pte_alloc (otherwise we call
pmd_alloc and skip the sharing code).
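As a rough illustration (not part of this patch), an architecture selecting
CONFIG_ARCH_WANT_GENERAL_HUGETLB still supplies the pmd_huge/pud_huge
predicates that the generic walkers test; a minimal sketch, where
ARCH_SECT_BIT is a hypothetical placeholder for the architecture's
block/section descriptor bit, not a real kernel symbol:

	/*
	 * Sketch of the remaining per-arch pieces: report whether an
	 * entry at this level maps a huge (section) page. ARCH_SECT_BIT
	 * stands in for the real architecture's encoding.
	 */
	int pmd_huge(pmd_t pmd)
	{
		return !!(pmd_val(pmd) & ARCH_SECT_BIT);
	}

	int pud_huge(pud_t pud)
	{
		return !!(pud_val(pud) & ARCH_SECT_BIT);
	}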
Signed-off-by: Steve Capper
Acked-by: Catalin Marinas
---
 mm/hugetlb.c | 97 ++++++++++++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 88 insertions(+), 9 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b0bfb29..6321726 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2931,15 +2931,6 @@ out_mutex:
 	return ret;
 }
 
-/* Can be overriden by architectures */
-__attribute__((weak)) struct page *
-follow_huge_pud(struct mm_struct *mm, unsigned long address,
-	       pud_t *pud, int write)
-{
-	BUG();
-	return NULL;
-}
-
 long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 			 struct page **pages, struct vm_area_struct **vmas,
 			 unsigned long *position, unsigned long *nr_pages,
@@ -3289,8 +3280,96 @@ int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep)
 	*addr = ALIGN(*addr, HPAGE_SIZE * PTRS_PER_PTE) - HPAGE_SIZE;
 	return 1;
 }
+#define want_pmd_share()	(1)
+#else /* !CONFIG_ARCH_WANT_HUGE_PMD_SHARE */
+pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
+{
+	return NULL;
+}
+#define want_pmd_share()	(0)
 #endif /* CONFIG_ARCH_WANT_HUGE_PMD_SHARE */
 
+#ifdef CONFIG_ARCH_WANT_GENERAL_HUGETLB
+pte_t *huge_pte_alloc(struct mm_struct *mm,
+			unsigned long addr, unsigned long sz)
+{
+	pgd_t *pgd;
+	pud_t *pud;
+	pte_t *pte = NULL;
+
+	pgd = pgd_offset(mm, addr);
+	pud = pud_alloc(mm, pgd, addr);
+	if (pud) {
+		if (sz == PUD_SIZE) {
+			pte = (pte_t *)pud;
+		} else {
+			BUG_ON(sz != PMD_SIZE);
+			if (want_pmd_share() && pud_none(*pud))
+				pte = huge_pmd_share(mm, addr, pud);
+			else
+				pte = (pte_t *)pmd_alloc(mm, pud, addr);
+		}
+	}
+	BUG_ON(pte && !pte_none(*pte) && !pte_huge(*pte));
+
+	return pte;
+}
+
+pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
+{
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd = NULL;
+
+	pgd = pgd_offset(mm, addr);
+	if (pgd_present(*pgd)) {
+		pud = pud_offset(pgd, addr);
+		if (pud_present(*pud)) {
+			if (pud_huge(*pud))
+				return (pte_t *)pud;
+			pmd = pmd_offset(pud, addr);
+		}
+	}
+	return (pte_t *) pmd;
+}
+
+struct page *
+follow_huge_pmd(struct mm_struct *mm, unsigned long address,
+		pmd_t *pmd, int write)
+{
+	struct page *page;
+
+	page = pte_page(*(pte_t *)pmd);
+	if (page)
+		page += ((address & ~PMD_MASK) >> PAGE_SHIFT);
+	return page;
+}
+
+struct page *
+follow_huge_pud(struct mm_struct *mm, unsigned long address,
+		pud_t *pud, int write)
+{
+	struct page *page;
+
+	page = pte_page(*(pte_t *)pud);
+	if (page)
+		page += ((address & ~PUD_MASK) >> PAGE_SHIFT);
+	return page;
+}
+
+#else /* !CONFIG_ARCH_WANT_GENERAL_HUGETLB */
+
+/* Can be overriden by architectures */
+__attribute__((weak)) struct page *
+follow_huge_pud(struct mm_struct *mm, unsigned long address,
+	       pud_t *pud, int write)
+{
+	BUG();
+	return NULL;
+}
+
+#endif /* CONFIG_ARCH_WANT_GENERAL_HUGETLB */
+
 #ifdef CONFIG_MEMORY_FAILURE
 
 /* Should be called in hugetlb_lock */
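For context, a hedged sketch (simplified from the existing fault path in
mm/hugetlb.c, error handling mostly elided) of how callers reach the
generic allocator: hugetlb_fault() passes the hstate's huge page size,
and it is this sz argument that steers huge_pte_alloc() to the pud or
pmd level:

	struct hstate *h = hstate_vma(vma);
	pte_t *ptep;

	/*
	 * sz selects the walk depth: PUD_SIZE hands back the pud slot
	 * itself, PMD_SIZE allocates (or, with want_pmd_share(),
	 * shares) a pmd page.
	 */
	ptep = huge_pte_alloc(mm, address, huge_page_size(h));
	if (!ptep)
		return VM_FAULT_OOM;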