From patchwork Wed May  8 09:52:43 2013
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 16790
From: Steve Capper
To: linux-mm@kvack.org, x86@kernel.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Cc: Michal Hocko, Ken Chen, Mel Gorman, Catalin Marinas, Will Deacon,
	patches@linaro.org, Steve Capper
Subject: [RFC PATCH v2 11/11] ARM64: mm: THP support.
Date: Wed, 8 May 2013 10:52:43 +0100
Message-Id: <1368006763-30774-12-git-send-email-steve.capper@linaro.org>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1368006763-30774-1-git-send-email-steve.capper@linaro.org>
References: <1368006763-30774-1-git-send-email-steve.capper@linaro.org>

Bring Transparent HugePage support to ARM64. The size of a transparent
huge page depends on the normal page size. A transparent huge page is
always represented as a pmd.

If PAGE_SIZE is 4KB, THPs are 2MB.
If PAGE_SIZE is 64KB, THPs are 512MB.

Signed-off-by: Steve Capper
Acked-by: Catalin Marinas
---
 arch/arm64/Kconfig                     |  3 ++
 arch/arm64/include/asm/pgtable-hwdef.h |  4 +++
 arch/arm64/include/asm/pgtable.h       | 55 ++++++++++++++++++++++++++++++++++
 arch/arm64/include/asm/tlb.h           |  6 ++++
 arch/arm64/include/asm/tlbflush.h      |  2 ++
 5 files changed, 70 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index a5f76cf..93a3b9e 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -194,6 +194,9 @@ config ARCH_WANT_GENERAL_HUGETLB
 config ARCH_WANT_HUGE_PMD_SHARE
 	def_bool y if !ARM64_64K_PAGES
 
+config HAVE_ARCH_TRANSPARENT_HUGEPAGE
+	def_bool y
+
 source "mm/Kconfig"
 
 config FORCE_MAX_ZONEORDER
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index e6e0a0d..63c9d0d 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -42,6 +42,10 @@
 /*
  * Section
  */
+#define PMD_SECT_VALID		(_AT(pmdval_t, 1) << 0)
+#define PMD_SECT_PROT_NONE	(_AT(pmdval_t, 1) << 2)
+#define PMD_SECT_USER		(_AT(pmdval_t, 1) << 6)		/* AP[1] */
+#define PMD_SECT_RDONLY		(_AT(pmdval_t, 1) << 7)		/* AP[2] */
 #define PMD_SECT_S		(_AT(pmdval_t, 3) << 8)
 #define PMD_SECT_AF		(_AT(pmdval_t, 1) << 10)
 #define PMD_SECT_NG		(_AT(pmdval_t, 1) << 11)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 6bcbcfd..fd17e0b 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -188,6 +188,61 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 #define __HAVE_ARCH_PTE_SPECIAL
 
 /*
+ * Software PMD bits for THP
+ */
+
+#define PMD_SECT_DIRTY		(_AT(pmdval_t, 1) << 55)
+#define PMD_SECT_SPLITTING	(_AT(pmdval_t, 1) << 57)
+
+/*
+ * THP definitions.
+ */
+#define pmd_young(pmd)		(pmd_val(pmd) & PMD_SECT_AF)
+
+#define __HAVE_ARCH_PMD_WRITE
+#define pmd_write(pmd)		(!(pmd_val(pmd) & PMD_SECT_RDONLY))
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#define pmd_trans_huge(pmd)	(pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT))
+#define pmd_trans_splitting(pmd) (pmd_val(pmd) & PMD_SECT_SPLITTING)
+#endif
+
+#define PMD_BIT_FUNC(fn,op) \
+static inline pmd_t pmd_##fn(pmd_t pmd) { pmd_val(pmd) op; return pmd; }
+
+PMD_BIT_FUNC(wrprotect,	|= PMD_SECT_RDONLY);
+PMD_BIT_FUNC(mkold,	&= ~PMD_SECT_AF);
+PMD_BIT_FUNC(mksplitting, |= PMD_SECT_SPLITTING);
+PMD_BIT_FUNC(mkwrite,	&= ~PMD_SECT_RDONLY);
+PMD_BIT_FUNC(mkdirty,	|= PMD_SECT_DIRTY);
+PMD_BIT_FUNC(mkyoung,	|= PMD_SECT_AF);
+PMD_BIT_FUNC(mknotpresent, &= ~PMD_TYPE_MASK);
+
+#define pmd_mkhuge(pmd)		(__pmd(pmd_val(pmd) & ~PMD_TABLE_BIT))
+
+#define pmd_pfn(pmd)		(((pmd_val(pmd) & PMD_MASK) & PHYS_MASK) >> PAGE_SHIFT)
+#define pfn_pmd(pfn,prot)	(__pmd(((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot)))
+#define mk_pmd(page,prot)	pfn_pmd(page_to_pfn(page),prot)
+
+#define pmd_page(pmd)		pfn_to_page(__phys_to_pfn(pmd_val(pmd) & PHYS_MASK))
+
+static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
+{
+	const pmdval_t mask = PMD_SECT_USER | PMD_SECT_PXN | PMD_SECT_UXN |
+			      PMD_SECT_RDONLY | PMD_SECT_PROT_NONE |
+			      PMD_SECT_VALID;
+	pmd_val(pmd) = (pmd_val(pmd) & ~mask) | (pgprot_val(newprot) & mask);
+	return pmd;
+}
+
+#define set_pmd_at(mm, addr, pmdp, pmd)	set_pmd(pmdp, pmd)
+
+static inline int has_transparent_hugepage(void)
+{
+	return 1;
+}
+
+/*
  * Mark the prot value as uncacheable and unbufferable.
  */
 #define pgprot_noncached(prot) \
diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index 654f096..46b3beb 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -187,4 +187,10 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 
 #define tlb_migrate_finish(mm)	do { } while (0)
 
+static inline void
+tlb_remove_pmd_tlb_entry(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr)
+{
+	tlb_add_flush(tlb, addr);
+}
+
 #endif
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 122d632..8b48203 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -117,6 +117,8 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
 	dsb();
 }
 
+#define update_mmu_cache_pmd(vma, address, pmd) do { } while (0)
+
 #endif
 
 #endif
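
A quick standalone sketch (not part of the patch) of the size arithmetic
in the commit message: with 8-byte descriptors, one page table holds
PTRS_PER_PTE = PAGE_SIZE / 8 entries, so a single pmd covers
PAGE_SIZE * (PAGE_SIZE / 8) bytes, i.e. PMD_SHIFT = PAGE_SHIFT +
(PAGE_SHIFT - 3). This reproduces the 2MB and 512MB figures above:

	/* Illustration only: derive the THP (pmd section) size from
	 * the base page size, as described in the commit message. */
	#include <stdio.h>

	int main(void)
	{
		unsigned int page_shifts[] = { 12, 16 };	/* 4KB, 64KB pages */

		for (unsigned int i = 0; i < 2; i++) {
			unsigned int page_shift = page_shifts[i];
			/* One 8-byte pte per page: PAGE_SIZE/8 entries per table. */
			unsigned int pmd_shift = page_shift + (page_shift - 3);
			unsigned long long thp_size = 1ULL << pmd_shift;

			printf("PAGE_SIZE %2lluKB -> THP %3lluMB\n",
			       (1ULL << page_shift) >> 10, thp_size >> 20);
		}
		return 0;
	}

Running it prints "PAGE_SIZE  4KB -> THP   2MB" and
"PAGE_SIZE 64KB -> THP 512MB", matching the commit message.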
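
Similarly, a minimal userspace mock of the pmd_trans_huge() test added
to pgtable.h, assuming the ARMv8 level-2 descriptor encoding
(bits[1:0] = 0b11 for a table entry, 0b01 for a block/section entry).
The type constants mirror pgtable-hwdef.h; the sample output address
0x40000000 is made up for illustration:

	/* Illustration only: a huge pmd is a non-empty descriptor with
	 * the table bit (bit 1) clear, i.e. a block rather than a table. */
	#include <assert.h>
	#include <stdint.h>

	#define PMD_TYPE_TABLE	(3ULL << 0)	/* bits[1:0] = 0b11 */
	#define PMD_TYPE_SECT	(1ULL << 0)	/* bits[1:0] = 0b01 */
	#define PMD_TABLE_BIT	(1ULL << 1)

	/* Mirrors the patch: non-zero value with the table bit clear. */
	static int pmd_trans_huge(uint64_t pmdval)
	{
		return pmdval && !(pmdval & PMD_TABLE_BIT);
	}

	int main(void)
	{
		assert(!pmd_trans_huge(0));				/* empty pmd */
		assert(!pmd_trans_huge(0x40000000 | PMD_TYPE_TABLE));	/* page table */
		assert(pmd_trans_huge(0x40000000 | PMD_TYPE_SECT));	/* huge page */
		return 0;
	}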