From patchwork Tue May 6 13:02:27 2014
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 29700
From: Steve Capper <steve.capper@linaro.org>
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, Steve Capper, jays.lee@samsung.com
Subject: [PATCH V3] arm64: mm: Create gigabyte kernel logical mappings where possible
Date: Tue, 6 May 2014 14:02:27 +0100
Message-Id: <1399381347-28676-1-git-send-email-steve.capper@linaro.org>

We have the capability to map 1GB level 1 blocks when using a 4K granule.

This patch adjusts the create_mapping logic such that, when mapping physical
memory on boot, we attempt to use a 1GB block if both the VA and PA start
and end are 1GB aligned. This both reduces the number of levels of lookup
required to resolve a kernel logical address and reduces TLB pressure on
cores that support 1GB TLB entries.

Signed-off-by: Steve Capper
Tested-by: Jungseok Lee
---
Changed in V3: added awareness of 1GB blocks to kern_addr_valid. This was
tested via gdb:
  gdb ./vmlinux /proc/kcore
  disassemble kern_addr_valid
The output looked valid. In V2 of the patch, I got an oops.
I've defined the new constants with pgdval_t type. Ideally I would like to
define them with pudval_t, but because of the way page table folding works
that type does not exist for fewer than 4 levels. I'm not sure whether it
would be better to define pudval_t for <4 levels or to leave this as is.

Changed in V2: free the original pmd table from swapper_pg_dir if we
replace it with a block pud entry.
---
 arch/arm64/include/asm/pgtable-hwdef.h |  2 ++
 arch/arm64/include/asm/pgtable.h       |  7 +++++++
 arch/arm64/mm/mmu.c                    | 28 +++++++++++++++++++++++++++-
 3 files changed, 36 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index 5fc8a66..955e8c5 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -29,6 +29,8 @@
  */

 #define PUD_TABLE_BIT           (_AT(pgdval_t, 1) << 1)
+#define PUD_TYPE_MASK           (_AT(pgdval_t, 3) << 0)
+#define PUD_TYPE_SECT           (_AT(pgdval_t, 1) << 0)

 /*
  * Level 2 descriptor (PMD).
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 90c811f..946d5fc 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -265,6 +265,7 @@ static inline pmd_t pte_pmd(pte_t pte)
 #define mk_pmd(page,prot)       pfn_pmd(page_to_pfn(page),prot)

 #define pmd_page(pmd)           pfn_to_page(__phys_to_pfn(pmd_val(pmd) & PHYS_MASK))
+#define pud_pfn(pud)            (((pud_val(pud) & PUD_MASK) & PHYS_MASK) >> PAGE_SHIFT)

 #define set_pmd_at(mm, addr, pmdp, pmd) set_pmd(pmdp, pmd)

@@ -295,6 +296,12 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 #define pmd_sect(pmd)           ((pmd_val(pmd) & PMD_TYPE_MASK) == \
                                  PMD_TYPE_SECT)

+#ifdef CONFIG_ARM64_64K_PAGES
+#define pud_sect(pud)           (0)
+#else
+#define pud_sect(pud)           ((pud_val(pud) & PUD_TYPE_MASK) == \
+                                 PUD_TYPE_SECT)
+#endif

 static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
 {
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 0a472c4..1baa92e 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -227,7 +227,30 @@ static void __init alloc_init_pud(pgd_t *pgd, unsigned long addr,

         do {
                 next = pud_addr_end(addr, end);
-                alloc_init_pmd(pud, addr, next, phys);
+
+                /*
+                 * For 4K granule only, attempt to put down a 1GB block
+                 */
+                if ((PAGE_SHIFT == 12) &&
+                    ((addr | next | phys) & ~PUD_MASK) == 0) {
+                        pud_t old_pud = *pud;
+                        set_pud(pud, __pud(phys | prot_sect_kernel));
+
+                        /*
+                         * If we have an old value for a pud, it will
+                         * be pointing to a pmd table that we no longer
+                         * need (from swapper_pg_dir).
+                         *
+                         * Look up the old pmd table and free it.
+                         */
+                        if (!pud_none(old_pud)) {
+                                phys_addr_t table = __pa(pmd_offset(&old_pud, 0));
+                                memblock_free(table, PAGE_SIZE);
+                                flush_tlb_all();
+                        }
+                } else {
+                        alloc_init_pmd(pud, addr, next, phys);
+                }
                 phys += next - addr;
         } while (pud++, addr = next, addr != end);
 }
@@ -370,6 +393,9 @@ int kern_addr_valid(unsigned long addr)
         if (pud_none(*pud))
                 return 0;

+        if (pud_sect(*pud))
+                return pfn_valid(pud_pfn(*pud));
+
         pmd = pmd_offset(pud, addr);
         if (pmd_none(*pmd))
                 return 0;
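
A side note on the alloc_init_pud() hunk above: the condition
((addr | next | phys) & ~PUD_MASK) == 0 works because OR-ing the virtual
start, the end of the current iteration's range and the physical address,
then masking off everything above the 1GB boundary, leaves zero only if all
three are individually 1GB aligned. A minimal user-space sketch of that
check follows; the addresses are made up for the example and PUD_MASK is
hard-coded for a 4K granule.

  #include <stdint.h>
  #include <stdio.h>

  #define PUD_SHIFT       30                      /* 1GB blocks with a 4K granule */
  #define PUD_MASK        (~((UINT64_C(1) << PUD_SHIFT) - 1))

  /* Mirrors the patch's test: true only if VA start, VA end and PA are all 1GB aligned. */
  static int can_use_1gb_block(uint64_t addr, uint64_t next, uint64_t phys)
  {
          return ((addr | next | phys) & ~PUD_MASK) == 0;
  }

  int main(void)
  {
          /* Made-up addresses: 1GB-aligned VA range mapping a 1GB-aligned PA. */
          printf("%d\n", can_use_1gb_block(UINT64_C(0xffffffc000000000),
                                           UINT64_C(0xffffffc040000000),
                                           UINT64_C(0x80000000)));      /* prints 1 */

          /* Same VA range, but the physical address is only 2MB aligned. */
          printf("%d\n", can_use_1gb_block(UINT64_C(0xffffffc000000000),
                                           UINT64_C(0xffffffc040000000),
                                           UINT64_C(0x80200000)));      /* prints 0 */
          return 0;
  }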