From patchwork Fri Dec 13 19:05:44 2013
From: Steve Capper <steve.capper@linaro.org>
To: linux-arm-kernel@lists.infradead.org
Cc: linux@arm.linux.org.uk, will.deacon@arm.com, catalin.marinas@arm.com,
	patches@linaro.org, robherring2@gmail.com, deepak.saxena@linaro.org,
	Steve Capper <steve.capper@linaro.org>
Subject: [RFC PATCH 4/6] arm: mm: Compute pgprot values for huge page sections
Date: Fri, 13 Dec 2013 19:05:44 +0000
Message-Id: <1386961546-10061-5-git-send-email-steve.capper@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1386961546-10061-1-git-send-email-steve.capper@linaro.org>
References: <1386961546-10061-1-git-send-email-steve.capper@linaro.org>

The short descriptors memory code stores separate software and hardware
ptes. All the pgprot values that vmas inherit and all the pte manipulation
functions operate in terms of software ptes. The actual hardware bits are
then controlled by the pte setter functions.

For short descriptor transparent huge pages we can't really store separate
copies of the huge ptes without fundamentally changing the pmd traversing
code. So one strategy is to work directly with the hardware bits.
This patch adds code to compute the appropriate memory description bits
for an MT_MEMORY section and translates the executable, writable and
PROT_NONE information from the software pgprot to give us a hardware
pgprot that can be manipulated directly by the HugeTLB and THP code.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
 arch/arm/mm/mmu.c | 52 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 580ef2d..476a668 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -338,6 +338,45 @@ const struct mem_type *get_mem_type(unsigned int type)
 EXPORT_SYMBOL(get_mem_type);
 
 /*
+ * If the system supports huge pages and we are running with short descriptors,
+ * then compute the pgprot values for a huge page. We do not need to do this
+ * with LPAE as there is no software/hardware bit distinction for ptes.
+ *
+ * We are only interested in:
+ * 1) The memory type: huge pages are user pages so a section of type
+ *    MT_MEMORY. This is used to create new huge ptes/thps.
+ *
+ * 2) XN, PROT_NONE, WRITE. These are set/unset through protection changes
+ *    by pte_modify or pmd_modify and are used to make new ptes/thps.
+ *
+ * The other bits: dirty, young, splitting are not modified by pte_modify
+ * or pmd_modify nor are they used to create new ptes or pmds thus they
+ * are not considered here.
+ */
+#if defined(CONFIG_SYS_SUPPORTS_HUGETLBFS) && !defined(CONFIG_ARM_LPAE)
+static pgprot_t _hugepgprotval;
+
+pgprot_t get_huge_pgprot(pgprot_t newprot)
+{
+	pte_t inprot = __pte(pgprot_val(newprot));
+	pmd_t pmdret = __pmd(pgprot_val(_hugepgprotval));
+
+	if (!pte_exec(inprot))
+		pmdret = pmd_mknexec(pmdret);
+
+	if (pte_write(inprot))
+		pmdret = pmd_mkwrite(pmdret);
+
+	if (!pte_protnone(inprot))
+		pmdret = pmd_rmprotnone(pmdret);
+
+	return __pgprot(pmd_val(pmdret));
+}
+EXPORT_SYMBOL(get_huge_pgprot);
+#endif
+
+
+/*
  * Adjust the PMD section entries according to the CPU in use.
  */
 static void __init build_mem_type_table(void)
@@ -568,6 +607,19 @@ static void __init build_mem_type_table(void)
 		if (t->prot_sect)
 			t->prot_sect |= PMD_DOMAIN(t->domain);
 	}
+
+#if defined(CONFIG_SYS_SUPPORTS_HUGETLBFS) && !defined(CONFIG_ARM_LPAE)
+	/*
+	 * we assume all huge pages are user pages and that hardware access
+	 * flag updates are disabled (which is the case for short descriptors).
+	 */
+	pgprot_val(_hugepgprotval) = mem_types[MT_MEMORY].prot_sect
+				| PMD_SECT_AP_READ | PMD_SECT_nG;
+
+	pgprot_val(_hugepgprotval) &= ~(PMD_SECT_AP_WRITE | PMD_SECT_XN
+					| PMD_TYPE_SECT);
+#endif
+
 }
 
 #ifdef CONFIG_ARM_DMA_MEM_BUFFERABLE