From patchwork Fri Oct 11 11:11:59 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Steve Capper <steve.capper@linaro.org>
X-Patchwork-Id: 20959
From: Steve Capper <steve.capper@linaro.org>
To: linux-arm-kernel@lists.infradead.org
Cc: linux@arm.linux.org.uk, nico@linaro.org, linaro-kernel@lists.linaro.org,
	patches@linaro.org, Steve Capper <steve.capper@linaro.org>
Subject: [PATCH V3] ARM: mm: make UACCESS_WITH_MEMCPY huge page aware
Date: Fri, 11 Oct 2013 12:11:59 +0100
Message-Id: <1381489919-2508-1-git-send-email-steve.capper@linaro.org>
X-Mailer: git-send-email 1.7.10.4

The memory pinning code in uaccess_with_memcpy.c does not check for
HugeTLB or THP pmds, and will enter an infinite loop should a
__copy_to_user or __clear_user occur against a huge page.

This patch adds detection code for huge pages to pin_page_for_write.
As this code can be executed in a fast path, it refers to the actual
pmds rather than the vma. If a HugeTLB or THP is found (they have the
same pmd representation on ARM), the page table spinlock is taken to
prevent modification whilst the page is pinned.

On ARM, huge pages are only represented as pmds, thus no huge pud
checks are performed. (For huge puds one would lock the page table in
a similar manner to the pmd case.)

Two helper functions are introduced: pmd_thp_or_huge checks whether or
not a page is huge or transparent huge (which have the same pmd layout
on ARM), and pmd_hugewillfault detects whether or not a page fault
will occur on write to the page.

Running the following test (with the chunking from read_zero removed):
 $ dd if=/dev/zero of=/dev/null bs=10M count=1024
Gave:
 2.3 GB/s backed by normal pages,
 2.9 GB/s backed by huge pages,
 5.1 GB/s backed by huge pages, with page mask=HPAGE_MASK.

After some discussion, it was decided not to adopt the HPAGE_MASK, as
this would have had a significant detrimental effect on overall system
latency due to the page_table_lock being held for too long. This could
be revisited if split huge page locks are adopted.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
Changes in V3: pmd_hugewillfault and pmd_thp_or_huge were erroneously
omitted from pgtable-2level.h, causing build problems. Empty stubs are
added for now, as we don't have huge page support for short descriptors.

Nicolas, can I please re-add your Reviewed-by to this?
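[Editor's note: not part of the original patch. As a rough illustration
of the failure mode described above, a minimal userspace reproducer
along these lines should trigger the unpatched hang: read()ing from
/dev/zero drives __clear_user() against the destination buffer, and a
MAP_HUGETLB buffer makes pin_page_for_write() spin. This is a sketch
only; it assumes an LPAE kernel with CONFIG_UACCESS_WITH_MEMCPY=y,
a 2MB huge page size, and huge pages reserved beforehand, e.g.
"echo 16 > /proc/sys/vm/nr_hugepages".]

/* repro.c -- hypothetical test, not part of this patch. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define BUF_SIZE	(2UL * 1024 * 1024)	/* one 2MB huge page */

int main(void)
{
	char *buf;
	int fd;

	fd = open("/dev/zero", O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Fault the huge page in (young + writable) before the copy,
	 * so the kernel's pin_page_for_write() sees a present huge pmd. */
	memset(buf, 0xff, BUF_SIZE);

	/* Unpatched, this read() never returns. */
	if (read(fd, buf, BUF_SIZE) < 0)
		perror("read");

	munmap(buf, BUF_SIZE);
	close(fd);
	return 0;
}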
---
 arch/arm/include/asm/pgtable-2level.h |  7 ++++++
 arch/arm/include/asm/pgtable-3level.h |  3 +++
 arch/arm/lib/uaccess_with_memcpy.c    | 41 ++++++++++++++++++++++++++++++++---
 3 files changed, 48 insertions(+), 3 deletions(-)

diff --git a/arch/arm/include/asm/pgtable-2level.h b/arch/arm/include/asm/pgtable-2level.h
index f97ee02..86a659a 100644
--- a/arch/arm/include/asm/pgtable-2level.h
+++ b/arch/arm/include/asm/pgtable-2level.h
@@ -181,6 +181,13 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
 
 #define set_pte_ext(ptep,pte,ext)	cpu_set_pte_ext(ptep,pte,ext)
 
+/*
+ * We don't have huge page support for short descriptors, for the moment
+ * define empty stubs for use by pin_page_for_write.
+ */
+#define pmd_hugewillfault(pmd)	(0)
+#define pmd_thp_or_huge(pmd)	(0)
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_PGTABLE_2LEVEL_H */
diff --git a/arch/arm/include/asm/pgtable-3level.h b/arch/arm/include/asm/pgtable-3level.h
index 5689c18..39c54cf 100644
--- a/arch/arm/include/asm/pgtable-3level.h
+++ b/arch/arm/include/asm/pgtable-3level.h
@@ -206,6 +206,9 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
 #define __HAVE_ARCH_PMD_WRITE
 #define pmd_write(pmd)		(!(pmd_val(pmd) & PMD_SECT_RDONLY))
 
+#define pmd_hugewillfault(pmd)	(!pmd_young(pmd) || !pmd_write(pmd))
+#define pmd_thp_or_huge(pmd)	(pmd_huge(pmd) || pmd_trans_huge(pmd))
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define pmd_trans_huge(pmd)	(pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT))
 #define pmd_trans_splitting(pmd) (pmd_val(pmd) & PMD_SECT_SPLITTING)
diff --git a/arch/arm/lib/uaccess_with_memcpy.c b/arch/arm/lib/uaccess_with_memcpy.c
index 025f742..3e58d71 100644
--- a/arch/arm/lib/uaccess_with_memcpy.c
+++ b/arch/arm/lib/uaccess_with_memcpy.c
@@ -18,6 +18,7 @@
 #include <linux/hardirq.h> /* for in_atomic() */
 #include <linux/gfp.h>
 #include <linux/highmem.h>
+#include <linux/hugetlb.h>
 #include <asm/current.h>
 #include <asm/page.h>
 
@@ -40,7 +41,35 @@ pin_page_for_write(const void __user *_addr, pte_t **ptep, spinlock_t **ptlp)
 		return 0;
 
 	pmd = pmd_offset(pud, addr);
-	if (unlikely(pmd_none(*pmd) || pmd_bad(*pmd)))
+	if (unlikely(pmd_none(*pmd)))
+		return 0;
+
+	/*
+	 * A pmd can be bad if it refers to a HugeTLB or THP page.
+	 *
+	 * Both THP and HugeTLB pages have the same pmd layout
+	 * and should not be manipulated by the pte functions.
+	 *
+	 * Lock the page table for the destination and check
+	 * to see that it's still huge and whether or not we will
+	 * need to fault on write, or if we have a splitting THP.
+	 */
+	if (unlikely(pmd_thp_or_huge(*pmd))) {
+		ptl = &current->mm->page_table_lock;
+		spin_lock(ptl);
+		if (unlikely(!pmd_thp_or_huge(*pmd)
+			|| pmd_hugewillfault(*pmd)
+			|| pmd_trans_splitting(*pmd))) {
+			spin_unlock(ptl);
+			return 0;
+		}
+
+		*ptep = NULL;
+		*ptlp = ptl;
+		return 1;
+	}
+
+	if (unlikely(pmd_bad(*pmd)))
 		return 0;
 
 	pte = pte_offset_map_lock(current->mm, pmd, addr, &ptl);
@@ -94,7 +123,10 @@ __copy_to_user_memcpy(void __user *to, const void *from, unsigned long n)
 		from += tocopy;
 		n -= tocopy;
 
-		pte_unmap_unlock(pte, ptl);
+		if (pte)
+			pte_unmap_unlock(pte, ptl);
+		else
+			spin_unlock(ptl);
 	}
 	if (!atomic)
 		up_read(&current->mm->mmap_sem);
@@ -147,7 +179,10 @@ __clear_user_memset(void __user *addr, unsigned long n)
 		addr += tocopy;
 		n -= tocopy;
 
-		pte_unmap_unlock(pte, ptl);
+		if (pte)
+			pte_unmap_unlock(pte, ptl);
+		else
+			spin_unlock(ptl);
 	}
 	up_read(&current->mm->mmap_sem);