
mm: page_vma_mapped: ensure pmd is loaded with READ_ONCE outside of lock

Message ID 1508238917-14553-1-git-send-email-will.deacon@arm.com
State Accepted
Commit a7b100953aa33a5bbdc3e5e7f2241b9c0704606e
Series mm: page_vma_mapped: ensure pmd is loaded with READ_ONCE outside of lock

Commit Message

Will Deacon Oct. 17, 2017, 11:15 a.m. UTC
[Commit a7b100953aa33a5bbdc3e5e7f2241b9c0704606e upstream]

Loading the pmd without holding the pmd_lock exposes us to races with
concurrent updaters of the page tables but, worse still, it also allows
the compiler to cache the pmd value in a register and reuse it later on,
even if we've performed a READ_ONCE in between and seen a more recent
value.
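
To make the hazard concrete, here is a minimal user-space sketch of the
same pattern. The flag bits and helpers are hypothetical, and the kernel's
READ_ONCE is more elaborate, but for scalar loads it boils down to a
volatile access like the one below:

  /* User-space analogue of the kernel macro, for scalar types. */
  #define READ_ONCE(x) (*(const volatile typeof(x) *)&(x))

  #define PMD_PRESENT 0x1UL  /* hypothetical flag bits, for illustration */
  #define PMD_HUGE    0x2UL

  /*
   * Broken: two plain loads of *pmd. The compiler is free to cache the
   * first value in a register and reuse it for the second check, even
   * if a concurrent writer has updated *pmd in between.
   */
  int walk_broken(unsigned long *pmd)
  {
          if (*pmd & PMD_HUGE)            /* load #1 */
                  return 1;
          if (!(*pmd & PMD_PRESENT))      /* load #2: may reuse load #1 */
                  return 0;
          return 2;
  }

  /*
   * Fixed: snapshot the entry once; both checks operate on the same
   * value, so a stale register copy can never be substituted.
   */
  int walk_fixed(unsigned long *pmd)
  {
          unsigned long pmde = READ_ONCE(*pmd);

          if (pmde & PMD_HUGE)
                  return 1;
          if (!(pmde & PMD_PRESENT))
                  return 0;
          return 2;
  }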

In the case of page_vma_mapped_walk, this leads to the following crash
when the pmd loaded for the initial pmd_trans_huge check is all zeroes
and a subsequent valid table entry is loaded by check_pmd.  We then
proceed into map_pte, but the compiler reuses the zero entry inside
pte_offset_map, resulting in a junk pointer being installed in
pvmw->pte:

  PC is at check_pte+0x20/0x170
  LR is at page_vma_mapped_walk+0x2e0/0x540
  [...]
  Process doio (pid: 2463, stack limit = 0xffff00000f2e8000)
  Call trace:
    check_pte+0x20/0x170
    page_vma_mapped_walk+0x2e0/0x540
    page_mkclean_one+0xac/0x278
    rmap_walk_file+0xf0/0x238
    rmap_walk+0x64/0xa0
    page_mkclean+0x90/0xa8
    clear_page_dirty_for_io+0x84/0x2a8
    mpage_submit_page+0x34/0x98
    mpage_process_page_bufs+0x164/0x170
    mpage_prepare_extent_to_map+0x134/0x2b8
    ext4_writepages+0x484/0xe30
    do_writepages+0x44/0xe8
    __filemap_fdatawrite_range+0xbc/0x110
    file_write_and_wait_range+0x48/0xd8
    ext4_sync_file+0x80/0x4b8
    vfs_fsync_range+0x64/0xc0
    SyS_msync+0x194/0x1e8

This patch fixes the problem by ensuring that READ_ONCE is used before
the initial checks on the pmd, and this value is subsequently used when
checking whether or not the pmd is present.  check_pmd is removed and
the pmd_present check is inlined directly.
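
For reference, the fixed path in page_vma_mapped_walk() then reads
roughly as follows (condensed from the hunks below; the huge-pmd
handling under the lock is elided):

  	pvmw->pmd = pmd_offset(pud, pvmw->address);
  	/* Snapshot the entry once; every check below uses this value. */
  	pmde = READ_ONCE(*pvmw->pmd);
  	if (pmd_trans_huge(pmde)) {
  		pvmw->ptl = pmd_lock(mm, pvmw->pmd);
  		if (!pmd_present(*pvmw->pmd))	/* re-checked under the lock */
  			return not_found(pvmw);
  		/* ... huge-pmd handling ... */
  	} else if (!pmd_present(pmde)) {
  		return false;
  	}
  	if (!map_pte(pvmw))
  		goto next_pte;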

Link: http://lkml.kernel.org/r/1507222630-5839-1-git-send-email-will.deacon@arm.com
Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()")
Cc: <stable@vger.kernel.org> # 4.13
[will: backport to 4.13.y]
Signed-off-by: Will Deacon <will.deacon@arm.com>

---
 mm/page_vma_mapped.c | 25 ++++++++++---------------
 1 file changed, 10 insertions(+), 15 deletions(-)

-- 
2.1.4

Comments

Greg KH Oct. 19, 2017, 9:33 a.m. UTC | #1
On Tue, Oct 17, 2017 at 12:15:17PM +0100, Will Deacon wrote:
> [Commit a7b100953aa33a5bbdc3e5e7f2241b9c0704606e upstream]
[...]
> [will: backport to 4.13.y]
> Signed-off-by: Will Deacon <will.deacon@arm.com>

Thanks for the backport!

greg k-h

Patch

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 8ec6ba230bb9..6b9311631aa1 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -6,17 +6,6 @@ 
 
 #include "internal.h"
 
-static inline bool check_pmd(struct page_vma_mapped_walk *pvmw)
-{
-	pmd_t pmde;
-	/*
-	 * Make sure we don't re-load pmd between present and !trans_huge check.
-	 * We need a consistent view.
-	 */
-	pmde = READ_ONCE(*pvmw->pmd);
-	return pmd_present(pmde) && !pmd_trans_huge(pmde);
-}
-
 static inline bool not_found(struct page_vma_mapped_walk *pvmw)
 {
 	page_vma_mapped_walk_done(pvmw);
@@ -106,6 +95,7 @@  bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 	pgd_t *pgd;
 	p4d_t *p4d;
 	pud_t *pud;
+	pmd_t pmde;
 
 	/* The only possible pmd mapping has been handled on last iteration */
 	if (pvmw->pmd && !pvmw->pte)
@@ -138,7 +128,13 @@  bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 	if (!pud_present(*pud))
 		return false;
 	pvmw->pmd = pmd_offset(pud, pvmw->address);
-	if (pmd_trans_huge(*pvmw->pmd)) {
+	/*
+	 * Make sure the pmd value isn't cached in a register by the
+	 * compiler and used as a stale value after we've observed a
+	 * subsequent update.
+	 */
+	pmde = READ_ONCE(*pvmw->pmd);
+	if (pmd_trans_huge(pmde)) {
 		pvmw->ptl = pmd_lock(mm, pvmw->pmd);
 		if (!pmd_present(*pvmw->pmd))
 			return not_found(pvmw);
@@ -153,9 +149,8 @@  bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 			spin_unlock(pvmw->ptl);
 			pvmw->ptl = NULL;
 		}
-	} else {
-		if (!check_pmd(pvmw))
-			return false;
+	} else if (!pmd_present(pmde)) {
+		return false;
 	}
 	if (!map_pte(pvmw))
 		goto next_pte;