
[v2,2/3] mm/page_ref: Ensure page_ref_unfreeze is ordered against prior accesses

Message ID 1497349722-6731-3-git-send-email-will.deacon@arm.com
State Accepted
Commit 108a7ac448caff8e35e8c3f92f65faad893e5aca
Series mm: huge pages: Misc fixes for issues found during fuzzing

Commit Message

Will Deacon June 13, 2017, 10:28 a.m. UTC
page_ref_freeze and page_ref_unfreeze are designed to be used as a pair,
wrapping a critical section where struct pages can be modified without
having to worry about consistency for a concurrent fast-GUP.

Whilst page_ref_freeze has full barrier semantics due to its use of
atomic_cmpxchg, page_ref_unfreeze is implemented using atomic_set, which
doesn't provide any barrier semantics and allows the operation to be
reordered with respect to page modifications in the critical section.

This patch ensures that page_ref_unfreeze is ordered after any critical
section updates, by invoking smp_mb() prior to the atomic_set.

Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Acked-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

---
 include/linux/page_ref.h | 1 +
 1 file changed, 1 insertion(+)

-- 
2.1.4

Comments

Kirill A. Shutemov June 13, 2017, 2:06 p.m. UTC | #1
On Tue, Jun 13, 2017 at 11:28:41AM +0100, Will Deacon wrote:
> page_ref_freeze and page_ref_unfreeze are designed to be used as a pair,
> wrapping a critical section where struct pages can be modified without
> having to worry about consistency for a concurrent fast-GUP.
> 
> Whilst page_ref_freeze has full barrier semantics due to its use of
> atomic_cmpxchg, page_ref_unfreeze is implemented using atomic_set, which
> doesn't provide any barrier semantics and allows the operation to be
> reordered with respect to page modifications in the critical section.
> 
> This patch ensures that page_ref_unfreeze is ordered after any critical
> section updates, by invoking smp_mb() prior to the atomic_set.
> 
> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
> Acked-by: Steve Capper <steve.capper@arm.com>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
>  include/linux/page_ref.h | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
> index 610e13271918..1fd71733aa68 100644
> --- a/include/linux/page_ref.h
> +++ b/include/linux/page_ref.h
> @@ -174,6 +174,7 @@ static inline void page_ref_unfreeze(struct page *page, int count)
>  	VM_BUG_ON_PAGE(page_count(page) != 0, page);
>  	VM_BUG_ON(count == 0);
>  
> +	smp_mb();

Don't we want some comment here?

Otherwise:

Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>


-- 
 Kirill A. Shutemov

Patch

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 610e13271918..1fd71733aa68 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -174,6 +174,7 @@ static inline void page_ref_unfreeze(struct page *page, int count)
 	VM_BUG_ON_PAGE(page_count(page) != 0, page);
 	VM_BUG_ON(count == 0);
 
+	smp_mb();
 	atomic_set(&page->_refcount, count);
 	if (page_ref_tracepoint_active(__tracepoint_page_ref_unfreeze))
 		__page_ref_unfreeze(page, count);