Message ID | 1535125966-7666-5-git-send-email-will.deacon@arm.com |
---|---|
State | Superseded |
Series | Avoid synchronous TLB invalidation for intermediate page-table entries on arm64 |
Add a comment to explain why we can't get away with last-level
invalidation in flush_tlb_range()

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/tlbflush.h | 4 ++++
 1 file changed, 4 insertions(+)

```diff
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index e257f8655b84..ddbf1718669d 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -182,6 +182,10 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 static inline void flush_tlb_range(struct vm_area_struct *vma,
 				   unsigned long start, unsigned long end)
 {
+	/*
+	 * We cannot use leaf-only invalidation here, since we may be invalidating
+	 * table entries as part of collapsing hugepages or moving page tables.
+	 */
 	__flush_tlb_range(vma, start, end, false);
 }
```

--
2.1.4
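For context, the final boolean argument to __flush_tlb_range() is the last_level flag, which selects between the leaf-only TLBI VALE1IS operation and the all-levels TLBI VAE1IS operation. The sketch below is a simplified, hypothetical rendering of that selection, not the exact upstream implementation (the real code also encodes the ASID into the address via __TLBI_VADDR, issues the __tlbi_user() variants, and falls back to a full flush_tlb_mm() for very large ranges); it is only meant to illustrate why flush_tlb_range() must pass false:

```c
/*
 * Simplified sketch of how the last_level flag is consumed; an
 * illustration only, not the exact upstream code (ASID encoding,
 * __tlbi_user() and the large-range fallback are omitted).
 */
static inline void __flush_tlb_range_sketch(unsigned long start,
					    unsigned long end,
					    bool last_level)
{
	unsigned long addr;

	dsb(ishst);				/* make page-table updates visible before TLBI */
	for (addr = start >> 12; addr < (end >> 12); addr++) {
		if (last_level)
			__tlbi(vale1is, addr);	/* leaf (last-level) entries only */
		else
			__tlbi(vae1is, addr);	/* all levels, including table entries */
	}
	dsb(ish);				/* wait for the invalidation to complete */
}
```

With last_level == false, cached copies of intermediate (table) entries are invalidated as well as the leaf entries, which is what paths such as hugepage collapse and page-table moves require; the leaf-only form would leave stale table entries in the walk cache.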