| Message ID | 1442494219-6133-7-git-send-email-will.deacon@arm.com |
| --- | --- |
| State | Accepted |
| Commit | 5a7862e83000ccfd36db927c6f060458fe271157 |
On Thu, Sep 17, 2015 at 01:50:15PM +0100, Will Deacon wrote:
> The TLB gather code sets fullmm=1 when tearing down the entire address
> space for an mm_struct on exit or execve. Given that the ASID allocator
> will never re-allocate a dirty ASID, this flushing is not needed and can
> simply be avoided in the flushing code.
>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/include/asm/tlb.h | 26 +++++++++++++++-----------
>  1 file changed, 15 insertions(+), 11 deletions(-)
>
> diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
> index d6e6b6660380..ffdaea7954bb 100644
> --- a/arch/arm64/include/asm/tlb.h
> +++ b/arch/arm64/include/asm/tlb.h
> @@ -37,17 +37,21 @@ static inline void __tlb_remove_table(void *_table)
>
>  static inline void tlb_flush(struct mmu_gather *tlb)
>  {
> -	if (tlb->fullmm) {
> -		flush_tlb_mm(tlb->mm);
> -	} else {
> -		struct vm_area_struct vma = { .vm_mm = tlb->mm, };
> -		/*
> -		 * The intermediate page table levels are already handled by
> -		 * the __(pte|pmd|pud)_free_tlb() functions, so last level
> -		 * TLBI is sufficient here.
> -		 */
> -		__flush_tlb_range(&vma, tlb->start, tlb->end, true);
> -	}
> +	struct vm_area_struct vma = { .vm_mm = tlb->mm, };
> +
> +	/*
> +	 * The ASID allocator will either invalidate the ASID or mark
> +	 * it as used.
> +	 */
> +	if (tlb->fullmm)
> +		return;

BTW, do we actually need this flush_tlb_mm() with the current ASID
allocator? It doesn't reuse old ASIDs either before a full TLBI (just
trying to remember if we had any logic; or maybe it was needed before
non-lazy __pte_free_tlb).
On Tue, Sep 29, 2015 at 10:29:58AM +0100, Catalin Marinas wrote:
> On Thu, Sep 17, 2015 at 01:50:15PM +0100, Will Deacon wrote:
> > The TLB gather code sets fullmm=1 when tearing down the entire address
> > space for an mm_struct on exit or execve. Given that the ASID allocator
> > will never re-allocate a dirty ASID, this flushing is not needed and can
> > simply be avoided in the flushing code.
> >
> > Signed-off-by: Will Deacon <will.deacon@arm.com>
> > ---
> >  arch/arm64/include/asm/tlb.h | 26 +++++++++++++++-----------
> >  1 file changed, 15 insertions(+), 11 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
> > index d6e6b6660380..ffdaea7954bb 100644
> > --- a/arch/arm64/include/asm/tlb.h
> > +++ b/arch/arm64/include/asm/tlb.h
> > @@ -37,17 +37,21 @@ static inline void __tlb_remove_table(void *_table)
> >
> >  static inline void tlb_flush(struct mmu_gather *tlb)
> >  {
> > -	if (tlb->fullmm) {
> > -		flush_tlb_mm(tlb->mm);
> > -	} else {
> > -		struct vm_area_struct vma = { .vm_mm = tlb->mm, };
> > -		/*
> > -		 * The intermediate page table levels are already handled by
> > -		 * the __(pte|pmd|pud)_free_tlb() functions, so last level
> > -		 * TLBI is sufficient here.
> > -		 */
> > -		__flush_tlb_range(&vma, tlb->start, tlb->end, true);
> > -	}
> > +	struct vm_area_struct vma = { .vm_mm = tlb->mm, };
> > +
> > +	/*
> > +	 * The ASID allocator will either invalidate the ASID or mark
> > +	 * it as used.
> > +	 */
> > +	if (tlb->fullmm)
> > +		return;
>
> BTW, do we actually need this flush_tlb_mm() with the current ASID
> allocator? It doesn't reuse old ASIDs either before a full TLBI (just
> trying to remember if we had any logic; or maybe it was needed before
> non-lazy __pte_free_tlb).

I'm afraid I don't follow you here. This diff is removing the flush_tlb_mm
because, as you point out, it's no longer needed. What am I missing?

Will
On Mon, Oct 05, 2015 at 05:33:13PM +0100, Will Deacon wrote:
> On Tue, Sep 29, 2015 at 10:29:58AM +0100, Catalin Marinas wrote:
> > On Thu, Sep 17, 2015 at 01:50:15PM +0100, Will Deacon wrote:
> > > The TLB gather code sets fullmm=1 when tearing down the entire address
> > > space for an mm_struct on exit or execve. Given that the ASID allocator
> > > will never re-allocate a dirty ASID, this flushing is not needed and can
> > > simply be avoided in the flushing code.
> > >
> > > Signed-off-by: Will Deacon <will.deacon@arm.com>
> > > ---
> > >  arch/arm64/include/asm/tlb.h | 26 +++++++++++++++-----------
> > >  1 file changed, 15 insertions(+), 11 deletions(-)
> > >
> > > diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
> > > index d6e6b6660380..ffdaea7954bb 100644
> > > --- a/arch/arm64/include/asm/tlb.h
> > > +++ b/arch/arm64/include/asm/tlb.h
> > > @@ -37,17 +37,21 @@ static inline void __tlb_remove_table(void *_table)
> > >
> > >  static inline void tlb_flush(struct mmu_gather *tlb)
> > >  {
> > > -	if (tlb->fullmm) {
> > > -		flush_tlb_mm(tlb->mm);
> > > -	} else {
> > > -		struct vm_area_struct vma = { .vm_mm = tlb->mm, };
> > > -		/*
> > > -		 * The intermediate page table levels are already handled by
> > > -		 * the __(pte|pmd|pud)_free_tlb() functions, so last level
> > > -		 * TLBI is sufficient here.
> > > -		 */
> > > -		__flush_tlb_range(&vma, tlb->start, tlb->end, true);
> > > -	}
> > > +	struct vm_area_struct vma = { .vm_mm = tlb->mm, };
> > > +
> > > +	/*
> > > +	 * The ASID allocator will either invalidate the ASID or mark
> > > +	 * it as used.
> > > +	 */
> > > +	if (tlb->fullmm)
> > > +		return;
> >
> > BTW, do we actually need this flush_tlb_mm() with the current ASID
> > allocator? It doesn't reuse old ASIDs either before a full TLBI (just
> > trying to remember if we had any logic; or maybe it was needed before
> > non-lazy __pte_free_tlb).
>
> I'm afraid I don't follow you here. This diff is removing the flush_tlb_mm
> because, as you point out, it's no longer needed.

I don't think you are missing anything, just wondering why we had it
before (probably forgot to remove it when the non-lazy __pte_free_tlb
was added).
diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index d6e6b6660380..ffdaea7954bb 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -37,17 +37,21 @@ static inline void __tlb_remove_table(void *_table)
 
 static inline void tlb_flush(struct mmu_gather *tlb)
 {
-	if (tlb->fullmm) {
-		flush_tlb_mm(tlb->mm);
-	} else {
-		struct vm_area_struct vma = { .vm_mm = tlb->mm, };
-		/*
-		 * The intermediate page table levels are already handled by
-		 * the __(pte|pmd|pud)_free_tlb() functions, so last level
-		 * TLBI is sufficient here.
-		 */
-		__flush_tlb_range(&vma, tlb->start, tlb->end, true);
-	}
+	struct vm_area_struct vma = { .vm_mm = tlb->mm, };
+
+	/*
+	 * The ASID allocator will either invalidate the ASID or mark
+	 * it as used.
+	 */
+	if (tlb->fullmm)
+		return;
+
+	/*
+	 * The intermediate page table levels are already handled by
+	 * the __(pte|pmd|pud)_free_tlb() functions, so last level
+	 * TLBI is sufficient here.
+	 */
+	__flush_tlb_range(&vma, tlb->start, tlb->end, true);
 }
 
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
The TLB gather code sets fullmm=1 when tearing down the entire address
space for an mm_struct on exit or execve. Given that the ASID allocator
will never re-allocate a dirty ASID, this flushing is not needed and can
simply be avoided in the flushing code.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/tlb.h | 26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)
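For readers unfamiliar with the allocator behaviour the commit message relies on, below is a minimal, self-contained toy model of a generation-based ASID allocator. It is not the arm64 implementation: the names (new_context, destroy_context, flush_all_tlbs) and the 16-entry ASID space are invented for illustration. What it demonstrates is the invariant discussed in the thread: a dirty ASID stays reserved until the allocator rolls over to a new generation, and every rollover performs a full TLB invalidation, so tearing down an address space needs no flush of its own.

/*
 * Toy model of a generation-based ASID allocator (illustration only,
 * not the arm64 kernel code). An ASID handed out in one generation is
 * never recycled until the space is exhausted, at which point the
 * generation counter is bumped and *all* TLBs are invalidated. A dying
 * address space therefore just abandons its ASID: any stale TLB
 * entries tagged with it are unreachable until the rollover flush.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_ASIDS      16u              /* tiny space to force rollovers */
#define ASID_MASK      (NUM_ASIDS - 1)
#define GENERATION_INC NUM_ASIDS        /* generation lives above the ASID bits */

static uint64_t generation = GENERATION_INC;
static bool asid_used[NUM_ASIDS];

static void flush_all_tlbs(void)
{
	/* Stand-in for the broadcast TLBI performed on rollover. */
	printf("generation %llu: full TLB invalidate\n",
	       (unsigned long long)(generation / GENERATION_INC));
}

/* Allocate a (generation | asid) cookie for a new address space. */
static uint64_t new_context(void)
{
	for (uint32_t asid = 1; asid < NUM_ASIDS; asid++) {  /* 0 reserved */
		if (!asid_used[asid]) {
			asid_used[asid] = true;
			return generation | asid;
		}
	}

	/*
	 * Exhausted: only now are old ASIDs recycled, and only after a
	 * full flush, so no stale translation can ever be hit.
	 */
	generation += GENERATION_INC;
	memset(asid_used, 0, sizeof(asid_used));
	flush_all_tlbs();
	asid_used[1] = true;
	return generation | 1;
}

/*
 * Address-space teardown: deliberately a no-op. The ASID stays marked
 * "dirty" until the next rollover. This is exactly why the patched
 * tlb_flush() can return early when tlb->fullmm is set.
 */
static void destroy_context(uint64_t ctx)
{
	(void)ctx;
}

int main(void)
{
	/*
	 * Churn through short-lived address spaces; flushes happen only
	 * at rollover, never at teardown.
	 */
	for (int i = 0; i < 40; i++)
		destroy_context(new_context());
	return 0;
}

The real allocator is more involved (roughly speaking, it tracks per-CPU active ASIDs and can carry a still-live ASID across a rollover), but the invariant sketched here is the one that makes the fullmm early return in this patch safe.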