Message ID | 20210210180316.23654-1-catalin.marinas@arm.com
---|---
State | Accepted
Commit | 68d54ceeec0e5fee4fb8048e6a04c193f32525ca
Series | arm64: mte: Allow PTRACE_PEEKMTETAGS access to the zero page
On 2/10/21 3:03 PM, Catalin Marinas wrote:
> The ptrace(PTRACE_PEEKMTETAGS) implementation checks whether the user
> page has valid tags (mapped with PROT_MTE) by testing the PG_mte_tagged
> page flag. If this bit is cleared, ptrace(PTRACE_PEEKMTETAGS) returns
> -EIO.
>
> A newly created (PROT_MTE) mapping points to the zero page which had its
> tags zeroed during cpu_enable_mte(). If there were no prior writes to
> this mapping, ptrace(PTRACE_PEEKMTETAGS) fails with -EIO since the zero
> page does not have the PG_mte_tagged flag set.
>
> [...]

Thanks. I gave this a try and it works as expected. So memory that is
PROT_MTE but has not been accessed yet can be inspected with PEEKMTETAGS
without getting an EIO back.
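[Editor's note: for readers unfamiliar with the interface, a minimal tracer-side sketch of the check Luis describes might look like the code below. It is an illustration, not his actual test: the `pid`/`remote_addr` values are hypothetical, and `PTRACE_PEEKMTETAGS` (normally from the arm64 `<asm/ptrace.h>`) takes a `struct iovec` describing a tag buffer, one tag byte per 16-byte granule, with `iov_len` updated to the number of tags copied.]

```c
/*
 * Hypothetical tracer-side sketch: read MTE tags from a stopped tracee
 * for a PROT_MTE mapping that was never written to. Before this fix,
 * the ptrace() call failed with EIO because the mapping still pointed
 * at the (untagged) zero page.
 */
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>

#ifndef PTRACE_PEEKMTETAGS
#define PTRACE_PEEKMTETAGS 33	/* arm64-specific, from <asm/ptrace.h> */
#endif

static int peek_tags(pid_t pid, void *remote_addr)
{
	unsigned char tags[16];	/* one tag byte per 16-byte granule */
	struct iovec iov = { .iov_base = tags, .iov_len = sizeof(tags) };

	if (ptrace(PTRACE_PEEKMTETAGS, pid, remote_addr, &iov) < 0) {
		perror("PTRACE_PEEKMTETAGS");	/* EIO before this fix */
		return -1;
	}
	/* The kernel updates iov_len to the number of tags copied. */
	printf("read %zu tags, first tag = %u\n", iov.iov_len, tags[0]);
	return 0;
}
```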
On Wed, Feb 10, 2021 at 03:52:18PM -0300, Luis Machado wrote:
> On 2/10/21 3:03 PM, Catalin Marinas wrote:
> > The ptrace(PTRACE_PEEKMTETAGS) implementation checks whether the user
> > page has valid tags (mapped with PROT_MTE) by testing the PG_mte_tagged
> > page flag. If this bit is cleared, ptrace(PTRACE_PEEKMTETAGS) returns
> > -EIO.
[...]
> Thanks. I gave this a try and it works as expected. So memory that is
> PROT_MTE but has not been accessed yet can be inspected with PEEKMTETAGS
> without getting an EIO back.

Thanks. I assume I can add your tested-by.

--
Catalin
On 2/10/21 6:03 PM, Catalin Marinas wrote:
> The ptrace(PTRACE_PEEKMTETAGS) implementation checks whether the user
> page has valid tags (mapped with PROT_MTE) by testing the PG_mte_tagged
> page flag. If this bit is cleared, ptrace(PTRACE_PEEKMTETAGS) returns
> -EIO.
>
> [...]
>
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index dc9ada64feed..80b62fe49dcf 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -329,11 +329,12 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
>  	 * would cause the existing tags to be cleared if the page
>  	 * was never mapped with PROT_MTE.
>  	 */
> -	if (!test_bit(PG_mte_tagged, &page->flags)) {
> +	if (!(vma->vm_flags & VM_MTE)) {
>  		ret = -EOPNOTSUPP;
>  		put_page(page);
>  		break;
>  	}
> +	WARN_ON_ONCE(!test_bit(PG_mte_tagged, &page->flags));

Nit: I would leave a blank line before WARN_ON_ONCE() to improve
readability, and maybe turn it into WARN_ONCE() with a message (or,
alternatively, a comment on top) based on what you are explaining in
the commit message.

Otherwise:

Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>

--
Regards,
Vincenzo
On Wed, 10 Feb 2021 18:03:16 +0000, Catalin Marinas wrote:
> The ptrace(PTRACE_PEEKMTETAGS) implementation checks whether the user
> page has valid tags (mapped with PROT_MTE) by testing the PG_mte_tagged
> page flag. If this bit is cleared, ptrace(PTRACE_PEEKMTETAGS) returns
> -EIO.
>
> A newly created (PROT_MTE) mapping points to the zero page which had its
> tags zeroed during cpu_enable_mte(). If there were no prior writes to
> this mapping, ptrace(PTRACE_PEEKMTETAGS) fails with -EIO since the zero
> page does not have the PG_mte_tagged flag set.
>
> [...]

Applied to arm64 (for-next/fixes), thanks!

[1/1] arm64: mte: Allow PTRACE_PEEKMTETAGS access to the zero page
      https://git.kernel.org/arm64/c/68d54ceeec0e

--
Catalin
```diff
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index e99eddec0a46..3e6331b64932 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1701,16 +1701,12 @@ static void bti_enable(const struct arm64_cpu_capabilities *__unused)
 #ifdef CONFIG_ARM64_MTE
 static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
 {
-	static bool cleared_zero_page = false;
-
 	/*
 	 * Clear the tags in the zero page. This needs to be done via the
 	 * linear map which has the Tagged attribute.
 	 */
-	if (!cleared_zero_page) {
-		cleared_zero_page = true;
+	if (!test_and_set_bit(PG_mte_tagged, &ZERO_PAGE(0)->flags))
 		mte_clear_page_tags(lm_alias(empty_zero_page));
-	}
 
 	kasan_init_hw_tags_cpu();
 }
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index dc9ada64feed..80b62fe49dcf 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -329,11 +329,12 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
 	 * would cause the existing tags to be cleared if the page
 	 * was never mapped with PROT_MTE.
 	 */
-	if (!test_bit(PG_mte_tagged, &page->flags)) {
+	if (!(vma->vm_flags & VM_MTE)) {
 		ret = -EOPNOTSUPP;
 		put_page(page);
 		break;
 	}
+	WARN_ON_ONCE(!test_bit(PG_mte_tagged, &page->flags));
 
 	/* limit access to the end of the page */
 	offset = offset_in_page(addr);
```
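[Editor's note: on the cpufeature.c hunk, cpu_enable_mte() runs on every CPU as it comes online, so the tag clearing must happen exactly once. test_and_set_bit() atomically sets PG_mte_tagged and returns its previous value, so only the first CPU through performs the clearing, and the flag then doubles as the marker the ptrace path expects. A generic sketch of that once-only idiom follows; the names are hypothetical and kernel context is assumed.]

```c
/*
 * Hypothetical once-only sketch of the idiom the patch relies on.
 * test_and_set_bit() (<linux/bitops.h>) atomically sets the bit and
 * returns its old value, so exactly one caller across all CPUs sees 0
 * and performs the one-time initialisation.
 */
#include <linux/bitops.h>

static unsigned long init_flags;

static void one_time_init(void);	/* hypothetical helper */

static void per_cpu_enable(void)
{
	/* The first CPU in wins the race; later CPUs skip the init. */
	if (!test_and_set_bit(0, &init_flags))
		one_time_init();
}
```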
The ptrace(PTRACE_PEEKMTETAGS) implementation checks whether the user
page has valid tags (mapped with PROT_MTE) by testing the PG_mte_tagged
page flag. If this bit is cleared, ptrace(PTRACE_PEEKMTETAGS) returns
-EIO.

A newly created (PROT_MTE) mapping points to the zero page which had its
tags zeroed during cpu_enable_mte(). If there were no prior writes to
this mapping, ptrace(PTRACE_PEEKMTETAGS) fails with -EIO since the zero
page does not have the PG_mte_tagged flag set.

Set PG_mte_tagged on the zero page when its tags are cleared during
boot. In addition, to avoid ptrace(PTRACE_PEEKMTETAGS) succeeding on
!PROT_MTE mappings pointing to the zero page, change the
__access_remote_tags() check to (vm_flags & VM_MTE) instead of
PG_mte_tagged.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Fixes: 34bfeea4a9e9 ("arm64: mte: Clear the tags when a page is mapped in user-space with PROT_MTE")
Cc: <stable@vger.kernel.org> # 5.10.x
Cc: Will Deacon <will@kernel.org>
Reported-by: Luis Machado <luis.machado@linaro.org>
---

The fix is actually checking VM_MTE instead of PG_mte_tagged in
__access_remote_tags() but I added the WARN_ON(!PG_mte_tagged) and
setting the flag on the zero page in case we break this assumption in
the future.

 arch/arm64/kernel/cpufeature.c | 6 +-----
 arch/arm64/kernel/mte.c        | 3 ++-
 2 files changed, 3 insertions(+), 6 deletions(-)
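[Editor's note: to make the failure mode concrete, here is a hedged tracee-side sketch of the scenario in the commit message — a PROT_MTE mapping that is created but never written, so it is still backed by the zero page when a tracer peeks at it. The PROT_MTE fallback value is the arm64 one from <asm/mman.h>; everything else is illustrative.]

```c
/*
 * Hypothetical tracee-side sketch: map memory with PROT_MTE and
 * deliberately never write to it, so the pages still resolve to the
 * zero page. Before this fix, PTRACE_PEEKMTETAGS on this range
 * returned -EIO; with it, the (zero) tags can be read.
 */
#include <sys/mman.h>
#include <unistd.h>

#ifndef PROT_MTE
#define PROT_MTE 0x20	/* arm64-specific, from <asm/mman.h> */
#endif

int main(void)
{
	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_MTE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;
	/* No write to p: the mapping is still backed by the zero page. */
	pause();	/* park here so a tracer can attach and peek tags */
	return 0;
}
```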