Message ID: 20250414153027.1486719-1-pierrick.bouvier@linaro.org
State: New
On 4/14/25 8:30 AM, Pierrick Bouvier wrote:
> It was reported that the QEMU monitor command gva2gpa was reporting unmapped
> memory for a valid access (qemu-system-aarch64), during a copy from
> kernel to user space (__arch_copy_to_user symbol in Linux) [1].
> This also affected cpu_memory_rw_debug, which is used in numerous places
> in our codebase. After investigating, the problem turned out to be specific
> to arm_cpu_get_phys_page_attrs_debug.
>
> [1] https://lists.nongnu.org/archive/html/qemu-discuss/2025-04/msg00013.html
>
> When performing a user access from a privileged space, we need to do a
> second lookup with the user mmu idx, following what get_a64_user_mem_index
> does at translation time.
>
> This series first extracts some helper functions, and then performs the
> expected second lookup using them.
>
> Besides running all QEMU tests, it was explicitly checked that, during a
> Linux boot sequence, accesses now unconditionally report a valid physical
> address, using this (unsent) patch:
>
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -997,9 +997,7 @@ static inline void tlb_set_compare(CPUTLBEntryFull *full, CPUTLBEntry *ent,
>      if (enable) {
>          address |= flags & TLB_FLAGS_MASK;
>          flags &= TLB_SLOW_FLAGS_MASK;
> -        if (flags) {
>              address |= TLB_FORCE_SLOW;
> -        }
>      } else {
>          address = -1;
>          flags = 0;
> @@ -1658,6 +1656,10 @@ static bool mmu_lookup1(CPUState *cpu, MMULookupPageData *data, MemOp memop,
>          tlb_addr = tlb_read_idx(entry, access_type) & ~TLB_INVALID_MASK;
>      }
>
> +    vaddr page = addr & TARGET_PAGE_MASK;
> +    hwaddr physaddr = cpu_get_phys_page_debug(cpu, page);
> +    g_assert(physaddr != -1);
> +
>      full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index];
>      flags = tlb_addr & (TLB_FLAGS_MASK & ~TLB_FORCE_SLOW);
>      flags |= full->slow_flags[access_type];
>
> v2:
> - fix style in first commit (philmd)
>
> Pierrick Bouvier (4):
>   target/arm/ptw: extract arm_mmu_idx_to_security_space
>   target/arm/ptw: get current security_space for current mmu_idx
>   target/arm/ptw: extract arm_cpu_get_phys_page
>   target/arm/ptw: fix arm_cpu_get_phys_page_attrs_debug
>
>  target/arm/ptw.c | 65 +++++++++++++++++++++++++++++++++++-------------
>  1 file changed, 48 insertions(+), 17 deletions(-)

Gentle ping on this series.
Any plan to queue it to tcg-next @Richard?

Regards,
Pierrick
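[Editor's note: a rough sketch of the approach the cover letter describes — retrying the debug page-table walk with the matching unprivileged mmu idx when the privileged lookup fails, as get_a64_user_mem_index does for unprivileged accesses at translation time. The helper names come from the patch subjects (arm_mmu_idx_to_security_space, arm_cpu_get_phys_page); their exact signatures below are assumptions for illustration, not the code as merged.]

/*
 * Sketch only -- not the committed patch.  Assumes helpers named after the
 * patch subjects, with illustrative signatures.
 */
hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
                                         MemTxAttrs *attrs)
{
    ARMCPU *cpu = ARM_CPU(cs);
    CPUARMState *env = &cpu->env;
    ARMMMUIdx mmu_idx = arm_mmu_idx(env);
    ARMSecuritySpace ss = arm_mmu_idx_to_security_space(env, mmu_idx);
    hwaddr res = arm_cpu_get_phys_page(env, addr, attrs, ss, mmu_idx);

    if (res != -1) {
        return res;
    }

    /*
     * The page is not mapped for the current (privileged) regime.  Retry
     * with the corresponding unprivileged mmu idx, mirroring the choice
     * get_a64_user_mem_index() makes at translation time; this covers
     * kernel-to-user copies such as __arch_copy_to_user.
     */
    switch (mmu_idx) {
    case ARMMMUIdx_E10_1:
    case ARMMMUIdx_E10_1_PAN:
        mmu_idx = ARMMMUIdx_E10_0;
        break;
    case ARMMMUIdx_E20_2:
    case ARMMMUIdx_E20_2_PAN:
        mmu_idx = ARMMMUIdx_E20_0;
        break;
    default:
        return -1;
    }
    ss = arm_mmu_idx_to_security_space(env, mmu_idx);
    return arm_cpu_get_phys_page(env, addr, attrs, ss, mmu_idx);
}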
On Mon, 28 Apr 2025 at 20:34, Pierrick Bouvier <pierrick.bouvier@linaro.org> wrote:
[...]
> Gentle ping on this series.
> Any plan to queue it to tcg-next @Richard?

I've queued this series to target-arm.next; thanks.

-- PMM