| Message ID | 1323082897-15249-1-git-send-email-peter.maydell@linaro.org |
|---|---|
| State | Rejected |
On 12/05/2011 01:01 PM, Peter Maydell wrote:
> Fix a long-standing deficiency of cpu_register_physical_memory_log()
> where the start address and region offset had to have the same low
> bits (otherwise the IO functions would be passed an incorrect address
> offset). This was most likely to bite when registering memory regions
> which started at a non-page-boundary.
>
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> ---
> This is such a small change to correct this issue that I'm kind of
> suspicious of it :-)

Your instincts are correct, unfortunately.

> @@ -2677,7 +2674,11 @@ void cpu_register_physical_memory_log(target_phys_addr_t start_addr,
>      if (phys_offset == IO_MEM_UNASSIGNED) {
>          region_offset = start_addr;
>      }
> -    region_offset &= TARGET_PAGE_MASK;
> +    /* Adjust the region offset to account for the start_addr possibly
> +     * not being page aligned, so we end up passing the IO functions
> +     * the true offset from the start of the region.
> +     */
> +    region_offset -= (start_addr & ~TARGET_PAGE_MASK);
>      size = (size + TARGET_PAGE_SIZE - 1) & TARGET_PAGE_MASK;
>      end_addr = start_addr + (target_phys_addr_t)size;
>

region_offset is added to iotlb in tlb_set_page(), smashing the low bits
with your change. It's safe in subpage, since that doesn't happen there.
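To make the objection concrete, here is a minimal standalone sketch, not QEMU code: it assumes, as described above, that the iotlb value packs an io-memory table index into its low TARGET_PAGE_BITS and expects the region offset added on top of it to be page aligned, with a 4K page size and made-up constants purely for illustration.

```c
/* Illustrative model only, not the actual QEMU code: assume the iotlb
 * value keeps an io-memory table index in its low TARGET_PAGE_BITS and
 * expects the region offset added on top to be page aligned. */
#include <stdio.h>

#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_SIZE (1u << TARGET_PAGE_BITS)
#define TARGET_PAGE_MASK (~(TARGET_PAGE_SIZE - 1))

int main(void)
{
    unsigned int io_index_bits = 0x18;    /* index encoded in the low bits */
    unsigned int region_offset = 0x100c;  /* sub-page bits kept by the patch */

    /* Combine them by addition, as tlb_set_page() is said to do above. */
    unsigned int iotlb = io_index_bits + region_offset;

    /* Recovering the index from the low bits now gives the wrong answer,
     * because the 0xc carried in region_offset was added into them. */
    printf("stored index bits 0x%x, recovered 0x%x\n",
           io_index_bits, iotlb & ~TARGET_PAGE_MASK);
    return 0;
}
```

The subpage path keeps its per-range offsets in a separate table rather than folding them into an iotlb entry, which is why Avi notes it is unaffected.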
On 5 December 2011 13:40, Avi Kivity <avi@redhat.com> wrote:
> On 12/05/2011 01:01 PM, Peter Maydell wrote:
>> @@ -2677,7 +2674,11 @@ void cpu_register_physical_memory_log(target_phys_addr_t start_addr,
>>      if (phys_offset == IO_MEM_UNASSIGNED) {
>>          region_offset = start_addr;
>>      }
>> -    region_offset &= TARGET_PAGE_MASK;
>> +    /* Adjust the region offset to account for the start_addr possibly
>> +     * not being page aligned, so we end up passing the IO functions
>> +     * the true offset from the start of the region.
>> +     */
>> +    region_offset -= (start_addr & ~TARGET_PAGE_MASK);
>>      size = (size + TARGET_PAGE_SIZE - 1) & TARGET_PAGE_MASK;
>>      end_addr = start_addr + (target_phys_addr_t)size;
>>
>
> region_offset is added to iotlb in tlb_set_page(), smashing the low bits
> with your change. It's safe in subpage, since that doesn't happen there.

OK, but we only need to avoid trashing the bottom 5 bits, right?
So we could do

    region_offset -= (start_addr & ~TARGET_PAGE_MASK);
    if (size >= TARGET_PAGE_SIZE) {
        region_offset &= ~0x1F; /* can make this a #define IO_MEM_MASK */
    }

which would allow regions to start on 0x20 granularity, or byte granularity
if they're less than a page in size (and so guaranteed to be subpages only).

-- PMM
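For reference, a compilable sketch of the arithmetic in that proposal; only the two statements inside `proposed_offset()` come from the mail above, while the helper name, the 4K page size, and the example addresses are made up for illustration.

```c
/* Standalone sketch of the masking proposed above; everything except the
 * two statements in proposed_offset() is invented for the example. */
#include <stdio.h>

#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_SIZE (1u << TARGET_PAGE_BITS)
#define TARGET_PAGE_MASK (~(TARGET_PAGE_SIZE - 1))

static unsigned int proposed_offset(unsigned int start_addr,
                                    unsigned int region_offset,
                                    unsigned int size)
{
    region_offset -= (start_addr & ~TARGET_PAGE_MASK);
    if (size >= TARGET_PAGE_SIZE) {
        region_offset &= ~0x1F; /* can make this a #define IO_MEM_MASK */
    }
    return region_offset;
}

int main(void)
{
    /* Page-sized region mapped at a 0x20-aligned, non-page-aligned address:
     * the adjusted offset is already 0x20-aligned, so the mask keeps it. */
    printf("page-sized: 0x%x\n",
           proposed_offset(0x10000060, 0x1020, TARGET_PAGE_SIZE));

    /* Sub-page region: the byte-granular offset survives untouched, since
     * only the subpage machinery would ever see it. */
    printf("sub-page:   0x%x\n",
           proposed_offset(0x10000003, 0x7, 0x10));
    return 0;
}
```

Note Avi's reply below: the iotlb packing actually needs all TARGET_PAGE_BITS of the low bits kept clear, not just five, so the `~0x1F` mask would still not be enough for page-sized regions.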
On 12/05/2011 04:01 PM, Peter Maydell wrote:
> On 5 December 2011 13:40, Avi Kivity <avi@redhat.com> wrote:
> > On 12/05/2011 01:01 PM, Peter Maydell wrote:
> >> @@ -2677,7 +2674,11 @@ void cpu_register_physical_memory_log(target_phys_addr_t start_addr,
> >>      if (phys_offset == IO_MEM_UNASSIGNED) {
> >>          region_offset = start_addr;
> >>      }
> >> -    region_offset &= TARGET_PAGE_MASK;
> >> +    /* Adjust the region offset to account for the start_addr possibly
> >> +     * not being page aligned, so we end up passing the IO functions
> >> +     * the true offset from the start of the region.
> >> +     */
> >> +    region_offset -= (start_addr & ~TARGET_PAGE_MASK);
> >>      size = (size + TARGET_PAGE_SIZE - 1) & TARGET_PAGE_MASK;
> >>      end_addr = start_addr + (target_phys_addr_t)size;
> >>
> >
> > region_offset is added to iotlb in tlb_set_page(), smashing the low bits
> > with your change. It's safe in subpage, since that doesn't happen there.
>
> OK, but we only need to avoid trashing the bottom 5 bits, right?

All TARGET_PAGE_BITS of them.

> So we could do
>
>     region_offset -= (start_addr & ~TARGET_PAGE_MASK);
>     if (size >= TARGET_PAGE_SIZE) {
>         region_offset &= ~0x1F; /* can make this a #define IO_MEM_MASK */
>     }
>
> which would allow regions to start on 0x20 granularity, or byte granularity
> if they're less than a page in size (and so guaranteed to be subpages only).
>

An alternative is to stash region_offset somewhere else. There's
CPUTLBEntry::addend, see comment above its definition.
diff --git a/exec.c b/exec.c
index 6b92198..7030cea 100644
--- a/exec.c
+++ b/exec.c
@@ -2655,10 +2655,7 @@ static subpage_t *subpage_init (target_phys_addr_t base, ram_addr_t *phys,
    For RAM, 'size' must be a multiple of the target page size.
    If (phys_offset & ~TARGET_PAGE_MASK) != 0, then it is an
    io memory page. The address used when calling the IO function is
-   the offset from the start of the region, plus region_offset. Both
-   start_addr and region_offset are rounded down to a page boundary
-   before calculating this offset. This should not be a problem unless
-   the low bits of start_addr and region_offset differ. */
+   the offset from the start of the region, plus region_offset. */
 void cpu_register_physical_memory_log(target_phys_addr_t start_addr,
                                       ram_addr_t size,
                                       ram_addr_t phys_offset,
@@ -2677,7 +2674,11 @@ void cpu_register_physical_memory_log(target_phys_addr_t start_addr,
     if (phys_offset == IO_MEM_UNASSIGNED) {
         region_offset = start_addr;
     }
-    region_offset &= TARGET_PAGE_MASK;
+    /* Adjust the region offset to account for the start_addr possibly
+     * not being page aligned, so we end up passing the IO functions
+     * the true offset from the start of the region.
+     */
+    region_offset -= (start_addr & ~TARGET_PAGE_MASK);
     size = (size + TARGET_PAGE_SIZE - 1) & TARGET_PAGE_MASK;
     end_addr = start_addr + (target_phys_addr_t)size;
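A before/after sketch of the single changed line, standalone rather than taken from QEMU: the addresses, the 4K page size, and the simplifying assumption that the sub-page part of the access address is later added back onto region_offset before the handler runs are all illustrative, and the latter is exactly the point the review thread above disputes for the non-subpage case.

```c
/* Before/after sketch of the changed line; addresses, page size, and the
 * "in-page offset gets added back later" model are assumptions. */
#include <stdio.h>

#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_SIZE (1u << TARGET_PAGE_BITS)
#define TARGET_PAGE_MASK (~(TARGET_PAGE_SIZE - 1))

int main(void)
{
    unsigned int start_addr    = 0xd0000f00; /* mapping starts mid-page */
    unsigned int region_offset = 0x0;        /* handler expects offsets from 0 */

    /* Old line: round region_offset down to a page boundary. */
    unsigned int old_off = region_offset & TARGET_PAGE_MASK;

    /* New line: subtract start_addr's sub-page bits instead. */
    unsigned int new_off = region_offset - (start_addr & ~TARGET_PAGE_MASK);

    /* An access at start_addr + 4 carries the in-page offset 0xf04; the
     * handler should see offset 4 from the start of its region. */
    unsigned int in_page = (start_addr + 0x4) & ~TARGET_PAGE_MASK;

    printf("old: handler sees 0x%x\n", old_off + in_page); /* 0xf04, wrong */
    printf("new: handler sees 0x%x\n", new_off + in_page); /* 0x4, right   */
    return 0;
}
```

The subtraction in the sketch wraps below zero, but the unsigned addition at access time undoes the wrap; the objection raised in the thread is the separate problem that, for full pages, this value is also added into the iotlb entry, whose low bits carry other information.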
Fix a long-standing deficiency of cpu_register_physical_memory_log()
where the start address and region offset had to have the same low
bits (otherwise the IO functions would be passed an incorrect address
offset). This was most likely to bite when registering memory regions
which started at a non-page-boundary.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
This is such a small change to correct this issue that I'm kind of
suspicious of it :-)

 exec.c |   11 ++++++-----
 1 files changed, 6 insertions(+), 5 deletions(-)