| Message ID | alpine.DEB.2.02.1401291152120.4373@kaball.uk.xensource.com |
|---|---|
| State | New |
On Wed, 29 Jan 2014, Paolo Bonzini wrote:
> Il 29/01/2014 13:00, Stefano Stabellini ha scritto:
> > Hi Paolo,
> > we have been trying to fix a BSOD that would happen during the Windows
> > XP installation, once every ten times on average.
> > After many days of bisection, we found out that commit
> >
> > commit 149f54b53b7666a3facd45e86eece60ce7d3b114
> > Author: Paolo Bonzini <pbonzini@redhat.com>
> > Date:   Fri May 24 12:59:37 2013 +0200
> >
> >     memory: add address_space_translate
> >
> > breaks Xen support in QEMU, in particular the Xen mapcache.
> > The reason is that after this commit, l in address_space_rw can span a
> > page boundary, however qemu_get_ram_ptr still calls xen_map_cache asking
> > to map a single page (if block->offset == 0).
> > The appended patch works around the issue by reverting to the old
> > behaviour.
> >
> > What do you think is the right fix for this?
> > Maybe we need to add a size parameter to qemu_get_ram_ptr?
>
> Yeah, that would be best but the patch you attached is fine too with a FIXME
> comment.

Thanks for the quick reply.
I have just sent a better and cleaner version of this patch with a proper
commit message and signed-off-by lines:

http://marc.info/?l=qemu-devel&m=139108598630562&w=2
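For reference, here is a minimal standalone sketch (not QEMU code) of the page-clamping that the patch below restores: each access length is cut at the next target-page boundary, so no single chunk ever spans two pages. `DEMO_PAGE_SIZE` and `access_chunk()` are hypothetical stand-ins for `TARGET_PAGE_SIZE` and the real read/write path.

```c
/*
 * Minimal sketch, not QEMU code: clamp every access so it never crosses a
 * target-page boundary, which is what the patch below reintroduces in
 * address_space_rw and friends.  DEMO_PAGE_SIZE and access_chunk() are
 * hypothetical stand-ins for TARGET_PAGE_SIZE and the real I/O path.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_PAGE_SIZE 4096ULL                    /* assumed 4 KiB pages */
#define DEMO_PAGE_MASK (~(DEMO_PAGE_SIZE - 1))

static void access_chunk(uint64_t addr, uint64_t len)
{
    /* Stand-in for reading/writing one page-bounded chunk. */
    printf("access 0x%" PRIx64 " len %" PRIu64 "\n", addr, len);
}

static void access_split(uint64_t addr, uint64_t len)
{
    while (len > 0) {
        uint64_t page = addr & DEMO_PAGE_MASK;
        uint64_t l = (page + DEMO_PAGE_SIZE) - addr;  /* bytes left in page */
        if (l > len) {
            l = len;
        }
        access_chunk(addr, l);   /* never spans a page boundary */
        addr += l;
        len -= l;
    }
}

int main(void)
{
    /* A 5000-byte access at 0x0ff0 splits into 16 + 4096 + 888 bytes. */
    access_split(0x0ff0, 5000);
    return 0;
}
```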
```diff
diff --git a/exec.c b/exec.c
index 667a718..15edb69 100644
--- a/exec.c
+++ b/exec.c
@@ -1948,10 +1948,15 @@ bool address_space_rw(AddressSpace *as, hwaddr addr, uint8_t *buf,
     hwaddr addr1;
     MemoryRegion *mr;
     bool error = false;
+    hwaddr page;
 
     while (len > 0) {
         l = len;
         mr = address_space_translate(as, addr, &addr1, &l, is_write);
+        page = addr & TARGET_PAGE_MASK;
+        l = (page + TARGET_PAGE_SIZE) - addr;
+        if (l > len)
+            l = len;
 
         if (is_write) {
             if (!memory_access_is_direct(mr, is_write)) {
@@ -2057,11 +2062,16 @@ void cpu_physical_memory_write_rom(hwaddr addr,
     uint8_t *ptr;
     hwaddr addr1;
     MemoryRegion *mr;
+    hwaddr page;
 
     while (len > 0) {
         l = len;
         mr = address_space_translate(&address_space_memory,
                                      addr, &addr1, &l, true);
+        page = addr & TARGET_PAGE_MASK;
+        l = (page + TARGET_PAGE_SIZE) - addr;
+        if (l > len)
+            l = len;
 
         if (!(memory_region_is_ram(mr) ||
               memory_region_is_romd(mr))) {
@@ -2164,6 +2174,7 @@ void *address_space_map(AddressSpace *as,
     hwaddr l, xlat, base;
     MemoryRegion *mr, *this_mr;
     ram_addr_t raddr;
+    hwaddr page;
 
     if (len == 0) {
         return NULL;
@@ -2171,6 +2182,10 @@
 
     l = len;
     mr = address_space_translate(as, addr, &xlat, &l, is_write);
+    page = addr & TARGET_PAGE_MASK;
+    l = (page + TARGET_PAGE_SIZE) - addr;
+    if (l > len)
+        l = len;
     if (!memory_access_is_direct(mr, is_write)) {
         if (bounce.buffer) {
             return NULL;
```
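As discussed in the thread above, this clamping keeps every chunk handed to qemu_get_ram_ptr within a single target page, so the Xen mapcache is only ever asked for the one page it actually maps; the cleaner long-term fix Stefano suggests is to add a size parameter to qemu_get_ram_ptr instead of limiting the callers.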