Message ID: 20190304195951.1118807-1-arnd@arndb.de
State: New
Series: Revert "swiotlb: remove SWIOTLB_MAP_ERROR"
On Mon, Mar 04, 2019 at 08:59:03PM +0100, Arnd Bergmann wrote:
> This reverts commit b907e20508d0 ("swiotlb: remove SWIOTLB_MAP_ERROR"), which
> introduced an overflow warning in configurations that have a larger
> dma_addr_t than phys_addr_t:
>
>   In file included from include/linux/dma-direct.h:5,
>                    from kernel/dma/swiotlb.c:23:
>   kernel/dma/swiotlb.c: In function 'swiotlb_tbl_map_single':
>   include/linux/dma-mapping.h:136:28: error: conversion from 'long long unsigned int' to 'phys_addr_t' {aka 'unsigned int'} changes value from '18446744073709551615' to '4294967295' [-Werror=overflow]
>    #define DMA_MAPPING_ERROR	(~(dma_addr_t)0)
>                                ^
>   kernel/dma/swiotlb.c:544:9: note: in expansion of macro 'DMA_MAPPING_ERROR'
>     return DMA_MAPPING_ERROR;
>
> The configuration that caused this is on 32-bit ARM, where the DMA address
> space depends on the enabled hardware platforms, while the physical
> address space depends on the type of MMU chosen (classic vs LPAE).
>
> I tried a couple of alternative approaches, but the previous version
> seems as good as any other, so I went back to that.

That is really a bummer. What about making phys_addr_t and dma_addr_t
have the same width with some magic #ifdef hackery?
Hi Arnd,

On 2019-03-04 7:59 pm, Arnd Bergmann wrote:
> This reverts commit b907e20508d0 ("swiotlb: remove SWIOTLB_MAP_ERROR"), which
> introduced an overflow warning in configurations that have a larger
> dma_addr_t than phys_addr_t:
>
> The configuration that caused this is on 32-bit ARM, where the DMA address
> space depends on the enabled hardware platforms, while the physical
> address space depends on the type of MMU chosen (classic vs LPAE).

Are these real platforms, or random configs? Realistically I don't see a
great deal of need to support DMA_ADDR_T_64BIT for non-LPAE.
Particularly in this case since AFAIK the only selector of SWIOTLB on
Arm is Xen, and that by definition is never going to be useful on
non-LPAE hardware.

Fair enough that we still don't want even randconfigs generating
warnings, though. As long as this change doesn't let SWIOTLB_MAP_ERROR
leak out to logic expecting DMA_MAPPING_ERROR - which does appear to be
the case - and is also still OK for the opposite weirdness of
PHYS_ADDR_T_64BIT && !DMA_ADDR_T_64BIT, then I think it's reasonable.

Robin.

> I tried a couple of alternative approaches, but the previous version
> seems as good as any other, so I went back to that.
On Tue, Mar 5, 2019 at 12:56 AM Robin Murphy <robin.murphy@arm.com> wrote:
> Are these real platforms, or random configs? Realistically I don't see a
> great deal of need to support DMA_ADDR_T_64BIT for non-LPAE.
> Particularly in this case since AFAIK the only selector of SWIOTLB on
> Arm is Xen, and that by definition is never going to be useful on
> non-LPAE hardware.
...
On Mon, Mar 4, 2019 at 11:00 PM Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> What about making the phys_addr_t and dma_addr_t have the same
> width with some magic #ifdef hackery?

As far as I can tell, only randconfig builds see this problem; in real
systems phys_addr_t is normally the same as dma_addr_t, and you could
reasonably have a machine with a larger phys_addr_t than dma_addr_t but
wouldn't need to bother.
The reason I'd like to keep them independent on ARM is that the
difference does occasionally find real driver bugs, where some drivers
happily mix phys_addr_t and dma_addr_t in ways that are broken even
when they are the same size.

On Tue, Mar 5, 2019 at 12:56 AM Robin Murphy <robin.murphy@arm.com> wrote:
> Fair enough that we still don't want even randconfigs generating
> warnings, though. As long as this change doesn't let SWIOTLB_MAP_ERROR
> leak out to logic expecting DMA_MAPPING_ERROR - which does appear to be
> the case - and is also still OK for the opposite weirdness of
> PHYS_ADDR_T_64BIT && !DMA_ADDR_T_64BIT, then I think it's reasonable.

Another option would be to change the return type of
swiotlb_tbl_map_single() to 'u64', which would always be large enough
to contain both a phys_addr_t and DMA_MAPPING_ERROR.

I first tried changing the return type to dma_addr_t, which solves the
build issue, but I couldn't convince myself that this is correct in all
cases, or that it is a sensible type to use here.

Arnd
Hi Robin,

On 3/4/19 11:56 PM, Robin Murphy wrote:
> Are these real platforms, or random configs? Realistically I don't see a
> great deal of need to support DMA_ADDR_T_64BIT for non-LPAE.

This is selected by CONFIG_XEN no matter the type of MMU chosen (see
more below).

> Particularly in this case since AFAIK the only selector of SWIOTLB on
> Arm is Xen, and that by definition is never going to be useful on
> non-LPAE hardware.

While Xen itself requires LPAE, it is still possible to run a non-LPAE
kernel in the guest. For instance, last time I checked, Debian was
shipping only non-LPAE kernels for Arm32.

On Arm, swiotlb is only used by the hardware domain (aka Dom0) to allow
DMA in memory mapped from other guests. So the returned DMA address may
be 64-bit. Hence why we select DMA_ADDR_T_64BIT above.

Cheers,

--
Julien Grall
Hi Arnd,

On 3/5/19 8:16 AM, Arnd Bergmann wrote:
> On Mon, Mar 4, 2019 at 11:00 PM Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
>> What about making the phys_addr_t and dma_addr_t have the same
>> width with some magic #ifdef hackery?
>
> As far as I can tell, only randconfig builds see this problem, in
> real systems phys_addr_t is normally the same as dma_addr_t,
> and you could reasonably have a machine with a larger phys_addr_t
> than dma_addr_t but wouldn't need to bother.

On Xen, dma_addr_t will always be 64-bit, while phys_addr_t will depend
on the MMU type. So we may have a phys_addr_t smaller than dma_addr_t
from the kernel's point of view.

Cheers,

--
Julien Grall
On Tue, Mar 05, 2019 at 09:41:46AM +0000, Julien Grall wrote:
> On Xen, dma_addr_t will always be 64-bit, while phys_addr_t will depend
> on the MMU type. So we may have a phys_addr_t smaller than dma_addr_t
> from the kernel's point of view.

How can dma_addr_t on arm have a value above 32 bits when physical
addresses are 32-bit only?
Hi Christoph,

On 08/03/2019 15:23, Christoph Hellwig wrote:
> How can dma_addr_t on arm have a value above 32 bits when physical
> addresses are 32-bit only?

The number of platforms where an IOMMU protects all DMA-capable devices
is not yet there, so we decided not to require an IOMMU for Dom0.
Instead, its memory is directly mapped (i.e. guest physical address ==
host physical address) and will always be below 4GB to cater for the
short page table case.

In the common case, Dom0 also contains the PV backend drivers. Those
drivers may directly use the guest buffer in the DMA request (so a copy
is avoided). To avoid using a bounce buffer too much, xen-swiotlb will
find the host physical address associated with the guest buffer and use
it to compute the DMA address.

While the Dom0 kernel may only deal with 32-bit physical addresses, the
hypervisor can still deal with up to 40-bit physical addresses. This
means guest memory can be allocated above the 4GB threshold. Hence why
dma_addr_t is always 64-bit with CONFIG_XEN=y.

Cheers,

--
Julien Grall
On Fri, Mar 08, 2019 at 05:25:57PM +0000, Julien Grall wrote:
> In the common case, Dom0 also contains the PV backend drivers. Those
> drivers may directly use the guest buffer in the DMA request (so a copy
> is avoided). To avoid using a bounce buffer too much, xen-swiotlb will
> find the host physical address associated with the guest buffer and use
> it to compute the DMA address.
>
> While the Dom0 kernel may only deal with 32-bit physical addresses, the
> hypervisor can still deal with up to 40-bit physical addresses. This
> means guest memory can be allocated above the 4GB threshold. Hence why
> dma_addr_t is always 64-bit with CONFIG_XEN=y.

This at least makes some sense. But is it really so much better to
avoid having a 64-bit phys_addr_t?
On Wed, Mar 13, 2019 at 7:33 PM Christoph Hellwig <hch@lst.de> wrote:
> This at least makes some sense. But is it really so much better to
> avoid having a 64-bit phys_addr_t?

I like the way we tie phys_addr_t to the page table format, as it seems
consistent to have phys_addr_t be whichever data type can be addressed
through virtual memory.

The main practical advantage I see in allowing phys_addr_t and
dma_addr_t to be independent, rather than having both be the same and
grow to as much as is needed, is that most randconfig issues I found
that result from a type mismatch are real bugs, typically in driver
code that is written under the assumption that both have not only the
same size but also the same binary representation for a given memory
address.

Arnd
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 877baf2a94f4..57a98279bf4f 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -406,7 +406,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 
 	map = swiotlb_tbl_map_single(dev, start_dma_addr, phys, size, dir,
 				     attrs);
-	if (map == DMA_MAPPING_ERROR)
+	if (map == SWIOTLB_MAP_ERROR)
 		return DMA_MAPPING_ERROR;
 
 	dev_addr = xen_phys_to_bus(map);
@@ -557,7 +557,7 @@ xen_swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
 						 sg_phys(sg),
 						 sg->length,
 						 dir, attrs);
-			if (map == DMA_MAPPING_ERROR) {
+			if (map == SWIOTLB_MAP_ERROR) {
 				dev_warn(hwdev, "swiotlb buffer is full\n");
 				/* Don't panic here, we expect map_sg users
 				   to do proper error handling. */
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 361f62bb4a8e..a65a36551f58 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -44,6 +44,9 @@ enum dma_sync_target {
 	SYNC_FOR_DEVICE = 1,
 };
 
+/* define the last possible byte of physical address space as a mapping error */
+#define SWIOTLB_MAP_ERROR (~(phys_addr_t)0x0)
+
 extern phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
 					  dma_addr_t tbl_dma_addr,
 					  phys_addr_t phys, size_t size,
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 12059b78b631..922880b84387 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -541,7 +541,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
 	spin_unlock_irqrestore(&io_tlb_lock, flags);
 	if (!(attrs & DMA_ATTR_NO_WARN) && printk_ratelimit())
 		dev_warn(hwdev, "swiotlb buffer is full (sz: %zd bytes)\n", size);
-	return DMA_MAPPING_ERROR;
+	return SWIOTLB_MAP_ERROR;
 found:
 	io_tlb_used += nslots;
 	spin_unlock_irqrestore(&io_tlb_lock, flags);
@@ -659,7 +659,7 @@ bool swiotlb_map(struct device *dev, phys_addr_t *phys, dma_addr_t *dma_addr,
 	/* Oh well, have to allocate and map a bounce buffer. */
 	*phys = swiotlb_tbl_map_single(dev, __phys_to_dma(dev, io_tlb_start),
 				       *phys, size, dir, attrs);
-	if (*phys == DMA_MAPPING_ERROR)
+	if (*phys == SWIOTLB_MAP_ERROR)
 		return false;
 
 	/* Ensure that the address returned is DMA'ble */
This reverts commit b907e20508d0 ("swiotlb: remove SWIOTLB_MAP_ERROR"), which
introduced an overflow warning in configurations that have a larger
dma_addr_t than phys_addr_t:

  In file included from include/linux/dma-direct.h:5,
                   from kernel/dma/swiotlb.c:23:
  kernel/dma/swiotlb.c: In function 'swiotlb_tbl_map_single':
  include/linux/dma-mapping.h:136:28: error: conversion from 'long long unsigned int' to 'phys_addr_t' {aka 'unsigned int'} changes value from '18446744073709551615' to '4294967295' [-Werror=overflow]
   #define DMA_MAPPING_ERROR	(~(dma_addr_t)0)
                               ^
  kernel/dma/swiotlb.c:544:9: note: in expansion of macro 'DMA_MAPPING_ERROR'
    return DMA_MAPPING_ERROR;

The configuration that caused this is on 32-bit ARM, where the DMA address
space depends on the enabled hardware platforms, while the physical
address space depends on the type of MMU chosen (classic vs LPAE).

I tried a couple of alternative approaches, but the previous version
seems as good as any other, so I went back to that.

Fixes: b907e20508d0 ("swiotlb: remove SWIOTLB_MAP_ERROR")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 drivers/xen/swiotlb-xen.c | 4 ++--
 include/linux/swiotlb.h   | 3 +++
 kernel/dma/swiotlb.c      | 4 ++--
 3 files changed, 7 insertions(+), 4 deletions(-)

-- 
2.20.0