Message ID | 20241211-dlech-mainline-spi-engine-offload-2-v6-8-88ee574d5d03@baylibre.com
---|---
State | New
Series | spi: axi-spi-engine: add offload support
On Wed, 11 Dec 2024 14:54:45 -0600
David Lechner <dlechner@baylibre.com> wrote:

> Refactor the IIO dmaengine buffer code to split requesting the DMA
> channel from allocating the buffer. We want to be able to add a new
> function where the IIO device driver manages the DMA channel, so these
> two actions need to be separate.
>
> To do this, calling dma_request_chan() is moved from
> iio_dmaengine_buffer_alloc() to iio_dmaengine_buffer_setup_ext(). A new
> __iio_dmaengine_buffer_setup_ext() helper function is added to simplify
> error unwinding and will also be used by a new function in a later
> patch.
>
> iio_dmaengine_buffer_free() now only frees the buffer and does not
> release the DMA channel. A new iio_dmaengine_buffer_teardown() function
> is added to unwind everything done in iio_dmaengine_buffer_setup_ext().
> This keeps things more symmetrical with obvious pairs alloc/free and
> setup/teardown.
>
> Calling dma_get_slave_caps() in iio_dmaengine_buffer_alloc() is moved so
> that we can avoid any gotos for error unwinding.
>
> Signed-off-by: David Lechner <dlechner@baylibre.com>
> ---
>
> v6 changes:
> * Split out from patch that adds the new function
> * Dropped owns_chan flag
> * Introduced iio_dmaengine_buffer_teardown() so that
>   iio_dmaengine_buffer_free() doesn't have to manage the DMA channel

Ouch, this is a fiddly refactor to unwind from the diff. I 'think' it's
correct, but am keen to get a few more eyes on this if possible.

Not 100% sure what route this series will take, so providing the tag on
the off chance this patch doesn't go through IIO or an immutable branch
I create.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
On Wed, 2024-12-11 at 14:54 -0600, David Lechner wrote:
> Refactor the IIO dmaengine buffer code to split requesting the DMA
> channel from allocating the buffer. We want to be able to add a new
> function where the IIO device driver manages the DMA channel, so these
> two actions need to be separate.
>
> To do this, calling dma_request_chan() is moved from
> iio_dmaengine_buffer_alloc() to iio_dmaengine_buffer_setup_ext(). A new
> __iio_dmaengine_buffer_setup_ext() helper function is added to simplify
> error unwinding and will also be used by a new function in a later
> patch.
>
> iio_dmaengine_buffer_free() now only frees the buffer and does not
> release the DMA channel. A new iio_dmaengine_buffer_teardown() function
> is added to unwind everything done in iio_dmaengine_buffer_setup_ext().
> This keeps things more symmetrical with obvious pairs alloc/free and
> setup/teardown.
>
> Calling dma_get_slave_caps() in iio_dmaengine_buffer_alloc() is moved so
> that we can avoid any gotos for error unwinding.
>
> Signed-off-by: David Lechner <dlechner@baylibre.com>
> ---

Reviewed-by: Nuno Sa <nuno.sa@analog.com>
diff --git a/drivers/iio/adc/adi-axi-adc.c b/drivers/iio/adc/adi-axi-adc.c
index c7357601f0f869e57636f00bb1e26c059c3ab15c..a55db308baabf7b26ea98431cab1e6af7fe2a5f3 100644
--- a/drivers/iio/adc/adi-axi-adc.c
+++ b/drivers/iio/adc/adi-axi-adc.c
@@ -305,7 +305,7 @@ static struct iio_buffer *axi_adc_request_buffer(struct iio_backend *back,
 static void axi_adc_free_buffer(struct iio_backend *back,
 				struct iio_buffer *buffer)
 {
-	iio_dmaengine_buffer_free(buffer);
+	iio_dmaengine_buffer_teardown(buffer);
 }
 
 static int axi_adc_reg_access(struct iio_backend *back, unsigned int reg,
diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
index 614e1c4189a9cdd5a8d9d8c5ef91566983032951..02847d3962fcbb43ec76167db6482ab951f20942 100644
--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
@@ -206,39 +206,29 @@ static const struct iio_dev_attr *iio_dmaengine_buffer_attrs[] = {
 
 /**
  * iio_dmaengine_buffer_alloc() - Allocate new buffer which uses DMAengine
- * @dev: DMA channel consumer device
- * @channel: DMA channel name, typically "rx".
+ * @chan: DMA channel.
  *
  * This allocates a new IIO buffer which internally uses the DMAengine framework
- * to perform its transfers. The parent device will be used to request the DMA
- * channel.
+ * to perform its transfers.
  *
  * Once done using the buffer iio_dmaengine_buffer_free() should be used to
  * release it.
  */
-static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
-	const char *channel)
+static struct iio_buffer *iio_dmaengine_buffer_alloc(struct dma_chan *chan)
 {
 	struct dmaengine_buffer *dmaengine_buffer;
 	unsigned int width, src_width, dest_width;
 	struct dma_slave_caps caps;
-	struct dma_chan *chan;
 	int ret;
 
+	ret = dma_get_slave_caps(chan, &caps);
+	if (ret < 0)
+		return ERR_PTR(ret);
+
 	dmaengine_buffer = kzalloc(sizeof(*dmaengine_buffer), GFP_KERNEL);
 	if (!dmaengine_buffer)
 		return ERR_PTR(-ENOMEM);
 
-	chan = dma_request_chan(dev, channel);
-	if (IS_ERR(chan)) {
-		ret = PTR_ERR(chan);
-		goto err_free;
-	}
-
-	ret = dma_get_slave_caps(chan, &caps);
-	if (ret < 0)
-		goto err_release;
-
 	/* Needs to be aligned to the maximum of the minimums */
 	if (caps.src_addr_widths)
 		src_width = __ffs(caps.src_addr_widths);
@@ -262,12 +252,6 @@ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
 	dmaengine_buffer->queue.buffer.access = &iio_dmaengine_buffer_ops;
 
 	return &dmaengine_buffer->queue.buffer;
-
-err_release:
-	dma_release_channel(chan);
-err_free:
-	kfree(dmaengine_buffer);
-	return ERR_PTR(ret);
 }
 
 /**
@@ -276,17 +260,57 @@ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
  *
  * Frees a buffer previously allocated with iio_dmaengine_buffer_alloc().
  */
-void iio_dmaengine_buffer_free(struct iio_buffer *buffer)
+static void iio_dmaengine_buffer_free(struct iio_buffer *buffer)
 {
 	struct dmaengine_buffer *dmaengine_buffer =
 		iio_buffer_to_dmaengine_buffer(buffer);
 
 	iio_dma_buffer_exit(&dmaengine_buffer->queue);
-	dma_release_channel(dmaengine_buffer->chan);
-
 	iio_buffer_put(buffer);
 }
-EXPORT_SYMBOL_NS_GPL(iio_dmaengine_buffer_free, "IIO_DMAENGINE_BUFFER");
+
+/**
+ * iio_dmaengine_buffer_teardown() - Releases DMA channel and frees buffer
+ * @buffer: Buffer to free
+ *
+ * Releases the DMA channel and frees the buffer previously setup with
+ * iio_dmaengine_buffer_setup_ext().
+ */
+void iio_dmaengine_buffer_teardown(struct iio_buffer *buffer)
+{
+	struct dmaengine_buffer *dmaengine_buffer =
+		iio_buffer_to_dmaengine_buffer(buffer);
+	struct dma_chan *chan = dmaengine_buffer->chan;
+
+	iio_dmaengine_buffer_free(buffer);
+	dma_release_channel(chan);
+}
+EXPORT_SYMBOL_NS_GPL(iio_dmaengine_buffer_teardown, "IIO_DMAENGINE_BUFFER");
+
+static struct iio_buffer
+*__iio_dmaengine_buffer_setup_ext(struct iio_dev *indio_dev,
+				  struct dma_chan *chan,
+				  enum iio_buffer_direction dir)
+{
+	struct iio_buffer *buffer;
+	int ret;
+
+	buffer = iio_dmaengine_buffer_alloc(chan);
+	if (IS_ERR(buffer))
+		return ERR_CAST(buffer);
+
+	indio_dev->modes |= INDIO_BUFFER_HARDWARE;
+
+	buffer->direction = dir;
+
+	ret = iio_device_attach_buffer(indio_dev, buffer);
+	if (ret) {
+		iio_dmaengine_buffer_free(buffer);
+		return ERR_PTR(ret);
+	}
+
+	return buffer;
+}
 
 /**
  * iio_dmaengine_buffer_setup_ext() - Setup a DMA buffer for an IIO device
@@ -300,7 +324,7 @@ EXPORT_SYMBOL_NS_GPL(iio_dmaengine_buffer_free, "IIO_DMAENGINE_BUFFER");
  * It also appends the INDIO_BUFFER_HARDWARE mode to the supported modes of the
  * IIO device.
  *
- * Once done using the buffer iio_dmaengine_buffer_free() should be used to
+ * Once done using the buffer iio_dmaengine_buffer_teardown() should be used to
  * release it.
  */
 struct iio_buffer *iio_dmaengine_buffer_setup_ext(struct device *dev,
@@ -308,30 +332,24 @@ struct iio_buffer *iio_dmaengine_buffer_setup_ext(struct device *dev,
 						  const char *channel,
 						  enum iio_buffer_direction dir)
 {
+	struct dma_chan *chan;
 	struct iio_buffer *buffer;
-	int ret;
-
-	buffer = iio_dmaengine_buffer_alloc(dev, channel);
-	if (IS_ERR(buffer))
-		return ERR_CAST(buffer);
-
-	indio_dev->modes |= INDIO_BUFFER_HARDWARE;
 
-	buffer->direction = dir;
+	chan = dma_request_chan(dev, channel);
+	if (IS_ERR(chan))
+		return ERR_CAST(chan);
 
-	ret = iio_device_attach_buffer(indio_dev, buffer);
-	if (ret) {
-		iio_dmaengine_buffer_free(buffer);
-		return ERR_PTR(ret);
-	}
+	buffer = __iio_dmaengine_buffer_setup_ext(indio_dev, chan, dir);
+	if (IS_ERR(buffer))
+		dma_release_channel(chan);
 
 	return buffer;
 }
 EXPORT_SYMBOL_NS_GPL(iio_dmaengine_buffer_setup_ext, "IIO_DMAENGINE_BUFFER");
 
-static void __devm_iio_dmaengine_buffer_free(void *buffer)
+static void devm_iio_dmaengine_buffer_teardown(void *buffer)
 {
-	iio_dmaengine_buffer_free(buffer);
+	iio_dmaengine_buffer_teardown(buffer);
 }
 
 /**
@@ -357,7 +375,7 @@ int devm_iio_dmaengine_buffer_setup_ext(struct device *dev,
 	if (IS_ERR(buffer))
 		return PTR_ERR(buffer);
 
-	return devm_add_action_or_reset(dev, __devm_iio_dmaengine_buffer_free,
+	return devm_add_action_or_reset(dev, devm_iio_dmaengine_buffer_teardown,
 					buffer);
 }
 EXPORT_SYMBOL_NS_GPL(devm_iio_dmaengine_buffer_setup_ext, "IIO_DMAENGINE_BUFFER");
diff --git a/drivers/iio/dac/adi-axi-dac.c b/drivers/iio/dac/adi-axi-dac.c
index b143f7ed6847277aeb49094627d90e5d95eed71c..5d5157af0a233143daff906b699bdae10f368867 100644
--- a/drivers/iio/dac/adi-axi-dac.c
+++ b/drivers/iio/dac/adi-axi-dac.c
@@ -168,7 +168,7 @@ static struct iio_buffer *axi_dac_request_buffer(struct iio_backend *back,
 static void axi_dac_free_buffer(struct iio_backend *back,
 				struct iio_buffer *buffer)
 {
-	iio_dmaengine_buffer_free(buffer);
+	iio_dmaengine_buffer_teardown(buffer);
 }
 
 enum {
diff --git a/include/linux/iio/buffer-dmaengine.h b/include/linux/iio/buffer-dmaengine.h
index 81d9a19aeb9199dd58bb9d35a91f0ec4b00846df..72a2e3fd8a5bf5e8f27ee226ddd92979d233754b 100644
--- a/include/linux/iio/buffer-dmaengine.h
+++ b/include/linux/iio/buffer-dmaengine.h
@@ -12,7 +12,7 @@
 struct iio_dev;
 struct device;
 
-void iio_dmaengine_buffer_free(struct iio_buffer *buffer);
+void iio_dmaengine_buffer_teardown(struct iio_buffer *buffer);
 struct iio_buffer *iio_dmaengine_buffer_setup_ext(struct device *dev,
 						  struct iio_dev *indio_dev,
 						  const char *channel,
Refactor the IIO dmaengine buffer code to split requesting the DMA
channel from allocating the buffer. We want to be able to add a new
function where the IIO device driver manages the DMA channel, so these
two actions need to be separate.

To do this, calling dma_request_chan() is moved from
iio_dmaengine_buffer_alloc() to iio_dmaengine_buffer_setup_ext(). A new
__iio_dmaengine_buffer_setup_ext() helper function is added to simplify
error unwinding and will also be used by a new function in a later
patch.

iio_dmaengine_buffer_free() now only frees the buffer and does not
release the DMA channel. A new iio_dmaengine_buffer_teardown() function
is added to unwind everything done in iio_dmaengine_buffer_setup_ext().
This keeps things more symmetrical with obvious pairs alloc/free and
setup/teardown.

Calling dma_get_slave_caps() in iio_dmaengine_buffer_alloc() is moved so
that we can avoid any gotos for error unwinding.

Signed-off-by: David Lechner <dlechner@baylibre.com>
---
v6 changes:
* Split out from patch that adds the new function
* Dropped owns_chan flag
* Introduced iio_dmaengine_buffer_teardown() so that
  iio_dmaengine_buffer_free() doesn't have to manage the DMA channel
---
 drivers/iio/adc/adi-axi-adc.c                      |   2 +-
 drivers/iio/buffer/industrialio-buffer-dmaengine.c | 106 ++++++++++++---------
 drivers/iio/dac/adi-axi-dac.c                      |   2 +-
 include/linux/iio/buffer-dmaengine.h               |   2 +-
 4 files changed, 65 insertions(+), 47 deletions(-)