Message ID: 20220406142715.2270256-2-ardb@kernel.org
State: New
Series: crypto: avoid DMA padding for request structures
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index 2324ab6f1846..f2e95fb6cedb 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -100,6 +100,13 @@
  */
 #define CRYPTO_NOLOAD			0x00008000
 
+/*
+ * Whether context buffers require DMA alignment. This is the case for
+ * drivers that perform non-coherent inbound DMA on the context buffer
+ * directly, but should not be needed otherwise.
+ */
+#define CRYPTO_ALG_NEED_DMA_ALIGNMENT	0x00010000
+
 /*
  * The algorithm may allocate memory during request processing, i.e. during
  * encryption, decryption, or hashing. Users can request an algorithm with this
On architectures that support non-coherent DMA, we align and round up all
dynamically allocated request and TFM structures to the worst case DMA
alignment, which is 128 bytes on arm64, even though most systems only have
64 byte cachelines, are cache coherent for DMA, don't use accelerators for
crypto, or use an accelerator whose driver does not DMA into the request
context buffer to begin with.

We can relax this requirement by only performing this rounding for
algorithms that are backed by an implementation that actually requires it.
So introduce CRYPTO_ALG_NEED_DMA_ALIGNMENT for this purpose, which will be
wired up in subsequent patches.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 include/linux/crypto.h | 7 +++++++
 1 file changed, 7 insertions(+)