Message ID: 20240916085741.1636554-2-quic_mdalam@quicinc.com
State: New
Series: Add inline encryption support
Hi Md, kernel test robot noticed the following build errors: [auto build test ERROR on device-mapper-dm/for-next] [also build test ERROR on axboe-block/for-next linus/master song-md/md-next v6.11 next-20240916] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch#_base_tree_information] url: https://github.com/intel-lab-lkp/linux/commits/Md-Sadre-Alam/dm-inlinecrypt-Add-inline-encryption-support/20240916-170452 base: https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git for-next patch link: https://lore.kernel.org/r/20240916085741.1636554-2-quic_mdalam%40quicinc.com patch subject: [PATCH v2 1/3] dm-inlinecrypt: Add inline encryption support config: openrisc-randconfig-r062-20240917 (https://download.01.org/0day-ci/archive/20240917/202409171209.aEtxsPez-lkp@intel.com/config) compiler: or1k-linux-gcc (GCC) 14.1.0 reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240917/202409171209.aEtxsPez-lkp@intel.com/reproduce) If you fix the issue in a separate patch/commit (i.e. 
not just a new version of the same patch/commit), kindly add following tags | Reported-by: kernel test robot <lkp@intel.com> | Closes: https://lore.kernel.org/oe-kbuild-all/202409171209.aEtxsPez-lkp@intel.com/ All errors (new ones prefixed by >>): drivers/md/dm-inline-crypt.c: In function 'crypt_prepare_inline_crypt_key': >> drivers/md/dm-inline-crypt.c:81:15: error: implicit declaration of function 'blk_crypto_init_key' [-Wimplicit-function-declaration] 81 | ret = blk_crypto_init_key(cc->blk_key, cc->key, cc->crypto_mode, | ^~~~~~~~~~~~~~~~~~~ >> drivers/md/dm-inline-crypt.c:88:15: error: implicit declaration of function 'blk_crypto_start_using_key' [-Wimplicit-function-declaration] 88 | ret = blk_crypto_start_using_key(cc->dev->bdev, cc->blk_key); | ^~~~~~~~~~~~~~~~~~~~~~~~~~ drivers/md/dm-inline-crypt.c: In function 'crypt_destroy_inline_crypt_key': >> drivers/md/dm-inline-crypt.c:104:17: error: implicit declaration of function 'blk_crypto_evict_key'; did you mean 'blk_crypto_register'? 
[-Wimplicit-function-declaration] 104 | blk_crypto_evict_key(cc->dev->bdev, cc->blk_key); | ^~~~~~~~~~~~~~~~~~~~ | blk_crypto_register drivers/md/dm-inline-crypt.c: In function 'crypt_inline_encrypt_submit': >> drivers/md/dm-inline-crypt.c:121:17: error: implicit declaration of function 'bio_crypt_set_ctx' [-Wimplicit-function-declaration] 121 | bio_crypt_set_ctx(bio, cc->blk_key, dun, GFP_KERNEL); | ^~~~~~~~~~~~~~~~~ vim +/blk_crypto_init_key +81 drivers/md/dm-inline-crypt.c 72 73 static int crypt_prepare_inline_crypt_key(struct inlinecrypt_config *cc) 74 { 75 int ret; 76 77 cc->blk_key = kzalloc(sizeof(*cc->blk_key), GFP_KERNEL); 78 if (!cc->blk_key) 79 return -ENOMEM; 80 > 81 ret = blk_crypto_init_key(cc->blk_key, cc->key, cc->crypto_mode, 82 cc->iv_size, cc->sector_size); 83 if (ret) { 84 DMERR("Failed to init inline encryption key"); 85 goto bad_key; 86 } 87 > 88 ret = blk_crypto_start_using_key(cc->dev->bdev, cc->blk_key); 89 if (ret) { 90 DMERR("Failed to use inline encryption key"); 91 goto bad_key; 92 } 93 94 return 0; 95 bad_key: 96 kfree_sensitive(cc->blk_key); 97 cc->blk_key = NULL; 98 return ret; 99 } 100 101 static void crypt_destroy_inline_crypt_key(struct inlinecrypt_config *cc) 102 { 103 if (cc->blk_key) { > 104 blk_crypto_evict_key(cc->dev->bdev, cc->blk_key); 105 kfree_sensitive(cc->blk_key); 106 cc->blk_key = NULL; 107 } 108 } 109 110 static void crypt_inline_encrypt_submit(struct dm_target *ti, struct bio *bio) 111 { 112 struct inlinecrypt_config *cc = ti->private; 113 u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE]; 114 115 bio_set_dev(bio, cc->dev->bdev); 116 if (bio_sectors(bio)) { 117 memset(dun, 0, BLK_CRYPTO_MAX_IV_SIZE); 118 bio->bi_iter.bi_sector = cc->start + 119 dm_target_offset(ti, bio->bi_iter.bi_sector); 120 dun[0] = le64_to_cpu(bio->bi_iter.bi_sector + cc->iv_offset); > 121 bio_crypt_set_ctx(bio, cc->blk_key, dun, GFP_KERNEL); 122 } 123 124 submit_bio_noacct(bio); 125 } 126
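The implicit-declaration errors above are the signature of building blk-crypto users without CONFIG_BLK_INLINE_ENCRYPTION: blk_crypto_init_key(), blk_crypto_start_using_key(), blk_crypto_evict_key() and bio_crypt_set_ctx() are only declared when that option is enabled, and randconfig builds such as the openrisc one above can leave it off. A hedged sketch of the kind of Kconfig dependency the new target would need — the symbol name DM_INLINE_CRYPT and the help text are assumptions, not confirmed from the patch:

```
# Sketch: drivers/md/Kconfig (symbol name assumed)
config DM_INLINE_CRYPT
	tristate "Inline encryption target support"
	depends on BLK_DEV_DM
	select BLK_INLINE_ENCRYPTION	# declares blk_crypto_init_key() etc.
	help
	  This device-mapper target encrypts and decrypts data on a block
	  device using the block layer's inline encryption framework.
```

Using `select` here mirrors how other in-tree users guarantee the blk-crypto API is compiled in; a `depends on` would work too, at the cost of the target silently disappearing from configs that never enabled inline encryption.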
Hi Md, kernel test robot noticed the following build warnings: [auto build test WARNING on device-mapper-dm/for-next] [also build test WARNING on axboe-block/for-next linus/master song-md/md-next v6.11 next-20240916] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch#_base_tree_information] url: https://github.com/intel-lab-lkp/linux/commits/Md-Sadre-Alam/dm-inlinecrypt-Add-inline-encryption-support/20240916-170452 base: https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git for-next patch link: https://lore.kernel.org/r/20240916085741.1636554-2-quic_mdalam%40quicinc.com patch subject: [PATCH v2 1/3] dm-inlinecrypt: Add inline encryption support config: x86_64-allyesconfig (https://download.01.org/0day-ci/archive/20240917/202409171440.qx2iOkY3-lkp@intel.com/config) compiler: clang version 18.1.8 (https://github.com/llvm/llvm-project 3b5b5c1ec4a3095ab096dd780e84d7ab81f3d7ff) reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240917/202409171440.qx2iOkY3-lkp@intel.com/reproduce) If you fix the issue in a separate patch/commit (i.e. 
not just a new version of the same patch/commit), kindly add following tags | Reported-by: kernel test robot <lkp@intel.com> | Closes: https://lore.kernel.org/oe-kbuild-all/202409171440.qx2iOkY3-lkp@intel.com/ All warnings (new ones prefixed by >>): >> drivers/md/dm-inline-crypt.c:198:6: warning: variable 'ret' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized] 198 | if ((sscanf(argv[2], "%llu%c", &tmpll, &dummy) != 1) || | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 199 | (tmpll & ((cc->sector_size >> SECTOR_SHIFT) - 1))) { | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ drivers/md/dm-inline-crypt.c:250:9: note: uninitialized use occurs here 250 | return ret; | ^~~ drivers/md/dm-inline-crypt.c:198:2: note: remove the 'if' if its condition is always false 198 | if ((sscanf(argv[2], "%llu%c", &tmpll, &dummy) != 1) || | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 199 | (tmpll & ((cc->sector_size >> SECTOR_SHIFT) - 1))) { | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 200 | ti->error = "Invalid iv_offset sector"; | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 201 | goto bad; | ~~~~~~~~~ 202 | } | ~ >> drivers/md/dm-inline-crypt.c:198:6: warning: variable 'ret' is used uninitialized whenever '||' condition is true [-Wsometimes-uninitialized] 198 | if ((sscanf(argv[2], "%llu%c", &tmpll, &dummy) != 1) || | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ drivers/md/dm-inline-crypt.c:250:9: note: uninitialized use occurs here 250 | return ret; | ^~~ drivers/md/dm-inline-crypt.c:198:6: note: remove the '||' if its condition is always false 198 | if ((sscanf(argv[2], "%llu%c", &tmpll, &dummy) != 1) || | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ drivers/md/dm-inline-crypt.c:178:9: note: initialize the variable 'ret' to silence this warning 178 | int ret; | ^ | = 0 2 warnings generated. 
vim +198 drivers/md/dm-inline-crypt.c

   168
   169	static int inlinecrypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
   170	{
   171		struct inlinecrypt_config *cc;
   172		char *cipher_api = NULL;
   173		char *cipher, *chainmode;
   174		unsigned long long tmpll;
   175		char *ivmode;
   176		int key_size;
   177		char dummy;
   178		int ret;
   179
   180		if (argc < 5) {
   181			ti->error = "Not enough arguments";
   182			return -EINVAL;
   183		}
   184
   185		key_size = strlen(argv[1]) >> 1;
   186
   187		cc = kzalloc(struct_size(cc, key, key_size), GFP_KERNEL);
   188		if (!cc) {
   189			ti->error = "Cannot allocate encryption context";
   190			return -ENOMEM;
   191		}
   192		cc->key_size = key_size;
   193		cc->sector_size = (1 << SECTOR_SHIFT);
   194		cc->sector_shift = 0;
   195
   196		ti->private = cc;
   197
 > 198		if ((sscanf(argv[2], "%llu%c", &tmpll, &dummy) != 1) ||
   199		    (tmpll & ((cc->sector_size >> SECTOR_SHIFT) - 1))) {
   200			ti->error = "Invalid iv_offset sector";
   201			goto bad;
   202		}
   203		cc->iv_offset = tmpll;
   204
   205		ret = dm_get_device(ti, argv[3], dm_table_get_mode(ti->table),
   206				    &cc->dev);
   207		if (ret) {
   208			ti->error = "Device lookup failed";
   209			goto bad;
   210		}
   211
   212		ret = -EINVAL;
   213		if (sscanf(argv[4], "%llu%c", &tmpll, &dummy) != 1 ||
   214		    tmpll != (sector_t)tmpll) {
   215			ti->error = "Invalid device sector";
   216			goto bad;
   217		}
   218
   219		cc->start = tmpll;
   220
   221		cipher = strsep(&argv[0], "-");
   222		chainmode = strsep(&argv[0], "-");
   223		ivmode = strsep(&argv[0], "-");
   224
   225		cipher_api = kmalloc(CRYPTO_MAX_ALG_NAME, GFP_KERNEL);
   226		if (!cipher_api)
   227			goto bad;
   228
   229		ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME,
   230			       "%s(%s)", chainmode, cipher);
   231		if (ret < 0 || ret >= CRYPTO_MAX_ALG_NAME) {
   232			kfree(cipher_api);
   233			ret = -ENOMEM;
   234			goto bad;
   235		}
   236
   237		ret = crypt_select_inline_crypt_mode(ti, cipher_api, ivmode);
   238
   239		/* Initialize and set key */
   240		ret = inlinecrypt_set_key(cc, argv[1]);
   241		if (ret < 0) {
   242			ti->error = "Error decoding and setting key";
   243			return ret;
   244	}
   245
   246		return 0;
   247	bad:
   248		ti->error = "Error in inlinecrypt mapping";
   249		inlinecrypt_dtr(ti);
   250		return ret;
   251	}
   252
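The clang warning above is the classic early-`goto bad` hazard: the first validation branch in inlinecrypt_ctr() can jump to the exit label before `ret` is ever assigned, so the function returns garbage. A minimal user-space sketch of the fix pattern (parse_offset() is an illustrative stand-in, not the driver's code): give `ret` a default error value before the first branch that can reach the exit label.

```c
#include <assert.h>
#include <stdio.h>

/* Sketch of the fix: 'ret' gets a default error code (-EINVAL, which is
 * 22 in the kernel's errno numbering) before any path can 'goto bad',
 * so the exit label never returns an uninitialized value. */
static int parse_offset(const char *arg, unsigned long long *out)
{
	int ret = -22;			/* -EINVAL: set before any early exit */
	unsigned long long tmpll;
	char dummy;

	/* Same validation idiom as the driver: '%c' catches trailing junk,
	 * so valid input makes sscanf() return exactly 1. */
	if (sscanf(arg, "%llu%c", &tmpll, &dummy) != 1)
		goto bad;		/* 'ret' is well-defined here */

	*out = tmpll;
	return 0;
bad:
	return ret;
}
```

The alternative clang suggests (`int ret = 0;`) would silence the warning but return success on the bad path, which is worse; defaulting to an error code matches what the existing `ret = -EINVAL;` line later in the function already does for the device-sector check.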
Hi Md, kernel test robot noticed the following build warnings: [auto build test WARNING on device-mapper-dm/for-next] [also build test WARNING on axboe-block/for-next linus/master song-md/md-next v6.11 next-20240917] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch#_base_tree_information] url: https://github.com/intel-lab-lkp/linux/commits/Md-Sadre-Alam/dm-inlinecrypt-Add-inline-encryption-support/20240916-170452 base: https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git for-next patch link: https://lore.kernel.org/r/20240916085741.1636554-2-quic_mdalam%40quicinc.com patch subject: [PATCH v2 1/3] dm-inlinecrypt: Add inline encryption support config: csky-randconfig-r111-20240918 (https://download.01.org/0day-ci/archive/20240918/202409181233.1FrQNVtU-lkp@intel.com/config) compiler: csky-linux-gcc (GCC) 14.1.0 reproduce: (https://download.01.org/0day-ci/archive/20240918/202409181233.1FrQNVtU-lkp@intel.com/reproduce) If you fix the issue in a separate patch/commit (i.e. 
not just a new version of the same patch/commit), kindly add following tags | Reported-by: kernel test robot <lkp@intel.com> | Closes: https://lore.kernel.org/oe-kbuild-all/202409181233.1FrQNVtU-lkp@intel.com/ sparse warnings: (new ones prefixed by >>) >> drivers/md/dm-inline-crypt.c:120:26: sparse: sparse: cast to restricted __le64 drivers/md/dm-inline-crypt.c:214:32: sparse: sparse: self-comparison always evaluates to false vim +120 drivers/md/dm-inline-crypt.c 109 110 static void crypt_inline_encrypt_submit(struct dm_target *ti, struct bio *bio) 111 { 112 struct inlinecrypt_config *cc = ti->private; 113 u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE]; 114 115 bio_set_dev(bio, cc->dev->bdev); 116 if (bio_sectors(bio)) { 117 memset(dun, 0, BLK_CRYPTO_MAX_IV_SIZE); 118 bio->bi_iter.bi_sector = cc->start + 119 dm_target_offset(ti, bio->bi_iter.bi_sector); > 120 dun[0] = le64_to_cpu(bio->bi_iter.bi_sector + cc->iv_offset); 121 bio_crypt_set_ctx(bio, cc->blk_key, dun, GFP_KERNEL); 122 } 123 124 submit_bio_noacct(bio); 125 } 126
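The sparse complaint at line 120 points at a real portability bug rather than a cosmetic one: blk-crypto DUNs are plain native-endian u64 values, and `bi_sector + iv_offset` is already a native-endian number, so wrapping it in le64_to_cpu() is a no-op on little-endian machines but byte-swaps the DUN on big-endian ones. A small user-space sketch of the corrected computation, with stand-in names for the driver's fields (the array-size constant here is an assumption mirroring the kernel's value):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BLK_CRYPTO_DUN_ARRAY_SIZE 4	/* mirrors the kernel constant */

/* Corrected DUN setup: zero the array by its own size (rather than an
 * IV byte count that merely happens to match it), and assign the
 * sector-derived DUN with no endianness conversion, since blk-crypto
 * takes DUNs in native byte order. */
static void fill_dun(uint64_t dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
		     uint64_t bi_sector, uint64_t iv_offset)
{
	memset(dun, 0, BLK_CRYPTO_DUN_ARRAY_SIZE * sizeof(dun[0]));
	dun[0] = bi_sector + iv_offset;	/* no le64_to_cpu() */
}
```

Any on-disk endianness policy belongs in the inline-encryption hardware or fallback, not in the DUN the driver hands to bio_crypt_set_ctx().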
Hi,

On Mon, Sep 16, 2024 at 02:27:39PM +0530, Md Sadre Alam wrote:
> QCOM SDCC controller supports Inline Crypto Engine
> This driver will enables inline encryption/decryption
> for ICE. The algorithm supported by ICE are XTS(AES)
> and CBC(AES).
>
> Signed-off-by: Md Sadre Alam <quic_mdalam@quicinc.com>
> ---
>
> Change in [v2]
>
> * Added dm-inlinecrypt driver support
>
> * squash the patch blk-crypto: Add additional algo modes for Inline
>   encryption and md: dm-crypt: Add additional algo modes for inline
>   encryption and added in this
>
> Change in [v1]
>
> * This patch was not included in [v1]
>
>  block/blk-crypto.c           |  21 +++
>  drivers/md/Kconfig           |   8 +
>  drivers/md/Makefile          |   1 +
>  drivers/md/dm-inline-crypt.c | 316 +++++++++++++++++++++++++++++++++++
>  include/linux/blk-crypto.h   |   3 +
>  5 files changed, 349 insertions(+)
>  create mode 100644 drivers/md/dm-inline-crypt.c

Thanks for working on this!

Android uses a similar device-mapper target called dm-default-key
(https://android.googlesource.com/kernel/common/+/refs/heads/android-mainline/drivers/md/dm-default-key.c),
and I've been looking for the best way to get the functionality upstream. The main challenge is that dm-default-key is integrated with fscrypt, such that if fscrypt encrypts the data, then the data isn't also encrypted with the block device key. There are also cases such as f2fs garbage collection in which filesystems read/write raw data without en/decryption by any key. So essentially a passthrough mode is supported on individual I/O requests.

It looks like this patch not only does not support that, but it ignores the existence of fscrypt (or any other use of inline encryption by filesystems) entirely, and overrides any filesystem-provided key with the block device's. At the very least, this case would need to be explicitly not supported initially, i.e. dm-inlinecrypt would error out if the upper layer already provided a key.
But I would like there to be an agreed-upon way to extend the code to support the pass-through mode, so that filesystem and block device level encryption work properly together. It can indeed be done with a device-mapper target (as Android does already), whether it's called "dm-inlinecrypt" or "dm-default-key", but not in a standalone way: pass-through requests also require an addition to struct bio and some support in block and filesystem code. Previously, people have said that supporting this functionality natively in the block layer would be a better fit than a dm target (e.g., see the thread https://lore.kernel.org/all/1658316391-13472-1-git-send-email-israelr@nvidia.com/T/#u). I'd appreciate people's feedback on which approach they'd prefer.

Anyway, assuming the device-mapper target approach, I also have some other feedback on this patch:

New algorithms should not be added in the same patch as a new dm target. Also, I do not see why there is any need for the new algorithms you are adding (AES-128-XTS, AES-128-CBC, and AES-256-CBC). XTS is preferable to CBC, and AES-256 is preferable to AES-128. AES-256-XTS is already supported, both by blk-crypto and by Qualcomm ICE. So you should just use AES-256-XTS.

There are also a lot of miscellaneous issues with the proposed code: missing ->io_hints and ->status methods, truncating IVs to 32 bits, unnecessarily using a custom algorithm name syntax that doesn't make the IV generation method explicit, unnecessarily splitting bios, incrementing IVs every 512 bytes instead of each sector, etc. I can comment on all of these in detail if you want, but to start it might be helpful to just check out dm-default-key (https://android.googlesource.com/kernel/common/+/refs/heads/android-mainline/drivers/md/dm-default-key.c) and maybe base your code on that, as it has handled these issues already.
E.g., its syntax is aligned with dm-crypt's, it implements ->io_hints and ->status, and it calculates the maximum DUN correctly so that it can be determined correctly whether the inline encryption hardware can be used or not.

Finally, the commit message needs to summarize what the patch does and what its motivation is. Currently it just talks about the Qualcomm ICE driver and doesn't actually say anything about dm-inlinecrypt. Yes, the motivation for dm-inlinecrypt involves being able to take advantage of inline encryption hardware such as Qualcomm ICE, but it's not explained.

Thanks,

- Eric
On Sat, Sep 21, 2024 at 11:55:19AM -0700, Eric Biggers wrote:
> (https://android.googlesource.com/kernel/common/+/refs/heads/android-mainline/drivers/md/dm-default-key.c),
> and I've been looking for the best way to get the functionality upstream. The
> main challenge is that dm-default-key is integrated with fscrypt, such that if
> fscrypt encrypts the data, then the data isn't also encrypted with the block
> device key. There are also cases such as f2fs garbage collection in which
> filesystems read/write raw data without en/decryption by any key. So
> essentially a passthrough mode is supported on individual I/O requests.

Adding a default key is not the job of a block remapping driver. You'll need to fit that into the file system and/or file system level helpers.

> It looks like this patch not only does not support that, but it ignores the
> existence of fscrypt (or any other use of inline encryption by filesystems)
> entirely, and overrides any filesystem-provided key with the block device's. At
> the very least, this case would need to be explicitly not supported initially,
> i.e. dm-inlinecrypt would error out if the upper layer already provided a key.

I agree that we have an incompatibility here, but simply erroring out feels like the wrong way to approach the stacking. If a stacking driver consumes the inline encryption capability it must not advertise it to the upper layers.
On Tue, Sep 24, 2024 at 12:44:53AM -0700, Christoph Hellwig wrote:
> On Sat, Sep 21, 2024 at 11:55:19AM -0700, Eric Biggers wrote:
> > (https://android.googlesource.com/kernel/common/+/refs/heads/android-mainline/drivers/md/dm-default-key.c),
> > and I've been looking for the best way to get the functionality upstream. The
> > main challenge is that dm-default-key is integrated with fscrypt, such that if
> > fscrypt encrypts the data, then the data isn't also encrypted with the block
> > device key. There are also cases such as f2fs garbage collection in which
> > filesystems read/write raw data without en/decryption by any key. So
> > essentially a passthrough mode is supported on individual I/O requests.
>
> Adding a default key is not the job of a block remapping driver. You'll
> need to fit that into the file system and/or file system level helpers.

What about a block device ioctl, as was previously proposed
(https://lore.kernel.org/linux-block/1658316391-13472-1-git-send-email-israelr@nvidia.com/T/#u)?

> > It looks like this patch not only does not support that, but it ignores the
> > existence of fscrypt (or any other use of inline encryption by filesystems)
> > entirely, and overrides any filesystem-provided key with the block device's. At
> > the very least, this case would need to be explicitly not supported initially,
> > i.e. dm-inlinecrypt would error out if the upper layer already provided a key.
>
> I agree that we have an incompatibility here, but simply erroring out
> feels like the wrong way to approach the stacking. If a stacking driver
> consumes the inline encryption capability it must not advertise it to
> the upper layers.

Right, I missed that's actually already how it works. The crypto capabilities are only passed through if the target sets DM_TARGET_PASSES_CRYPTO.

- Eric
On Tue, Sep 24, 2024 at 03:04:34PM -0700, Eric Biggers wrote:
> What about a block device ioctl, as was previously proposed
> (https://lore.kernel.org/linux-block/1658316391-13472-1-git-send-email-israelr@nvidia.com/T/#u)?

No. This is a file system layer policy and needs to sit entirely above the block layer instead of breaking abstraction boundaries.
Hi

The patch seems OK. Should it go in via the device mapper tree or the block layer tree?

On Mon, 16 Sep 2024, Md Sadre Alam wrote:

> +#define DM_CRYPT_DEFAULT_MAX_READ_SIZE		131072
> +#define DM_CRYPT_DEFAULT_MAX_WRITE_SIZE		131072
> +
> +static unsigned int get_max_request_size(struct inlinecrypt_config *cc, bool wrt)
> +{
> +	unsigned int val, sector_align;
> +
> +	val = !wrt ? DM_CRYPT_DEFAULT_MAX_READ_SIZE : DM_CRYPT_DEFAULT_MAX_WRITE_SIZE;
> +	if (wrt) {
> +		if (unlikely(val > BIO_MAX_VECS << PAGE_SHIFT))
> +			val = BIO_MAX_VECS << PAGE_SHIFT;
> +	}
> +	sector_align = max(bdev_logical_block_size(cc->dev->bdev), (unsigned int)cc->sector_size);
> +	val = round_down(val, sector_align);
> +	if (unlikely(!val))
> +		val = sector_align;
> +	return val >> SECTOR_SHIFT;
> +}

This piece of code was copied from the dm-crypt target. For dm-crypt, I was actually benchmarking the performance for various DM_CRYPT_DEFAULT_MAX_READ_SIZE and DM_CRYPT_DEFAULT_MAX_WRITE_SIZE values and I selected the values that resulted in the best performance. You should benchmark it too to find the optimal I/O size. Perhaps you find out that there is no need to split big requests and this piece of code can be dropped.

> +	/* Omit the key for now. */
> +	DMEMIT("%s - %llu %s %llu", ctx->cipher_string, ctx->iv_offset,
> +	       ctx->dev->name, (unsigned long long)ctx->start);

What if someone reloads the table? I think you should display the key. dmsetup does not display the key if the "--showkeys" parameter is not specified.

Mikulas
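For anyone tuning the values Mikulas mentions, the quoted helper's behavior is easy to model in user space: round the byte budget down to the larger of the logical block size and the crypto sector size, fall back to one aligned unit if that rounds to zero, and convert to 512-byte sectors. A standalone sketch under those assumptions (names and the byte-budget parameter are illustrative, not the kernel code):

```c
#include <assert.h>

#define SECTOR_SHIFT 9	/* 512-byte sectors, as in the kernel */

/* User-space model of get_max_request_size()'s alignment logic.
 * 'max_bytes' plays the role of the DM_CRYPT_DEFAULT_MAX_* tunables
 * being benchmarked. */
static unsigned int max_request_sectors(unsigned int max_bytes,
					unsigned int logical_block_size,
					unsigned int crypto_sector_size)
{
	unsigned int align = logical_block_size > crypto_sector_size ?
			     logical_block_size : crypto_sector_size;
	unsigned int val = max_bytes - (max_bytes % align);	/* round_down() */

	if (val == 0)		/* never return a zero-length limit */
		val = align;
	return val >> SECTOR_SHIFT;
}
```

With the defaults above, a 128 KiB budget on a 512-byte device gives a 256-sector cap; a budget smaller than a 4 KiB logical block would round to zero and fall back to one block (8 sectors).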
On October 18, 2024 1:56:38 AM EDT, Christoph Hellwig <hch@infradead.org> wrote:
> On Fri, Oct 18, 2024 at 01:44:19AM -0400, Adrian Vovk wrote:
> > > So just run a target on each partition.
> >
> > That has different semantics. If I encrypt each virtual partition there's
> > nothing encrypting the metadata around the virtual partitions. Of course,
> > this is a rather contrived example but the point stands, the semantics are
> > different.
>
> Then you set up a dm-crypt device mapper table for the partition table as
> well.

Sure, but then this way you're encrypting each partition twice: once by the dm-crypt inside of the partition, and again by the dm-crypt that's under the partition table. This double encryption is ruinous for performance, so it's just not a feasible solution and thus people don't do this. Would be nice if we had the flexibility though.

Plus, I'm not sure that such a double encryption approach is even feasible with blk-crypto. Is the blk-crypto engine capable of receiving two keys and encrypting twice with them?

> > > This is the prime example of why allowing higher layers to skip
> > > encryption is a no-go.
> >
> > In what way does that break the file system's security model? Could you
> > elaborate on what's objectionable about the behavior here?
>
> Because you are now bypassing encryption for certain LBA ranges in
> the file system based on hints/flags for something sitting way above
> in the stack.

Well, the data is still encrypted. It's just encrypted with a different key. If the attacker has a FDE dump of the disk, the data is still just as inaccessible to them.

In fact, allowing for this will let us tighten up security instead of punching holes. It would let us put encrypted home directories on top of full-disk encryption. So if an attacker has a disk image and the FDE key, they still wouldn't be able to decrypt the user's home directory because they'd need more keys. We also want to put fscrypt on top of the encrypted home directories to encrypt each app data directory, so if you have a banking app the attacker wouldn't be able to get that app's data even if they manage to get your home directory key.

Right now, doing something like this requires stacking encryption and is thus unfeasible, so we're stuck with one layer of full disk encryption and no isolation between users and apps.

Thanks,
Adrian
On Fri, Oct 18, 2024 at 11:03:50AM -0400, Adrian Vovk wrote:
> Sure, but then this way you're encrypting each partition twice: once by
> the dm-crypt inside of the partition, and again by the dm-crypt that's
> under the partition table. This double encryption is ruinous for
> performance, so it's just not a feasible solution and thus people don't
> do this. Would be nice if we had the flexibility though.

Why do you assume the encryption would happen twice?

> > Because you are now bypassing encryption for certain LBA ranges in
> > the file system based on hints/flags for something sitting way above
> > in the stack.
>
> Well, the data is still encrypted. It's just encrypted with a different
> key. If the attacker has a FDE dump of the disk, the data is still just
> as inaccessible to them.

No one knows that it actually is encrypted. The lower layer just knows the skip encryption flag was set, but it has zero assurance the data actually was encrypted.
> Alternatively, if I recall correctly it should be possible to just
> check if the bio has an attached encryption context. If it has one,
> then just pass-through. If it doesn't, then attach your own. No flag
> required this way, and dm-default-key would only add encryption iff
> the data isn't already encrypted.

This piqued my interest, so I went and did some git archeology to see why this isn't the case and there's a flag now. Apparently fscrypt will sometimes rearrange blocks without the key present. This is fine, because if there's no key, blk-crypto doesn't need to do anything and we can just shuffle the encrypted data around. We definitely don't want to re-encrypt the data in that scenario.

Also, thinking about it a bit more: what should happen if we stack dm-crypt on top of dm-default-key? I see no point in double-encrypting even in this situation. So, dm-crypt would set the flag to skip dm-default-key, even though it's not actually attaching an encryption context to the bio.

So it seems like the flag is the better solution. It would just be impermissible to set the flag on a request that will write plaintext data to disk.

- Adrian
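The two candidate policies Adrian contrasts — "skip if the bio already carries an encryption context" versus "skip if an explicit flag is set" — can be captured in a toy model. The struct and field names below are invented for illustration, not kernel API; the point is that fscrypt's key-less block moves carry no context, so a context-check policy would wrongly re-encrypt already-ciphertext data, while an explicit flag lets the upper layer signal "already handled":

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of a bio as seen by a dm-default-key-style target.
 * All names are illustrative, not the kernel's. */
struct toy_bio {
	bool has_crypt_ctx;	/* upper layer attached its own key */
	bool skip_default_key;	/* explicit "already handled" flag */
};

/* Flag-less policy: add default-key encryption iff no context is attached. */
static bool ctx_policy_encrypts(const struct toy_bio *bio)
{
	return !bio->has_crypt_ctx;
}

/* Flag policy: add default-key encryption iff the upper layer did not opt out. */
static bool flag_policy_encrypts(const struct toy_bio *bio)
{
	return !bio->skip_default_key;
}
```

For an fscrypt garbage-collection move (ciphertext shuffled with no key, no context), the flag-less policy would encrypt the data a second time; the flag policy correctly passes it through, which is the conclusion the thread reaches.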
On Wed, Oct 23, 2024 at 10:52:06PM -0400, Adrian Vovk wrote:
> > Why do you assume the encryption would happen twice?
>
> I'm not assuming. That's the behavior of dm-crypt without passthrough.
> It just encrypts everything that moves through it. If I stack two
> layers of dm-crypt on top of each other my data is encrypted twice.

Sure. But why would you do that?

> > No one knows that it actually is encrypted. The lower layer just knows
> > the skip encryption flag was set, but it has zero assurance the data
> > actually was encrypted.
>
> I think it makes sense to require that the data is actually encrypted
> whenever the flag is set. Of course there's no way to enforce that
> programmatically, but code that sets the flag without making sure the
> data gets encrypted some other way wouldn't pass review.

You put a lot of trust in reviewers. But even that doesn't help, as the kernel can load code that never passed review.

> Alternatively, if I recall correctly it should be possible to just
> check if the bio has an attached encryption context. If it has one,
> then just pass-through. If it doesn't, then attach your own. No flag
> required this way, and dm-default-key would only add encryption iff
> the data isn't already encrypted.

That at least sounds a little better. But it still doesn't answer why we need this hack instead of always encrypting at one layer rather than splitting it up.
On Thu, Oct 24, 2024 at 5:04 AM Christoph Hellwig <hch@infradead.org> wrote:
> Can you please fix your mailer? It's creating crazy long lines
> that are completely unreadable.

Sorry about that. Apparently that happens when I send from mobile, and I couldn't find a setting to change it. I'll stick to replying from the computer from now on. I've re-attached the message below, so that it gets wrapped properly.

On October 24, 2024 2:14:57 AM EDT, Christoph Hellwig <hch@infradead.org> wrote:
> On Wed, Oct 23, 2024 at 10:52:06PM -0400, Adrian Vovk wrote:
> > > Why do you assume the encryption would happen twice?
> >
> > I'm not assuming. That's the behavior of dm-crypt without passthrough.
> > It just encrypts everything that moves through it. If I stack two
> > layers of dm-crypt on top of each other my data is encrypted twice.
>
> Sure. But why would you do that?

As mentioned earlier in the thread: I don't have a usecase specifically for this; it was an example of a situation where passthrough is necessary and no filesystem is involved at all. Though, as I also pointed out, a usecase where you're putting encrypted virtual partitions on an encrypted LVM setup isn't all that absurd.

In my real-world case, I'm putting encrypted loop devices on top of a filesystem that holds its own sensitive data. Each loop device has dm-crypt inside and uses a unique key, but the filesystem needs to be encrypted too (because, again, it has its own sensitive data outside of the loop devices). The loop devices cannot be put onto their own separate partition because there's no good way to know ahead of time how much space either of the partitions would need: sometimes the loop devices need to take up loads of space on the partition, and other times the non-loop-device data needs to take up that space. And to top it all off, the distribution of allocated space needs to change dynamically.

The current Linux kernel does not support this use-case without double encryption.
The loop devices are encrypted once with their own dm-crypt instance. Then that same data is encrypted a second time over by the partition. Actually, this scenario is simplified: we also want to use fscrypt inside of the loopback file. So, without the passthrough mechanism, some data would be encrypted three distinct times.

> > > No one knows that it actually is encrypted. The lower layer just knows
> > > the skip encryption flag was set, but it has zero assurance the data
> > > actually was encrypted.
> >
> > I think it makes sense to require that the data is actually encrypted
> > whenever the flag is set. Of course there's no way to enforce that
> > programmatically, but code that sets the flag without making sure the
> > data gets encrypted some other way wouldn't pass review.
>
> You put a lot of trust in reviewers. But even that doesn't help, as
> the kernel can load code that never passed review.

Ultimately, I'm unsure what the concern is here. It's a glaringly loud opt-in marker that encryption was already performed or is otherwise intentionally unnecessary. The flag existing isn't what punches through the security model; it's the use of the flag that does. I can't imagine anything setting the flag by accident. So what are you actually concerned about? How are you expecting this flag to actually be misused?

As for third party modules that might punch holes, so what? 3rd party modules aren't the kernel's responsibility or problem.

> > Alternatively, if I recall correctly it should be possible to just
> > check if the bio has an attached encryption context. If it has one,
> > then just pass-through. If it doesn't, then attach your own. No flag
> > required this way, and dm-default-key would only add encryption iff
> > the data isn't already encrypted.
>
> That at least sounds a little better.

Please see my follow up. This is actually not feasible, because it doesn't work.
Sometimes, fscrypt will just ask to move encrypted blocks around without providing an encryption context; the data doesn't need to be decrypted to be reshuffled on disk. The flag-less approach I describe would actually just break here: it would unintentionally encrypt that data during shuffling.

> But it still doesn't answer
> why we need this hack instead of always encrypting at one layer rather
> than splitting it up.

In my loopback file scenario, what would be the one layer that could handle the encryption?

- The loopback files are just regular files that happen to have encrypted data inside of them. Doing it a different way changes the semantics: with a loopback file, I'm able to move it into a basic FAT filesystem and back without losing the encryption on the data.

- The filesystem is completely unaware of any encryption. The loopback files are just files with random content inside. The filesystem itself is encrypted from below by the block layer. So, there's nothing for it to do.

- The underlying instance of dm-crypt is encrypting a single opaque blob of data, and so without explicit communication from above it cannot possibly know how to handle this.

Thus, I don't see a single layer that can handle this. The only solution is for upper layers to communicate downward.

Best,
Adrian

On Thu, Oct 24, 2024 at 5:04 AM Christoph Hellwig <hch@infradead.org> wrote:
> On Thu, Oct 24, 2024 at 03:52:24AM -0400, Adrian Vovk wrote:
> > > Sure. But why would you do that?
> >
> > As mentioned earlier in the thread: I don't have a usecase specifically
> > for this; it was an example of a situation where passthrough is
> > necessary and no filesystem is involved at all. Though, as I also pointed
> > out, a usecase where you're putting encrypted virtual partitions on an
> > encrypted LVM setup isn't all that absurd.
>
> Can you please fix your mailer? It's creating crazy long lines
> that are completely unreadable.
On Thu, Oct 24, 2024 at 11:59 AM Christoph Hellwig <hch@infradead.org> wrote:
>
> On Thu, Oct 24, 2024 at 11:32:58AM -0400, Adrian Vovk wrote:
> > >> I'm not assuming. That's the behavior of dm-crypt without passthrough.
> > >> It just encrypts everything that moves through it. If I stack two
> > >> layers of dm-crypt on top of each other my data is encrypted twice.
> > >
> > > Sure. But why would you do that?
> >
> > As mentioned earlier in the thread: I don't have a usecase
> > specifically for this and it was an example of a situation where
> > passthrough is necessary and no filesystem is involved at all. Though,
> > as I also pointed out, a usecase where you're putting encrypted
> > virtual partitions on an encrypted LVM setup isn't all that absurd.
>
> It's a little odd but not entirely absurd indeed. But it can also
> be easily handled by setting up a dm-crypt table just for the
> partition table.

That doesn't cover it. Not all of the virtual partitions necessarily need
to have their own dm-crypt, so they should be encrypted by the underlying
dm-crypt. So the dm-crypt doesn't just need to cover the partition table,
but also arbitrary ranges within the whole partition.

> > In my real-world case, I'm putting encrypted loop devices on top of a
> > filesystem that holds its own sensitive data. Each loop device has
> > dm-crypt inside and uses a unique key, but the filesystem needs to be
> > encrypted too (because, again, it has its own sensitive data outside
> > of the loop devices). The loop devices cannot be put onto their own
> > separate partition because there's no good way to know ahead of time
> > how much space either of the partitions would need: sometimes the loop
> > devices need to take up loads of space on the partition, and other
> > times the non-loop-device data needs to take up that space. And to top
> > it all off, the distribution of allocated space needs to change
> > dynamically.
>
> And that's exactly the case I worry about.
> The file system can't trust a layer entirely above it. If we want to be
> able to have a space pool between a file system with one encryption
> policy and images with another, we'll need to replace the loop driver
> with a block driver taking blocks from the file system space pool. Which
> might be a good idea for various other reasons.

I don't quite understand the difference between a loopback file and a block
driver that uses space from a filesystem space pool. Isn't that what a
loopback file is?

> > Ultimately, I'm unsure what the concern is here.
> >
> > It's a glaringly loud opt-in marker that encryption was already
> > performed or is otherwise intentionally unnecessary. The flag existing
> > isn't what punches through the security model. It's the use of the
> > flag that does. I can't imagine anything setting the flag by accident.
> > So what are you actually concerned about? How are you expecting this
> > flag to actually be misused?
> >
> > As for third party modules that might punch holes, so what? 3rd party
> > modules aren't the kernel's responsibility or problem
>
> On the one hand they are not. On the other, if you have a file system
> encryption scheme that is bypassed by random other loadable code setting
> a single flag, I would not consider it very trustworthy, and in fact
> actively dangerous.

I'd expect that the lower encryption layer has an opt-in flag to turn
passthrough functionality on and off. I think I neglected to mention that
before; it was discussed in other threads and I just kind of assumed it
would be there. So, with that in mind: the loadable code could set the
flag, but the underlying dm-inlinecrypt would need to opt into the weaker
security too. If the system administrator has opted the lower layer into
passthrough, then they've considered the risks of what could happen if an
upper layer sets the flag and decided that it's OK. If the administrator
didn't opt in, then the flag has no effect. Does that sound more
acceptable?
> > In my loopback file scenario, what would be the one layer that could
> > handle the encryption?
>
> But getting rid of loopback devices.

I can't get rid of the loopback devices. They're an essential part of this.
I should be able to take the encrypted loopback file, send it to a
different machine, and have it keep working the same as it always has.
Without the loopback device, I'm stuck with fscrypt, which isn't supported
by all filesystems and encrypts much less data than we require.

- Adrian
diff --git a/block/blk-crypto.c b/block/blk-crypto.c
index 4d760b092deb..e5bc3c7a405b 100644
--- a/block/blk-crypto.c
+++ b/block/blk-crypto.c
@@ -19,6 +19,12 @@
 #include "blk-crypto-internal.h"
 
 const struct blk_crypto_mode blk_crypto_modes[] = {
+	[BLK_ENCRYPTION_MODE_AES_128_XTS] = {
+		.name = "AES-128-XTS",
+		.cipher_str = "xts(aes)",
+		.keysize = 32,
+		.ivsize = 16,
+	},
 	[BLK_ENCRYPTION_MODE_AES_256_XTS] = {
 		.name = "AES-256-XTS",
 		.cipher_str = "xts(aes)",
@@ -43,6 +49,18 @@ const struct blk_crypto_mode blk_crypto_modes[] = {
 		.keysize = 32,
 		.ivsize = 16,
 	},
+	[BLK_ENCRYPTION_MODE_AES_128_CBC] = {
+		.name = "AES-128-CBC",
+		.cipher_str = "cbc(aes)",
+		.keysize = 16,
+		.ivsize = 16,
+	},
+	[BLK_ENCRYPTION_MODE_AES_256_CBC] = {
+		.name = "AES-256-CBC",
+		.cipher_str = "cbc(aes)",
+		.keysize = 32,
+		.ivsize = 16,
+	},
 };
 
 /*
@@ -106,6 +124,7 @@ void bio_crypt_set_ctx(struct bio *bio, const struct blk_crypto_key *key,
 	bio->bi_crypt_context = bc;
 }
+EXPORT_SYMBOL_GPL(bio_crypt_set_ctx);
 
 void __bio_crypt_free_ctx(struct bio *bio)
 {
@@ -356,6 +375,7 @@ int blk_crypto_init_key(struct blk_crypto_key *blk_key, const u8 *raw_key,
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(blk_crypto_init_key);
 
 bool blk_crypto_config_supported_natively(struct block_device *bdev,
 					  const struct blk_crypto_config *cfg)
@@ -398,6 +418,7 @@ int blk_crypto_start_using_key(struct block_device *bdev,
 		return 0;
 	return blk_crypto_fallback_start_using_mode(key->crypto_cfg.crypto_mode);
 }
+EXPORT_SYMBOL_GPL(blk_crypto_start_using_key);
 
 /**
  * blk_crypto_evict_key() - Evict a blk_crypto_key from a block_device
diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
index 1e9db8e4acdf..272a6a3274bb 100644
--- a/drivers/md/Kconfig
+++ b/drivers/md/Kconfig
@@ -270,6 +270,14 @@ config DM_CRYPT
 
 	  If unsure, say N.
 
+config DM_INLINE_CRYPT
+	tristate "Inline crypt target support"
+	depends on BLK_DEV_DM || COMPILE_TEST
+	help
+	  This inline crypt device-mapper target allows creating a device
+	  that transparently encrypts the data on it using an inline
+	  crypto HW engine.
+
 config DM_SNAPSHOT
 	tristate "Snapshot target"
 	depends on BLK_DEV_DM
diff --git a/drivers/md/Makefile b/drivers/md/Makefile
index 476a214e4bdc..0e09b7665803 100644
--- a/drivers/md/Makefile
+++ b/drivers/md/Makefile
@@ -51,6 +51,7 @@ obj-$(CONFIG_DM_UNSTRIPED)	+= dm-unstripe.o
 obj-$(CONFIG_DM_BUFIO)		+= dm-bufio.o
 obj-$(CONFIG_DM_BIO_PRISON)	+= dm-bio-prison.o
 obj-$(CONFIG_DM_CRYPT)		+= dm-crypt.o
+obj-$(CONFIG_DM_INLINE_CRYPT)	+= dm-inline-crypt.o
 obj-$(CONFIG_DM_DELAY)		+= dm-delay.o
 obj-$(CONFIG_DM_DUST)		+= dm-dust.o
 obj-$(CONFIG_DM_FLAKEY)		+= dm-flakey.o
diff --git a/drivers/md/dm-inline-crypt.c b/drivers/md/dm-inline-crypt.c
new file mode 100644
index 000000000000..e94f86a3a5e0
--- /dev/null
+++ b/drivers/md/dm-inline-crypt.c
@@ -0,0 +1,316 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2024, Qualcomm Innovation Center, Inc. All rights reserved
+ *
+ * Based on work by Israel Rukshin in dm-crypt.c
+ */
+
+#include <linux/module.h>
+#include <linux/bio.h>
+#include <linux/blkdev.h>
+#include <linux/crypto.h>
+#include <linux/blk-crypto.h>
+#include <linux/device-mapper.h>
+
+#define DM_MSG_PREFIX "inline-crypt"
+
+struct inlinecrypt_config {
+	struct dm_dev *dev;
+	sector_t start;
+	u64 iv_offset;
+	unsigned int iv_size;
+	unsigned short sector_size;
+	unsigned char sector_shift;
+	unsigned int key_size;
+	enum blk_crypto_mode_num crypto_mode;
+	struct blk_crypto_key *blk_key;
+	u8 key[] __counted_by(key_size);
+};
+
+#define DM_CRYPT_DEFAULT_MAX_READ_SIZE		131072
+#define DM_CRYPT_DEFAULT_MAX_WRITE_SIZE		131072
+
+static unsigned int get_max_request_size(struct inlinecrypt_config *cc, bool wrt)
+{
+	unsigned int val, sector_align;
+
+	val = !wrt ? DM_CRYPT_DEFAULT_MAX_READ_SIZE :
+		     DM_CRYPT_DEFAULT_MAX_WRITE_SIZE;
+	if (wrt) {
+		if (unlikely(val > BIO_MAX_VECS << PAGE_SHIFT))
+			val = BIO_MAX_VECS << PAGE_SHIFT;
+	}
+	sector_align = max(bdev_logical_block_size(cc->dev->bdev),
+			   (unsigned int)cc->sector_size);
+	val = round_down(val, sector_align);
+	if (unlikely(!val))
+		val = sector_align;
+
+	return val >> SECTOR_SHIFT;
+}
+
+static int crypt_select_inline_crypt_mode(struct dm_target *ti, char *cipher,
+					  char *ivmode)
+{
+	struct inlinecrypt_config *cc = ti->private;
+
+	if (strcmp(cipher, "xts(aes128)") == 0) {
+		cc->crypto_mode = BLK_ENCRYPTION_MODE_AES_128_XTS;
+	} else if (strcmp(cipher, "xts(aes256)") == 0) {
+		cc->crypto_mode = BLK_ENCRYPTION_MODE_AES_256_XTS;
+	} else if (strcmp(cipher, "cbc(aes128)") == 0) {
+		cc->crypto_mode = BLK_ENCRYPTION_MODE_AES_128_CBC;
+	} else if (strcmp(cipher, "cbc(aes256)") == 0) {
+		cc->crypto_mode = BLK_ENCRYPTION_MODE_AES_256_CBC;
+	} else {
+		ti->error = "Invalid cipher for inline_crypt";
+		return -EINVAL;
+	}
+
+	cc->iv_size = 4;
+
+	return 0;
+}
+
+static int crypt_prepare_inline_crypt_key(struct inlinecrypt_config *cc)
+{
+	int ret;
+
+	cc->blk_key = kzalloc(sizeof(*cc->blk_key), GFP_KERNEL);
+	if (!cc->blk_key)
+		return -ENOMEM;
+
+	ret = blk_crypto_init_key(cc->blk_key, cc->key, cc->crypto_mode,
+				  cc->iv_size, cc->sector_size);
+	if (ret) {
+		DMERR("Failed to init inline encryption key");
+		goto bad_key;
+	}
+
+	ret = blk_crypto_start_using_key(cc->dev->bdev, cc->blk_key);
+	if (ret) {
+		DMERR("Failed to use inline encryption key");
+		goto bad_key;
+	}
+
+	return 0;
+
+bad_key:
+	kfree_sensitive(cc->blk_key);
+	cc->blk_key = NULL;
+	return ret;
+}
+
+static void crypt_destroy_inline_crypt_key(struct inlinecrypt_config *cc)
+{
+	if (cc->blk_key) {
+		blk_crypto_evict_key(cc->dev->bdev, cc->blk_key);
+		kfree_sensitive(cc->blk_key);
+		cc->blk_key = NULL;
+	}
+}
+
+static void crypt_inline_encrypt_submit(struct dm_target *ti, struct bio *bio)
+{
+	struct inlinecrypt_config *cc = ti->private;
+	u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
+
+	bio_set_dev(bio, cc->dev->bdev);
+	if (bio_sectors(bio)) {
+		memset(dun, 0, BLK_CRYPTO_MAX_IV_SIZE);
+		bio->bi_iter.bi_sector = cc->start +
+			dm_target_offset(ti, bio->bi_iter.bi_sector);
+		dun[0] = le64_to_cpu(bio->bi_iter.bi_sector + cc->iv_offset);
+		bio_crypt_set_ctx(bio, cc->blk_key, dun, GFP_KERNEL);
+	}
+
+	submit_bio_noacct(bio);
+}
+
+static int inlinecrypt_setkey(struct inlinecrypt_config *cc)
+{
+	crypt_destroy_inline_crypt_key(cc);
+
+	return crypt_prepare_inline_crypt_key(cc);
+}
+
+static int inlinecrypt_set_key(struct inlinecrypt_config *cc, char *key)
+{
+	int r = -EINVAL;
+	int key_string_len = strlen(key);
+
+	/* Decode key from its hex representation. */
+	if (cc->key_size && hex2bin(cc->key, key, cc->key_size) < 0)
+		goto out;
+
+	r = inlinecrypt_setkey(cc);
+out:
+	memset(key, '0', key_string_len);
+
+	return r;
+}
+
+static void inlinecrypt_dtr(struct dm_target *ti)
+{
+	struct inlinecrypt_config *cc = ti->private;
+
+	ti->private = NULL;
+
+	if (!cc)
+		return;
+
+	crypt_destroy_inline_crypt_key(cc);
+
+	if (cc->dev)
+		dm_put_device(ti, cc->dev);
+
+	kfree_sensitive(cc);
+}
+
+static int inlinecrypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+{
+	struct inlinecrypt_config *cc;
+	char *cipher_api = NULL;
+	char *cipher, *chainmode;
+	unsigned long long tmpll;
+	char *ivmode;
+	int key_size;
+	char dummy;
+	int ret;
+
+	if (argc < 5) {
+		ti->error = "Not enough arguments";
+		return -EINVAL;
+	}
+
+	key_size = strlen(argv[1]) >> 1;
+
+	cc = kzalloc(struct_size(cc, key, key_size), GFP_KERNEL);
+	if (!cc) {
+		ti->error = "Cannot allocate encryption context";
+		return -ENOMEM;
+	}
+	cc->key_size = key_size;
+	cc->sector_size = (1 << SECTOR_SHIFT);
+	cc->sector_shift = 0;
+
+	ti->private = cc;
+
+	if ((sscanf(argv[2], "%llu%c", &tmpll, &dummy) != 1) ||
+	    (tmpll & ((cc->sector_size >> SECTOR_SHIFT) - 1))) {
+		ti->error = "Invalid iv_offset sector";
+		goto bad;
+	}
+	cc->iv_offset = tmpll;
+
+	ret = dm_get_device(ti, argv[3], dm_table_get_mode(ti->table),
+			    &cc->dev);
+	if (ret) {
+		ti->error = "Device lookup failed";
+		goto bad;
+	}
+
+	ret = -EINVAL;
+	if (sscanf(argv[4], "%llu%c", &tmpll, &dummy) != 1 ||
+	    tmpll != (sector_t)tmpll) {
+		ti->error = "Invalid device sector";
+		goto bad;
+	}
+	cc->start = tmpll;
+
+	cipher = strsep(&argv[0], "-");
+	chainmode = strsep(&argv[0], "-");
+	ivmode = strsep(&argv[0], "-");
+
+	cipher_api = kmalloc(CRYPTO_MAX_ALG_NAME, GFP_KERNEL);
+	if (!cipher_api)
+		goto bad;
+
+	ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME,
+		       "%s(%s)", chainmode, cipher);
+	if (ret < 0 || ret >= CRYPTO_MAX_ALG_NAME) {
+		kfree(cipher_api);
+		ret = -ENOMEM;
+		goto bad;
+	}
+
+	ret = crypt_select_inline_crypt_mode(ti, cipher_api, ivmode);
+	if (ret)
+		goto bad;
+
+	/* Initialize and set key */
+	ret = inlinecrypt_set_key(cc, argv[1]);
+	if (ret < 0) {
+		ti->error = "Error decoding and setting key";
+		return ret;
+	}
+
+	return 0;
+
+bad:
+	ti->error = "Error in inlinecrypt mapping";
+	inlinecrypt_dtr(ti);
+	return ret;
+}
+
+static int inlinecrypt_map(struct dm_target *ti, struct bio *bio)
+{
+	struct inlinecrypt_config *cc = ti->private;
+	unsigned int max_sectors;
+
+	/*
+	 * If bio is REQ_PREFLUSH or REQ_OP_DISCARD, just bypass crypt queues.
+	 * - for REQ_PREFLUSH device-mapper core ensures that no IO is in-flight
+	 * - for REQ_OP_DISCARD caller must use flush if IO ordering matters
+	 */
+	if (unlikely(bio->bi_opf & REQ_PREFLUSH ||
+		     bio_op(bio) == REQ_OP_DISCARD)) {
+		bio_set_dev(bio, cc->dev->bdev);
+		if (bio_sectors(bio))
+			bio->bi_iter.bi_sector = cc->start +
+				dm_target_offset(ti, bio->bi_iter.bi_sector);
+		return DM_MAPIO_REMAPPED;
+	}
+
+	/*
+	 * Check if bio is too large, split as needed.
+	 */
+	max_sectors = get_max_request_size(cc, bio_data_dir(bio) == WRITE);
+	if (unlikely(bio_sectors(bio) > max_sectors))
+		dm_accept_partial_bio(bio, max_sectors);
+
+	/*
+	 * Ensure that bio is a multiple of internal sector encryption size
+	 * and is aligned to this size as defined in IO hints.
+	 */
+	if (unlikely((bio->bi_iter.bi_sector & ((cc->sector_size >> SECTOR_SHIFT) - 1)) != 0))
+		return DM_MAPIO_KILL;
+
+	if (unlikely(bio->bi_iter.bi_size & (cc->sector_size - 1)))
+		return DM_MAPIO_KILL;
+
+	crypt_inline_encrypt_submit(ti, bio);
+	return DM_MAPIO_SUBMITTED;
+}
+
+static int inlinecrypt_iterate_devices(struct dm_target *ti,
+				       iterate_devices_callout_fn fn, void *data)
+{
+	struct inlinecrypt_config *cc = ti->private;
+
+	return fn(ti, cc->dev, cc->start, ti->len, data);
+}
+
+static struct target_type inlinecrypt_target = {
+	.name = "inline-crypt",
+	.version = {1, 0, 0},
+	.module = THIS_MODULE,
+	.ctr = inlinecrypt_ctr,
+	.dtr = inlinecrypt_dtr,
+	.map = inlinecrypt_map,
+	.iterate_devices = inlinecrypt_iterate_devices,
+};
+module_dm(inlinecrypt);
+
+MODULE_AUTHOR("Md Sadre Alam <quic_mdalam@quicinc.com>");
+MODULE_DESCRIPTION(DM_NAME " target for inline encryption / decryption");
+MODULE_LICENSE("GPL");
diff --git a/include/linux/blk-crypto.h b/include/linux/blk-crypto.h
index 5e5822c18ee4..da503a05c5f6 100644
--- a/include/linux/blk-crypto.h
+++ b/include/linux/blk-crypto.h
@@ -10,10 +10,13 @@
 
 enum blk_crypto_mode_num {
 	BLK_ENCRYPTION_MODE_INVALID,
+	BLK_ENCRYPTION_MODE_AES_128_XTS,
 	BLK_ENCRYPTION_MODE_AES_256_XTS,
 	BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV,
 	BLK_ENCRYPTION_MODE_ADIANTUM,
 	BLK_ENCRYPTION_MODE_SM4_XTS,
+	BLK_ENCRYPTION_MODE_AES_128_CBC,
+	BLK_ENCRYPTION_MODE_AES_256_CBC,
 	BLK_ENCRYPTION_MODE_MAX,
 };
The QCOM SDCC controller supports an Inline Crypto Engine (ICE). This
driver enables inline encryption/decryption via ICE. The algorithms
supported by ICE are XTS(AES) and CBC(AES).

Signed-off-by: Md Sadre Alam <quic_mdalam@quicinc.com>
---
Change in [v2]

* Added dm-inlinecrypt driver support
* Squashed the patches "blk-crypto: Add additional algo modes for Inline
  encryption" and "md: dm-crypt: Add additional algo modes for inline
  encryption" into this one

Change in [v1]

* This patch was not included in [v1]

 block/blk-crypto.c           |  21 +++
 drivers/md/Kconfig           |   8 +
 drivers/md/Makefile          |   1 +
 drivers/md/dm-inline-crypt.c | 316 +++++++++++++++++++++++++++++++++++
 include/linux/blk-crypto.h   |   3 +
 5 files changed, 349 insertions(+)
 create mode 100644 drivers/md/dm-inline-crypt.c