Message ID | CAKv+Gu8w+BuwxQjOtpnFPHnJNUzq7m0K+KJ8=FG2wHigaB54ng@mail.gmail.com |
---|---|
State | New |
On 12 September 2016 at 03:16, liushuoran <liushuoran@huawei.com> wrote:
> Hi Ard,
>
> Thanks for the prompt reply. With the patch, there is no panic anymore.
> But it seems that the encryption/decryption is not successful anyway.
>
> As Herbert points out, "If the page allocation fails in
> blkcipher_walk_next it'll simply switch over to processing it block by
> block". So does that mean the encryption/decryption should be
> successful even if the page allocation fails? Please correct me if I
> misunderstand anything. Thanks in advance.

Perhaps Herbert can explain: I don't see how the 'n = 0' assignment
results in the correct path being taken; this chunk (blkcipher.c:252)

    if (unlikely(n < bsize)) {
        err = blkcipher_next_slow(desc, walk, bsize, walk->alignmask);
        goto set_phys_lowmem;
    }

is skipped due to the fact that n == 0 and therefore bsize == 0, and so
the condition is always false for n == 0.

Therefore we end up here (blkcipher.c:257)

    walk->nbytes = n;
    if (walk->flags & BLKCIPHER_WALK_COPY) {
        err = blkcipher_next_copy(walk);
        goto set_phys_lowmem;
    }

where blkcipher_next_copy() unconditionally calls memcpy() with
walk->page as destination (even though we ended up here due to the fact
that walk->page == NULL).

So to me, it seems like we should be taking the blkcipher_next_slow()
path, which does a kmalloc() and bails with -ENOMEM if that fails.
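[Editor's note: to make the control flow above easier to follow, here is a
minimal standalone sketch of the path selection being described. It is not
the kernel code: fake_walk, next_step, and the printf calls are invented
for illustration, and only the two quoted branches are modeled.]

    #include <stdio.h>

    #define BLKCIPHER_WALK_COPY 1

    struct fake_walk {
            unsigned int nbytes;
            unsigned int flags;
            void *page;             /* NULL here: the page allocation failed */
    };

    static void next_step(struct fake_walk *walk, unsigned int n,
                          unsigned int bsize)
    {
            /* Mirrors the chunk quoted at blkcipher.c:252: with n == 0
             * forcing bsize == 0, 'n < bsize' can never be true, so the
             * slow (kmalloc-based) path is skipped. */
            if (n < bsize) {
                    printf("slow path: bounce buffer, -ENOMEM on failure\n");
                    return;
            }

            walk->nbytes = n;
            /* Mirrors blkcipher.c:257: the COPY path is entered even though
             * walk->page, the eventual memcpy() destination, is NULL. */
            if (walk->flags & BLKCIPHER_WALK_COPY)
                    printf("copy path with walk->page == %p\n", walk->page);
    }

    int main(void)
    {
            struct fake_walk walk = {
                    .flags = BLKCIPHER_WALK_COPY,
                    .page  = NULL,
            };

            /* The 'n = 0' assignment after a failed allocation makes both
             * n and the derived bsize zero, which selects the wrong path. */
            next_step(&walk, 0, 0);
            return 0;
    }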
diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c
index 5c888049d061..6b2aa0fd6cd0 100644
--- a/arch/arm64/crypto/aes-glue.c
+++ b/arch/arm64/crypto/aes-glue.c
@@ -216,7 +216,7 @@ static int ctr_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
 		err = blkcipher_walk_done(desc, &walk,
 					  walk.nbytes % AES_BLOCK_SIZE);
 	}
-	if (nbytes) {
+	if (walk.nbytes % AES_BLOCK_SIZE) {
 		u8 *tdst = walk.dst.virt.addr + blocks * AES_BLOCK_SIZE;
 		u8 *tsrc = walk.src.virt.addr + blocks * AES_BLOCK_SIZE;
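[Editor's note: the one-line change above keys the tail handling off the
current walk chunk (walk.nbytes % AES_BLOCK_SIZE) rather than the
request-level nbytes counter, which can still be nonzero in the
block-by-block fallback even when the chunk at hand contains no partial
block and walk.dst.virt.addr/walk.src.virt.addr do not cover a tail.
Below is a minimal standalone model of that distinction; handle_chunk and
its parameters are invented for illustration and only demonstrate the
guard, not the real pointer arithmetic.]

    #include <stdio.h>

    #define AES_BLOCK_SIZE 16

    /* One iteration of a simplified walk loop: 'chunk' plays the role of
     * walk.nbytes (what this iteration was handed), 'remaining' the role
     * of the request-level nbytes counter. */
    static void handle_chunk(unsigned int chunk, unsigned int remaining)
    {
            unsigned int blocks = chunk / AES_BLOCK_SIZE;
            unsigned int tail = chunk % AES_BLOCK_SIZE;

            printf("CTR: %u full block(s), %u byte(s) left in request\n",
                   blocks, remaining);

            /* Old guard was 'if (remaining)': it enters the tail path
             * whenever the request has bytes left, even when this chunk
             * ends on a block boundary. New guard runs the tail code only
             * when there really is a partial block in this chunk. */
            if (tail)
                    printf("xor %u tail byte(s) with keystream\n", tail);
    }

    int main(void)
    {
            /* Block-by-block fallback: the walk hands over exactly one
             * block while the request still has 30 bytes in total; the
             * old 'if (nbytes)' guard would wrongly run the tail code. */
            handle_chunk(AES_BLOCK_SIZE, 30);

            /* Final chunk with a genuine 6-byte tail: the new guard runs
             * the tail code exactly where it is needed. */
            handle_chunk(AES_BLOCK_SIZE + 6, 0);
            return 0;
    }

Under this reading, the fix keeps the tdst/tsrc pointer arithmetic behind
a condition that is true only when those pointers genuinely cover a
partial block.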