
[03/29] crypto: skcipher - remove redundant clamping to page size

Message ID 20241221091056.282098-4-ebiggers@kernel.org
State New
Series crypto: scatterlist handling improvements

Commit Message

Eric Biggers Dec. 21, 2024, 9:10 a.m. UTC
From: Eric Biggers <ebiggers@google.com>

In the case where skcipher_walk_next() allocates a bounce page, that
page by definition has size PAGE_SIZE.  The number of bytes to copy 'n'
is guaranteed to fit in it, since earlier in the function it was clamped
to be at most a page.  Therefore remove the unnecessary logic that tried
to clamp 'n' again to fit in the bounce page.
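
For context, the clamping that already guarantees n <= PAGE_SIZE happens
earlier in skcipher_walk_next().  A rough sketch of that logic (paraphrased,
not the exact source; scatterwalk_clamp() limits the walk to the bytes left
in its current page):

	/* Paraphrased sketch of the earlier clamping in skcipher_walk_next() */
	unsigned int n = walk->total;
	n = scatterwalk_clamp(&walk->in, n);   /* at most the rest of the current input page */
	n = scatterwalk_clamp(&walk->out, n);  /* likewise for the current output page */
	/* hence n <= PAGE_SIZE by the time a bounce page could be allocated */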

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/skcipher.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

Patch

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 887cbce8f78d..c627e267b125 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -248,28 +248,24 @@  static int skcipher_walk_next(struct skcipher_walk *walk)
 			return skcipher_walk_done(walk, -EINVAL);
 
 slow_path:
 		return skcipher_next_slow(walk, bsize);
 	}
+	walk->nbytes = n;
 
 	if (unlikely((walk->in.offset | walk->out.offset) & walk->alignmask)) {
 		if (!walk->page) {
 			gfp_t gfp = skcipher_walk_gfp(walk);
 
 			walk->page = (void *)__get_free_page(gfp);
 			if (!walk->page)
 				goto slow_path;
 		}
-
-		walk->nbytes = min_t(unsigned, n,
-				     PAGE_SIZE - offset_in_page(walk->page));
 		walk->flags |= SKCIPHER_WALK_COPY;
 		return skcipher_next_copy(walk);
 	}
 
-	walk->nbytes = n;
-
 	return skcipher_next_fast(walk);
 }
 
 static int skcipher_copy_iv(struct skcipher_walk *walk)
 {