From patchwork Mon Dec 30 00:13:50 2024
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 01/29] crypto: skcipher - document skcipher_walk_done() and
 rename some vars
Date: Sun, 29 Dec 2024 16:13:50 -0800
Message-ID: <20241230001418.74739-2-ebiggers@kernel.org>
In-Reply-To: <20241230001418.74739-1-ebiggers@kernel.org>

skcipher_walk_done() has an unusual calling convention, and some of its
local variables have unclear names.  Document it and rename variables to
make it a bit clearer what is going on.  No change in behavior.
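
For illustration, a minimal sketch of the calling convention that the new
kerneldoc below documents.  This is not code from the series;
encrypt_blocks() is a hypothetical block-processing helper, but the loop
shape is the standard skcipher_walk pattern:

	static int example_encrypt(struct skcipher_request *req)
	{
		struct skcipher_walk walk;
		int err;

		err = skcipher_walk_virt(&walk, req, false);
		while (walk.nbytes) {
			unsigned int n = walk.nbytes;

			/* Process as many whole blocks as possible this step. */
			n -= n % AES_BLOCK_SIZE;
			encrypt_blocks(walk.dst.virt.addr, walk.src.virt.addr, n);

			/* Report the number of bytes *not* processed (or a
			 * -errno to abort); skcipher_walk_done() advances the
			 * walk to the next step. */
			err = skcipher_walk_done(&walk, walk.nbytes - n);
		}
		return err;
	}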
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 crypto/skcipher.c                  | 50 ++++++++++++++++++++----------
 include/crypto/internal/skcipher.h |  2 +-
 2 files changed, 35 insertions(+), 17 deletions(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index d5fe0eca3826..8749c44f98a2 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -87,21 +87,39 @@ static int skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
 	addr = skcipher_get_spot(addr, bsize);
 	scatterwalk_copychunks(addr, &walk->out, bsize, 1);
 	return 0;
 }
 
-int skcipher_walk_done(struct skcipher_walk *walk, int err)
+/**
+ * skcipher_walk_done() - finish one step of a skcipher_walk
+ * @walk: the skcipher_walk
+ * @res: number of bytes *not* processed (>= 0) from walk->nbytes,
+ *	 or a -errno value to terminate the walk due to an error
+ *
+ * This function cleans up after one step of walking through the source and
+ * destination scatterlists, and advances to the next step if applicable.
+ * walk->nbytes is set to the number of bytes available in the next step,
+ * walk->total is set to the new total number of bytes remaining, and
+ * walk->{src,dst}.virt.addr is set to the next pair of data pointers.  If
+ * there is no more data, or if an error occurred (i.e. -errno return), then
+ * walk->nbytes and walk->total are set to 0 and all resources owned by the
+ * skcipher_walk are freed.
+ *
+ * Return: 0 or a -errno value.  If @res was a -errno value then it will be
+ *	   returned, but other errors may occur too.
+ */
+int skcipher_walk_done(struct skcipher_walk *walk, int res)
 {
-	unsigned int n = walk->nbytes;
-	unsigned int nbytes = 0;
+	unsigned int n = walk->nbytes; /* num bytes processed this step */
+	unsigned int total = 0; /* new total remaining */
 
 	if (!n)
 		goto finish;
 
-	if (likely(err >= 0)) {
-		n -= err;
-		nbytes = walk->total - n;
+	if (likely(res >= 0)) {
+		n -= res; /* subtract num bytes *not* processed */
+		total = walk->total - n;
 	}
 
 	if (likely(!(walk->flags & (SKCIPHER_WALK_SLOW |
 				    SKCIPHER_WALK_COPY |
 				    SKCIPHER_WALK_DIFF)))) {
@@ -113,35 +131,35 @@ int skcipher_walk_done(struct skcipher_walk *walk, int err)
 	} else if (walk->flags & SKCIPHER_WALK_COPY) {
 		skcipher_map_dst(walk);
 		memcpy(walk->dst.virt.addr, walk->page, n);
 		skcipher_unmap_dst(walk);
 	} else if (unlikely(walk->flags & SKCIPHER_WALK_SLOW)) {
-		if (err > 0) {
+		if (res > 0) {
 			/*
 			 * Didn't process all bytes.  Either the algorithm is
 			 * broken, or this was the last step and it turned out
 			 * the message wasn't evenly divisible into blocks but
 			 * the algorithm requires it.
 			 */
-			err = -EINVAL;
-			nbytes = 0;
+			res = -EINVAL;
+			total = 0;
 		} else
 			n = skcipher_done_slow(walk, n);
 	}
 
-	if (err > 0)
-		err = 0;
+	if (res > 0)
+		res = 0;
 
-	walk->total = nbytes;
+	walk->total = total;
 	walk->nbytes = 0;
 
 	scatterwalk_advance(&walk->in, n);
 	scatterwalk_advance(&walk->out, n);
-	scatterwalk_done(&walk->in, 0, nbytes);
-	scatterwalk_done(&walk->out, 1, nbytes);
+	scatterwalk_done(&walk->in, 0, total);
+	scatterwalk_done(&walk->out, 1, total);
 
-	if (nbytes) {
+	if (total) {
 		crypto_yield(walk->flags & SKCIPHER_WALK_SLEEP ?
 			     CRYPTO_TFM_REQ_MAY_SLEEP : 0);
 		return skcipher_walk_next(walk);
 	}
 
@@ -156,11 +174,11 @@ int skcipher_walk_done(struct skcipher_walk *walk, int err)
 		kfree(walk->buffer);
 	if (walk->page)
 		free_page((unsigned long)walk->page);
 
 out:
-	return err;
+	return res;
 }
 EXPORT_SYMBOL_GPL(skcipher_walk_done);
 
 static int skcipher_next_slow(struct skcipher_walk *walk, unsigned int bsize)
 {
diff --git a/include/crypto/internal/skcipher.h b/include/crypto/internal/skcipher.h
index 08d1e8c63afc..4f49621d3eb6 100644
--- a/include/crypto/internal/skcipher.h
+++ b/include/crypto/internal/skcipher.h
@@ -194,11 +194,11 @@ void crypto_unregister_lskcipher(struct lskcipher_alg *alg);
 int crypto_register_lskciphers(struct lskcipher_alg *algs, int count);
 void crypto_unregister_lskciphers(struct lskcipher_alg *algs, int count);
 int lskcipher_register_instance(struct crypto_template *tmpl,
 				struct lskcipher_instance *inst);
 
-int skcipher_walk_done(struct skcipher_walk *walk, int err);
+int skcipher_walk_done(struct skcipher_walk *walk, int res);
 int skcipher_walk_virt(struct skcipher_walk *walk,
 		       struct skcipher_request *req, bool atomic);
 int skcipher_walk_aead_encrypt(struct skcipher_walk *walk,
 			       struct aead_request *req, bool atomic);

From patchwork Mon Dec 30 00:13:51 2024
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 02/29] crypto: skcipher - remove unnecessary page alignment
 of bounce buffer
Date: Sun, 29 Dec 2024 16:13:51 -0800
Message-ID: <20241230001418.74739-3-ebiggers@kernel.org>
In-Reply-To: <20241230001418.74739-1-ebiggers@kernel.org>

In the slow path of skcipher_walk where it uses a slab bounce buffer
for the data and/or IV, do not bother to avoid crossing a page boundary
in the part(s) of this buffer that are used, and do not bother to
allocate extra space in the buffer for that purpose.  The buffer is
accessed only by virtual address, so pages are irrelevant for it.

This logic may have been present due to the physical address support in
skcipher_walk, but that has now been removed.  Or it may have been
present to be consistent with the fast path that currently does not
hand back addresses that span pages, but that behavior is a side effect
of the pages being "mapped" one by one and is not actually a
requirement.
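
To make the retained arithmetic concrete, here is the allocation logic in
isolation, as a sketch (names mirror the patch below; the inequality in the
final comment is why the smaller size suffices):

	unsigned int n;
	u8 *buffer, *aligned;

	/* kzalloc() already returns memory aligned to
	 * crypto_tfm_ctx_alignment(), so only the remaining slack is needed
	 * to reach the next (alignmask + 1) boundary: */
	n = bsize + (alignmask & ~(crypto_tfm_ctx_alignment() - 1));
	buffer = kzalloc(n, skcipher_walk_gfp(walk));
	aligned = PTR_ALIGN(buffer, alignmask + 1);
	/* aligned + bsize <= buffer + n always holds, so no extra padding
	 * to avoid straddling a page is required. */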
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 crypto/skcipher.c | 62 ++++++++++++-----------------------------
 1 file changed, 15 insertions(+), 47 deletions(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 8749c44f98a2..887cbce8f78d 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -61,32 +61,20 @@ static inline void skcipher_unmap_dst(struct skcipher_walk *walk)
 static inline gfp_t skcipher_walk_gfp(struct skcipher_walk *walk)
 {
 	return walk->flags & SKCIPHER_WALK_SLEEP ? GFP_KERNEL : GFP_ATOMIC;
 }
 
-/* Get a spot of the specified length that does not straddle a page.
- * The caller needs to ensure that there is enough space for this operation.
- */
-static inline u8 *skcipher_get_spot(u8 *start, unsigned int len)
-{
-	u8 *end_page = (u8 *)(((unsigned long)(start + len - 1)) & PAGE_MASK);
-
-	return max(start, end_page);
-}
-
 static inline struct skcipher_alg *__crypto_skcipher_alg(
 	struct crypto_alg *alg)
 {
 	return container_of(alg, struct skcipher_alg, base);
 }
 
 static int skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
 {
-	u8 *addr;
+	u8 *addr = PTR_ALIGN(walk->buffer, walk->alignmask + 1);
 
-	addr = (u8 *)ALIGN((unsigned long)walk->buffer, walk->alignmask + 1);
-	addr = skcipher_get_spot(addr, bsize);
 	scatterwalk_copychunks(addr, &walk->out, bsize, 1);
 	return 0;
 }
 
 /**
@@ -181,37 +169,26 @@ int skcipher_walk_done(struct skcipher_walk *walk, int res)
 EXPORT_SYMBOL_GPL(skcipher_walk_done);
 
 static int skcipher_next_slow(struct skcipher_walk *walk, unsigned int bsize)
 {
 	unsigned alignmask = walk->alignmask;
-	unsigned a;
 	unsigned n;
 	u8 *buffer;
 
 	if (!walk->buffer)
 		walk->buffer = walk->page;
 	buffer = walk->buffer;
-	if (buffer)
-		goto ok;
-
-	/* Start with the minimum alignment of kmalloc. */
-	a = crypto_tfm_ctx_alignment() - 1;
-	n = bsize;
-
-	/* Minimum size to align buffer by alignmask. */
-	n += alignmask & ~a;
-
-	/* Minimum size to ensure buffer does not straddle a page. */
-	n += (bsize - 1) & ~(alignmask | a);
-
-	buffer = kzalloc(n, skcipher_walk_gfp(walk));
-	if (!buffer)
-		return skcipher_walk_done(walk, -ENOMEM);
-	walk->buffer = buffer;
-ok:
+	if (!buffer) {
+		/* Min size for a buffer of bsize bytes aligned to alignmask */
+		n = bsize + (alignmask & ~(crypto_tfm_ctx_alignment() - 1));
+
+		buffer = kzalloc(n, skcipher_walk_gfp(walk));
+		if (!buffer)
+			return skcipher_walk_done(walk, -ENOMEM);
+		walk->buffer = buffer;
+	}
 	walk->dst.virt.addr = PTR_ALIGN(buffer, alignmask + 1);
-	walk->dst.virt.addr = skcipher_get_spot(walk->dst.virt.addr, bsize);
 	walk->src.virt.addr = walk->dst.virt.addr;
 
 	scatterwalk_copychunks(walk->src.virt.addr, &walk->in, bsize, 0);
 
 	walk->nbytes = bsize;
@@ -294,34 +271,25 @@ static int skcipher_walk_next(struct skcipher_walk *walk)
 	return skcipher_next_fast(walk);
 }
 
 static int skcipher_copy_iv(struct skcipher_walk *walk)
 {
-	unsigned a = crypto_tfm_ctx_alignment() - 1;
 	unsigned alignmask = walk->alignmask;
 	unsigned ivsize = walk->ivsize;
-	unsigned bs = walk->stride;
-	unsigned aligned_bs;
+	unsigned aligned_stride = ALIGN(walk->stride, alignmask + 1);
 	unsigned size;
 	u8 *iv;
 
-	aligned_bs = ALIGN(bs, alignmask + 1);
-
-	/* Minimum size to align buffer by alignmask. */
-	size = alignmask & ~a;
-
-	size += aligned_bs + ivsize;
-
-	/* Minimum size to ensure buffer does not straddle a page. */
-	size += (bs - 1) & ~(alignmask | a);
+	/* Min size for a buffer of stride + ivsize, aligned to alignmask */
+	size = aligned_stride + ivsize +
+	       (alignmask & ~(crypto_tfm_ctx_alignment() - 1));
 
 	walk->buffer = kmalloc(size, skcipher_walk_gfp(walk));
 	if (!walk->buffer)
 		return -ENOMEM;
 
-	iv = PTR_ALIGN(walk->buffer, alignmask + 1);
-	iv = skcipher_get_spot(iv, bs) + aligned_bs;
+	iv = PTR_ALIGN(walk->buffer, alignmask + 1) + aligned_stride;
 
 	walk->iv = memcpy(iv, walk->iv, walk->ivsize);
 	return 0;
 }

From patchwork Mon Dec 30 00:13:52 2024
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 03/29] crypto: skcipher - remove redundant clamping to
 page size
Date: Sun, 29 Dec 2024 16:13:52 -0800
Message-ID: <20241230001418.74739-4-ebiggers@kernel.org>
In-Reply-To: <20241230001418.74739-1-ebiggers@kernel.org>

In the case where skcipher_walk_next() allocates a bounce page, that
page by definition has size PAGE_SIZE.  The number of bytes to copy 'n'
is guaranteed to fit in it, since earlier in the function it was
clamped to be at most a page.  Therefore remove the unnecessary logic
that tried to clamp 'n' again to fit in the bounce page.

Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 crypto/skcipher.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 887cbce8f78d..c627e267b125 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -248,28 +248,24 @@ static int skcipher_walk_next(struct skcipher_walk *walk)
 			return skcipher_walk_done(walk, -EINVAL);
 
 slow_path:
 		return skcipher_next_slow(walk, bsize);
 	}
+	walk->nbytes = n;
 
 	if (unlikely((walk->in.offset | walk->out.offset) & walk->alignmask)) {
 		if (!walk->page) {
 			gfp_t gfp = skcipher_walk_gfp(walk);
 
 			walk->page = (void *)__get_free_page(gfp);
 			if (!walk->page)
 				goto slow_path;
 		}
-
-		walk->nbytes = min_t(unsigned, n,
-				     PAGE_SIZE - offset_in_page(walk->page));
 		walk->flags |= SKCIPHER_WALK_COPY;
 		return skcipher_next_copy(walk);
 	}
 
-	walk->nbytes = n;
-
 	return skcipher_next_fast(walk);
 }
 
 static int skcipher_copy_iv(struct skcipher_walk *walk)
 {
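
For reference, the clamping that the commit message relies on happens near
the top of skcipher_walk_next(); sketched here from the context visible
elsewhere in this series:

	n = walk->total;
	bsize = min(walk->stride, max(n, walk->blocksize));
	n = scatterwalk_clamp(&walk->in, n);	/* <= rest of current src page */
	n = scatterwalk_clamp(&walk->out, n);	/* <= rest of current dst page */
	/* scatterwalk_clamp() never returns more than one page's worth, so
	 * n always fits in the freshly allocated bounce page. */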
From patchwork Mon Dec 30 00:13:55 2024
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 06/29] crypto: skcipher - clean up initialization of
 skcipher_walk::flags
Date: Sun, 29 Dec 2024 16:13:55 -0800
Message-ID: <20241230001418.74739-7-ebiggers@kernel.org>
In-Reply-To: <20241230001418.74739-1-ebiggers@kernel.org>

- Initialize SKCIPHER_WALK_SLEEP in a consistent way, and check for
  atomic=true at the same time as CRYPTO_TFM_REQ_MAY_SLEEP.  Technically
  atomic=true only needs to apply after the first step, but it is very
  rarely used, so optimize for the common case instead and check
  'atomic' alongside CRYPTO_TFM_REQ_MAY_SLEEP; this is more efficient.

- Initialize flags other than SKCIPHER_WALK_SLEEP to 0 rather than
  preserving them.  No caller actually initializes the flags, which
  makes it impossible to use their original values for anything.
  Indeed, that does not happen and all meaningful flags get overridden
  anyway.  It may have been thought that just clearing one flag would
  be faster than clearing all flags, but that's not the case, as the
  former is a read-modify-write operation whereas the latter is just a
  write.

- Move the explicit clearing of SKCIPHER_WALK_SLOW, SKCIPHER_WALK_COPY,
  and SKCIPHER_WALK_DIFF into skcipher_walk_done(), since it is now
  only needed on non-first steps.
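
As a sketch of the second point, the difference between the old and new
initialization reduced to its essentials (both forms taken from the diff
below):

	/* Old shape: other flag bits are preserved, so the compiler must
	 * emit a read-modify-write of walk->flags. */
	walk->flags &= ~SKCIPHER_WALK_SLEEP;
	walk->flags |= req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ?
		       SKCIPHER_WALK_SLEEP : 0;

	/* New shape: nothing needs preserving, so a plain store suffices,
	 * with 'atomic' folded in up front. */
	if ((req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) && !atomic)
		walk->flags = SKCIPHER_WALK_SLEEP;
	else
		walk->flags = 0;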
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 crypto/skcipher.c | 39 +++++++++++++--------------------------
 1 file changed, 13 insertions(+), 26 deletions(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 17f4bc79ca8b..e54d1ad46566 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -146,10 +146,12 @@ int skcipher_walk_done(struct skcipher_walk *walk, int res)
 	scatterwalk_done(&walk->out, 1, total);
 
 	if (total) {
 		crypto_yield(walk->flags & SKCIPHER_WALK_SLEEP ?
 			     CRYPTO_TFM_REQ_MAY_SLEEP : 0);
+		walk->flags &= ~(SKCIPHER_WALK_SLOW | SKCIPHER_WALK_COPY |
+				 SKCIPHER_WALK_DIFF);
 		return skcipher_walk_next(walk);
 	}
 
 finish:
 	/* Short-circuit for the common/fast path. */
@@ -233,13 +235,10 @@ static int skcipher_next_fast(struct skcipher_walk *walk)
 static int skcipher_walk_next(struct skcipher_walk *walk)
 {
 	unsigned int bsize;
 	unsigned int n;
 
-	walk->flags &= ~(SKCIPHER_WALK_SLOW | SKCIPHER_WALK_COPY |
-			 SKCIPHER_WALK_DIFF);
-
 	n = walk->total;
 	bsize = min(walk->stride, max(n, walk->blocksize));
 	n = scatterwalk_clamp(&walk->in, n);
 	n = scatterwalk_clamp(&walk->out, n);
 
@@ -309,55 +308,53 @@ static int skcipher_walk_first(struct skcipher_walk *walk)
 
 int skcipher_walk_virt(struct skcipher_walk *walk,
 		       struct skcipher_request *req, bool atomic)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
-	int err = 0;
 
 	might_sleep_if(req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP);
 
 	walk->total = req->cryptlen;
 	walk->nbytes = 0;
 	walk->iv = req->iv;
 	walk->oiv = req->iv;
+	if ((req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) && !atomic)
+		walk->flags = SKCIPHER_WALK_SLEEP;
+	else
+		walk->flags = 0;
 
 	if (unlikely(!walk->total))
-		goto out;
+		return 0;
 
 	scatterwalk_start(&walk->in, req->src);
 	scatterwalk_start(&walk->out, req->dst);
 
-	walk->flags &= ~SKCIPHER_WALK_SLEEP;
-	walk->flags |= req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ?
-		       SKCIPHER_WALK_SLEEP : 0;
-
 	walk->blocksize = crypto_skcipher_blocksize(tfm);
 	walk->ivsize = crypto_skcipher_ivsize(tfm);
 	walk->alignmask = crypto_skcipher_alignmask(tfm);
 
 	if (alg->co.base.cra_type != &crypto_skcipher_type)
 		walk->stride = alg->co.chunksize;
 	else
 		walk->stride = alg->walksize;
 
-	err = skcipher_walk_first(walk);
-out:
-	walk->flags &= atomic ? ~SKCIPHER_WALK_SLEEP : ~0;
-
-	return err;
+	return skcipher_walk_first(walk);
 }
 EXPORT_SYMBOL_GPL(skcipher_walk_virt);
 
 static int skcipher_walk_aead_common(struct skcipher_walk *walk,
 				     struct aead_request *req, bool atomic)
 {
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
-	int err;
 
 	walk->nbytes = 0;
 	walk->iv = req->iv;
 	walk->oiv = req->iv;
+	if ((req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) && !atomic)
+		walk->flags = SKCIPHER_WALK_SLEEP;
+	else
+		walk->flags = 0;
 
 	if (unlikely(!walk->total))
 		return 0;
 
 	scatterwalk_start(&walk->in, req->src);
@@ -367,26 +364,16 @@ static int skcipher_walk_aead_common(struct skcipher_walk *walk,
 	scatterwalk_copychunks(NULL, &walk->out, req->assoclen, 2);
 
 	scatterwalk_done(&walk->in, 0, walk->total);
 	scatterwalk_done(&walk->out, 0, walk->total);
 
-	if (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP)
-		walk->flags |= SKCIPHER_WALK_SLEEP;
-	else
-		walk->flags &= ~SKCIPHER_WALK_SLEEP;
-
 	walk->blocksize = crypto_aead_blocksize(tfm);
 	walk->stride = crypto_aead_chunksize(tfm);
 	walk->ivsize = crypto_aead_ivsize(tfm);
 	walk->alignmask = crypto_aead_alignmask(tfm);
 
-	err = skcipher_walk_first(walk);
-
-	if (atomic)
-		walk->flags &= ~SKCIPHER_WALK_SLEEP;
-
-	return err;
+	return skcipher_walk_first(walk);
 }
 
 int skcipher_walk_aead_encrypt(struct skcipher_walk *walk,
 			       struct aead_request *req, bool atomic)
 {

From patchwork Mon Dec 30 00:13:56 2024
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 07/29] crypto: skcipher - optimize initializing
 skcipher_walk fields
Date: Sun, 29 Dec 2024 16:13:56 -0800
Message-ID: <20241230001418.74739-8-ebiggers@kernel.org>
In-Reply-To: <20241230001418.74739-1-ebiggers@kernel.org>

The helper functions like crypto_skcipher_blocksize() take in a pointer
to a tfm object, but they actually return properties of the algorithm.
As the Linux kernel is compiled with -fno-strict-aliasing, the compiler
has to assume that the writes to struct skcipher_walk could clobber the
tfm's pointer to its algorithm.  Thus it gets repeatedly reloaded in
the generated code.  Therefore, replace the use of these helper
functions with straightforward accesses to the struct fields.

Note that while *users* of the skcipher and aead APIs are supposed to
use the helper functions, this particular code is part of the API
*implementation* in crypto/skcipher.c, which already accesses the
algorithm struct directly in many cases.  So there is no reason to
prefer the helper functions here.
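
A reduced illustration of the reload problem (simplified structs, not the
real kernel definitions):

	struct alg { unsigned int blocksize, ivsize; };
	struct tfm { struct alg *alg; };
	struct walk { unsigned int blocksize, ivsize; };

	void init_walk(struct walk *walk, struct tfm *tfm)
	{
		/* With -fno-strict-aliasing, each store to *walk may alias
		 * tfm->alg, so tfm->alg is reloaded before the next read: */
		walk->blocksize = tfm->alg->blocksize;
		walk->ivsize = tfm->alg->ivsize;
	}

	void init_walk_cached(struct walk *walk, struct tfm *tfm)
	{
		const struct alg *alg = tfm->alg;	/* loaded once */

		walk->blocksize = alg->blocksize;
		walk->ivsize = alg->ivsize;
	}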
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 crypto/skcipher.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index e54d1ad46566..7ef2e4ddf07a 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -306,12 +306,12 @@ static int skcipher_walk_first(struct skcipher_walk *walk)
 }
 
 int skcipher_walk_virt(struct skcipher_walk *walk,
 		       struct skcipher_request *req, bool atomic)
 {
-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
+	const struct skcipher_alg *alg =
+		crypto_skcipher_alg(crypto_skcipher_reqtfm(req));
 
 	might_sleep_if(req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP);
 
 	walk->total = req->cryptlen;
 	walk->nbytes = 0;
@@ -326,13 +326,13 @@ int skcipher_walk_virt(struct skcipher_walk *walk,
 		return 0;
 
 	scatterwalk_start(&walk->in, req->src);
 	scatterwalk_start(&walk->out, req->dst);
 
-	walk->blocksize = crypto_skcipher_blocksize(tfm);
-	walk->ivsize = crypto_skcipher_ivsize(tfm);
-	walk->alignmask = crypto_skcipher_alignmask(tfm);
+	walk->blocksize = alg->base.cra_blocksize;
+	walk->ivsize = alg->co.ivsize;
+	walk->alignmask = alg->base.cra_alignmask;
 
 	if (alg->co.base.cra_type != &crypto_skcipher_type)
 		walk->stride = alg->co.chunksize;
 	else
 		walk->stride = alg->walksize;
@@ -342,11 +342,11 @@ EXPORT_SYMBOL_GPL(skcipher_walk_virt);
 
 static int skcipher_walk_aead_common(struct skcipher_walk *walk,
 				     struct aead_request *req, bool atomic)
 {
-	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	const struct aead_alg *alg = crypto_aead_alg(crypto_aead_reqtfm(req));
 
 	walk->nbytes = 0;
 	walk->iv = req->iv;
 	walk->oiv = req->iv;
 	if ((req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) && !atomic)
@@ -364,14 +364,14 @@ static int skcipher_walk_aead_common(struct skcipher_walk *walk,
 	scatterwalk_copychunks(NULL, &walk->out, req->assoclen, 2);
 
 	scatterwalk_done(&walk->in, 0, walk->total);
 	scatterwalk_done(&walk->out, 0, walk->total);
 
-	walk->blocksize = crypto_aead_blocksize(tfm);
-	walk->stride = crypto_aead_chunksize(tfm);
-	walk->ivsize = crypto_aead_ivsize(tfm);
-	walk->alignmask = crypto_aead_alignmask(tfm);
+	walk->blocksize = alg->base.cra_blocksize;
+	walk->stride = alg->chunksize;
+	walk->ivsize = alg->ivsize;
+	walk->alignmask = alg->base.cra_alignmask;
 
 	return skcipher_walk_first(walk);
 }
 
 int skcipher_walk_aead_encrypt(struct skcipher_walk *walk,

From patchwork Mon Dec 30 00:13:59 2024
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 Christophe Leroy, Danny Tsen, Michael Ellerman, Naveen N Rao,
 Nicholas Piggin, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v2 10/29] crypto: powerpc/p10-aes-gcm - simplify handling of
 linear associated data
Date: Sun, 29 Dec 2024 16:13:59 -0800
Message-ID: <20241230001418.74739-11-ebiggers@kernel.org>
In-Reply-To: <20241230001418.74739-1-ebiggers@kernel.org>

p10_aes_gcm_crypt() is abusing the scatter_walk API to get the virtual
address for the first source scatterlist element.  But this code is
only built for PPC64, which is a !HIGHMEM platform, and it can read
past a page boundary from the address returned by scatterwalk_map(),
which means it already assumes the address is from the kernel's direct
map.  Thus, just use sg_virt() instead to get the same result in a
simpler way.

Cc: Christophe Leroy
Cc: Danny Tsen
Cc: Michael Ellerman
Cc: Naveen N Rao
Cc: Nicholas Piggin
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
This patch is part of a long series touching many files, so I have
limited the Cc list on the full series.  If you want the full series
and did not receive it, please retrieve it from lore.kernel.org.
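
For context, sg_virt() boils down to a direct-map lookup, roughly as below
(paraphrased from the scatterlist API; sg_virt_sketch() is just an
illustrative name, and the result is valid only when the backing page is
in the kernel direct map, which is always true on !HIGHMEM platforms such
as ppc64):

	static inline void *sg_virt_sketch(struct scatterlist *sg)
	{
		return page_address(sg_page(sg)) + sg->offset;
	}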
 arch/powerpc/crypto/aes-gcm-p10-glue.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/crypto/aes-gcm-p10-glue.c b/arch/powerpc/crypto/aes-gcm-p10-glue.c
index f37b3d13fc53..2862c3cf8e41 100644
--- a/arch/powerpc/crypto/aes-gcm-p10-glue.c
+++ b/arch/powerpc/crypto/aes-gcm-p10-glue.c
@@ -212,11 +212,10 @@ static int p10_aes_gcm_crypt(struct aead_request *req, u8 *riv,
 	struct p10_aes_gcm_ctx *ctx = crypto_tfm_ctx(tfm);
 	u8 databuf[sizeof(struct gcm_ctx) + PPC_ALIGN];
 	struct gcm_ctx *gctx = PTR_ALIGN((void *)databuf, PPC_ALIGN);
 	u8 hashbuf[sizeof(struct Hash_ctx) + PPC_ALIGN];
 	struct Hash_ctx *hash = PTR_ALIGN((void *)hashbuf, PPC_ALIGN);
-	struct scatter_walk assoc_sg_walk;
 	struct skcipher_walk walk;
 	u8 *assocmem = NULL;
 	u8 *assoc;
 	unsigned int cryptlen = req->cryptlen;
 	unsigned char ivbuf[AES_BLOCK_SIZE+PPC_ALIGN];
@@ -232,12 +231,11 @@ static int p10_aes_gcm_crypt(struct aead_request *req, u8 *riv,
 	memset(ivbuf, 0, sizeof(ivbuf));
 	memcpy(iv, riv, GCM_IV_SIZE);
 
 	/* Linearize assoc, if not already linear */
 	if (req->src->length >= assoclen && req->src->length) {
-		scatterwalk_start(&assoc_sg_walk, req->src);
-		assoc = scatterwalk_map(&assoc_sg_walk);
+		assoc = sg_virt(req->src); /* ppc64 is !HIGHMEM */
 	} else {
 		gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
 			      GFP_KERNEL : GFP_ATOMIC;
 
 		/* assoc can be any length, so must be on heap */
@@ -251,13 +249,11 @@ static int p10_aes_gcm_crypt(struct aead_request *req, u8 *riv,
 	vsx_begin();
 	gcmp10_init(gctx, iv, (unsigned char *) &ctx->enc_key, hash, assoc, assoclen);
 	vsx_end();
 
-	if (!assocmem)
-		scatterwalk_unmap(assoc);
-	else
+	if (assocmem)
 		kfree(assocmem);
 
 	if (enc)
 		ret = skcipher_walk_aead_encrypt(&walk, req, false);
 	else

From patchwork Mon Dec 30 00:14:01 2024
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 12/29] crypto: scatterwalk - add new functions for
 skipping data
Date: Sun, 29 Dec 2024 16:14:01 -0800
Message-ID: <20241230001418.74739-13-ebiggers@kernel.org>
In-Reply-To: <20241230001418.74739-1-ebiggers@kernel.org>

Add scatterwalk_skip() to skip the given number of bytes in a
scatter_walk.  Previously support for skipping was provided through
scatterwalk_copychunks(..., 2) followed by scatterwalk_done(), which
was confusing and less efficient.

Also add scatterwalk_start_at_pos() which starts a scatter_walk at the
given position, equivalent to scatterwalk_start() + scatterwalk_skip().
This addresses another common need in a more streamlined way.

Later patches will convert various users to use these functions.
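
As a sketch of the intended conversions, here is skipping over an AEAD
request's associated data before walking the ciphertext; the "before" form
is the copychunks idiom the message calls confusing:

	struct scatter_walk walk;

	/* Before: skip assoclen bytes via a NULL "copy" in mode 2. */
	scatterwalk_start(&walk, req->src);
	scatterwalk_copychunks(NULL, &walk, req->assoclen, 2);

	/* After: start the walk directly at the desired position. */
	scatterwalk_start_at_pos(&walk, req->src, req->assoclen);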
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 crypto/scatterwalk.c         | 15 +++++++++++++++
 include/crypto/scatterwalk.h | 18 ++++++++++++++++++
 2 files changed, 33 insertions(+)

diff --git a/crypto/scatterwalk.c b/crypto/scatterwalk.c
index 16f6ba896fb6..af436ad02e3f 100644
--- a/crypto/scatterwalk.c
+++ b/crypto/scatterwalk.c
@@ -13,10 +13,25 @@
 #include <linux/kernel.h>
 #include <linux/mm.h>
 #include <linux/module.h>
 #include <linux/scatterlist.h>
 
+void scatterwalk_skip(struct scatter_walk *walk, unsigned int nbytes)
+{
+	struct scatterlist *sg = walk->sg;
+
+	nbytes += walk->offset - sg->offset;
+
+	while (nbytes > sg->length) {
+		nbytes -= sg->length;
+		sg = sg_next(sg);
+	}
+	walk->sg = sg;
+	walk->offset = sg->offset + nbytes;
+}
+EXPORT_SYMBOL_GPL(scatterwalk_skip);
+
 static inline void memcpy_dir(void *buf, void *sgdata, size_t nbytes, int out)
 {
 	void *src = out ? buf : sgdata;
 	void *dst = out ? sgdata : buf;
 
diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h
index 924efbaefe67..5c7765f601e0 100644
--- a/include/crypto/scatterwalk.h
+++ b/include/crypto/scatterwalk.h
@@ -31,10 +31,26 @@ static inline void scatterwalk_start(struct scatter_walk *walk,
 {
 	walk->sg = sg;
 	walk->offset = sg->offset;
 }
 
+/*
+ * This is equivalent to scatterwalk_start(walk, sg) followed by
+ * scatterwalk_skip(walk, pos).
+ */
+static inline void scatterwalk_start_at_pos(struct scatter_walk *walk,
+					    struct scatterlist *sg,
+					    unsigned int pos)
+{
+	while (pos > sg->length) {
+		pos -= sg->length;
+		sg = sg_next(sg);
+	}
+	walk->sg = sg;
+	walk->offset = sg->offset + pos;
+}
+
 static inline unsigned int scatterwalk_pagelen(struct scatter_walk *walk)
 {
 	unsigned int len = walk->sg->offset + walk->sg->length - walk->offset;
 	unsigned int len_this_page = offset_in_page(~walk->offset) + 1;
 	return len_this_page > len ? len : len_this_page;
@@ -90,10 +106,12 @@ static inline void scatterwalk_done(struct scatter_walk *walk, int out,
 	if (!more || walk->offset >= walk->sg->offset + walk->sg->length ||
 	    !(walk->offset & (PAGE_SIZE - 1)))
 		scatterwalk_pagedone(walk, out, more);
 }
 
+void scatterwalk_skip(struct scatter_walk *walk, unsigned int nbytes);
+
 void scatterwalk_copychunks(void *buf, struct scatter_walk *walk,
 			    size_t nbytes, int out);
 void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg,
 			      unsigned int start, unsigned int nbytes, int out);
 

From patchwork Mon Dec 30 00:14:04 2024
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 Boris Pismenny, Jakub Kicinski, John Fastabend
Subject: [PATCH v2 15/29] crypto: scatterwalk - add scatterwalk_get_sglist()
Date: Sun, 29 Dec 2024 16:14:04 -0800
Message-ID: <20241230001418.74739-16-ebiggers@kernel.org>
In-Reply-To: <20241230001418.74739-1-ebiggers@kernel.org>

Add a function that creates a scatterlist that represents the remaining
data in a walk.
This will be used to replace chain_to_walk() in
net/tls/tls_device_fallback.c so that it will no longer need to reach
into the internals of struct scatter_walk.

Cc: Boris Pismenny
Cc: Jakub Kicinski
Cc: John Fastabend
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
This patch is part of a long series touching many files, so I have
limited the Cc list on the full series.  If you want the full series
and did not receive it, please retrieve it from lore.kernel.org.

 include/crypto/scatterwalk.h | 17 +++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h
index 1689ecd7ddaf..f6262d05a3c7 100644
--- a/include/crypto/scatterwalk.h
+++ b/include/crypto/scatterwalk.h
@@ -67,10 +67,27 @@ static inline unsigned int scatterwalk_clamp(struct scatter_walk *walk,
 
 static inline struct page *scatterwalk_page(struct scatter_walk *walk)
 {
 	return sg_page(walk->sg) + (walk->offset >> PAGE_SHIFT);
 }
 
+/*
+ * Create a scatterlist that represents the remaining data in a walk.  Uses
+ * chaining to reference the original scatterlist, so this uses at most two
+ * entries in @sg_out regardless of the number of entries in the original list.
+ * Assumes that sg_init_table() was already done.
+ */
+static inline void scatterwalk_get_sglist(struct scatter_walk *walk,
+					  struct scatterlist sg_out[2])
+{
+	if (walk->offset >= walk->sg->offset + walk->sg->length)
+		scatterwalk_start(walk, sg_next(walk->sg));
+	sg_set_page(sg_out, sg_page(walk->sg),
+		    walk->sg->offset + walk->sg->length - walk->offset,
+		    walk->offset);
+	scatterwalk_crypto_chain(sg_out, sg_next(walk->sg), 2);
+}
+
 static inline void scatterwalk_unmap(void *vaddr)
 {
 	kunmap_local(vaddr);
 }
 
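
A sketch of the intended use, assuming an in-progress scatter_walk named
'walk' (this is a hypothetical caller, not code from the series):

	struct scatterlist sg_rest[2];

	sg_init_table(sg_rest, 2);	/* required by the new helper */
	scatterwalk_get_sglist(&walk, sg_rest);
	/* sg_rest now begins at the walk's current position and chains to
	 * the original list, so it can be handed to any API that takes a
	 * struct scatterlist. */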
From patchwork Mon Dec 30 00:14:06 2024
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 17/29] crypto: aegis - use the new scatterwalk functions
Date: Sun, 29 Dec 2024 16:14:06 -0800
Message-ID: <20241230001418.74739-18-ebiggers@kernel.org>
In-Reply-To: <20241230001418.74739-1-ebiggers@kernel.org>

Use scatterwalk_next() which consolidates scatterwalk_clamp() and
scatterwalk_map(), and use scatterwalk_done_src() which consolidates
scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done().

Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 crypto/aegis128-core.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/crypto/aegis128-core.c b/crypto/aegis128-core.c
index 6cbff298722b..15d64d836356 100644
--- a/crypto/aegis128-core.c
+++ b/crypto/aegis128-core.c
@@ -282,14 +282,14 @@ static void crypto_aegis128_process_ad(struct aegis_state *state,
 	union aegis_block buf;
 	unsigned int pos = 0;
 
 	scatterwalk_start(&walk, sg_src);
 	while (assoclen != 0) {
-		unsigned int size = scatterwalk_clamp(&walk, assoclen);
+		unsigned int size;
+		const u8 *mapped = scatterwalk_next(&walk, assoclen, &size);
 		unsigned int left = size;
-		void *mapped = scatterwalk_map(&walk);
-		const u8 *src = (const u8 *)mapped;
+		const u8 *src = mapped;
 
 		if (pos + size >= AEGIS_BLOCK_SIZE) {
 			if (pos > 0) {
 				unsigned int fill = AEGIS_BLOCK_SIZE - pos;
 				memcpy(buf.bytes + pos, src, fill);
@@ -306,13 +306,11 @@ static void crypto_aegis128_process_ad(struct aegis_state *state,
 
 		memcpy(buf.bytes + pos, src, left);
 		pos += left;
 		assoclen -= size;
 
-		scatterwalk_unmap(mapped);
-		scatterwalk_advance(&walk, size);
-		scatterwalk_done(&walk, 0, assoclen);
+		scatterwalk_done_src(&walk, mapped, size);
 	}
 
 	if (pos > 0) {
 		memset(buf.bytes + pos, 0, AEGIS_BLOCK_SIZE - pos);
 		crypto_aegis128_update_a(state, &buf, do_simd);
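
The generic shape of this conversion, and of the following ones, as a
sketch; process_chunk() is a stand-in for the per-algorithm work:

	struct scatter_walk walk;

	scatterwalk_start(&walk, sg);
	while (len) {
		unsigned int n;
		const u8 *p = scatterwalk_next(&walk, len, &n); /* clamp + map */

		process_chunk(p, n);			/* algorithm-specific */
		scatterwalk_done_src(&walk, p, n);	/* unmap + advance + done */
		len -= n;
	}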
From patchwork Mon Dec 30 00:14:07 2024
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 18/29] crypto: arm/ghash - use the new scatterwalk functions
Date: Sun, 29 Dec 2024 16:14:07 -0800
Message-ID: <20241230001418.74739-19-ebiggers@kernel.org>
In-Reply-To: <20241230001418.74739-1-ebiggers@kernel.org>

Use scatterwalk_next() which consolidates scatterwalk_clamp() and
scatterwalk_map(), and use scatterwalk_done_src() which consolidates
scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done().

Remove unnecessary code that seemed to be intended to advance to the
next sg entry, which is already handled by the scatterwalk functions.
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 arch/arm/crypto/ghash-ce-glue.c | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/arch/arm/crypto/ghash-ce-glue.c b/arch/arm/crypto/ghash-ce-glue.c
index 3af997082534..9613ffed84f9 100644
--- a/arch/arm/crypto/ghash-ce-glue.c
+++ b/arch/arm/crypto/ghash-ce-glue.c
@@ -457,30 +457,23 @@ static void gcm_calculate_auth_mac(struct aead_request *req, u64 dg[], u32 len)
 	int buf_count = 0;
 
 	scatterwalk_start(&walk, req->src);
 
 	do {
-		u32 n = scatterwalk_clamp(&walk, len);
-		u8 *p;
+		unsigned int n;
+		const u8 *p;
 
-		if (!n) {
-			scatterwalk_start(&walk, sg_next(walk.sg));
-			n = scatterwalk_clamp(&walk, len);
-		}
-
-		p = scatterwalk_map(&walk);
+		p = scatterwalk_next(&walk, len, &n);
 
 		gcm_update_mac(dg, p, n, buf, &buf_count, ctx);
-		scatterwalk_unmap(p);
+		scatterwalk_done_src(&walk, p, n);
 
 		if (unlikely(len / SZ_4K > (len - n) / SZ_4K)) {
 			kernel_neon_end();
 			kernel_neon_begin();
 		}
 
 		len -= n;
-		scatterwalk_advance(&walk, n);
-		scatterwalk_done(&walk, 0, len);
 	} while (len);
 
 	if (buf_count) {
 		memset(&buf[buf_count], 0, GHASH_BLOCK_SIZE - buf_count);
 		pmull_ghash_update_p64(1, dg, buf, ctx->h, NULL);

From patchwork Mon Dec 30 00:14:09 2024
Michael Ellerman , Naveen N Rao , Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 20/29] crypto: nx - use the new scatterwalk functions Date: Sun, 29 Dec 2024 16:14:09 -0800 Message-ID: <20241230001418.74739-21-ebiggers@kernel.org> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241230001418.74739-1-ebiggers@kernel.org> References: <20241230001418.74739-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers - In nx_walk_and_build(), use scatterwalk_start_at_pos() instead of a more complex way to achieve the same result. - Also in nx_walk_and_build(), use the new functions scatterwalk_next() which consolidates scatterwalk_clamp() and scatterwalk_map(), and use scatterwalk_done_src() which consolidates scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done(). Remove unnecessary code that seemed to be intended to advance to the next sg entry, which is already handled by the scatterwalk functions. Note that nx_walk_and_build() does not actually read or write the mapped virtual address, and thus it is misusing the scatter_walk API. It really should just access the scatterlist directly. This patch does not try to address this existing issue. - In nx_gca(), use memcpy_from_sglist() instead of a more complex way to achieve the same result. - In various functions, replace calls to scatterwalk_map_and_copy() with memcpy_from_sglist() or memcpy_to_sglist() as appropriate. Note that this eliminates the confusing 'out' argument (which this driver had tried to work around by defining the missing constants for it...) Cc: Christophe Leroy Cc: Madhavan Srinivasan Cc: Michael Ellerman Cc: Naveen N Rao Cc: Nicholas Piggin Cc: linuxppc-dev@lists.ozlabs.org Signed-off-by: Eric Biggers --- This patch is part of a long series touching many files, so I have limited the Cc list on the full series. If you want the full series and did not receive it, please retrieve it from lore.kernel.org. drivers/crypto/nx/nx-aes-ccm.c | 16 ++++++---------- drivers/crypto/nx/nx-aes-gcm.c | 17 ++++++----------- drivers/crypto/nx/nx.c | 31 +++++-------------------------- drivers/crypto/nx/nx.h | 3 --- 4 files changed, 17 insertions(+), 50 deletions(-) diff --git a/drivers/crypto/nx/nx-aes-ccm.c b/drivers/crypto/nx/nx-aes-ccm.c index c843f4c6f684..56a0b3a67c33 100644 --- a/drivers/crypto/nx/nx-aes-ccm.c +++ b/drivers/crypto/nx/nx-aes-ccm.c @@ -215,17 +215,15 @@ static int generate_pat(u8 *iv, */ if (b1) { memset(b1, 0, 16); if (assoclen <= 65280) { *(u16 *)b1 = assoclen; - scatterwalk_map_and_copy(b1 + 2, req->src, 0, - iauth_len, SCATTERWALK_FROM_SG); + memcpy_from_sglist(b1 + 2, req->src, 0, iauth_len); } else { *(u16 *)b1 = (u16)(0xfffe); *(u32 *)&b1[2] = assoclen; - scatterwalk_map_and_copy(b1 + 6, req->src, 0, - iauth_len, SCATTERWALK_FROM_SG); + memcpy_from_sglist(b1 + 6, req->src, 0, iauth_len); } } /* now copy any remaining AAD to scatterlist and call nx... 
*/ if (!assoclen) { @@ -339,13 +337,12 @@ static int ccm_nx_decrypt(struct aead_request *req, spin_lock_irqsave(&nx_ctx->lock, irq_flags); nbytes -= authsize; /* copy out the auth tag to compare with later */ - scatterwalk_map_and_copy(priv->oauth_tag, - req->src, nbytes + req->assoclen, authsize, - SCATTERWALK_FROM_SG); + memcpy_from_sglist(priv->oauth_tag, req->src, nbytes + req->assoclen, + authsize); rc = generate_pat(iv, req, nx_ctx, authsize, nbytes, assoclen, csbcpb->cpb.aes_ccm.in_pat_or_b0); if (rc) goto out; @@ -463,13 +460,12 @@ static int ccm_nx_encrypt(struct aead_request *req, processed += to_process; } while (processed < nbytes); /* copy out the auth tag */ - scatterwalk_map_and_copy(csbcpb->cpb.aes_ccm.out_pat_or_mac, - req->dst, nbytes + req->assoclen, authsize, - SCATTERWALK_TO_SG); + memcpy_to_sglist(req->dst, nbytes + req->assoclen, + csbcpb->cpb.aes_ccm.out_pat_or_mac, authsize); out: spin_unlock_irqrestore(&nx_ctx->lock, irq_flags); return rc; } diff --git a/drivers/crypto/nx/nx-aes-gcm.c b/drivers/crypto/nx/nx-aes-gcm.c index 4a796318b430..b7fe2de96d96 100644 --- a/drivers/crypto/nx/nx-aes-gcm.c +++ b/drivers/crypto/nx/nx-aes-gcm.c @@ -101,20 +101,17 @@ static int nx_gca(struct nx_crypto_ctx *nx_ctx, u8 *out, unsigned int assoclen) { int rc; struct nx_csbcpb *csbcpb_aead = nx_ctx->csbcpb_aead; - struct scatter_walk walk; struct nx_sg *nx_sg = nx_ctx->in_sg; unsigned int nbytes = assoclen; unsigned int processed = 0, to_process; unsigned int max_sg_len; if (nbytes <= AES_BLOCK_SIZE) { - scatterwalk_start(&walk, req->src); - scatterwalk_copychunks(out, &walk, nbytes, SCATTERWALK_FROM_SG); - scatterwalk_done(&walk, SCATTERWALK_FROM_SG, 0); + memcpy_from_sglist(out, req->src, 0, nbytes); return 0; } NX_CPB_FDM(csbcpb_aead) &= ~NX_FDM_CONTINUATION; @@ -389,23 +386,21 @@ static int gcm_aes_nx_crypt(struct aead_request *req, int enc, } while (processed < nbytes); mac: if (enc) { /* copy out the auth tag */ - scatterwalk_map_and_copy( - csbcpb->cpb.aes_gcm.out_pat_or_mac, + memcpy_to_sglist( req->dst, req->assoclen + nbytes, - crypto_aead_authsize(crypto_aead_reqtfm(req)), - SCATTERWALK_TO_SG); + csbcpb->cpb.aes_gcm.out_pat_or_mac, + crypto_aead_authsize(crypto_aead_reqtfm(req))); } else { u8 *itag = nx_ctx->priv.gcm.iauth_tag; u8 *otag = csbcpb->cpb.aes_gcm.out_pat_or_mac; - scatterwalk_map_and_copy( + memcpy_from_sglist( itag, req->src, req->assoclen + nbytes, - crypto_aead_authsize(crypto_aead_reqtfm(req)), - SCATTERWALK_FROM_SG); + crypto_aead_authsize(crypto_aead_reqtfm(req))); rc = crypto_memneq(itag, otag, crypto_aead_authsize(crypto_aead_reqtfm(req))) ? 
-EBADMSG : 0; } out: diff --git a/drivers/crypto/nx/nx.c b/drivers/crypto/nx/nx.c index 010e87d9da36..dd95e5361d88 100644 --- a/drivers/crypto/nx/nx.c +++ b/drivers/crypto/nx/nx.c @@ -151,44 +151,23 @@ struct nx_sg *nx_walk_and_build(struct nx_sg *nx_dst, unsigned int start, unsigned int *src_len) { struct scatter_walk walk; struct nx_sg *nx_sg = nx_dst; - unsigned int n, offset = 0, len = *src_len; + unsigned int n, len = *src_len; char *dst; /* we need to fast forward through @start bytes first */ - for (;;) { - scatterwalk_start(&walk, sg_src); - - if (start < offset + sg_src->length) - break; - - offset += sg_src->length; - sg_src = sg_next(sg_src); - } - - /* start - offset is the number of bytes to advance in the scatterlist - * element we're currently looking at */ - scatterwalk_advance(&walk, start - offset); + scatterwalk_start_at_pos(&walk, sg_src, start); while (len && (nx_sg - nx_dst) < sglen) { - n = scatterwalk_clamp(&walk, len); - if (!n) { - /* In cases where we have scatterlist chain sg_next - * handles with it properly */ - scatterwalk_start(&walk, sg_next(walk.sg)); - n = scatterwalk_clamp(&walk, len); - } - dst = scatterwalk_map(&walk); + dst = scatterwalk_next(&walk, len, &n); nx_sg = nx_build_sg_list(nx_sg, dst, &n, sglen - (nx_sg - nx_dst)); - len -= n; - scatterwalk_unmap(dst); - scatterwalk_advance(&walk, n); - scatterwalk_done(&walk, SCATTERWALK_FROM_SG, len); + scatterwalk_done_src(&walk, dst, n); + len -= n; } /* update to_process */ *src_len -= len; /* return the moved destination pointer */ diff --git a/drivers/crypto/nx/nx.h b/drivers/crypto/nx/nx.h index 2697baebb6a3..e1b4b6927bec 100644 --- a/drivers/crypto/nx/nx.h +++ b/drivers/crypto/nx/nx.h @@ -187,9 +187,6 @@ extern struct shash_alg nx_shash_aes_xcbc_alg; extern struct shash_alg nx_shash_sha512_alg; extern struct shash_alg nx_shash_sha256_alg; extern struct nx_crypto_driver nx_driver; -#define SCATTERWALK_TO_SG 1 -#define SCATTERWALK_FROM_SG 0 - #endif From patchwork Mon Dec 30 00:14:10 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 854274 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 929291A0B15; Mon, 30 Dec 2024 00:16:06 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735517766; cv=none; b=dccLli6OUecuJ2L8AgjrFr2O1df9+0p5mRvtamYIFgu/5AJCAEm4wsaEqT+LGinEZXYZ1P+6xnrw/do5MIj5g9qIaM6iSafLWyAfbmMDWx9pVknsMQlT5pAXXeXUlmsFJl4UWmvndA89YncCzPS8QbhW4FEVC3OCl7cv+bNLaZk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735517766; c=relaxed/simple; bh=H979TZHDgKLaUbglwnxL0UQbpoLtqMQtPXY6pBrdkHY=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=K1XM3f5rYBTkfR5oKW98i9T6hWb3wWKTIPJFsn1sYJDdpTOsrrX+q6XOVFur8lWLlhuHxSC/CamzgkY3iipL+/mKdqFjAg5Yobbr/Opc5bPmqViUMsLBv8unpZEJ3pJqqMy0uJu+ymgqpShsdF+n5lhLswVqLd1rhbkfnIdbOFw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=OVv7ieP8; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="OVv7ieP8" 
Received: by smtp.kernel.org (Postfix) with ESMTPSA id F2CCCC4CEDC; Mon, 30 Dec 2024 00:16:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1735517766; bh=H979TZHDgKLaUbglwnxL0UQbpoLtqMQtPXY6pBrdkHY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=OVv7ieP8SjYk/HjaXd0tHHAvWhYyEeizW7IRUKiiA8xqB5Yq63CHK6thVfRetcYdY XrzADxDGMfIio8KG4x38IIh558UFNmRQ95drvi+0u275GXhE/ZJT3YiTenqo2BB9JC LLGpYM7FxTJ/b8vgYPOfCIIKMZZ9IGbVCuWwV9AXSaIEJvlrfQdUEXToZD2+TQ5zA0 uU8xcsI/+F8GRsBGYCAjCT5PpF456KOLQn7/0vH8LjIm08dA0oEbZpEzrYSmIcshQ0 d3ObtqAq5VVH6+bHquDAr3RJevWu8J+5hWPWQF6dqDblo5zHstQJQrO0TMQZUtiS7v wlKCqJxFSDOQA== From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Harald Freudenberger , Holger Dengler , linux-s390@vger.kernel.org Subject: [PATCH v2 21/29] crypto: s390/aes-gcm - use the new scatterwalk functions Date: Sun, 29 Dec 2024 16:14:10 -0800 Message-ID: <20241230001418.74739-22-ebiggers@kernel.org> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241230001418.74739-1-ebiggers@kernel.org> References: <20241230001418.74739-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers Use scatterwalk_next() which consolidates scatterwalk_clamp() and scatterwalk_map(). Use scatterwalk_done_src() and scatterwalk_done_dst() which consolidate scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done(). Besides the new functions being a bit easier to use, this is necessary because scatterwalk_done() is planned to be removed. Cc: Harald Freudenberger Cc: Holger Dengler Cc: linux-s390@vger.kernel.org Signed-off-by: Eric Biggers --- This patch is part of a long series touching many files, so I have limited the Cc list on the full series. If you want the full series and did not receive it, please retrieve it from lore.kernel.org. 
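For clarity, here is a minimal sketch of the in/out split this conversion introduces, closely following _gcm_sg_unmap_and_advance() above but with the gw bookkeeping stripped out: the caller now has to say whether the mapped chunk was written, because the destination variant also performs the dcache flushing that the old scatterwalk_done(..., out, ...) path did.

#include <crypto/scatterwalk.h>

static void unmap_and_advance_sketch(struct scatter_walk *walk, void *ptr,
				     unsigned int nbytes, bool out)
{
	if (out)
		/* Chunk was written to: unmap, flush, and advance. */
		scatterwalk_done_dst(walk, ptr, nbytes);
	else
		/* Chunk was only read: unmap and advance. */
		scatterwalk_done_src(walk, ptr, nbytes);
}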
arch/s390/crypto/aes_s390.c | 33 +++++++++++++-------------------- 1 file changed, 13 insertions(+), 20 deletions(-) diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c index 9c46b1b630b1..7fd303df05ab 100644 --- a/arch/s390/crypto/aes_s390.c +++ b/arch/s390/crypto/aes_s390.c @@ -785,32 +785,25 @@ static void gcm_walk_start(struct gcm_sg_walk *gw, struct scatterlist *sg, scatterwalk_start(&gw->walk, sg); } static inline unsigned int _gcm_sg_clamp_and_map(struct gcm_sg_walk *gw) { - struct scatterlist *nextsg; - - gw->walk_bytes = scatterwalk_clamp(&gw->walk, gw->walk_bytes_remain); - while (!gw->walk_bytes) { - nextsg = sg_next(gw->walk.sg); - if (!nextsg) - return 0; - scatterwalk_start(&gw->walk, nextsg); - gw->walk_bytes = scatterwalk_clamp(&gw->walk, - gw->walk_bytes_remain); - } - gw->walk_ptr = scatterwalk_map(&gw->walk); + if (gw->walk_bytes_remain == 0) + return 0; + gw->walk_ptr = scatterwalk_next(&gw->walk, gw->walk_bytes_remain, + &gw->walk_bytes); return gw->walk_bytes; } static inline void _gcm_sg_unmap_and_advance(struct gcm_sg_walk *gw, - unsigned int nbytes) + unsigned int nbytes, bool out) { gw->walk_bytes_remain -= nbytes; - scatterwalk_unmap(gw->walk_ptr); - scatterwalk_advance(&gw->walk, nbytes); - scatterwalk_done(&gw->walk, 0, gw->walk_bytes_remain); + if (out) + scatterwalk_done_dst(&gw->walk, gw->walk_ptr, nbytes); + else + scatterwalk_done_src(&gw->walk, gw->walk_ptr, nbytes); gw->walk_ptr = NULL; } static int gcm_in_walk_go(struct gcm_sg_walk *gw, unsigned int minbytesneeded) { @@ -842,11 +835,11 @@ static int gcm_in_walk_go(struct gcm_sg_walk *gw, unsigned int minbytesneeded) while (1) { n = min(gw->walk_bytes, AES_BLOCK_SIZE - gw->buf_bytes); memcpy(gw->buf + gw->buf_bytes, gw->walk_ptr, n); gw->buf_bytes += n; - _gcm_sg_unmap_and_advance(gw, n); + _gcm_sg_unmap_and_advance(gw, n, false); if (gw->buf_bytes >= minbytesneeded) { gw->ptr = gw->buf; gw->nbytes = gw->buf_bytes; goto out; } @@ -902,11 +895,11 @@ static int gcm_in_walk_done(struct gcm_sg_walk *gw, unsigned int bytesdone) memmove(gw->buf, gw->buf + bytesdone, n); gw->buf_bytes = n; } else gw->buf_bytes = 0; } else - _gcm_sg_unmap_and_advance(gw, bytesdone); + _gcm_sg_unmap_and_advance(gw, bytesdone, false); return bytesdone; } static int gcm_out_walk_done(struct gcm_sg_walk *gw, unsigned int bytesdone) @@ -920,14 +913,14 @@ static int gcm_out_walk_done(struct gcm_sg_walk *gw, unsigned int bytesdone) for (i = 0; i < bytesdone; i += n) { if (!_gcm_sg_clamp_and_map(gw)) return i; n = min(gw->walk_bytes, bytesdone - i); memcpy(gw->walk_ptr, gw->buf + i, n); - _gcm_sg_unmap_and_advance(gw, n); + _gcm_sg_unmap_and_advance(gw, n, true); } } else - _gcm_sg_unmap_and_advance(gw, bytesdone); + _gcm_sg_unmap_and_advance(gw, bytesdone, true); return bytesdone; } static int gcm_aes_crypt(struct aead_request *req, unsigned int flags) From patchwork Mon Dec 30 00:14:13 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 854272 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CAE971A3031; Mon, 30 Dec 2024 00:16:07 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735517767; cv=none; 
b=JkB68abSAHv9+nkIf+of9gR6TTRqV9bG07FX492ZsBq58EWAybTpFJpJv05fKm9zchjxLsQV56nkrZBWkJc6mRz6tYuGw510HFi2+J1qVUFdnMz9Eu2RN3FMAzDE7MWfce7Qhhrc1llgfREs26c0VNK/ReTBjuuUxXlmreYE9hM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735517767; c=relaxed/simple; bh=Id9OlnOz/d+SiVoBEHgQnOUjd+cNJqgxBPN0qKJ4qTw=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=kHlr8g1sHJJwrt6DUvXSgY+RAh3CVbE+urvCGJK2+39FdIslAy4hYU/6+QTdJGUe1fFRD58v7ehRHZfxTRgFoK3PEb1bHAqc0nkiBlZX/udYKmgwCguq0mlnlQImhmWpHe+pmhsHq6Yy9uzOi2HfBTF1mOIqpel29kJIt/BUSEQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=kTWlGtUP; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="kTWlGtUP" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8B1E7C4CED1; Mon, 30 Dec 2024 00:16:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1735517767; bh=Id9OlnOz/d+SiVoBEHgQnOUjd+cNJqgxBPN0qKJ4qTw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=kTWlGtUP6QMja6h5RgOVuD1T6HdMyJyy6Vr46sHKS9dvrg+mUj7QHHKHHzH/nGmog 043XOYVSuxYfCP/6JvA2/tWqixBMZs18+3ZI40n7Tom0OjsHdxwogEcn+dZHXig67v 5ch83O2l9x0CSPuJeyNVa7n3jP3yGY9LIXhn3p9F5hRRhu9UtLk9zXZ9Qe0NS8aI2P WjyB13eMb/Q+CVsEX+Msim6kgdCPhqgMZNDyFYNI3wms0FWdGNeUNG6Sa0kCnwWAdE oJ26qevfjPPByvynZeOS7nYJYMtErlv+JoljMA5DCgQ0fq6FMGJgYSHQdush4vz7hO G0xpE53SXAaNg== From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 24/29] crypto: x86/aes-gcm - use the new scatterwalk functions Date: Sun, 29 Dec 2024 16:14:13 -0800 Message-ID: <20241230001418.74739-25-ebiggers@kernel.org> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241230001418.74739-1-ebiggers@kernel.org> References: <20241230001418.74739-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers In gcm_process_assoc(), use scatterwalk_next() which consolidates scatterwalk_clamp() and scatterwalk_map(). Use scatterwalk_done_src() which consolidates scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done(). Also rename some variables to avoid implying that anything is actually mapped (it's not), or that the loop is going page by page (it is for now, but nothing actually requires that to be the case). 
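For context, the tricky part of gcm_process_assoc() is that a scatterlist step can begin or end in the middle of a 16-byte GHASH block. The sketch below reimplements just that buffering scheme in simplified form; block_update() is a hypothetical stand-in for aes_gcm_aad_update() (which also takes the key, accumulator, and flags), and any final partial block left in 'buf' is flushed by the caller after the walk ends, as in the real function.

#include <linux/math.h>
#include <linux/minmax.h>
#include <linux/string.h>
#include <linux/types.h>

/* Hypothetical stand-in for aes_gcm_aad_update(). */
static void block_update(const u8 *data, unsigned int len);

static void feed_step(const u8 *src, unsigned int len,
		      u8 buf[16], unsigned int *pos, bool more)
{
	unsigned int n;

	if (*pos) {			/* complete a carried partial block */
		n = min(len, 16 - *pos);
		memcpy(&buf[*pos], src, n);
		*pos += n;
		src += n;
		len -= n;
		if (*pos < 16)
			return;		/* still partial; wait for more data */
		block_update(buf, 16);
		*pos = 0;
	}
	/* Unless this is the last step, feed only whole 16-byte blocks. */
	n = more ? round_down(len, 16) : len;
	block_update(src, n);
	src += n;
	len -= n;
	if (len) {			/* carry the new tail into 'buf' */
		memcpy(buf, src, len);
		*pos = len;
	}
}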
Signed-off-by: Eric Biggers --- arch/x86/crypto/aesni-intel_glue.c | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c index 11e95fc62636..22e61efbf5fe 100644 --- a/arch/x86/crypto/aesni-intel_glue.c +++ b/arch/x86/crypto/aesni-intel_glue.c @@ -1289,45 +1289,45 @@ static void gcm_process_assoc(const struct aes_gcm_key *key, u8 ghash_acc[16], memset(ghash_acc, 0, 16); scatterwalk_start(&walk, sg_src); while (assoclen) { - unsigned int len_this_page = scatterwalk_clamp(&walk, assoclen); - void *mapped = scatterwalk_map(&walk); - const void *src = mapped; + unsigned int orig_len_this_step; + const u8 *orig_src = scatterwalk_next(&walk, assoclen, + &orig_len_this_step); + unsigned int len_this_step = orig_len_this_step; unsigned int len; + const u8 *src = orig_src; - assoclen -= len_this_page; - scatterwalk_advance(&walk, len_this_page); if (unlikely(pos)) { - len = min(len_this_page, 16 - pos); + len = min(len_this_step, 16 - pos); memcpy(&buf[pos], src, len); pos += len; src += len; - len_this_page -= len; + len_this_step -= len; if (pos < 16) goto next; aes_gcm_aad_update(key, ghash_acc, buf, 16, flags); pos = 0; } - len = len_this_page; + len = len_this_step; if (unlikely(assoclen)) /* Not the last segment yet? */ len = round_down(len, 16); aes_gcm_aad_update(key, ghash_acc, src, len, flags); src += len; - len_this_page -= len; - if (unlikely(len_this_page)) { - memcpy(buf, src, len_this_page); - pos = len_this_page; + len_this_step -= len; + if (unlikely(len_this_step)) { + memcpy(buf, src, len_this_step); + pos = len_this_step; } next: - scatterwalk_unmap(mapped); - scatterwalk_pagedone(&walk, 0, assoclen); + scatterwalk_done_src(&walk, orig_src, orig_len_this_step); if (need_resched()) { kernel_fpu_end(); kernel_fpu_begin(); } + assoclen -= orig_len_this_step; } if (unlikely(pos)) aes_gcm_aad_update(key, ghash_acc, buf, pos, flags); } From patchwork Mon Dec 30 00:14:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 854271 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C33D51A83F8; Mon, 30 Dec 2024 00:16:08 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735517768; cv=none; b=cY/xMXDTlnVJU2AGpcd+awAhp/rcgDFtm+6e7VWUPmIL1gaC/flj9W01RmQOVRLbsduj7YdG2puN6mQOn8syhGim6nx1gemKOIb8z/KPi6ZLWDjqVHDafJyIG7pGxVdUfw+bd/X463jqP0xNqzRovJuKw2xjjEXbtXhHDGRdOSc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735517768; c=relaxed/simple; bh=5aJ14qFdsp3EjNcjdgSBEGYXvCZe209dFMhq2DFCmN0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=JtFo1lBM/sHfXfmsc9Gv+ZC1vSlhs22ZtFpDFwWHix41A49jjzMu13M413FVVbowxTe9oUZ30nOJ166QprapPN3syM72jLn2ezKeTeCxgTdMIOs1at+z9qp0wO00EtHEMQTiOiWYUyQnd94qhGbkF9UmLhI+DEJqMZSjCcHiBbI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=Zrt3lNbn; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org 
header.b="Zrt3lNbn" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1E78AC4CEE0; Mon, 30 Dec 2024 00:16:08 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1735517768; bh=5aJ14qFdsp3EjNcjdgSBEGYXvCZe209dFMhq2DFCmN0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Zrt3lNbnAS43581HvbVTvyRO4az38cnZlbFuRpBQrSK3GsYN4oF3p3E0VWeTa3Duo Ll9IO0k0TUUpHi4tdE7bWdKGOZPBXxgiJ8snA8VnFtmbzWB46jZ1RGojGVpW4srM7W LTdaS88R/xy77WlL7Gl9N+6jW3o3cx5xnDvm5bCV5GC3QgkRfO3qtk9hAOKgKuN/U9 eUKxW+W/ZLpFHtTlLCxNXTo1ZyPvuZD1WPM/rj5qiyDWX4GO7W07zTOYmitk5T6wDI okvbOlw6X4VLjdj69i6K5OyVowqZb8R1gQFPxwUfDI+0eFCMux+vZLXw5k04KSYWiB Yx5ecKor+UVnw== From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Boris Pismenny , Jakub Kicinski , John Fastabend Subject: [PATCH v2 26/29] net/tls: use the new scatterwalk functions Date: Sun, 29 Dec 2024 16:14:15 -0800 Message-ID: <20241230001418.74739-27-ebiggers@kernel.org> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241230001418.74739-1-ebiggers@kernel.org> References: <20241230001418.74739-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers Replace calls to the deprecated function scatterwalk_copychunks() with memcpy_from_scatterwalk(), memcpy_to_scatterwalk(), or scatterwalk_skip() as appropriate. The new functions generally behave more as expected and eliminate the need to call scatterwalk_done() or scatterwalk_pagedone(). However, the new functions intentionally do not advance to the next sg entry right away, which would have broken chain_to_walk() which is accessing the fields of struct scatter_walk directly. To avoid this, replace chain_to_walk() with scatterwalk_get_sglist() which supports the needed functionality. Cc: Boris Pismenny Cc: Jakub Kicinski Cc: John Fastabend Signed-off-by: Eric Biggers --- This patch is part of a long series touching many files, so I have limited the Cc list on the full series. If you want the full series and did not receive it, please retrieve it from lore.kernel.org. 
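To make the mapping concrete, here are the three replacement idioms in one place, as a sketch rather than the actual tls_enc_record() flow; each call advances the walk past the bytes it handles, which is why the scatterwalk_done()/scatterwalk_pagedone() calls could simply be dropped.

#include <crypto/scatterwalk.h>

static void tls_walk_idioms_sketch(struct scatter_walk *in,
				   struct scatter_walk *out,
				   struct scatterlist rest[2],
				   unsigned int n)
{
	u8 tmp[64];	/* sketch only; assumes n <= sizeof(tmp) */

	memcpy_from_scatterwalk(tmp, in, n);	/* was copychunks(buf, in, n, 0) */
	memcpy_to_scatterwalk(out, tmp, n);	/* was copychunks(buf, out, n, 1) */
	scatterwalk_skip(in, n);		/* was copychunks(NULL, in, n, 2) */

	/* Expose the remainder of a walk as a two-entry scatterlist,
	 * replacing the removed chain_to_walk() helper. */
	scatterwalk_get_sglist(in, rest);
}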
net/tls/tls_device_fallback.c | 31 ++++++------------------------- 1 file changed, 6 insertions(+), 25 deletions(-) diff --git a/net/tls/tls_device_fallback.c b/net/tls/tls_device_fallback.c index f9e3d3d90dcf..03d508a45aae 100644 --- a/net/tls/tls_device_fallback.c +++ b/net/tls/tls_device_fallback.c @@ -35,21 +35,10 @@ #include #include #include "tls.h" -static void chain_to_walk(struct scatterlist *sg, struct scatter_walk *walk) -{ - struct scatterlist *src = walk->sg; - int diff = walk->offset - src->offset; - - sg_set_page(sg, sg_page(src), - src->length - diff, walk->offset); - - scatterwalk_crypto_chain(sg, sg_next(src), 2); -} - static int tls_enc_record(struct aead_request *aead_req, struct crypto_aead *aead, char *aad, char *iv, __be64 rcd_sn, struct scatter_walk *in, struct scatter_walk *out, int *in_len, @@ -67,20 +56,17 @@ static int tls_enc_record(struct aead_request *aead_req, DEBUG_NET_WARN_ON_ONCE(!cipher_desc || !cipher_desc->offloadable); buf_size = TLS_HEADER_SIZE + cipher_desc->iv; len = min_t(int, *in_len, buf_size); - scatterwalk_copychunks(buf, in, len, 0); - scatterwalk_copychunks(buf, out, len, 1); + memcpy_from_scatterwalk(buf, in, len); + memcpy_to_scatterwalk(out, buf, len); *in_len -= len; if (!*in_len) return 0; - scatterwalk_pagedone(in, 0, 1); - scatterwalk_pagedone(out, 1, 1); - len = buf[4] | (buf[3] << 8); len -= cipher_desc->iv; tls_make_aad(aad, len - cipher_desc->tag, (char *)&rcd_sn, buf[0], prot); @@ -88,12 +74,12 @@ static int tls_enc_record(struct aead_request *aead_req, sg_init_table(sg_in, ARRAY_SIZE(sg_in)); sg_init_table(sg_out, ARRAY_SIZE(sg_out)); sg_set_buf(sg_in, aad, TLS_AAD_SPACE_SIZE); sg_set_buf(sg_out, aad, TLS_AAD_SPACE_SIZE); - chain_to_walk(sg_in + 1, in); - chain_to_walk(sg_out + 1, out); + scatterwalk_get_sglist(in, sg_in + 1); + scatterwalk_get_sglist(out, sg_out + 1); *in_len -= len; if (*in_len < 0) { *in_len += cipher_desc->tag; /* the input buffer doesn't contain the entire record. @@ -108,14 +94,12 @@ static int tls_enc_record(struct aead_request *aead_req, *in_len = 0; } if (*in_len) { - scatterwalk_copychunks(NULL, in, len, 2); - scatterwalk_pagedone(in, 0, 1); - scatterwalk_copychunks(NULL, out, len, 2); - scatterwalk_pagedone(out, 1, 1); + scatterwalk_skip(in, len); + scatterwalk_skip(out, len); } len -= cipher_desc->tag; aead_request_set_crypt(aead_req, sg_in, sg_out, len, iv); @@ -160,13 +144,10 @@ static int tls_enc_records(struct aead_request *aead_req, cpu_to_be64(rcd_sn), &in, &out, &len, prot); rcd_sn++; } while (rc == 0 && len); - scatterwalk_done(&in, 0, 0); - scatterwalk_done(&out, 1, 0); - return rc; } /* Can't use icsk->icsk_af_ops->send_check here because the ip addresses * might have been changed by NAT. 
From patchwork Mon Dec 30 00:14:17 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 854270 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id EEF441A8411; Mon, 30 Dec 2024 00:16:08 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735517769; cv=none; b=rILCd4lJHG1eNmxu96LP/CuhA6UxZaDF2x5EDEBLoIRZh9JSfFnsbkaBqfe4d1FGwB/DFq9Rmn7Sm6DxPNlOTnTw/cNz0v9pFJJUFnqs1ICaboCNsR7GNW6+hguFWkG8c7T8qX0d+dEIFH85nGytjds3BXadx59R03Lsoe0YHn0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735517769; c=relaxed/simple; bh=pMrTGKnRhvoJeyHAUlKFSPsItxe+R2jmbyf6RObZplM=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=KPcLkGwZw47F/dPbJqPykL5DMqkSbplYPp1/ZJo1UDyCJotb7SDwX1txsQIHcSqeKrIpzvJmqDMUviXDHY74Ij95Pj+ORxMASxlNgXZ2PeE6yhgZNeeugVV6LVQ6FwXBwkcw5DNe+7A3YU0lJwx2OpwA+KcvyEVdWN0vy9ezt+E= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=j43P1jYL; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="j43P1jYL" Received: by smtp.kernel.org (Postfix) with ESMTPSA id B62FDC4CEDC; Mon, 30 Dec 2024 00:16:08 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1735517768; bh=pMrTGKnRhvoJeyHAUlKFSPsItxe+R2jmbyf6RObZplM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=j43P1jYLNwozTAxRKkOgNrcVDZOpRlOgVajvZusCTpMbHeUq1irw/uz8v6ENkXYC4 8c1s4cvnItiCoA1dRuG7UaQl05bzs4AELaX312Bpeyg0esmTtGaoKbCnxvfAPL4pPz J/oWgmQDfFBN02jdyHtgCWFI9p+rwMEHuwzFCub+ih4KLuMAIYhfVseIPK704VYJ1K SruQxkn9rE/0vFy7i55ZU3Ndddv9kcNUKHIEsh/0J6oQQIfULPyUGCWVzcPw9RlyoN YS1b5NN60T+rfHozm8xPjQbNiMrGeA0jMwDx+N5euiX3bhSdxdKgJgLxzxG+1EIPyc NcTzzu1K+h/Zg== From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 28/29] crypto: scatterwalk - remove obsolete functions Date: Sun, 29 Dec 2024 16:14:17 -0800 Message-ID: <20241230001418.74739-29-ebiggers@kernel.org> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241230001418.74739-1-ebiggers@kernel.org> References: <20241230001418.74739-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers Remove various functions that are no longer used. Signed-off-by: Eric Biggers --- crypto/scatterwalk.c | 37 ------------------------------------ include/crypto/scatterwalk.h | 25 ------------------------ 2 files changed, 62 deletions(-) diff --git a/crypto/scatterwalk.c b/crypto/scatterwalk.c index 2e7a532152d6..87c080f565d4 100644 --- a/crypto/scatterwalk.c +++ b/crypto/scatterwalk.c @@ -28,47 +28,10 @@ void scatterwalk_skip(struct scatter_walk *walk, unsigned int nbytes) walk->sg = sg; walk->offset = sg->offset + nbytes; } EXPORT_SYMBOL_GPL(scatterwalk_skip); -static inline void memcpy_dir(void *buf, void *sgdata, size_t nbytes, int out) -{ - void *src = out ? buf : sgdata; - void *dst = out ? 
sgdata : buf; - - memcpy(dst, src, nbytes); -} - -void scatterwalk_copychunks(void *buf, struct scatter_walk *walk, - size_t nbytes, int out) -{ - for (;;) { - unsigned int len_this_page = scatterwalk_pagelen(walk); - u8 *vaddr; - - if (len_this_page > nbytes) - len_this_page = nbytes; - - if (out != 2) { - vaddr = scatterwalk_map(walk); - memcpy_dir(buf, vaddr, len_this_page, out); - scatterwalk_unmap(vaddr); - } - - scatterwalk_advance(walk, len_this_page); - - if (nbytes == len_this_page) - break; - - buf += len_this_page; - nbytes -= len_this_page; - - scatterwalk_pagedone(walk, out & 1, 1); - } -} -EXPORT_SYMBOL_GPL(scatterwalk_copychunks); - inline void memcpy_from_scatterwalk(void *buf, struct scatter_walk *walk, unsigned int nbytes) { do { const void *src_addr; diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h index f6262d05a3c7..ac03fdf88b2a 100644 --- a/include/crypto/scatterwalk.h +++ b/include/crypto/scatterwalk.h @@ -113,32 +113,10 @@ static inline void *scatterwalk_next(struct scatter_walk *walk, { *nbytes_ret = scatterwalk_clamp(walk, total); return scatterwalk_map(walk); } -static inline void scatterwalk_pagedone(struct scatter_walk *walk, int out, - unsigned int more) -{ - if (out) { - struct page *page; - - page = sg_page(walk->sg) + ((walk->offset - 1) >> PAGE_SHIFT); - flush_dcache_page(page); - } - - if (more && walk->offset >= walk->sg->offset + walk->sg->length) - scatterwalk_start(walk, sg_next(walk->sg)); -} - -static inline void scatterwalk_done(struct scatter_walk *walk, int out, - int more) -{ - if (!more || walk->offset >= walk->sg->offset + walk->sg->length || - !(walk->offset & (PAGE_SIZE - 1))) - scatterwalk_pagedone(walk, out, more); -} - static inline void scatterwalk_advance(struct scatter_walk *walk, unsigned int nbytes) { walk->offset += nbytes; } @@ -182,13 +160,10 @@ static inline void scatterwalk_done_dst(struct scatter_walk *walk, scatterwalk_advance(walk, nbytes); } void scatterwalk_skip(struct scatter_walk *walk, unsigned int nbytes); -void scatterwalk_copychunks(void *buf, struct scatter_walk *walk, - size_t nbytes, int out); - void memcpy_from_scatterwalk(void *buf, struct scatter_walk *walk, unsigned int nbytes); void memcpy_to_scatterwalk(struct scatter_walk *walk, const void *buf, unsigned int nbytes);