From patchwork Fri Oct 18 06:40:49 2024
From: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
Subject: [RFC PATCH v1 01/13] crypto: acomp - Add a poll() operation to acomp_alg and acomp_req
Date: Thu, 17 Oct 2024 23:40:49 -0700
Message-Id: <20241018064101.336232-2-kanchana.p.sridhar@intel.com>

For async compress/decompress, provide a way for the caller to poll
for completion rather than wait for an interrupt to signal it. Callers
can submit a request with crypto_acomp_compress() or
crypto_acomp_decompress() and then, instead of waiting on a
completion, call crypto_acomp_poll() to check for completion status.

This is useful for hardware accelerators where the overhead of
interrupts and completion waits is too expensive. The hardware
compress/decompress operations typically complete very quickly, so in
the vast majority of cases the added overhead of interrupt handling
and waiting for completions simply adds unnecessary delay and cancels
the gains of using the hardware acceleration.

Signed-off-by: Tom Zanussi
Signed-off-by: Kanchana P Sridhar
---
 crypto/acompress.c                  |  1 +
 include/crypto/acompress.h          | 18 ++++++++++++++++++
 include/crypto/internal/acompress.h |  1 +
 3 files changed, 20 insertions(+)

diff --git a/crypto/acompress.c b/crypto/acompress.c
index 6fdf0ff9f3c0..00ec7faa2714 100644
--- a/crypto/acompress.c
+++ b/crypto/acompress.c
@@ -71,6 +71,7 @@ static int crypto_acomp_init_tfm(struct crypto_tfm *tfm)

 	acomp->compress = alg->compress;
 	acomp->decompress = alg->decompress;
+	acomp->poll = alg->poll;
 	acomp->dst_free = alg->dst_free;
 	acomp->reqsize = alg->reqsize;

diff --git a/include/crypto/acompress.h b/include/crypto/acompress.h
index 54937b615239..65b5de30c8b1 100644
--- a/include/crypto/acompress.h
+++ b/include/crypto/acompress.h
@@ -51,6 +51,7 @@ struct acomp_req {
 struct crypto_acomp {
 	int (*compress)(struct acomp_req *req);
 	int (*decompress)(struct acomp_req *req);
+	int (*poll)(struct acomp_req *req);
 	void (*dst_free)(struct scatterlist *dst);
 	unsigned int reqsize;
 	struct crypto_tfm base;
@@ -265,4 +266,21 @@ static inline int crypto_acomp_decompress(struct acomp_req *req)
 	return crypto_acomp_reqtfm(req)->decompress(req);
 }

+/**
+ * crypto_acomp_poll() -- Invoke asynchronous poll operation
+ *
+ * Function invokes the asynchronous poll operation
+ *
+ * @req: asynchronous request
+ *
+ * Return: zero on poll completion, -EAGAIN if not complete, or
+ * error code in case of error
+ */
+static inline int crypto_acomp_poll(struct acomp_req *req)
+{
+	struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
+
+	return tfm->poll(req);
+}
+
 #endif
diff --git a/include/crypto/internal/acompress.h b/include/crypto/internal/acompress.h
index 8831edaafc05..fbf5f6c6eeb6 100644
--- a/include/crypto/internal/acompress.h
+++ b/include/crypto/internal/acompress.h
@@ -37,6 +37,7 @@ struct acomp_alg {

 	int (*compress)(struct acomp_req *req);
 	int (*decompress)(struct acomp_req *req);
+	int (*poll)(struct acomp_req *req);
 	void (*dst_free)(struct scatterlist *dst);
 	int (*init)(struct crypto_acomp *tfm);
 	void (*exit)(struct crypto_acomp *tfm);
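For illustration, the caller pattern this API enables (as later
patches in this series use it in testmgr and zswap) looks like the
sketch below. compress_polled() is a hypothetical helper, not part of
this patch, and it assumes the caller has already verified that
tfm->poll is non-NULL and set up the request with
acomp_request_set_params():

	/*
	 * Minimal sketch of the submit-then-poll caller pattern:
	 * submit, then busy-wait on poll() instead of sleeping on a
	 * completion. Hypothetical helper, not part of this patch.
	 */
	static int compress_polled(struct acomp_req *req)
	{
		int ret = crypto_acomp_compress(req);

		if (ret != -EINPROGRESS)
			return ret;	/* completed synchronously, or failed */

		for (;;) {
			ret = crypto_acomp_poll(req);
			if (ret == 0)
				return 0;	/* done; req->dlen is valid */
			if (ret != -EAGAIN)
				return ret;	/* hard error */
			cpu_relax();		/* -EAGAIN: still in flight */
		}
	}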
From patchwork Fri Oct 18 06:40:50 2024
From: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
Subject: [RFC PATCH v1 02/13] crypto: iaa - Add support for irq-less crypto async interface
Date: Thu, 17 Oct 2024 23:40:50 -0700
Message-Id: <20241018064101.336232-3-kanchana.p.sridhar@intel.com>

Add a crypto acomp poll() implementation so that callers can use true
async iaa compress/decompress without interrupts.

To use this mode with zswap, select the 'async' iaa_crypto driver
sync_mode:

  echo async > /sys/bus/dsa/drivers/crypto/sync_mode

This will cause the iaa_crypto driver to register its acomp_alg
implementation with a non-NULL poll() member; callers such as zswap
can check for its presence and use true async mode if found.

Signed-off-by: Tom Zanussi
Signed-off-by: Kanchana P Sridhar
---
 drivers/crypto/intel/iaa/iaa_crypto_main.c | 74 ++++++++++++++++++++++
 1 file changed, 74 insertions(+)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index 237f87000070..6a8577ac1330 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -1788,6 +1788,74 @@ static void compression_ctx_init(struct iaa_compression_ctx *ctx)
 	ctx->use_irq = use_irq;
 }

+static int iaa_comp_poll(struct acomp_req *req)
+{
+	struct idxd_desc *idxd_desc;
+	struct idxd_device *idxd;
+	struct iaa_wq *iaa_wq;
+	struct pci_dev *pdev;
+	struct device *dev;
+	struct idxd_wq *wq;
+	bool compress_op;
+	int ret;
+
+	idxd_desc = req->base.data;
+	if (!idxd_desc)
+		return -EAGAIN;
+
+	compress_op = (idxd_desc->iax_hw->opcode == IAX_OPCODE_COMPRESS);
+	wq = idxd_desc->wq;
+	iaa_wq = idxd_wq_get_private(wq);
+	idxd = iaa_wq->iaa_device->idxd;
+	pdev = idxd->pdev;
+	dev = &pdev->dev;
+
+	ret = check_completion(dev, idxd_desc->iax_completion, true, true);
+	if (ret == -EAGAIN)
+		return ret;
+	if (ret)
+		goto out;
+
+	req->dlen = idxd_desc->iax_completion->output_size;
+
+	/* Update stats */
+	if (compress_op) {
+		update_total_comp_bytes_out(req->dlen);
+		update_wq_comp_bytes(wq, req->dlen);
+	} else {
+		update_total_decomp_bytes_in(req->slen);
+		update_wq_decomp_bytes(wq, req->slen);
+	}
+
+	if (iaa_verify_compress && (idxd_desc->iax_hw->opcode == IAX_OPCODE_COMPRESS)) {
+		struct crypto_tfm *tfm = req->base.tfm;
+		dma_addr_t src_addr, dst_addr;
+		u32 compression_crc;
+
+		compression_crc = idxd_desc->iax_completion->crc;
+
+		dma_sync_sg_for_device(dev, req->dst, 1, DMA_FROM_DEVICE);
+		dma_sync_sg_for_device(dev, req->src, 1, DMA_TO_DEVICE);
+
+		src_addr = sg_dma_address(req->src);
+		dst_addr = sg_dma_address(req->dst);
+
+		ret = iaa_compress_verify(tfm, req, wq, src_addr, req->slen,
+					  dst_addr, &req->dlen, compression_crc);
+	}
+out:
+	/* caller doesn't call crypto_wait_req, so no acomp_request_complete() */
+
+	dma_unmap_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
+	dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
+
+	idxd_free_desc(idxd_desc->wq, idxd_desc);
+
+	dev_dbg(dev, "%s: returning ret=%d\n", __func__, ret);
+
+	return ret;
+}
+
 static int iaa_comp_init_fixed(struct crypto_acomp *acomp_tfm)
 {
 	struct crypto_tfm *tfm = crypto_acomp_tfm(acomp_tfm);
@@ -1813,6 +1881,7 @@ static struct acomp_alg iaa_acomp_fixed_deflate = {
 	.compress		= iaa_comp_acompress,
 	.decompress		= iaa_comp_adecompress,
 	.dst_free		= dst_free,
+	.poll			= iaa_comp_poll,
 	.base			= {
 		.cra_name		= "deflate",
 		.cra_driver_name	= "deflate-iaa",
@@ -1827,6 +1896,11 @@ static int iaa_register_compression_device(void)
 {
 	int ret;

+	if (async_mode && !use_irq)
+		iaa_acomp_fixed_deflate.poll = iaa_comp_poll;
+	else
+		iaa_acomp_fixed_deflate.poll = NULL;
+
 	ret = crypto_register_acomp(&iaa_acomp_fixed_deflate);
 	if (ret) {
 		pr_err("deflate algorithm acomp fixed registration failed (%d)\n", ret);
From patchwork Fri Oct 18 06:40:51 2024
From: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
Subject: [RFC PATCH v1 03/13] crypto: testmgr - Add crypto testmgr acomp poll support.
Date: Thu, 17 Oct 2024 23:40:51 -0700
Message-Id: <20241018064101.336232-4-kanchana.p.sridhar@intel.com>

This patch enables the newly added acomp poll API to be exercised in
the crypto test_acomp() calls to compress/decompress, if the acomp
registers a poll method.
Signed-off-by: Glover, Andre
Signed-off-by: Kanchana P Sridhar
---
 crypto/testmgr.c | 70 ++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 65 insertions(+), 5 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index ee8da628e9da..54f6f59ae501 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -3482,7 +3482,19 @@ static int test_acomp(struct crypto_acomp *tfm,
 	acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
 				   crypto_req_done, &wait);

-	ret = crypto_wait_req(crypto_acomp_compress(req), &wait);
+	if (tfm->poll) {
+		ret = crypto_acomp_compress(req);
+		if (ret == -EINPROGRESS) {
+			do {
+				ret = crypto_acomp_poll(req);
+				if (ret && ret != -EAGAIN)
+					break;
+			} while (ret);
+		}
+	} else {
+		ret = crypto_wait_req(crypto_acomp_compress(req), &wait);
+	}
+
 	if (ret) {
 		pr_err("alg: acomp: compression failed on test %d for %s: ret=%d\n",
 		       i + 1, algo, -ret);
@@ -3498,7 +3510,19 @@ static int test_acomp(struct crypto_acomp *tfm,
 	crypto_init_wait(&wait);
 	acomp_request_set_params(req, &src, &dst, ilen, dlen);

-	ret = crypto_wait_req(crypto_acomp_decompress(req), &wait);
+	if (tfm->poll) {
+		ret = crypto_acomp_decompress(req);
+		if (ret == -EINPROGRESS) {
+			do {
+				ret = crypto_acomp_poll(req);
+				if (ret && ret != -EAGAIN)
+					break;
+			} while (ret);
+		}
+	} else {
+		ret = crypto_wait_req(crypto_acomp_decompress(req), &wait);
+	}
+
 	if (ret) {
 		pr_err("alg: acomp: compression failed on test %d for %s: ret=%d\n",
 		       i + 1, algo, -ret);
@@ -3531,7 +3555,19 @@ static int test_acomp(struct crypto_acomp *tfm,
 	sg_init_one(&src, input_vec, ilen);
 	acomp_request_set_params(req, &src, NULL, ilen, 0);

-	ret = crypto_wait_req(crypto_acomp_compress(req), &wait);
+	if (tfm->poll) {
+		ret = crypto_acomp_compress(req);
+		if (ret == -EINPROGRESS) {
+			do {
+				ret = crypto_acomp_poll(req);
+				if (ret && ret != -EAGAIN)
+					break;
+			} while (ret);
+		}
+	} else {
+		ret = crypto_wait_req(crypto_acomp_compress(req), &wait);
+	}
+
 	if (ret) {
 		pr_err("alg: acomp: compression failed on NULL dst buffer test %d for %s: ret=%d\n",
 		       i + 1, algo, -ret);
@@ -3574,7 +3610,19 @@ static int test_acomp(struct crypto_acomp *tfm,
 	acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
 				   crypto_req_done, &wait);

-	ret = crypto_wait_req(crypto_acomp_decompress(req), &wait);
+	if (tfm->poll) {
+		ret = crypto_acomp_decompress(req);
+		if (ret == -EINPROGRESS) {
+			do {
+				ret = crypto_acomp_poll(req);
+				if (ret && ret != -EAGAIN)
+					break;
+			} while (ret);
+		}
+	} else {
+		ret = crypto_wait_req(crypto_acomp_decompress(req), &wait);
+	}
+
 	if (ret) {
 		pr_err("alg: acomp: decompression failed on test %d for %s: ret=%d\n",
 		       i + 1, algo, -ret);
@@ -3606,7 +3654,19 @@ static int test_acomp(struct crypto_acomp *tfm,
 	crypto_init_wait(&wait);
 	acomp_request_set_params(req, &src, NULL, ilen, 0);

-	ret = crypto_wait_req(crypto_acomp_decompress(req), &wait);
+	if (tfm->poll) {
+		ret = crypto_acomp_decompress(req);
+		if (ret == -EINPROGRESS) {
+			do {
+				ret = crypto_acomp_poll(req);
+				if (ret && ret != -EAGAIN)
+					break;
+			} while (ret);
+		}
+	} else {
+		ret = crypto_wait_req(crypto_acomp_decompress(req), &wait);
+	}
+
 	if (ret) {
 		pr_err("alg: acomp: decompression failed on NULL dst buffer test %d for %s: ret=%d\n",
 		       i + 1, algo, -ret);
From patchwork Fri Oct 18 06:40:52 2024
From: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
Subject: [RFC PATCH v1 04/13] mm: zswap: zswap_compress()/decompress() can submit, then poll an acomp_req.
Date: Thu, 17 Oct 2024 23:40:52 -0700
Message-Id: <20241018064101.336232-5-kanchana.p.sridhar@intel.com>

If the crypto_acomp has a poll interface registered, zswap_compress()
and zswap_decompress() will submit the acomp_req and then poll() for a
successful completion/error status in a busy-wait loop. This allows an
asynchronous way to manage (potentially multiple) acomp_reqs without
the use of interrupts, which is supported in the iaa_crypto driver.

This enables us to implement batch submission of multiple
compression/decompression jobs to the Intel IAA hardware accelerator,
which will process them in parallel, followed by polling the batch's
acomp_reqs for completion status.

Signed-off-by: Kanchana P Sridhar
---
 mm/zswap.c | 51 +++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 39 insertions(+), 12 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index f6316b66fb23..948c9745ee57 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -910,18 +910,34 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 	acomp_request_set_params(acomp_ctx->req, &input, &output, PAGE_SIZE, dlen);

 	/*
-	 * it maybe looks a little bit silly that we send an asynchronous request,
-	 * then wait for its completion synchronously. This makes the process look
-	 * synchronous in fact.
-	 * Theoretically, acomp supports users send multiple acomp requests in one
-	 * acomp instance, then get those requests done simultaneously. but in this
-	 * case, zswap actually does store and load page by page, there is no
-	 * existing method to send the second page before the first page is done
-	 * in one thread doing zwap.
-	 * but in different threads running on different cpu, we have different
-	 * acomp instance, so multiple threads can do (de)compression in parallel.
+	 * If the crypto_acomp provides an asynchronous poll() interface,
+	 * submit the descriptor and poll for a completion status.
+	 *
+	 * It maybe looks a little bit silly that we send an asynchronous
+	 * request, then wait for its completion in a busy-wait poll loop, or,
+	 * synchronously. This makes the process look synchronous in fact.
+	 * Theoretically, acomp supports users send multiple acomp requests in
+	 * one acomp instance, then get those requests done simultaneously.
+	 * But in this case, zswap actually does store and load page by page,
+	 * there is no existing method to send the second page before the
+	 * first page is done in one thread doing zswap.
+	 * But in different threads running on different cpu, we have different
+	 * acomp instance, so multiple threads can do (de)compression in
+	 * parallel.
 	 */
-	comp_ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->req), &acomp_ctx->wait);
+	if (acomp_ctx->acomp->poll) {
+		comp_ret = crypto_acomp_compress(acomp_ctx->req);
+		if (comp_ret == -EINPROGRESS) {
+			do {
+				comp_ret = crypto_acomp_poll(acomp_ctx->req);
+				if (comp_ret && comp_ret != -EAGAIN)
+					break;
+			} while (comp_ret);
+		}
+	} else {
+		comp_ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->req), &acomp_ctx->wait);
+	}
+
 	dlen = acomp_ctx->req->dlen;
 	if (comp_ret)
 		goto unlock;
@@ -959,6 +975,7 @@ static void zswap_decompress(struct zswap_entry *entry, struct folio *folio)
 	struct scatterlist input, output;
 	struct crypto_acomp_ctx *acomp_ctx;
 	u8 *src;
+	int ret;

 	acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
 	mutex_lock(&acomp_ctx->mutex);
@@ -984,7 +1001,17 @@ static void zswap_decompress(struct zswap_entry *entry, struct folio *folio)
 	sg_init_table(&output, 1);
 	sg_set_folio(&output, folio, PAGE_SIZE, 0);
 	acomp_request_set_params(acomp_ctx->req, &input, &output, entry->length, PAGE_SIZE);
-	BUG_ON(crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait));
+	if (acomp_ctx->acomp->poll) {
+		ret = crypto_acomp_decompress(acomp_ctx->req);
+		if (ret == -EINPROGRESS) {
+			do {
+				ret = crypto_acomp_poll(acomp_ctx->req);
+				BUG_ON(ret && ret != -EAGAIN);
+			} while (ret);
+		}
+	} else {
+		BUG_ON(crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait));
+	}
 	BUG_ON(acomp_ctx->req->dlen != PAGE_SIZE);
 	mutex_unlock(&acomp_ctx->mutex);
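The identical submit-then-poll loop now appears in zswap_compress(),
zswap_decompress(), and four test_acomp() call sites. Purely as an
illustration of a possible consolidation (this helper is not part of
the series, and its name is hypothetical), the two paths could be
folded into one function:

	/*
	 * Hypothetical helper folding together the poll and wait paths
	 * repeated at each call site; not part of this series. 'err' is
	 * the return value of crypto_acomp_compress() or
	 * crypto_acomp_decompress().
	 */
	static int acomp_finish_req(struct crypto_acomp *acomp,
				    struct acomp_req *req, int err,
				    struct crypto_wait *wait)
	{
		if (!acomp->poll)
			return crypto_wait_req(err, wait);

		if (err != -EINPROGRESS)
			return err;

		do {
			err = crypto_acomp_poll(req);
			if (err && err != -EAGAIN)
				return err;	/* hard error */
		} while (err);			/* -EAGAIN: still running */

		return 0;
	}

zswap_compress() would then reduce to:

	comp_ret = acomp_finish_req(acomp_ctx->acomp, acomp_ctx->req,
				    crypto_acomp_compress(acomp_ctx->req),
				    &acomp_ctx->wait);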
From patchwork Fri Oct 18 06:40:53 2024
From: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
Subject: [RFC PATCH v1 05/13] crypto: iaa - Make async mode the default.
Date: Thu, 17 Oct 2024 23:40:53 -0700
Message-Id: <20241018064101.336232-6-kanchana.p.sridhar@intel.com>

This patch makes it easier for IAA hardware acceleration in the
iaa_crypto driver to be loaded by default in the most
efficient/recommended "async" mode, namely, asynchronous submission of
descriptors, followed by polling for job completions. Earlier, the
"sync" mode used to be the default.
This way, anyone who wants to use IAA can do so after building the
kernel, and *without* having to go through these steps to use async
poll:

  1) disable all the IAA device/wq bindings that happen at boot time
  2) rmmod iaa_crypto
  3) modprobe iaa_crypto
  4) echo async > /sys/bus/dsa/drivers/crypto/sync_mode
  5) re-run initialization of the IAA devices and wqs

Signed-off-by: Kanchana P Sridhar
---
 drivers/crypto/intel/iaa/iaa_crypto_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index 6a8577ac1330..6c262b1eb09d 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -153,7 +153,7 @@ static DRIVER_ATTR_RW(verify_compress);
  */

 /* Use async mode */
-static bool async_mode;
+static bool async_mode = true;

 /* Use interrupts */
 static bool use_irq;
From patchwork Fri Oct 18 06:40:54 2024
From: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
Subject: [RFC PATCH v1 06/13] crypto: iaa - Disable iaa_verify_compress by default.
Date: Thu, 17 Oct 2024 23:40:54 -0700
Message-Id: <20241018064101.336232-7-kanchana.p.sridhar@intel.com>

This patch makes it easier for IAA hardware acceleration in the
iaa_crypto driver to be loaded by default with "iaa_verify_compress"
disabled, to facilitate performance comparisons with software
compressors (which also do not run compress verification by default).
Earlier, iaa_crypto compress verification used to be enabled by
default.
With this patch, if users want to enable compress verification, they
can do so with these steps:

  1) disable all the IAA device/wq bindings that happen at boot time
  2) rmmod iaa_crypto
  3) modprobe iaa_crypto
  4) echo 1 > /sys/bus/dsa/drivers/crypto/verify_compress
  5) re-run initialization of the IAA devices and wqs

Signed-off-by: Kanchana P Sridhar
---
 drivers/crypto/intel/iaa/iaa_crypto_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index 6c262b1eb09d..8e6859c97970 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -94,7 +94,7 @@ static bool iaa_crypto_enabled;
 static bool iaa_crypto_registered;

 /* Verify results of IAA compress or not */
-static bool iaa_verify_compress = true;
+static bool iaa_verify_compress = false;

 static ssize_t verify_compress_show(struct device_driver *driver, char *buf)
 {
From patchwork Fri Oct 18 06:40:55 2024
From: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
Subject: [RFC PATCH v1 07/13] crypto: iaa - Change cpu-to-iaa mappings to evenly balance cores to IAAs.
Date: Thu, 17 Oct 2024 23:40:55 -0700
Message-Id: <20241018064101.336232-8-kanchana.p.sridhar@intel.com>

This change distributes the cpus more evenly among the IAAs in each
socket.

Old algorithm to assign cpus to IAA:
------------------------------------
If "nr_cpus" = nr_logical_cpus (includes hyper-threading), the current
algorithm determines "nr_cpus_per_node" = nr_cpus / nr_nodes.

Hence, on a 2-socket Sapphire Rapids server where each socket has 56
cores and 4 IAA devices, nr_cpus_per_node = 112.

Further,

  cpus_per_iaa = (nr_nodes * nr_cpus_per_node) / nr_iaa

Hence, cpus_per_iaa = 224/8 = 28.

The iaa_crypto driver then assigns 28 "logical" node cpus per IAA
device on that node, which results in this cpu-to-iaa mapping:

  lscpu|grep NUMA
  NUMA node(s):       2
  NUMA node0 CPU(s):  0-55,112-167
  NUMA node1 CPU(s):  56-111,168-223

  NUMA node 0:
  cpu   0-27    28-55    112-139   140-167
  iaa   iax1    iax3     iax5      iax7

  NUMA node 1:
  cpu   56-83   84-111   168-195   196-223
  iaa   iax9    iax11    iax13     iax15

This appears non-optimal for a few reasons:

1) The 2 logical threads on a core will get assigned to different IAA
   devices. For example:

     cpu 0:   iax1
     cpu 112: iax5

2) One of the logical threads on a core is assigned to an IAA that is
   not closest to that core, for example cpu 112.

3) If numactl is used to start processes sequentially on the logical
   cores, some of the IAA devices on the socket could be
   over-subscribed, while some could be under-utilized.

This patch introduces a scheme to more evenly balance the logical
cores to IAA devices on a socket.

New algorithm to assign cpus to IAA:
------------------------------------
We introduce a function "cpu_to_iaa()" that takes a logical cpu and
returns the IAA device closest to it.
If "nr_cpus" = nr_logical_cpus (includes hyper-threading), the new
algorithm determines "nr_cpus_per_node" = topology_num_cores_per_package().

Hence, on a 2-socket Sapphire Rapids server where each socket has 56
cores and 4 IAA devices, nr_cpus_per_node = 56.

Further,

  cpus_per_iaa = (nr_nodes * nr_cpus_per_node) / nr_iaa

Hence, cpus_per_iaa = 112/8 = 14.

The iaa_crypto driver then assigns 14 "logical" node cpus per IAA
device on that node, which results in this cpu-to-iaa mapping:

  NUMA node 0:
  cpu   0-13,112-125    14-27,126-139   28-41,140-153   42-55,154-167
  iaa   iax1            iax3            iax5            iax7

  NUMA node 1:
  cpu   56-69,168-181   70-83,182-195   84-97,196-209   98-111,210-223
  iaa   iax9            iax11           iax13           iax15

This resolves the 3 issues with non-optimality of cpu-to-iaa mappings
pointed out earlier with the existing approach.

Originally-by: Tom Zanussi
Signed-off-by: Kanchana P Sridhar
---
 drivers/crypto/intel/iaa/iaa_crypto_main.c | 84 ++++++++++++++--------
 1 file changed, 54 insertions(+), 30 deletions(-)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index 8e6859c97970..c854a7a1aaa4 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -55,6 +55,46 @@ static struct idxd_wq *wq_table_next_wq(int cpu)
 	return entry->wqs[entry->cur_wq];
 }

+/*
+ * Given a cpu, find the closest IAA instance. The idea is to try to
+ * choose the most appropriate IAA instance for a caller and spread
+ * available workqueues around to clients.
+ */
+static inline int cpu_to_iaa(int cpu)
+{
+	int node, n_cpus = 0, test_cpu, iaa = 0;
+	int nr_iaa_per_node;
+	const struct cpumask *node_cpus;
+
+	if (!nr_nodes)
+		return 0;
+
+	nr_iaa_per_node = nr_iaa / nr_nodes;
+	if (!nr_iaa_per_node)
+		return 0;
+
+	for_each_online_node(node) {
+		node_cpus = cpumask_of_node(node);
+		if (!cpumask_test_cpu(cpu, node_cpus))
+			continue;
+
+		for_each_cpu(test_cpu, node_cpus) {
+			if ((n_cpus % nr_cpus_per_node) == 0)
+				iaa = node * nr_iaa_per_node;
+
+			if (test_cpu == cpu)
+				return iaa;
+
+			n_cpus++;
+
+			if ((n_cpus % cpus_per_iaa) == 0)
+				iaa++;
+		}
+	}
+
+	return -1;
+}
+
 static void wq_table_add(int cpu, struct idxd_wq *wq)
 {
 	struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu);
@@ -895,8 +935,7 @@ static int wq_table_add_wqs(int iaa, int cpu)
  */
 static void rebalance_wq_table(void)
 {
-	const struct cpumask *node_cpus;
-	int node, cpu, iaa = -1;
+	int cpu, iaa;

 	if (nr_iaa == 0)
 		return;
@@ -906,37 +945,22 @@ static void rebalance_wq_table(void)

 	clear_wq_table();

-	if (nr_iaa == 1) {
-		for (cpu = 0; cpu < nr_cpus; cpu++) {
-			if (WARN_ON(wq_table_add_wqs(0, cpu))) {
-				pr_debug("could not add any wqs for iaa 0 to cpu %d!\n", cpu);
-				return;
-			}
-		}
-
-		return;
-	}
-
-	for_each_node_with_cpus(node) {
-		node_cpus = cpumask_of_node(node);
-
-		for (cpu = 0; cpu < cpumask_weight(node_cpus); cpu++) {
-			int node_cpu = cpumask_nth(cpu, node_cpus);
-
-			if (WARN_ON(node_cpu >= nr_cpu_ids)) {
-				pr_debug("node_cpu %d doesn't exist!\n", node_cpu);
-				return;
-			}
-
-			if ((cpu % cpus_per_iaa) == 0)
-				iaa++;
-
-			if (WARN_ON(wq_table_add_wqs(iaa, node_cpu))) {
-				pr_debug("could not add any wqs for iaa %d to cpu %d!\n", iaa, cpu);
-				return;
-			}
+	for (cpu = 0; cpu < nr_cpus; cpu++) {
+		iaa = cpu_to_iaa(cpu);
+		pr_debug("rebalance: cpu=%d iaa=%d\n", cpu, iaa);
+
+		if (WARN_ON(iaa == -1)) {
+			pr_debug("rebalance (cpu_to_iaa(%d)) failed!\n", cpu);
+			return;
+		}
+
+		if (WARN_ON(wq_table_add_wqs(iaa, cpu))) {
+			pr_debug("could not add any wqs for iaa %d to cpu %d!\n", iaa, cpu);
+			return;
 		}
 	}
+
+	pr_debug("Finished rebalance local wqs.");
 }

 static inline int check_completion(struct device *dev,
@@ -2084,7 +2108,7 @@ static int __init iaa_crypto_init_module(void)
 		pr_err("IAA couldn't find any nodes with cpus\n");
 		return -ENODEV;
 	}
-	nr_cpus_per_node = nr_cpus / nr_nodes;
+	nr_cpus_per_node = topology_num_cores_per_package();

 	if (crypto_has_comp("deflate-generic", 0, 0))
 		deflate_generic_tfm = crypto_alloc_comp("deflate-generic", 0, 0);
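To make the mapping arithmetic concrete, here is a small
self-contained userspace sketch (an illustrative simulation, not
driver code) that models cpu_to_iaa() for the 2-socket, 56-core,
4-IAA-per-socket example above; the node masks are hard-coded from the
lscpu output in the commit message:

	#include <stdio.h>

	enum { NR_NODES = 2, NR_IAA = 8, NR_CPUS = 224, NR_CPUS_PER_NODE = 56 };

	static const int cpus_per_iaa = (NR_NODES * NR_CPUS_PER_NODE) / NR_IAA; /* 14 */
	static const int nr_iaa_per_node = NR_IAA / NR_NODES;                   /* 4 */

	/* Node masks per lscpu: node0 = 0-55,112-167; node1 = the rest. */
	static int cpu_node(int cpu)
	{
		return (cpu < 56 || (cpu >= 112 && cpu < 168)) ? 0 : 1;
	}

	static int cpu_to_iaa(int cpu)
	{
		int node = cpu_node(cpu), n_cpus = 0, iaa = 0, test_cpu;

		/* Walk the node's cpus in increasing id order, as for_each_cpu does. */
		for (test_cpu = 0; test_cpu < NR_CPUS; test_cpu++) {
			if (cpu_node(test_cpu) != node)
				continue;
			if ((n_cpus % NR_CPUS_PER_NODE) == 0)
				iaa = node * nr_iaa_per_node; /* restart for SMT siblings */
			if (test_cpu == cpu)
				return iaa;
			n_cpus++;
			if ((n_cpus % cpus_per_iaa) == 0)
				iaa++;
		}
		return -1;
	}

	int main(void)
	{
		/* cpu 0 and its SMT sibling cpu 112 now both map to IAA 0. */
		printf("cpu 0 -> iaa %d, cpu 112 -> iaa %d\n",
		       cpu_to_iaa(0), cpu_to_iaa(112));
		return 0;
	}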
From patchwork Fri Oct 18 06:40:56 2024
From: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
Subject: [RFC PATCH v1 08/13] crypto: iaa - Distribute compress jobs to all IAA devices on a NUMA node.
Date: Thu, 17 Oct 2024 23:40:56 -0700
Message-Id: <20241018064101.336232-9-kanchana.p.sridhar@intel.com>

This change enables processes running on any logical core on a NUMA
node to use all the IAA devices enabled on that NUMA node for compress
jobs. In other words, compressions originating from any process in a
node will be distributed in round-robin manner to the available IAA
devices on the same socket.

The main premise behind this change is to make sure that no compress
engines on any IAA device are left un-utilized/under-utilized. In
other words, the compress engines on all IAA devices are considered a
global resource for that socket. This allows the use of all IAA
devices present in a given NUMA node for (batched) compressions
originating from zswap/zram, from all cores on this node.

A new per-cpu "global_wq_table" implements this in the iaa_crypto
driver. We can think of the global WQ per IAA as a WQ to which all
cores on that socket can submit compress jobs.

To avail of this feature, the user must configure 2 WQs per IAA in
order to enable distribution of compress jobs to multiple IAA devices.
Each IAA will have 2 WQs:

  wq.0 (local WQ):  Used for decompress jobs from cores mapped by the
                    cpu_to_iaa() "even balancing of logical cores to
                    IAA devices" algorithm.

  wq.1 (global WQ): Used for compress jobs from *all* logical cores on
                    that socket.

The iaa_crypto driver will place all global WQs from all same-socket
IAA devices in the global_wq_table per cpu on that socket. When the
driver receives a compress job, it will look up the "next" global WQ
in the cpu's global_wq_table to submit the descriptor.

The starting wq in the global_wq_table for each cpu is the global wq
associated with the IAA nearest to it, so that we stagger the starting
global wq for each process. This results in very uniform usage of all
IAAs for compress jobs.

Two new driver module parameters are added for this feature:

g_wqs_per_iaa (default 1):

  /sys/bus/dsa/drivers/crypto/g_wqs_per_iaa

  This represents the number of global WQs that can be configured per
  IAA device. The default is 1, and is the recommended setting to
  enable the use of this feature once the user configures 2 WQs per
  IAA using higher level scripts, as described in
  Documentation/driver-api/crypto/iaa/iaa-crypto.rst.
g_consec_descs_per_gwq (default 1):
/sys/bus/dsa/drivers/crypto/g_consec_descs_per_gwq
This is the number of consecutive compress jobs that will be submitted to the same global WQ (i.e., to the same IAA device) from a given core, before moving to the next global WQ. The default of 1 is also the recommended setting for this feature.

The decompress jobs from any core will be sent to the "local" IAA, namely the one that the driver assigns with the cpu_to_iaa() mapping algorithm that evenly balances the assignment of logical cores to IAA devices on a NUMA node.

On a 2-socket Sapphire Rapids server where each socket has 56 cores and 4 IAA devices, this is how the compress/decompress jobs will be mapped when the user configures 2 WQs per IAA device (which implies wq.1 will be added to the global WQ table for each logical core on that NUMA node):

lscpu|grep NUMA
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223

Compress jobs:
--------------
NUMA node 0: all cpus (0-55,112-167) can send compress jobs to all IAA devices on the socket (iax1/iax3/iax5/iax7) in a round-robin manner.
NUMA node 1: all cpus (56-111,168-223) can send compress jobs to all IAA devices on the socket (iax9/iax11/iax13/iax15) in a round-robin manner.

Decompress jobs:
----------------
NUMA node 0:
cpu  0-13,112-125    14-27,126-139    28-41,140-153    42-55,154-167
iaa  iax1             iax3             iax5             iax7

NUMA node 1:
cpu  56-69,168-181   70-83,182-195    84-97,196-209    98-111,210-223
iaa  iax9             iax11            iax13            iax15

Signed-off-by: Kanchana P Sridhar
---
 drivers/crypto/intel/iaa/iaa_crypto_main.c | 305 ++++++++++++++++++++-
 1 file changed, 290 insertions(+), 15 deletions(-)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c index c854a7a1aaa4..2d6c517e9d9b 100644 --- a/drivers/crypto/intel/iaa/iaa_crypto_main.c +++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c @@ -29,14 +29,23 @@ static unsigned int nr_iaa; static unsigned int nr_cpus; static unsigned int nr_nodes; static unsigned int nr_cpus_per_node; - /* Number of physical cpus sharing each iaa instance */ static unsigned int cpus_per_iaa; static struct crypto_comp *deflate_generic_tfm; /* Per-cpu lookup table for balanced wqs */ -static struct wq_table_entry __percpu *wq_table; +static struct wq_table_entry __percpu *wq_table = NULL; + +/* Per-cpu lookup table for global wqs shared by all cpus. */ +static struct wq_table_entry __percpu *global_wq_table = NULL; + +/* + * Per-cpu counter of consecutive descriptors allocated to + * the same wq in the global_wq_table, so that we know + * when to switch to the next wq in the global_wq_table.
+ */ +static int __percpu *num_consec_descs_per_wq = NULL; static struct idxd_wq *wq_table_next_wq(int cpu) { @@ -104,26 +113,68 @@ static void wq_table_add(int cpu, struct idxd_wq *wq) entry->wqs[entry->n_wqs++] = wq; - pr_debug("%s: added iaa wq %d.%d to idx %d of cpu %d\n", __func__, - entry->wqs[entry->n_wqs - 1]->idxd->id, - entry->wqs[entry->n_wqs - 1]->id, entry->n_wqs - 1, cpu); + pr_debug("%s: added iaa local wq %d.%d to idx %d of cpu %d\n", __func__, + entry->wqs[entry->n_wqs - 1]->idxd->id, + entry->wqs[entry->n_wqs - 1]->id, entry->n_wqs - 1, cpu); +} + +static void global_wq_table_add(int cpu, struct idxd_wq *wq) +{ + struct wq_table_entry *entry = per_cpu_ptr(global_wq_table, cpu); + + if (WARN_ON(entry->n_wqs == entry->max_wqs)) + return; + + entry->wqs[entry->n_wqs++] = wq; + + pr_debug("%s: added iaa global wq %d.%d to idx %d of cpu %d\n", __func__, + entry->wqs[entry->n_wqs - 1]->idxd->id, + entry->wqs[entry->n_wqs - 1]->id, entry->n_wqs - 1, cpu); +} + +static void global_wq_table_set_start_wq(int cpu) +{ + struct wq_table_entry *entry = per_cpu_ptr(global_wq_table, cpu); + int start_wq = (entry->n_wqs / nr_iaa) * cpu_to_iaa(cpu); + + if ((start_wq >= 0) && (start_wq < entry->n_wqs)) + entry->cur_wq = start_wq; } static void wq_table_free_entry(int cpu) { struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu); - kfree(entry->wqs); - memset(entry, 0, sizeof(*entry)); + if (entry) { + kfree(entry->wqs); + memset(entry, 0, sizeof(*entry)); + } + + entry = per_cpu_ptr(global_wq_table, cpu); + + if (entry) { + kfree(entry->wqs); + memset(entry, 0, sizeof(*entry)); + } } static void wq_table_clear_entry(int cpu) { struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu); - entry->n_wqs = 0; - entry->cur_wq = 0; - memset(entry->wqs, 0, entry->max_wqs * sizeof(struct idxd_wq *)); + if (entry) { + entry->n_wqs = 0; + entry->cur_wq = 0; + memset(entry->wqs, 0, entry->max_wqs * sizeof(struct idxd_wq *)); + } + + entry = per_cpu_ptr(global_wq_table, cpu); + + if (entry) { + entry->n_wqs = 0; + entry->cur_wq = 0; + memset(entry->wqs, 0, entry->max_wqs * sizeof(struct idxd_wq *)); + } } LIST_HEAD(iaa_devices); @@ -163,6 +214,70 @@ static ssize_t verify_compress_store(struct device_driver *driver, } static DRIVER_ATTR_RW(verify_compress); +/* Number of global wqs per iaa*/ +static int g_wqs_per_iaa = 1; + +static ssize_t g_wqs_per_iaa_show(struct device_driver *driver, char *buf) +{ + return sprintf(buf, "%d\n", g_wqs_per_iaa); +} + +static ssize_t g_wqs_per_iaa_store(struct device_driver *driver, + const char *buf, size_t count) +{ + int ret = -EBUSY; + + mutex_lock(&iaa_devices_lock); + + if (iaa_crypto_enabled) + goto out; + + ret = kstrtoint(buf, 10, &g_wqs_per_iaa); + if (ret) + goto out; + + ret = count; +out: + mutex_unlock(&iaa_devices_lock); + + return ret; +} +static DRIVER_ATTR_RW(g_wqs_per_iaa); + +/* + * Number of consecutive descriptors to allocate from a + * given global wq before switching to the next wq in + * the global_wq_table. 
+ */ +static int g_consec_descs_per_gwq = 1; + +static ssize_t g_consec_descs_per_gwq_show(struct device_driver *driver, char *buf) +{ + return sprintf(buf, "%d\n", g_consec_descs_per_gwq); +} + +static ssize_t g_consec_descs_per_gwq_store(struct device_driver *driver, + const char *buf, size_t count) +{ + int ret = -EBUSY; + + mutex_lock(&iaa_devices_lock); + + if (iaa_crypto_enabled) + goto out; + + ret = kstrtoint(buf, 10, &g_consec_descs_per_gwq); + if (ret) + goto out; + + ret = count; +out: + mutex_unlock(&iaa_devices_lock); + + return ret; +} +static DRIVER_ATTR_RW(g_consec_descs_per_gwq); + /* * The iaa crypto driver supports three 'sync' methods determining how * compressions and decompressions are performed: @@ -751,7 +866,20 @@ static void free_wq_table(void) for (cpu = 0; cpu < nr_cpus; cpu++) wq_table_free_entry(cpu); - free_percpu(wq_table); + if (wq_table) { + free_percpu(wq_table); + wq_table = NULL; + } + + if (global_wq_table) { + free_percpu(global_wq_table); + global_wq_table = NULL; + } + + if (num_consec_descs_per_wq) { + free_percpu(num_consec_descs_per_wq); + num_consec_descs_per_wq = NULL; + } pr_debug("freed wq table\n"); } @@ -774,6 +902,38 @@ static int alloc_wq_table(int max_wqs) } entry->max_wqs = max_wqs; + entry->n_wqs = 0; + entry->cur_wq = 0; + } + + global_wq_table = alloc_percpu(struct wq_table_entry); + if (!global_wq_table) { + free_wq_table(); + return -ENOMEM; + } + + for (cpu = 0; cpu < nr_cpus; cpu++) { + entry = per_cpu_ptr(global_wq_table, cpu); + entry->wqs = kzalloc(max_wqs * sizeof(struct idxd_wq *), GFP_KERNEL); + if (!entry->wqs) { + free_wq_table(); + return -ENOMEM; + } + + entry->max_wqs = max_wqs; + entry->n_wqs = 0; + entry->cur_wq = 0; + } + + num_consec_descs_per_wq = alloc_percpu(int); + if (!num_consec_descs_per_wq) { + free_wq_table(); + return -ENOMEM; + } + + for (cpu = 0; cpu < nr_cpus; cpu++) { + int *num_consec_descs = per_cpu_ptr(num_consec_descs_per_wq, cpu); + *num_consec_descs = 0; } pr_debug("initialized wq table\n"); @@ -912,9 +1072,14 @@ static int wq_table_add_wqs(int iaa, int cpu) } list_for_each_entry(iaa_wq, &found_device->wqs, list) { - wq_table_add(cpu, iaa_wq->wq); + + if (((found_device->n_wq - g_wqs_per_iaa) < 1) || + (n_wqs_added < (found_device->n_wq - g_wqs_per_iaa))) { + wq_table_add(cpu, iaa_wq->wq); + } + pr_debug("rebalance: added wq for cpu=%d: iaa wq %d.%d\n", - cpu, iaa_wq->wq->idxd->id, iaa_wq->wq->id); + cpu, iaa_wq->wq->idxd->id, iaa_wq->wq->id); n_wqs_added++; } @@ -927,6 +1092,63 @@ return ret; } +static int global_wq_table_add_wqs(void) +{ + struct iaa_device *iaa_device; + int ret = 0, n_wqs_added; + struct idxd_device *idxd; + struct iaa_wq *iaa_wq; + struct pci_dev *pdev; + struct device *dev; + int cpu, node, node_of_cpu = -1; + + for (cpu = 0; cpu < nr_cpus; cpu++) { + +#ifdef CONFIG_NUMA + node_of_cpu = -1; + for_each_online_node(node) { + const struct cpumask *node_cpus; + node_cpus = cpumask_of_node(node); + if (!cpumask_test_cpu(cpu, node_cpus)) + continue; + node_of_cpu = node; + break; + } +#endif + list_for_each_entry(iaa_device, &iaa_devices, list) { + idxd = iaa_device->idxd; + pdev = idxd->pdev; + dev = &pdev->dev; + +#ifdef CONFIG_NUMA + if (dev && (node_of_cpu != dev->numa_node)) + continue; +#endif + + if (iaa_device->n_wq <= g_wqs_per_iaa) + continue; + + n_wqs_added = 0; + + list_for_each_entry(iaa_wq, &iaa_device->wqs, list) { + + if (n_wqs_added < (iaa_device->n_wq - g_wqs_per_iaa)) { + n_wqs_added++; + } + else { +
global_wq_table_add(cpu, iaa_wq->wq); + pr_debug("rebalance: added global wq for cpu=%d: iaa wq %d.%d\n", + cpu, iaa_wq->wq->idxd->id, iaa_wq->wq->id); + } + } + } + + global_wq_table_set_start_wq(cpu); + } + + return ret; +} + /* * Rebalance the wq table so that given a cpu, it's easy to find the * closest IAA instance. The idea is to try to choose the most @@ -961,6 +1183,7 @@ static void rebalance_wq_table(void) } pr_debug("Finished rebalance local wqs."); + global_wq_table_add_wqs(); } static inline int check_completion(struct device *dev, @@ -1509,6 +1732,27 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req, goto out; } +/* + * Caller should make sure to call only if the + * per_cpu_ptr "global_wq_table" is non-NULL + * and has at least one wq configured. + */ +static struct idxd_wq *global_wq_table_next_wq(int cpu) +{ + struct wq_table_entry *entry = per_cpu_ptr(global_wq_table, cpu); + int *num_consec_descs = per_cpu_ptr(num_consec_descs_per_wq, cpu); + + if ((*num_consec_descs) == g_consec_descs_per_gwq) { + if (++entry->cur_wq >= entry->n_wqs) + entry->cur_wq = 0; + *num_consec_descs = 0; + } + + ++(*num_consec_descs); + + return entry->wqs[entry->cur_wq]; +} + static int iaa_comp_acompress(struct acomp_req *req) { struct iaa_compression_ctx *compression_ctx; @@ -1521,6 +1765,7 @@ static int iaa_comp_acompress(struct acomp_req *req) struct idxd_wq *wq; struct device *dev; int order = -1; + struct wq_table_entry *entry; compression_ctx = crypto_tfm_ctx(tfm); @@ -1535,8 +1780,15 @@ static int iaa_comp_acompress(struct acomp_req *req) } cpu = get_cpu(); - wq = wq_table_next_wq(cpu); + entry = per_cpu_ptr(global_wq_table, cpu); + + if (!entry || entry->n_wqs == 0) { + wq = wq_table_next_wq(cpu); + } else { + wq = global_wq_table_next_wq(cpu); + } put_cpu(); + if (!wq) { pr_debug("no wq configured for cpu=%d\n", cpu); return -ENODEV; } @@ -2145,13 +2397,32 @@ static int __init iaa_crypto_init_module(void) goto err_sync_attr_create; } + ret = driver_create_file(&iaa_crypto_driver.drv, + &driver_attr_g_wqs_per_iaa); + if (ret) { + pr_debug("IAA g_wqs_per_iaa attr creation failed\n"); + goto err_g_wqs_per_iaa_attr_create; + } + + ret = driver_create_file(&iaa_crypto_driver.drv, + &driver_attr_g_consec_descs_per_gwq); + if (ret) { + pr_debug("IAA g_consec_descs_per_gwq attr creation failed\n"); + goto err_g_consec_descs_per_gwq_attr_create; + } + if (iaa_crypto_debugfs_init()) pr_warn("debugfs init failed, stats not available\n"); pr_debug("initialized\n"); out: return ret; - +err_g_consec_descs_per_gwq_attr_create: + driver_remove_file(&iaa_crypto_driver.drv, + &driver_attr_g_wqs_per_iaa); +err_g_wqs_per_iaa_attr_create: + driver_remove_file(&iaa_crypto_driver.drv, + &driver_attr_sync_mode); err_sync_attr_create: driver_remove_file(&iaa_crypto_driver.drv, &driver_attr_verify_compress); @@ -2175,6 +2446,10 @@ static void __exit iaa_crypto_cleanup_module(void) &driver_attr_sync_mode); driver_remove_file(&iaa_crypto_driver.drv, &driver_attr_verify_compress); + driver_remove_file(&iaa_crypto_driver.drv, + &driver_attr_g_wqs_per_iaa); + driver_remove_file(&iaa_crypto_driver.drv, + &driver_attr_g_consec_descs_per_gwq); idxd_driver_unregister(&iaa_crypto_driver); iaa_aecs_cleanup_fixed(); crypto_free_comp(deflate_generic_tfm);
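To make the dispatch policy concrete, here is a small self-contained userspace model (not driver code) of the per-cpu round-robin walk with staggered starting WQs and g_consec_descs_per_gwq consecutive submissions; the numbers are arbitrary:

#include <stdio.h>

#define NR_GWQS  4  /* one global WQ per IAA device on the node */
#define NR_CPUS  8  /* CPUs modeled on the node */
#define CONSEC   1  /* g_consec_descs_per_gwq */

int main(void)
{
	int cur[NR_CPUS], consec[NR_CPUS] = { 0 };
	int cpu, job;

	/* Stagger: start each CPU on the WQ of its nearest IAA. */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		cur[cpu] = (NR_GWQS * cpu) / NR_CPUS; /* cpu_to_iaa() stand-in */

	/* Submit four compress jobs per CPU, advancing after CONSEC jobs. */
	for (job = 0; job < 4; job++) {
		for (cpu = 0; cpu < NR_CPUS; cpu++) {
			if (consec[cpu] == CONSEC) {
				cur[cpu] = (cur[cpu] + 1) % NR_GWQS;
				consec[cpu] = 0;
			}
			consec[cpu]++;
			printf("cpu %d job %d -> gwq %d\n", cpu, job, cur[cpu]);
		}
	}
	return 0;
}

Running this shows each CPU walking all four global WQs in turn from a staggered starting point, which is the uniform-usage property the commit message claims.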
From patchwork Fri Oct 18 06:40:57 2024
X-Patchwork-Submitter: Kanchana P Sridhar
X-Patchwork-Id: 837222
From: Kanchana P Sridhar
Subject: [RFC PATCH v1 09/13] mm: zswap: Config variable to enable compress batching in zswap_store().
Date: Thu, 17 Oct 2024 23:40:57 -0700
Message-Id: <20241018064101.336232-10-kanchana.p.sridhar@intel.com>
In-Reply-To: <20241018064101.336232-1-kanchana.p.sridhar@intel.com>
References: <20241018064101.336232-1-kanchana.p.sridhar@intel.com>

Add a new zswap config variable that controls whether zswap_store() will compress a batch of pages, for instance the pages in a large folio:

  CONFIG_ZSWAP_STORE_BATCHING_ENABLED

The existing CONFIG_CRYPTO_DEV_IAA_CRYPTO variable added in commit ea7a5cbb4369 ("crypto: iaa - Add Intel IAA Compression Accelerator crypto driver core") is used to detect whether the system has the Intel Analytics Accelerator (IAA) and the iaa_crypto module is available. If so, the kernel build will prompt for CONFIG_ZSWAP_STORE_BATCHING_ENABLED. Hence, users can set CONFIG_ZSWAP_STORE_BATCHING_ENABLED="y" only on systems that have Intel IAA.

If CONFIG_ZSWAP_STORE_BATCHING_ENABLED is enabled and IAA is configured as the zswap compressor, zswap_store() will process the pages in a large folio in batches, i.e., multiple pages at a time. Pages in a batch will be compressed in parallel in hardware, then stored. On systems without Intel IAA, and/or if zswap uses a software compressor, pages in the batch will be compressed sequentially and stored.

The patch also implements a zswap API that returns the status of this config variable.

Suggested-by: Ying Huang
Signed-off-by: Kanchana P Sridhar
---
 include/linux/zswap.h | 6 ++++++
 mm/Kconfig | 12 ++++++++++++
 mm/zswap.c | 14 ++++++++++++++
 3 files changed, 32 insertions(+)

diff --git a/include/linux/zswap.h b/include/linux/zswap.h index d961ead91bf1..74ad2a24b309 100644 --- a/include/linux/zswap.h +++ b/include/linux/zswap.h @@ -24,6 +24,7 @@ struct zswap_lruvec_state { atomic_long_t nr_disk_swapins; }; +bool zswap_store_batching_enabled(void); unsigned long zswap_total_pages(void); bool zswap_store(struct folio *folio); bool zswap_load(struct folio *folio); @@ -39,6 +40,11 @@ bool zswap_never_enabled(void); struct zswap_lruvec_state {}; +static inline bool zswap_store_batching_enabled(void) +{ + return false; +} + static inline bool zswap_store(struct folio *folio) { return false; diff --git a/mm/Kconfig b/mm/Kconfig index 33fa51d608dc..26d1a5cee471 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -125,6 +125,18 @@ config ZSWAP_COMPRESSOR_DEFAULT default "zstd" if ZSWAP_COMPRESSOR_DEFAULT_ZSTD default "" +config ZSWAP_STORE_BATCHING_ENABLED + bool "Batching of zswap stores with Intel IAA" + depends on ZSWAP && CRYPTO_DEV_IAA_CRYPTO + default n + help + Enables zswap_store to swap out large folios in batches of 8 pages, + rather than a page at a time, if the system has Intel IAA for hardware + acceleration of compressions. If IAA is configured as the zswap + compressor, this will parallelize batch compression of up to 8 pages + in the folio in hardware, thereby improving large folio compression + throughput and reducing swapout latency.
+ choice prompt "Default allocator" depends on ZSWAP diff --git a/mm/zswap.c b/mm/zswap.c index 948c9745ee57..4893302d8c34 100644 --- a/mm/zswap.c +++ b/mm/zswap.c @@ -127,6 +127,15 @@ static bool zswap_shrinker_enabled = IS_ENABLED( CONFIG_ZSWAP_SHRINKER_DEFAULT_ON); module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644); +/* + * Enable/disable batching of compressions if zswap_store is called with a + * large folio. If enabled, and if IAA is the zswap compressor, pages are + * compressed in parallel in batches of say, 8 pages. + * If not, every page is compressed sequentially. + */ +static bool __zswap_store_batching_enabled = IS_ENABLED( + CONFIG_ZSWAP_STORE_BATCHING_ENABLED); + bool zswap_is_enabled(void) { return zswap_enabled; @@ -241,6 +250,11 @@ static inline struct xarray *swap_zswap_tree(swp_entry_t swp) pr_debug("%s pool %s/%s\n", msg, (p)->tfm_name, \ zpool_get_type((p)->zpool)) +__always_inline bool zswap_store_batching_enabled(void) +{ + return __zswap_store_batching_enabled; +} + /********************************* * pool functions **********************************/ From patchwork Fri Oct 18 06:40:58 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kanchana P Sridhar X-Patchwork-Id: 836848 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9B45C18F2FB; Fri, 18 Oct 2024 06:41:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729233672; cv=none; b=qm1iizqN2VIkSJJO3zhNz/NkCch5CYX5ywcolmXm9nfwmudQV4UtxErTNEWIIMjNZDMTF++VltB4hEJt71I5LwgkvfOtgyu7zzSrHeEkoolhhCgj9QKHzokYYbacGofLk6p8/bOn4EZZKmhGPiOcckYwLj1IAwBvkojLAh6Ijns= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729233672; c=relaxed/simple; bh=+Q7iJvK2+hp6eB2kPVjEewMUFUZIdF/DxKq670u2u9s=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=ocFCK8QfNOwKLviPGhJLrGg20n1iIWd/DIbmOjiuEWeYV0k8BW35yEoLbpmsJM+PJg53eDTs0SYOvk700bbPOFfigZTyW3cgCe07Cu+WdxD6N6DlFVHpcNVGqgzFsO5T+8lu2sUiIrskWacklSvz+sa/tJaE71uPzWcYqYrkB/s= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=c4HrQAdh; arc=none smtp.client-ip=192.198.163.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="c4HrQAdh" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1729233671; x=1760769671; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=+Q7iJvK2+hp6eB2kPVjEewMUFUZIdF/DxKq670u2u9s=; b=c4HrQAdh8PHiBZc9vInMg2u/WoTGEwKNDLOr4fprJv5oYrLkncma2IIT YKHwN1NFYLBdBy/w7jiPTJLLB9aXRl7lNndqkY05E5fd9iUaHda0aXcRE m7qunqiPqKxXITO5ca9YOzFsiTZXUHloVtrDneIoWc+MdnQMrvMSgVYdK XNFNuU5kdNK1/X02hryCodUwizheUZKlZBzv6V1hWKZ8U4d/VmTDHb5A6 N2bxzuUCv+AQ8MXWhqKhHk94DB1IwQmoMSKeYH8N4bVwrkM7PrNnvI63G 
From patchwork Fri Oct 18 06:40:58 2024
X-Patchwork-Submitter: Kanchana P Sridhar
X-Patchwork-Id: 836848
From: Kanchana P Sridhar
Subject: [RFC PATCH v1 10/13] mm: zswap: Create multiple reqs/buffers in crypto_acomp_ctx if platform has IAA.
Date: Thu, 17 Oct 2024 23:40:58 -0700
Message-Id: <20241018064101.336232-11-kanchana.p.sridhar@intel.com>
In-Reply-To: <20241018064101.336232-1-kanchana.p.sridhar@intel.com>
References: <20241018064101.336232-1-kanchana.p.sridhar@intel.com>

Intel IAA hardware acceleration can be used effectively to improve the zswap_store() performance of large folios by batching multiple pages in a folio to be compressed in parallel by IAA. Hence, to build compress batching of zswap large folio stores using IAA, we need to be able to submit a batch of compress jobs from zswap to the hardware to compress in parallel if the iaa_crypto "async" mode is used.

The IAA compress batching paradigm works as follows:

1) Submit N crypto_acomp_compress() jobs using N requests.
2) Use the iaa_crypto driver async poll() method to check for the jobs to complete.
3) There are no ordering constraints implied by submission, hence we could loop through the requests and process any job that has completed.
4) This would repeat until all jobs have completed with success/error status.

To facilitate this, we need to provide for multiple acomp_reqs in "struct crypto_acomp_ctx", each representing a distinct compress job. Likewise, there needs to be a distinct destination buffer corresponding to each acomp_req.

If CONFIG_ZSWAP_STORE_BATCHING_ENABLED is enabled, this patch will set the SWAP_CRYPTO_SUB_BATCH_SIZE constant to 8UL. This implies each per-cpu crypto_acomp_ctx associated with the zswap_pool can submit up to 8 acomp_reqs at a time to accomplish parallel compressions. If IAA is not present and/or CONFIG_ZSWAP_STORE_BATCHING_ENABLED is not set, SWAP_CRYPTO_SUB_BATCH_SIZE will be set to 1UL.
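For orientation, the resulting per-CPU context after this patch (taken from the diff further below) is:

struct crypto_acomp_ctx {
	struct crypto_acomp *acomp;
	struct acomp_req *req[SWAP_CRYPTO_SUB_BATCH_SIZE];
	u8 *buffer[SWAP_CRYPTO_SUB_BATCH_SIZE];
	struct crypto_wait wait;  /* sync fallback only; callback on req[0] */
	struct mutex mutex;
	bool is_sleepable;
};

With SWAP_CRYPTO_SUB_BATCH_SIZE at 1 on non-IAA systems, the arrays collapse to the original single req/buffer layout, so the extra footprint is only paid where IAA exists.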
On an Intel Sapphire Rapids server, each socket has 4 IAA devices, each of which has 2 compress engines and 8 decompress engines. Experiments modeling a contended system with, say, 72 processes running under a cgroup with a fixed memory limit have shown a significant performance improvement when dispatching compress jobs from all cores to all the IAA devices on the socket. Hence, SWAP_CRYPTO_SUB_BATCH_SIZE is set to 8 to maximize compression throughput if IAA is available.

The definition of "struct crypto_acomp_ctx" is modified to make the req/buffer be arrays of size SWAP_CRYPTO_SUB_BATCH_SIZE. Thus, the added memory footprint cost of this per-cpu structure for batching is incurred only for platforms that have Intel IAA.

Suggested-by: Ying Huang
Signed-off-by: Kanchana P Sridhar
---
 mm/swap.h | 11 ++++++
 mm/zswap.c | 104 ++++++++++++++++++++++++++++++++++-------------------
 2 files changed, 78 insertions(+), 37 deletions(-)

diff --git a/mm/swap.h b/mm/swap.h index ad2f121de970..566616c971d4 100644 --- a/mm/swap.h +++ b/mm/swap.h @@ -8,6 +8,17 @@ struct mempolicy; #include <linux/swapops.h> /* for swp_offset */ #include <linux/blk_types.h> /* for bio_end_io_t */ +/* + * For IAA compression batching: + * Maximum number of IAA acomp compress requests that will be processed + * in a sub-batch. + */ +#if defined(CONFIG_ZSWAP_STORE_BATCHING_ENABLED) +#define SWAP_CRYPTO_SUB_BATCH_SIZE 8UL +#else +#define SWAP_CRYPTO_SUB_BATCH_SIZE 1UL +#endif + /* linux/mm/page_io.c */ int sio_pool_init(void); struct swap_iocb; diff --git a/mm/zswap.c b/mm/zswap.c index 4893302d8c34..579869d1bdf6 100644 --- a/mm/zswap.c +++ b/mm/zswap.c @@ -152,9 +152,9 @@ bool zswap_never_enabled(void) struct crypto_acomp_ctx { struct crypto_acomp *acomp; - struct acomp_req *req; + struct acomp_req *req[SWAP_CRYPTO_SUB_BATCH_SIZE]; + u8 *buffer[SWAP_CRYPTO_SUB_BATCH_SIZE]; struct crypto_wait wait; - u8 *buffer; struct mutex mutex; bool is_sleepable; }; @@ -832,49 +832,64 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node) struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node); struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu); struct crypto_acomp *acomp; - struct acomp_req *req; int ret; + int i, j; mutex_init(&acomp_ctx->mutex); - acomp_ctx->buffer = kmalloc_node(PAGE_SIZE * 2, GFP_KERNEL, cpu_to_node(cpu)); - if (!acomp_ctx->buffer) - return -ENOMEM; - acomp = crypto_alloc_acomp_node(pool->tfm_name, 0, 0, cpu_to_node(cpu)); if (IS_ERR(acomp)) { pr_err("could not alloc crypto acomp %s : %ld\n", pool->tfm_name, PTR_ERR(acomp)); - ret = PTR_ERR(acomp); - goto acomp_fail; + return PTR_ERR(acomp); } acomp_ctx->acomp = acomp; acomp_ctx->is_sleepable = acomp_is_async(acomp); - req = acomp_request_alloc(acomp_ctx->acomp); - if (!req) { - pr_err("could not alloc crypto acomp_request %s\n", - pool->tfm_name); - ret = -ENOMEM; - goto req_fail; + for (i = 0; i < SWAP_CRYPTO_SUB_BATCH_SIZE; ++i) { + acomp_ctx->buffer[i] = kmalloc_node(PAGE_SIZE * 2, + GFP_KERNEL, cpu_to_node(cpu)); + if (!acomp_ctx->buffer[i]) { + for (j = 0; j < i; ++j) + kfree(acomp_ctx->buffer[j]); + ret = -ENOMEM; + goto buf_fail; + } + } + + for (i = 0; i < SWAP_CRYPTO_SUB_BATCH_SIZE; ++i) { + acomp_ctx->req[i] = acomp_request_alloc(acomp_ctx->acomp); + if (!acomp_ctx->req[i]) { + pr_err("could not alloc crypto acomp_request req[%d] %s\n", + i, pool->tfm_name); + for (j = 0; j < i; ++j) + acomp_request_free(acomp_ctx->req[j]); + ret = -ENOMEM; + goto req_fail; + } } - acomp_ctx->req = req; + /* + * The crypto_wait is used only in fully
synchronous, i.e., with scomp + * or non-poll mode of acomp, hence there is only one "wait" per + * acomp_ctx, with callback set to req[0]. + */ crypto_init_wait(&acomp_ctx->wait); /* * if the backend of acomp is async zip, crypto_req_done() will wakeup * crypto_wait_req(); if the backend of acomp is scomp, the callback * won't be called, crypto_wait_req() will return without blocking. */ - acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG, + acomp_request_set_callback(acomp_ctx->req[0], CRYPTO_TFM_REQ_MAY_BACKLOG, crypto_req_done, &acomp_ctx->wait); return 0; req_fail: + for (i = 0; i < SWAP_CRYPTO_SUB_BATCH_SIZE; ++i) + kfree(acomp_ctx->buffer[i]); +buf_fail: crypto_free_acomp(acomp_ctx->acomp); -acomp_fail: - kfree(acomp_ctx->buffer); return ret; } @@ -884,11 +899,17 @@ static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node) struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu); if (!IS_ERR_OR_NULL(acomp_ctx)) { - if (!IS_ERR_OR_NULL(acomp_ctx->req)) - acomp_request_free(acomp_ctx->req); + int i; + + for (i = 0; i < SWAP_CRYPTO_SUB_BATCH_SIZE; ++i) + if (!IS_ERR_OR_NULL(acomp_ctx->req[i])) + acomp_request_free(acomp_ctx->req[i]); + + for (i = 0; i < SWAP_CRYPTO_SUB_BATCH_SIZE; ++i) + kfree(acomp_ctx->buffer[i]); + if (!IS_ERR_OR_NULL(acomp_ctx->acomp)) crypto_free_acomp(acomp_ctx->acomp); - kfree(acomp_ctx->buffer); } return 0; @@ -911,7 +932,7 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry, mutex_lock(&acomp_ctx->mutex); - dst = acomp_ctx->buffer; + dst = acomp_ctx->buffer[0]; sg_init_table(&input, 1); sg_set_page(&input, page, PAGE_SIZE, 0); @@ -921,7 +942,7 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry, * giving the dst buffer with enough length to avoid buffer overflow. */ sg_init_one(&output, dst, PAGE_SIZE * 2); - acomp_request_set_params(acomp_ctx->req, &input, &output, PAGE_SIZE, dlen); + acomp_request_set_params(acomp_ctx->req[0], &input, &output, PAGE_SIZE, dlen); /* * If the crypto_acomp provides an asynchronous poll() interface, @@ -940,19 +961,20 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry, * parallel. 
*/ if (acomp_ctx->acomp->poll) { - comp_ret = crypto_acomp_compress(acomp_ctx->req); + comp_ret = crypto_acomp_compress(acomp_ctx->req[0]); if (comp_ret == -EINPROGRESS) { do { - comp_ret = crypto_acomp_poll(acomp_ctx->req); + comp_ret = crypto_acomp_poll(acomp_ctx->req[0]); if (comp_ret && comp_ret != -EAGAIN) break; } while (comp_ret); } } else { - comp_ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->req), &acomp_ctx->wait); + comp_ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->req[0]), + &acomp_ctx->wait); } - dlen = acomp_ctx->req->dlen; + dlen = acomp_ctx->req[0]->dlen; if (comp_ret) goto unlock; @@ -1006,31 +1028,39 @@ static void zswap_decompress(struct zswap_entry *entry, struct folio *folio) */ if ((acomp_ctx->is_sleepable && !zpool_can_sleep_mapped(zpool)) || !virt_addr_valid(src)) { - memcpy(acomp_ctx->buffer, src, entry->length); - src = acomp_ctx->buffer; + memcpy(acomp_ctx->buffer[0], src, entry->length); + src = acomp_ctx->buffer[0]; zpool_unmap_handle(zpool, entry->handle); } sg_init_one(&input, src, entry->length); sg_init_table(&output, 1); sg_set_folio(&output, folio, PAGE_SIZE, 0); - acomp_request_set_params(acomp_ctx->req, &input, &output, entry->length, PAGE_SIZE); + acomp_request_set_params(acomp_ctx->req[0], &input, &output, + entry->length, PAGE_SIZE); if (acomp_ctx->acomp->poll) { - ret = crypto_acomp_decompress(acomp_ctx->req); + ret = crypto_acomp_decompress(acomp_ctx->req[0]); if (ret == -EINPROGRESS) { do { - ret = crypto_acomp_poll(acomp_ctx->req); + ret = crypto_acomp_poll(acomp_ctx->req[0]); BUG_ON(ret && ret != -EAGAIN); } while (ret); } } else { - BUG_ON(crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait)); + BUG_ON(crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req[0]), + &acomp_ctx->wait)); } - BUG_ON(acomp_ctx->req->dlen != PAGE_SIZE); - mutex_unlock(&acomp_ctx->mutex); + BUG_ON(acomp_ctx->req[0]->dlen != PAGE_SIZE); - if (src != acomp_ctx->buffer) + if (src != acomp_ctx->buffer[0]) zpool_unmap_handle(zpool, entry->handle); + + /* + * It is safer to unlock the mutex after the check for + * "src != acomp_ctx->buffer[0]" so that the value of "src" + * does not change. 
+ */ + mutex_unlock(&acomp_ctx->mutex); } /*********************************
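The net effect on both zswap_compress() and zswap_decompress() is the same submit-then-poll shape. A condensed sketch (error handling trimmed; req stands for acomp_ctx->req[0]):

	ret = crypto_acomp_compress(req);   /* or crypto_acomp_decompress() */
	if (ret == -EINPROGRESS) {
		/* Busy-poll the accelerator; -EAGAIN means not done yet. */
		do {
			ret = crypto_acomp_poll(req);
		} while (ret == -EAGAIN);
	}
	/* ret == 0: success; req->dlen holds the output length. */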
From patchwork Fri Oct 18 06:40:59 2024
X-Patchwork-Submitter: Kanchana P Sridhar
X-Patchwork-Id: 837221
From: Kanchana P Sridhar
Subject: [RFC PATCH v1 11/13] mm: swap: Add IAA batch compression API swap_crypto_acomp_compress_batch().
Date: Thu, 17 Oct 2024 23:40:59 -0700
Message-Id: <20241018064101.336232-12-kanchana.p.sridhar@intel.com>
In-Reply-To: <20241018064101.336232-1-kanchana.p.sridhar@intel.com>
References: <20241018064101.336232-1-kanchana.p.sridhar@intel.com>

Add a new API, swap_crypto_acomp_compress_batch(), that does batch compression. A system that has Intel IAA can use this API to submit a batch of compress jobs for parallel compression in the hardware, to improve performance. On a system without IAA, this API will process each compress job sequentially.

The purpose of this API is to be invocable from any swap module that needs to compress large folios, or a batch of pages in the general case. For instance, zswap would batch-compress up to SWAP_CRYPTO_SUB_BATCH_SIZE (i.e., 8 if the system has IAA) pages in the large folio in parallel to improve zswap_store() performance.

Towards this eventual goal:

1) The definition of "struct crypto_acomp_ctx" is moved to mm/swap.h so that mm modules like swap_state.c and zswap.c can reference it.
2) The swap_crypto_acomp_compress_batch() interface is implemented in swap_state.c.

It would be preferable for "struct crypto_acomp_ctx" to be defined in, and for swap_crypto_acomp_compress_batch() to be exported via, include/linux/swap.h so that modules outside mm (e.g., zram) can potentially use the API for batch compressions with IAA. I would appreciate RFC comments on this.

Signed-off-by: Kanchana P Sridhar
---
 mm/swap.h | 45 +++++++++++++++++++
 mm/swap_state.c | 115 ++++++++++++++++++++++++++++++++++++++++++++++++
 mm/zswap.c | 9 ----
 3 files changed, 160 insertions(+), 9 deletions(-)

diff --git a/mm/swap.h b/mm/swap.h index 566616c971d4..4dcb67e2cc33 100644 --- a/mm/swap.h +++ b/mm/swap.h @@ -7,6 +7,7 @@ struct mempolicy; #ifdef CONFIG_SWAP #include <linux/swapops.h> /* for swp_offset */ #include <linux/blk_types.h> /* for bio_end_io_t */ +#include /* * For IAA compression batching: @@ -19,6 +20,39 @@ struct mempolicy; #define SWAP_CRYPTO_SUB_BATCH_SIZE 1UL #endif +/* linux/mm/swap_state.c, zswap.c */ +struct crypto_acomp_ctx { + struct crypto_acomp *acomp; + struct acomp_req *req[SWAP_CRYPTO_SUB_BATCH_SIZE]; + u8 *buffer[SWAP_CRYPTO_SUB_BATCH_SIZE]; + struct crypto_wait wait; + struct mutex mutex; + bool is_sleepable; +}; + +/** + * This API provides IAA compress batching functionality for use by swap + * modules. + * The acomp_ctx mutex should be locked/unlocked before/after calling this + * procedure. + * + * @pages: Pages to be compressed. + * @dsts: Pre-allocated destination buffers to store results of IAA compression. + * @dlens: Will contain the compressed lengths. + * @errors: Will contain a 0 if the page was successfully compressed, or a + * non-0 error value to be processed by the calling function. + * @nr_pages: The number of pages, up to SWAP_CRYPTO_SUB_BATCH_SIZE, + * to be compressed.
+ * @acomp_ctx: The acomp context for iaa_crypto/other compressor. + */ +void swap_crypto_acomp_compress_batch( + struct page *pages[], + u8 *dsts[], + unsigned int dlens[], + int errors[], + int nr_pages, + struct crypto_acomp_ctx *acomp_ctx); + /* linux/mm/page_io.c */ int sio_pool_init(void); struct swap_iocb; @@ -119,6 +153,17 @@ static inline int swap_zeromap_batch(swp_entry_t entry, int max_nr, #else /* CONFIG_SWAP */ struct swap_iocb; +struct crypto_acomp_ctx {}; +static inline void swap_crypto_acomp_compress_batch( + struct page *pages[], + u8 *dsts[], + unsigned int dlens[], + int errors[], + int nr_pages, + struct crypto_acomp_ctx *acomp_ctx) +{ +} + static inline void swap_read_folio(struct folio *folio, struct swap_iocb **plug) { } diff --git a/mm/swap_state.c b/mm/swap_state.c index 4669f29cf555..117c3caa5679 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -23,6 +23,8 @@ #include #include #include +#include +#include #include "internal.h" #include "swap.h" @@ -742,6 +744,119 @@ void exit_swap_address_space(unsigned int type) swapper_spaces[type] = NULL; } +#ifdef CONFIG_SWAP + +/** + * This API provides IAA compress batching functionality for use by swap + * modules. + * The acomp_ctx mutex should be locked/unlocked before/after calling this + * procedure. + * + * @pages: Pages to be compressed. + * @dsts: Pre-allocated destination buffers to store results of IAA compression. + * @dlens: Will contain the compressed lengths. + * @errors: Will contain a 0 if the page was successfully compressed, or a + * non-0 error value to be processed by the calling function. + * @nr_pages: The number of pages, up to SWAP_CRYPTO_SUB_BATCH_SIZE, + * to be compressed. + * @acomp_ctx: The acomp context for iaa_crypto/other compressor. + */ +void swap_crypto_acomp_compress_batch( + struct page *pages[], + u8 *dsts[], + unsigned int dlens[], + int errors[], + int nr_pages, + struct crypto_acomp_ctx *acomp_ctx) +{ + struct scatterlist inputs[SWAP_CRYPTO_SUB_BATCH_SIZE]; + struct scatterlist outputs[SWAP_CRYPTO_SUB_BATCH_SIZE]; + bool compressions_done = false; + int i, j; + + BUG_ON(nr_pages > SWAP_CRYPTO_SUB_BATCH_SIZE); + + /* + * Prepare and submit acomp_reqs to IAA. + * IAA will process these compress jobs in parallel in async mode. + * If the compressor does not support a poll() method, or if IAA is + * used in sync mode, the jobs will be processed sequentially using + * acomp_ctx->req[0] and acomp_ctx->wait. + */ + for (i = 0; i < nr_pages; ++i) { + j = acomp_ctx->acomp->poll ? i : 0; + sg_init_table(&inputs[i], 1); + sg_set_page(&inputs[i], pages[i], PAGE_SIZE, 0); + + /* + * Each acomp_ctx->buffer[] is of size (PAGE_SIZE * 2). + * Reflect same in sg_list. + */ + sg_init_one(&outputs[i], dsts[i], PAGE_SIZE * 2); + acomp_request_set_params(acomp_ctx->req[j], &inputs[i], + &outputs[i], PAGE_SIZE, dlens[i]); + + /* + * If the crypto_acomp provides an asynchronous poll() + * interface, submit the request to the driver now, and poll for + * a completion status later, after all descriptors have been + * submitted. If the crypto_acomp does not provide a poll() + * interface, submit the request and wait for it to complete, + * i.e., synchronously, before moving on to the next request. 
+ */ + if (acomp_ctx->acomp->poll) { + errors[i] = crypto_acomp_compress(acomp_ctx->req[j]); + + if (errors[i] != -EINPROGRESS) + errors[i] = -EINVAL; + else + errors[i] = -EAGAIN; + } else { + errors[i] = crypto_wait_req( + crypto_acomp_compress(acomp_ctx->req[j]), + &acomp_ctx->wait); + if (!errors[i]) + dlens[i] = acomp_ctx->req[j]->dlen; + } + } + + /* + * If not doing async compressions, the batch has been processed at + * this point and we can return. + */ + if (!acomp_ctx->acomp->poll) + return; + + /* + * Poll for and process IAA compress job completions + * in out-of-order manner. + */ + while (!compressions_done) { + compressions_done = true; + + for (i = 0; i < nr_pages; ++i) { + /* + * Skip, if the compression has already completed + * successfully or with an error. + */ + if (errors[i] != -EAGAIN) + continue; + + errors[i] = crypto_acomp_poll(acomp_ctx->req[i]); + + if (errors[i]) { + if (errors[i] == -EAGAIN) + compressions_done = false; + } else { + dlens[i] = acomp_ctx->req[i]->dlen; + } + } + } +} +EXPORT_SYMBOL_GPL(swap_crypto_acomp_compress_batch); + +#endif /* CONFIG_SWAP */ + static int swap_vma_ra_win(struct vm_fault *vmf, unsigned long *start, unsigned long *end) { diff --git a/mm/zswap.c b/mm/zswap.c index 579869d1bdf6..cab3114321f9 100644 --- a/mm/zswap.c +++ b/mm/zswap.c @@ -150,15 +150,6 @@ bool zswap_never_enabled(void) * data structures **********************************/ -struct crypto_acomp_ctx { - struct crypto_acomp *acomp; - struct acomp_req *req[SWAP_CRYPTO_SUB_BATCH_SIZE]; - u8 *buffer[SWAP_CRYPTO_SUB_BATCH_SIZE]; - struct crypto_wait wait; - struct mutex mutex; - bool is_sleepable; -}; - /* * The lock ordering is zswap_tree.lock -> zswap_pool.lru_lock. * The only case where lru_lock is not acquired while holding tree.lock is
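A hedged usage sketch for the API above; the caller, its locals, and the error handling are hypothetical, and pages[] is assumed to be filled in by the caller with an acomp_ctx set up as in the previous patch (destination buffers of 2 * PAGE_SIZE each):

/* Hypothetical caller sketch; not part of this series. */
static void example_batch_compress(struct crypto_acomp_ctx *acomp_ctx,
				   struct page *pages[], int nr)
{
	u8 *dsts[SWAP_CRYPTO_SUB_BATCH_SIZE];
	unsigned int dlens[SWAP_CRYPTO_SUB_BATCH_SIZE];
	int errors[SWAP_CRYPTO_SUB_BATCH_SIZE];
	int i;

	for (i = 0; i < nr; i++) {
		dsts[i] = acomp_ctx->buffer[i];	/* each 2 * PAGE_SIZE */
		dlens[i] = PAGE_SIZE;
	}

	mutex_lock(&acomp_ctx->mutex);
	swap_crypto_acomp_compress_batch(pages, dsts, dlens, errors,
					 nr, acomp_ctx);
	mutex_unlock(&acomp_ctx->mutex);

	for (i = 0; i < nr; i++) {
		if (errors[i])
			continue;	/* caller decides how to handle */
		/* dsts[i][0 .. dlens[i]) now holds the compressed page. */
	}
}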
From patchwork Fri Oct 18 06:41:00 2024
X-Patchwork-Submitter: Kanchana P Sridhar
X-Patchwork-Id: 836847
From: Kanchana P Sridhar
Subject: [RFC PATCH v1 12/13] mm: zswap: Compress batching with Intel IAA in zswap_store() of large folios.
Date: Thu, 17 Oct 2024 23:41:00 -0700
Message-Id: <20241018064101.336232-13-kanchana.p.sridhar@intel.com>
In-Reply-To: <20241018064101.336232-1-kanchana.p.sridhar@intel.com>
References: <20241018064101.336232-1-kanchana.p.sridhar@intel.com>

If the system has Intel IAA, and if CONFIG_ZSWAP_STORE_BATCHING_ENABLED is set to "y", zswap_store() will call swap_crypto_acomp_compress_batch() to batch-compress up to SWAP_CRYPTO_SUB_BATCH_SIZE pages in large folios in parallel using the multiple compress engines available in IAA hardware.

On platforms with multiple IAA devices per socket, compress jobs from all cores in a socket will be distributed among all IAA devices on the socket by the iaa_crypto driver.

If zswap_store() is called with a large folio, and if zswap_store_batching_enabled() returns "true", it will call the main __zswap_store_batch_core() interface for compress batching. The interface represents the extensible compress batching architecture that can potentially be called with a batch of any-order folios from shrink_folio_list().
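Concretely, the hook this patch adds to zswap_store() (visible in the diff further below) reduces to:

	if (folio_test_large(folio) && zswap_store_batching_enabled()) {
		int error = -1;

		/* A "batch" of exactly one large folio, for now. */
		__zswap_store_batch_core(folio_nid(folio), &folio, &error, 1);
		return !error;
	}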
In other words, although zswap_store() calls __zswap_store_batch_core() with exactly one large folio in this patch, we will reuse this API to reclaim a batch of folios in subsequent patches.

The newly added functions that implement batched stores follow the general structure of zswap_store() for a large folio. Some amount of restructuring and optimization is done to minimize failure points for a batch, fail early, and maximize the zswap store pipeline occupancy with SWAP_CRYPTO_SUB_BATCH_SIZE pages, potentially from multiple folios. This is intended to maximize reclaim throughput with the IAA hardware parallel compressions.

Signed-off-by: Kanchana P Sridhar
---
 include/linux/zswap.h | 84 ++++++
 mm/zswap.c | 591 +++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 671 insertions(+), 4 deletions(-)

diff --git a/include/linux/zswap.h b/include/linux/zswap.h index 74ad2a24b309..9bbe330686f6 100644 --- a/include/linux/zswap.h +++ b/include/linux/zswap.h @@ -24,6 +24,88 @@ struct zswap_lruvec_state { atomic_long_t nr_disk_swapins; }; +/* + * struct zswap_store_sub_batch_page: + * + * This represents one "zswap batching element", namely, the + * attributes associated with a page in a large folio that will + * be compressed and stored in zswap. The term "batch" is reserved + * for a conceptual "batch" of folios that can be sent to + * zswap_store() by reclaim. The term "sub-batch" is used to describe + * a collection of "zswap batching elements", i.e., an array of + * "struct zswap_store_sub_batch_page *". + * + * The zswap compress sub-batch size is specified by + * SWAP_CRYPTO_SUB_BATCH_SIZE, currently set as 8UL if the + * platform has Intel IAA. This means zswap can store a large folio + * by creating sub-batches of up to 8 pages and compressing this + * batch using IAA to parallelize the 8 compress jobs in hardware. + * For example, a 64KB folio can be compressed as 2 sub-batches of + * 8 pages each. This can significantly improve the zswap_store() + * performance for large folios. + * + * Although the page itself is represented directly, the structure + * adds a "u8 batch_idx" to represent an index for the folio in a + * conceptual "batch of folios" that can be passed to zswap_store(). + * Conceptually, this allows for up to 256 folios that can be passed + * to zswap_store(). If this conceptual number of folios sent to + * zswap_store() exceeds 256, the "batch_idx" needs to become u16. + */ +struct zswap_store_sub_batch_page { + u8 batch_idx; + swp_entry_t swpentry; + struct obj_cgroup *objcg; + struct zswap_entry *entry; + int error; /* folio error status. */ +}; + +/* + * struct zswap_store_pipeline_state: + * + * This stores state during IAA compress batching of (conceptually, a batch of) + * folios. The term pipelining, in this context, refers to breaking down + * the batch of folios being reclaimed into sub-batches of + * SWAP_CRYPTO_SUB_BATCH_SIZE pages, batch compressing and storing the + * sub-batch. This concept could be further evolved to use overlap of CPU + * computes with IAA computes. For instance, we could stage the post-compress + * computes for sub-batch "N-1" to happen in parallel with IAA batch + * compression of sub-batch "N". + * + * We begin by developing the concept of compress batching. Pipelining with + * overlap can be future work. + * + * @errors: The errors status for the batch of reclaim folios passed in from + * a higher mm layer such as swap_writepage(). + * @pool: A valid zswap_pool. + * @acomp_ctx: The per-cpu pointer to the crypto_acomp_ctx for the @pool.
+ * @sub_batch: This is an array that represents the sub-batch of up to + * SWAP_CRYPTO_SUB_BATCH_SIZE pages that are being stored + * in zswap. + * @comp_dsts: The destination buffers for crypto_acomp_compress() for each + * page being compressed. + * @comp_dlens: The destination buffers' lengths from crypto_acomp_compress() + * obtained after crypto_acomp_poll() returns completion status, + * for each page being compressed. + * @comp_errors: Compression errors for each page being compressed. + * @nr_comp_pages: Total number of pages in @sub_batch. + * + * Note: + * The max sub-batch size is SWAP_CRYPTO_SUB_BATCH_SIZE, currently 8UL. + * Hence, if SWAP_CRYPTO_SUB_BATCH_SIZE exceeds 256, some of the + * u8 members (except @comp_dsts) need to become u16. + */ +struct zswap_store_pipeline_state { + int *errors; + struct zswap_pool *pool; + struct crypto_acomp_ctx *acomp_ctx; + struct zswap_store_sub_batch_page *sub_batch; + struct page **comp_pages; + u8 **comp_dsts; + unsigned int *comp_dlens; + int *comp_errors; + u8 nr_comp_pages; +}; + bool zswap_store_batching_enabled(void); unsigned long zswap_total_pages(void); bool zswap_store(struct folio *folio); @@ -39,6 +121,8 @@ bool zswap_never_enabled(void); #else struct zswap_lruvec_state {}; +struct zswap_store_sub_batch_page {}; +struct zswap_store_pipeline_state {}; static inline bool zswap_store_batching_enabled(void) { diff --git a/mm/zswap.c b/mm/zswap.c index cab3114321f9..1c12a7b9f4ff 100644 --- a/mm/zswap.c +++ b/mm/zswap.c @@ -130,7 +130,7 @@ module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644); /* * Enable/disable batching of compressions if zswap_store is called with a * large folio. If enabled, and if IAA is the zswap compressor, pages are - * compressed in parallel in batches of say, 8 pages. + * compressed in parallel in batches of SWAP_CRYPTO_SUB_BATCH_SIZE pages. * If not, every page is compressed sequentially. */ static bool __zswap_store_batching_enabled = IS_ENABLED( @@ -246,6 +246,12 @@ __always_inline bool zswap_store_batching_enabled(void) return __zswap_store_batching_enabled; } +static void __zswap_store_batch_core( + int node_id, + struct folio **folios, + int *errors, + unsigned int nr_folios); + /********************************* * pool functions **********************************/ @@ -906,6 +912,9 @@ static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node) return 0; } +/* + * The acomp_ctx->mutex must be locked/unlocked in the calling procedure. + */ static bool zswap_compress(struct page *page, struct zswap_entry *entry, struct zswap_pool *pool) { @@ -921,8 +930,6 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry, acomp_ctx = raw_cpu_ptr(pool->acomp_ctx); - mutex_lock(&acomp_ctx->mutex); - dst = acomp_ctx->buffer[0]; sg_init_table(&input, 1); sg_set_page(&input, page, PAGE_SIZE, 0); @@ -992,7 +999,6 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry, else if (alloc_ret) zswap_reject_alloc_fail++; - mutex_unlock(&acomp_ctx->mutex); return comp_ret == 0 && alloc_ret == 0; } @@ -1545,10 +1551,17 @@ static ssize_t zswap_store_page(struct page *page, return -EINVAL; } +/* + * Modified to use the IAA compress batching framework implemented in + * __zswap_store_batch_core() if zswap_store_batching_enabled() is true. + * The batching code is intended to significantly improve folio store + * performance over the sequential code. 
+/*
+ * Modified to use the IAA compress batching framework implemented in
+ * __zswap_store_batch_core() if zswap_store_batching_enabled() is true.
+ * The batching code is intended to significantly improve folio store
+ * performance over the sequential code.
+ */
 bool zswap_store(struct folio *folio)
 {
 	long nr_pages = folio_nr_pages(folio);
 	swp_entry_t swp = folio->swap;
+	struct crypto_acomp_ctx *acomp_ctx;
 	struct obj_cgroup *objcg = NULL;
 	struct mem_cgroup *memcg = NULL;
 	struct zswap_pool *pool;
@@ -1556,6 +1569,17 @@ bool zswap_store(struct folio *folio)
 	bool ret = false;
 	long index;
 
+	/*
+	 * Improve large folio zswap_store() latency with IAA compress batching.
+	 */
+	if (folio_test_large(folio) && zswap_store_batching_enabled()) {
+		int error = -1;
+		__zswap_store_batch_core(folio_nid(folio), &folio, &error, 1);
+		if (!error)
+			ret = true;
+		return ret;
+	}
+
 	VM_WARN_ON_ONCE(!folio_test_locked(folio));
 	VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
 
@@ -1588,6 +1612,9 @@ bool zswap_store(struct folio *folio)
 		mem_cgroup_put(memcg);
 	}
 
+	acomp_ctx = raw_cpu_ptr(pool->acomp_ctx);
+	mutex_lock(&acomp_ctx->mutex);
+
 	for (index = 0; index < nr_pages; ++index) {
 		struct page *page = folio_page(folio, index);
 		ssize_t bytes;
@@ -1609,6 +1636,7 @@ bool zswap_store(struct folio *folio)
 	ret = true;
 
 put_pool:
+	mutex_unlock(&acomp_ctx->mutex);
 	zswap_pool_put(pool);
 put_objcg:
 	obj_cgroup_put(objcg);
@@ -1638,6 +1666,561 @@ bool zswap_store(struct folio *folio)
 	return ret;
 }
 
+/*
+ * Note: If SWAP_CRYPTO_SUB_BATCH_SIZE exceeds 256, change the
+ * u8 stack variables in the next several functions to u16.
+ */
+
+/*
+ * Propagate the "sbp" error condition to other batch elements belonging to
+ * the same folio as "sbp".
+ */
+static __always_inline void zswap_store_propagate_errors(
+	struct zswap_store_pipeline_state *zst,
+	u8 error_batch_idx)
+{
+	u8 i;
+
+	if (zst->errors[error_batch_idx])
+		return;
+
+	for (i = 0; i < zst->nr_comp_pages; ++i) {
+		struct zswap_store_sub_batch_page *sbp = &zst->sub_batch[i];
+
+		if (sbp->batch_idx == error_batch_idx) {
+			if (!sbp->error) {
+				if (sbp->entry) {
+					if (!IS_ERR_VALUE(sbp->entry->handle))
+						zpool_free(zst->pool->zpool,
+							   sbp->entry->handle);
+
+					zswap_entry_cache_free(sbp->entry);
+					sbp->entry = NULL;
+				}
+				sbp->error = -EINVAL;
+			}
+		}
+	}
+
+	/*
+	 * Set the zswap status for the folio to "error"
+	 * for use in swap_writepage().
+	 */
+	zst->errors[error_batch_idx] = -EINVAL;
+}
+
+static __always_inline void zswap_process_comp_errors(
+	struct zswap_store_pipeline_state *zst)
+{
+	u8 i;
+
+	for (i = 0; i < zst->nr_comp_pages; ++i) {
+		struct zswap_store_sub_batch_page *sbp = &zst->sub_batch[i];
+
+		if (zst->comp_errors[i]) {
+			if (zst->comp_errors[i] == -ENOSPC)
+				zswap_reject_compress_poor++;
+			else
+				zswap_reject_compress_fail++;
+
+			if (!sbp->error)
+				zswap_store_propagate_errors(zst,
+							     sbp->batch_idx);
+		}
+	}
+}
+
+static void zswap_compress_batch(struct zswap_store_pipeline_state *zst)
+{
+	/*
+	 * Compress up to SWAP_CRYPTO_SUB_BATCH_SIZE pages.
+	 * If IAA is the zswap compressor, this compresses the
+	 * pages in parallel, leading to significant performance
+	 * improvements as compared to software compressors.
+	 */
+	swap_crypto_acomp_compress_batch(
+		zst->comp_pages,
+		zst->comp_dsts,
+		zst->comp_dlens,
+		zst->comp_errors,
+		zst->nr_comp_pages,
+		zst->acomp_ctx);
+
+	/*
+	 * Scan the sub-batch for any compression errors, and
+	 * invalidate pages with errors, along with other pages
+	 * belonging to the same folio as the error pages.
+	 */
+	zswap_process_comp_errors(zst);
+}
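swap_crypto_acomp_compress_batch() is introduced earlier in this series
and is not shown in this patch. Under the series' poll() design (an
async compress returns -EINPROGRESS, and crypto_acomp_poll() returns
-EAGAIN until completion), a submit-then-poll batch loop can be
sketched roughly as follows; this is a simplified illustration with a
hypothetical helper name, not the actual implementation:

/*
 * Illustrative only: submit all requests, then poll each until it
 * reports completion (0) or a real error. Assumes each reqs[i] has
 * been set up with src/dst scatterlists beforehand.
 */
static void batch_compress_poll_sketch(struct acomp_req *reqs[],
				       int errs[], int nr)
{
	bool pending;
	int i;

	for (i = 0; i < nr; ++i)
		errs[i] = crypto_acomp_compress(reqs[i]);

	do {
		pending = false;
		for (i = 0; i < nr; ++i) {
			if (errs[i] == -EINPROGRESS || errs[i] == -EAGAIN) {
				errs[i] = crypto_acomp_poll(reqs[i]);
				if (errs[i] == -EAGAIN)
					pending = true;
			}
		}
	} while (pending);
}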
+static void zswap_zpool_store_sub_batch(
+	struct zswap_store_pipeline_state *zst)
+{
+	u8 i;
+
+	for (i = 0; i < zst->nr_comp_pages; ++i) {
+		struct zswap_store_sub_batch_page *sbp = &zst->sub_batch[i];
+		struct zpool *zpool;
+		unsigned long handle;
+		char *buf;
+		gfp_t gfp;
+		int err;
+
+		/* Skip pages that had compress errors. */
+		if (sbp->error)
+			continue;
+
+		zpool = zst->pool->zpool;
+		gfp = __GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM;
+		if (zpool_malloc_support_movable(zpool))
+			gfp |= __GFP_HIGHMEM | __GFP_MOVABLE;
+		err = zpool_malloc(zpool, zst->comp_dlens[i], gfp, &handle);
+
+		if (err) {
+			if (err == -ENOSPC)
+				zswap_reject_compress_poor++;
+			else
+				zswap_reject_alloc_fail++;
+
+			/*
+			 * An error should be propagated to other pages of the
+			 * same folio in the sub-batch, and zpool resources for
+			 * those pages (in sub-batch order prior to this zpool
+			 * error) should be de-allocated.
+			 */
+			zswap_store_propagate_errors(zst, sbp->batch_idx);
+			continue;
+		}
+
+		buf = zpool_map_handle(zpool, handle, ZPOOL_MM_WO);
+		memcpy(buf, zst->comp_dsts[i], zst->comp_dlens[i]);
+		zpool_unmap_handle(zpool, handle);
+
+		sbp->entry->handle = handle;
+		sbp->entry->length = zst->comp_dlens[i];
+	}
+}
+
+/*
+ * Returns true if the entry was successfully
+ * stored in the xarray, and false otherwise.
+ */
+static bool zswap_store_entry(swp_entry_t page_swpentry,
+			      struct zswap_entry *entry)
+{
+	struct zswap_entry *old = xa_store(swap_zswap_tree(page_swpentry),
+					   swp_offset(page_swpentry),
+					   entry, GFP_KERNEL);
+
+	if (xa_is_err(old)) {
+		int err = xa_err(old);
+
+		WARN_ONCE(err != -ENOMEM, "unexpected xarray error: %d\n", err);
+		zswap_reject_alloc_fail++;
+		return false;
+	}
+
+	/*
+	 * We may have had an existing entry that became stale when
+	 * the folio was redirtied and now the new version is being
+	 * swapped out. Get rid of the old.
+	 */
+	if (old)
+		zswap_entry_free(old);
+
+	return true;
+}
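zswap_store_entry() above relies on xa_store() returning whatever was
previously stored at that offset: freeing that old entry is what drops
the stale copy left behind when a redirtied folio is swapped out again.
The idiom, reduced to a standalone toy model (an array stands in for
the xarray; names are illustrative):

#include <stdlib.h>

struct toy_entry { int len; };

/* Publish 'new' at slot 'off'; release any stale previous entry. */
static void toy_store(struct toy_entry **tree, long off,
		      struct toy_entry *new)
{
	struct toy_entry *old = tree[off];	/* what xa_store() returns */

	tree[off] = new;
	if (old)
		free(old);	/* models zswap_entry_free(old) */
}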
+static void zswap_batch_compress_post_proc(
+	struct zswap_store_pipeline_state *zst)
+{
+	int nr_objcg_pages = 0, nr_pages = 0;
+	struct obj_cgroup *objcg = NULL;
+	size_t compressed_bytes = 0;
+	u8 i;
+
+	zswap_zpool_store_sub_batch(zst);
+
+	for (i = 0; i < zst->nr_comp_pages; ++i) {
+		struct zswap_store_sub_batch_page *sbp = &zst->sub_batch[i];
+
+		if (sbp->error)
+			continue;
+
+		if (!zswap_store_entry(sbp->swpentry, sbp->entry)) {
+			zswap_store_propagate_errors(zst, sbp->batch_idx);
+			continue;
+		}
+
+		/*
+		 * The entry is successfully compressed and stored in the tree,
+		 * there is no further possibility of failure. Grab refs to the
+		 * pool and objcg. These refs will be dropped by
+		 * zswap_entry_free() when the entry is removed from the tree.
+		 */
+		zswap_pool_get(zst->pool);
+		if (sbp->objcg)
+			obj_cgroup_get(sbp->objcg);
+
+		/*
+		 * We finish initializing the entry while it's already in the
+		 * xarray. This is safe because:
+		 *
+		 * 1. Concurrent stores and invalidations are excluded by the
+		 *    folio lock.
+		 *
+		 * 2. Writeback is excluded by the entry not being on the LRU
+		 *    yet. The publishing order matters to prevent writeback
+		 *    from seeing an incoherent entry.
+		 */
+		sbp->entry->pool = zst->pool;
+		sbp->entry->swpentry = sbp->swpentry;
+		sbp->entry->objcg = sbp->objcg;
+		sbp->entry->referenced = true;
+		if (sbp->entry->length) {
+			INIT_LIST_HEAD(&sbp->entry->lru);
+			zswap_lru_add(&zswap_list_lru, sbp->entry);
+		}
+
+		if (!objcg && sbp->objcg) {
+			objcg = sbp->objcg;
+		} else if (objcg && sbp->objcg && (objcg != sbp->objcg)) {
+			obj_cgroup_charge_zswap(objcg, compressed_bytes);
+			count_objcg_events(objcg, ZSWPOUT, nr_objcg_pages);
+			compressed_bytes = 0;
+			nr_objcg_pages = 0;
+			objcg = sbp->objcg;
+		}
+
+		if (sbp->objcg) {
+			compressed_bytes += sbp->entry->length;
+			++nr_objcg_pages;
+		}
+
+		++nr_pages;
+	} /* for sub-batch pages. */
+
+	if (objcg) {
+		obj_cgroup_charge_zswap(objcg, compressed_bytes);
+		count_objcg_events(objcg, ZSWPOUT, nr_objcg_pages);
+	}
+
+	atomic_long_add(nr_pages, &zswap_stored_pages);
+	count_vm_events(ZSWPOUT, nr_pages);
+}
+
+static void zswap_store_sub_batch(struct zswap_store_pipeline_state *zst)
+{
+	u8 i;
+
+	for (i = 0; i < zst->nr_comp_pages; ++i) {
+		zst->comp_dsts[i] = zst->acomp_ctx->buffer[i];
+		zst->comp_dlens[i] = PAGE_SIZE;
+	} /* for sub-batch pages. */
+
+	/*
+	 * Batch compress sub-batch "N". If IAA is the compressor, the
+	 * hardware will compress multiple pages in parallel.
+	 */
+	zswap_compress_batch(zst);
+
+	zswap_batch_compress_post_proc(zst);
+}
+
+static void zswap_add_folio_pages_to_sb(
+	struct zswap_store_pipeline_state *zst,
+	struct folio *folio,
+	u8 batch_idx,
+	struct obj_cgroup *objcg,
+	struct zswap_entry *entries[],
+	long start_idx,
+	u8 add_nr_pages)
+{
+	long index;
+
+	for (index = start_idx; index < (start_idx + add_nr_pages); ++index) {
+		u8 i = zst->nr_comp_pages;
+		struct zswap_store_sub_batch_page *sbp = &zst->sub_batch[i];
+		struct page *page = folio_page(folio, index);
+
+		zst->comp_pages[i] = page;
+		sbp->swpentry = page_swap_entry(page);
+		sbp->batch_idx = batch_idx;
+		sbp->objcg = objcg;
+		sbp->entry = entries[index - start_idx];
+		sbp->error = 0;
+		++zst->nr_comp_pages;
+	}
+}
+
+static __always_inline void zswap_store_reset_sub_batch(
+	struct zswap_store_pipeline_state *zst)
+{
+	zst->nr_comp_pages = 0;
+}
+
+/* Allocate entries for the next sub-batch. */
+static int zswap_alloc_entries(u8 nr_entries,
+			       struct zswap_entry *entries[],
+			       int node_id)
+{
+	u8 i;
+
+	for (i = 0; i < nr_entries; ++i) {
+		entries[i] = zswap_entry_cache_alloc(GFP_KERNEL, node_id);
+		if (!entries[i]) {
+			u8 j;
+
+			zswap_reject_kmemcache_fail++;
+			for (j = 0; j < i; ++j)
+				zswap_entry_cache_free(entries[j]);
+			return -EINVAL;
+		}
+
+		entries[i]->handle = (unsigned long)ERR_PTR(-EINVAL);
+	}
+
+	return 0;
+}
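zswap_alloc_entries() above uses the usual all-or-nothing allocation
pattern: on the first failure, everything allocated so far is released
so the caller never sees a half-initialized array. A standalone sketch
of the same pattern:

#include <stdlib.h>

/* Allocate n objects; on any failure, free the ones already allocated. */
static int toy_alloc_all(void *objs[], int n)
{
	int i, j;

	for (i = 0; i < n; ++i) {
		objs[i] = malloc(64);
		if (!objs[i]) {
			for (j = 0; j < i; ++j)
				free(objs[j]);
			return -1;	/* caller sees no partial state */
		}
	}
	return 0;
}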
+/*
+ * If the zswap store fails or zswap is disabled, we must invalidate
+ * the possibly stale entries which were previously stored at the
+ * offsets corresponding to each page of the folio. Otherwise,
+ * writeback could overwrite the new data in the swapfile.
+ */
+static void zswap_delete_stored_entries(struct folio *folio)
+{
+	swp_entry_t swp = folio->swap;
+	unsigned type = swp_type(swp);
+	pgoff_t offset = swp_offset(swp);
+	struct zswap_entry *entry;
+	struct xarray *tree;
+	long index;
+
+	for (index = 0; index < folio_nr_pages(folio); ++index) {
+		tree = swap_zswap_tree(swp_entry(type, offset + index));
+		entry = xa_erase(tree, offset + index);
+		if (entry)
+			zswap_entry_free(entry);
+	}
+}
+
+static void zswap_store_process_folio_errors(
+	struct folio **folios,
+	int *errors,
+	unsigned int nr_folios)
+{
+	u8 batch_idx;
+
+	for (batch_idx = 0; batch_idx < nr_folios; ++batch_idx)
+		if (errors[batch_idx])
+			zswap_delete_stored_entries(folios[batch_idx]);
+}
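In __zswap_store_batch_core() below, pages accumulate across folio
boundaries until a sub-batch fills. The min3() expression there reduces
to "take whichever is smaller: the space left in the sub-batch, or the
pages left in the folio" (the third min3() argument can never be the
minimum, since the remaining space never exceeds
SWAP_CRYPTO_SUB_BATCH_SIZE). A standalone model of the arithmetic:

#include <stdio.h>

#define SUB_BATCH_SIZE 8L	/* stands in for SWAP_CRYPTO_SUB_BATCH_SIZE */

/* Pages to take now: remaining sub-batch space vs. pages left in folio. */
static long pages_to_add(long nr_queued, long nr_left_in_folio)
{
	long space = SUB_BATCH_SIZE - nr_queued;

	return space < nr_left_in_folio ? space : nr_left_in_folio;
}

int main(void)
{
	/* 3 pages queued, 16-page folio: take 5 now, then 8, then 3. */
	printf("%ld %ld %ld\n", pages_to_add(3, 16),
	       pages_to_add(0, 11), pages_to_add(0, 3));
	return 0;
}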
+/*
+ * Store a batch of any-order large folios in zswap. Each folio will be
+ * broken into sub-batches of SWAP_CRYPTO_SUB_BATCH_SIZE pages; each
+ * sub-batch will be compressed by IAA in parallel, and stored in the
+ * zpool/xarray.
+ *
+ * This is the main procedure for batching of folios, and for batching
+ * within large folios.
+ *
+ * This procedure should only be called if zswap supports batching of
+ * stores. Otherwise, the sequential implementation for storing folios,
+ * as in the current zswap_store(), should be used.
+ *
+ * The signature of this procedure is meant to allow the calling function
+ * (for instance, swap_writepage()) to pass an array @folios
+ * (the "reclaim batch") of @nr_folios folios to be stored in zswap.
+ * All folios in the batch must have the same swap type and folio_nid
+ * @node_id (simplifying assumptions only to manage code complexity).
+ *
+ * @errors and @folios have @nr_folios entries each, with one-to-one
+ * correspondence (@errors[i] represents the error status of @folios[i],
+ * for each i less than @nr_folios).
+ * The calling function (for instance, swap_writepage()) should initialize
+ * @errors[i] to a non-0 value.
+ * If zswap successfully stores @folios[i], it will set @errors[i] to 0.
+ * If there is an error in zswap, it will set @errors[i] to -EINVAL.
+ */
+static void __zswap_store_batch_core(
+	int node_id,
+	struct folio **folios,
+	int *errors,
+	unsigned int nr_folios)
+{
+	struct zswap_store_sub_batch_page sub_batch[SWAP_CRYPTO_SUB_BATCH_SIZE];
+	struct page *comp_pages[SWAP_CRYPTO_SUB_BATCH_SIZE];
+	u8 *comp_dsts[SWAP_CRYPTO_SUB_BATCH_SIZE] = { NULL };
+	unsigned int comp_dlens[SWAP_CRYPTO_SUB_BATCH_SIZE];
+	int comp_errors[SWAP_CRYPTO_SUB_BATCH_SIZE];
+	struct crypto_acomp_ctx *acomp_ctx;
+	struct zswap_pool *pool;
+	/*
+	 * For now, let's say a max of 256 large folios can be reclaimed
+	 * at a time, as a batch. If this exceeds 256, change this to u16.
+	 */
+	u8 batch_idx;
+
+	/* Initialize the compress batching pipeline state. */
+	struct zswap_store_pipeline_state zst = {
+		.errors = errors,
+		.pool = NULL,
+		.acomp_ctx = NULL,
+		.sub_batch = sub_batch,
+		.comp_pages = comp_pages,
+		.comp_dsts = comp_dsts,
+		.comp_dlens = comp_dlens,
+		.comp_errors = comp_errors,
+		.nr_comp_pages = 0,
+	};
+
+	pool = zswap_pool_current_get();
+	if (!pool) {
+		if (zswap_check_limits())
+			queue_work(shrink_wq, &zswap_shrink_work);
+		goto check_old;
+	}
+
+	acomp_ctx = raw_cpu_ptr(pool->acomp_ctx);
+	mutex_lock(&acomp_ctx->mutex);
+	zst.pool = pool;
+	zst.acomp_ctx = acomp_ctx;
+
+	/*
+	 * Iterate over the folios passed in. Construct sub-batches of up to
+	 * SWAP_CRYPTO_SUB_BATCH_SIZE pages, if necessary, by iterating through
+	 * multiple folios from the input "folios". Process each sub-batch
+	 * with IAA batch compression. Detect errors from batch compression
+	 * and set the impacted folio's error status (this happens in
+	 * zswap_process_comp_errors()).
+	 */
+	for (batch_idx = 0; batch_idx < nr_folios; ++batch_idx) {
+		struct folio *folio = folios[batch_idx];
+		struct zswap_entry *entries[SWAP_CRYPTO_SUB_BATCH_SIZE];
+		struct obj_cgroup *objcg = NULL;
+		struct mem_cgroup *memcg = NULL;
+		long folio_start_idx, nr_pages;
+
+		BUG_ON(!folio);
+		nr_pages = folio_nr_pages(folio);
+
+		VM_WARN_ON_ONCE(!folio_test_locked(folio));
+		VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
+
+		/*
+		 * If zswap is disabled, we must invalidate the possibly stale
+		 * entry which was previously stored at this offset. Otherwise,
+		 * writeback could overwrite the new data in the swapfile.
+		 */
+		if (!zswap_enabled)
+			continue;
+
+		/* Check cgroup limits */
+		objcg = get_obj_cgroup_from_folio(folio);
+		if (objcg && !obj_cgroup_may_zswap(objcg)) {
+			memcg = get_mem_cgroup_from_objcg(objcg);
+			if (shrink_memcg(memcg)) {
+				mem_cgroup_put(memcg);
+				goto put_objcg;
+			}
+			mem_cgroup_put(memcg);
+		}
+
+		if (zswap_check_limits())
+			goto put_objcg;
+
+		if (objcg) {
+			memcg = get_mem_cgroup_from_objcg(objcg);
+			if (memcg_list_lru_alloc(memcg, &zswap_list_lru, GFP_KERNEL)) {
+				mem_cgroup_put(memcg);
+				goto put_objcg;
+			}
+			mem_cgroup_put(memcg);
+		}
+
+		/*
+		 * By default, set the zswap status to "success" for use in
+		 * swap_writepage() when this returns. In case of errors, a
+		 * negative error number will overwrite this when
+		 * zswap_store_propagate_errors() is called.
+		 */
+		errors[batch_idx] = 0;
+
+		folio_start_idx = 0;
+
+		while (nr_pages > 0) {
+			u8 add_nr_pages;
+
+			/*
+			 * If we have accumulated SWAP_CRYPTO_SUB_BATCH_SIZE
+			 * pages, process the sub-batch: it could contain pages
+			 * from multiple folios.
+			 */
+			if (zst.nr_comp_pages == SWAP_CRYPTO_SUB_BATCH_SIZE) {
+				zswap_store_sub_batch(&zst);
+				zswap_store_reset_sub_batch(&zst);
+				/*
+				 * Stop processing this folio if it had
+				 * compress errors.
+				 */
+				if (errors[batch_idx])
+					goto put_objcg;
+			}
+
+			add_nr_pages = min3((long)SWAP_CRYPTO_SUB_BATCH_SIZE -
+					    (long)zst.nr_comp_pages,
+					    nr_pages,
+					    (long)SWAP_CRYPTO_SUB_BATCH_SIZE);
+
+			/*
+			 * Allocate zswap_entries for this sub-batch. If we
+			 * get errors while doing so, we can flag an error
+			 * for the folio, call the shrinker and move on.
+			 */
+			if (zswap_alloc_entries(add_nr_pages,
+						entries, node_id)) {
+				zswap_store_reset_sub_batch(&zst);
+				errors[batch_idx] = -EINVAL;
+				goto put_objcg;
+			}
+
+			zswap_add_folio_pages_to_sb(
+				&zst,
+				folio,
+				batch_idx,
+				objcg,
+				entries,
+				folio_start_idx,
+				add_nr_pages);
+
+			nr_pages -= add_nr_pages;
+			folio_start_idx += add_nr_pages;
+		} /* this folio has pages to be compressed. */
+
+		obj_cgroup_put(objcg);
+		continue;
+
+put_objcg:
+		obj_cgroup_put(objcg);
+		if (zswap_pool_reached_full)
+			queue_work(shrink_wq, &zswap_shrink_work);
+	} /* for batch folios */
+
+	if (!zswap_enabled)
+		goto check_old;
+
+	/*
+	 * Process the last sub-batch: it could contain pages from
+	 * multiple folios.
+	 */
+	if (zst.nr_comp_pages)
+		zswap_store_sub_batch(&zst);
+
+	mutex_unlock(&acomp_ctx->mutex);
+	zswap_pool_put(pool);
+check_old:
+	zswap_store_process_folio_errors(folios, errors, nr_folios);
+}
+
 bool zswap_load(struct folio *folio)
 {
 	swp_entry_t swp = folio->swap;