From patchwork Tue Oct 27 13:51:36 2020
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 312723
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Leon Romanovsky,
 Shiraz Saleem, Jason Gunthorpe, Sasha Levin
Subject: [PATCH 5.8 353/633] RDMA/umem: Prevent small pages from being
 returned by ib_umem_find_best_pgsz()
Date: Tue, 27 Oct 2020 14:51:36 +0100
Message-Id: <20201027135539.248493016@linuxfoundation.org>
In-Reply-To: <20201027135522.655719020@linuxfoundation.org>
References: <20201027135522.655719020@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Jason Gunthorpe

[ Upstream commit 10c75ccb54e4fe548cb16d7ed426d7d709e6ae76 ]

rdma_for_each_block() makes assumptions about how the SGL is constructed
that don't work if the block size is below the page size used to build
the SGL.

The rules for umem SGL construction require that the SGs all be
PAGE_SIZE aligned and that the actual byte offset of the VA range is not
encoded inside the SGL using offset and length. So rdma_for_each_block()
has no idea where the actual starting/ending point is, and cannot compute
the first/last block boundary when the starting address falls inside an
SGL entry.
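As an aside, the failure mode can be illustrated with a hypothetical
userspace sketch (not kernel code; PAGE_SIZE, the addresses, and the
block size below are all invented for illustration): once the SGL keeps
only a page-aligned address, a sub-PAGE_SIZE block iterator rounds to
the wrong starting block.

/*
 * Hypothetical sketch (not kernel code): the umem SGL records only
 * PAGE_SIZE-aligned addresses and no byte offset, so an iterator handed
 * a block size below PAGE_SIZE cannot recover where the mapped VA range
 * really begins. All addresses below are invented for illustration.
 */
#include <stdio.h>

#define PAGE_SIZE 4096UL

int main(void)
{
	unsigned long mr_start   = 0x12100;  /* real start of the mapped range */
	unsigned long sg_dma     = mr_start & ~(PAGE_SIZE - 1); /* what the SGL keeps */
	unsigned long block_size = 256;      /* hypothetical sub-PAGE_SIZE block */

	/* A block iterator can only round from the page-aligned SGL address... */
	unsigned long first_block = sg_dma & ~(block_size - 1);

	/* ...so it reports 0x12000 although the range starts at 0x12100. */
	printf("computed first block: 0x%lx, real start: 0x%lx\n",
	       first_block, mr_start);
	return 0;
}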
Fixing the SGL construction turns out to be really hard, and will be the
subject of other patches. For now, block smaller pages.

Fixes: 4a35339958f1 ("RDMA/umem: Add API to find best driver supported page size in an MR")
Link: https://lore.kernel.org/r/2-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Reviewed-by: Leon Romanovsky
Reviewed-by: Shiraz Saleem
Signed-off-by: Jason Gunthorpe
Signed-off-by: Sasha Levin
---
 drivers/infiniband/core/umem.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 1173b8cbe92b5..7e765fe211607 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -151,6 +151,12 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
 	dma_addr_t mask;
 	int i;
 
+	/* rdma_for_each_block() has a bug if the page size is smaller than the
+	 * page size used to build the umem. For now prevent smaller page sizes
+	 * from being returned.
+	 */
+	pgsz_bitmap &= GENMASK(BITS_PER_LONG - 1, PAGE_SHIFT);
+
 	/* At minimum, drivers must support PAGE_SIZE or smaller */
 	if (WARN_ON(!(pgsz_bitmap & GENMASK(PAGE_SHIFT, 0))))
 		return 0;
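For reference, the effect of the added mask can be reproduced outside
the kernel. This is a minimal sketch, assuming an LP64 target and an
invented driver page-size bitmap; the GENMASK macro mirrors the kernel
definition from include/linux/bits.h:

/*
 * Minimal userspace sketch of the masking the patch adds. The driver
 * page-size bitmap below is invented for illustration; GENMASK mirrors
 * the kernel macro from include/linux/bits.h.
 */
#include <stdio.h>

#define BITS_PER_LONG 64   /* assume an LP64 target */
#define PAGE_SHIFT    12   /* assume 4 KiB pages */
#define GENMASK(h, l) \
	((~0UL << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))

int main(void)
{
	/* Pretend the driver claims 256 B, 4 KiB, 64 KiB and 2 MiB pages. */
	unsigned long pgsz_bitmap =
		(1UL << 8) | (1UL << 12) | (1UL << 16) | (1UL << 21);

	/* The fix: clear every page size below PAGE_SIZE. */
	pgsz_bitmap &= GENMASK(BITS_PER_LONG - 1, PAGE_SHIFT);

	/* Bit 8 (256 B) is gone; prints 0x211000. */
	printf("remaining bitmap: 0x%lx\n", pgsz_bitmap);
	return 0;
}

Note that after this mask, the WARN_ON() that follows in
ib_umem_find_best_pgsz() can only be satisfied by the PAGE_SHIFT bit
itself, so in practice drivers must now support PAGE_SIZE exactly rather
than "PAGE_SIZE or smaller".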