From patchwork Mon Apr 5 05:24:00 2021
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 416371
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Avihai Horon, Adit Ranadive, Anna Schumaker, Ariel Elior,
    Bart Van Assche, Bernard Metzler, Christoph Hellwig, Chuck Lever,
    "David S. Miller", Dennis Dalessandro, Devesh Sharma, Faisal Latif,
    Jack Wang, Jakub Kicinski, "J. Bruce Fields", Jens Axboe,
    Karsten Graul, Keith Busch, Lijun Ou, linux-cifs@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-nfs@vger.kernel.org,
    linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org,
    linux-s390@vger.kernel.org, Max Gurtovoy, "Md. Haris Iqbal",
    Michael Guralnik, Michal Kalderon, Mike Marciniszyn,
    Naresh Kumar PBS, netdev@vger.kernel.org, Potnuri Bharat Teja,
    rds-devel@oss.oracle.com, Sagi Grimberg,
    samba-technical@lists.samba.org, Santosh Shilimkar, Selvin Xavier,
    Shiraz Saleem, Somnath Kotur, Sriharsha Basavapatna, Steve French,
    Trond Myklebust, VMware PV-Drivers, Weihang Li, Yishai Hadas,
    Zhu Yanjun
Subject: [PATCH rdma-next 06/10] nvme-rdma: Enable Relaxed Ordering
Date: Mon, 5 Apr 2021 08:24:00 +0300
Message-Id: <20210405052404.213889-7-leon@kernel.org>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210405052404.213889-1-leon@kernel.org>
References: <20210405052404.213889-1-leon@kernel.org>
X-Mailing-List: netdev@vger.kernel.org

From: Avihai Horon

Enable Relaxed Ordering for nvme. Relaxed Ordering is an optional
access flag and as such, it is ignored by vendors that don't support
it.
Signed-off-by: Avihai Horon
Reviewed-by: Max Gurtovoy
Reviewed-by: Michael Guralnik
Signed-off-by: Leon Romanovsky
---
 drivers/nvme/host/rdma.c | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 4dbc17311e0b..8f106b20b39c 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -532,9 +532,8 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
 	 */
 	pages_per_mr = nvme_rdma_get_max_fr_pages(ibdev, queue->pi_support) + 1;
 	ret = ib_mr_pool_init(queue->qp, &queue->qp->rdma_mrs,
-			      queue->queue_size,
-			      IB_MR_TYPE_MEM_REG,
-			      pages_per_mr, 0, 0);
+			      queue->queue_size, IB_MR_TYPE_MEM_REG,
+			      pages_per_mr, 0, IB_ACCESS_RELAXED_ORDERING);
 	if (ret) {
 		dev_err(queue->ctrl->ctrl.device,
 			"failed to initialize MR pool sized %d for QID %d\n",
@@ -545,7 +544,8 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
 	if (queue->pi_support) {
 		ret = ib_mr_pool_init(queue->qp, &queue->qp->sig_mrs,
 				      queue->queue_size, IB_MR_TYPE_INTEGRITY,
-				      pages_per_mr, pages_per_mr, 0);
+				      pages_per_mr, pages_per_mr,
+				      IB_ACCESS_RELAXED_ORDERING);
 		if (ret) {
 			dev_err(queue->ctrl->ctrl.device,
 				"failed to initialize PI MR pool sized %d for QID %d\n",
@@ -1382,9 +1382,9 @@ static int nvme_rdma_map_sg_fr(struct nvme_rdma_queue *queue,
 	req->reg_wr.wr.num_sge = 0;
 	req->reg_wr.mr = req->mr;
 	req->reg_wr.key = req->mr->rkey;
-	req->reg_wr.access = IB_ACCESS_LOCAL_WRITE |
-			     IB_ACCESS_REMOTE_READ |
-			     IB_ACCESS_REMOTE_WRITE;
+	req->reg_wr.access = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_READ |
+			     IB_ACCESS_REMOTE_WRITE |
+			     IB_ACCESS_RELAXED_ORDERING;
 
 	sg->addr = cpu_to_le64(req->mr->iova);
 	put_unaligned_le24(req->mr->length, sg->length);
@@ -1488,9 +1488,8 @@ static int nvme_rdma_map_sg_pi(struct nvme_rdma_queue *queue,
 	wr->wr.send_flags = 0;
 	wr->mr = req->mr;
 	wr->key = req->mr->rkey;
-	wr->access = IB_ACCESS_LOCAL_WRITE |
-		     IB_ACCESS_REMOTE_READ |
-		     IB_ACCESS_REMOTE_WRITE;
+	wr->access = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_READ |
+		     IB_ACCESS_REMOTE_WRITE | IB_ACCESS_RELAXED_ORDERING;
 
 	sg->addr = cpu_to_le64(req->mr->iova);
 	put_unaligned_le24(req->mr->length, sg->length);
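
Side note for application writers: the commit message above describes the
kernel-side IB_ACCESS_RELAXED_ORDERING flag; rdma-core exposes the same
knob to userspace as IBV_ACCESS_RELAXED_ORDERING. Below is a minimal
illustrative sketch (not part of this patch; the helper name is made up,
and it assumes an rdma-core release recent enough to define the flag) of
opting in at MR registration time:

#include <stddef.h>
#include <infiniband/verbs.h>

/*
 * Illustrative only: register a buffer with relaxed ordering requested.
 * IBV_ACCESS_RELAXED_ORDERING is an optional access flag, so providers
 * that do not implement it simply do not apply it, mirroring the
 * kernel behavior described above.
 */
static struct ibv_mr *reg_mr_relaxed(struct ibv_pd *pd, void *buf, size_t len)
{
	int access = IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_READ |
		     IBV_ACCESS_REMOTE_WRITE | IBV_ACCESS_RELAXED_ORDERING;

	return ibv_reg_mr(pd, buf, len, access);
}

Because the flag only relaxes PCIe write ordering for DMA to the region,
a caller that gets it silently dropped still has a correct MR, just
without the potential throughput gain.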