From patchwork Thu Oct  1 02:05:06 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 259782
From: saeed@kernel.org
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Eran Ben Elisha, Saeed Mahameed, Moshe Shemesh
Subject: [net 05/15] net/mlx5: Add retry mechanism to the command entry index allocation
Date: Wed, 30 Sep 2020 19:05:06 -0700
Message-Id: <20201001020516.41217-6-saeed@kernel.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201001020516.41217-1-saeed@kernel.org>
References: <20201001020516.41217-1-saeed@kernel.org>
MIME-Version: 1.0
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org

From: Eran Ben Elisha

It is possible that allocation of a new command entry index will
temporarily fail. Since the new command already holds the semaphore, a
free entry should become available soon. Add a retry mechanism that keeps
trying for up to one second before returning an error.

Patch "net/mlx5: Avoid possible free of command entry while timeout comp
handler" increases the likelihood of hitting this temporary failure, as it
delays the entry index release for non-callback commands.
Fixes: e126ba97dba9 ("mlx5: Add driver for Mellanox Connect-IB adapters")
Signed-off-by: Eran Ben Elisha
Signed-off-by: Saeed Mahameed
Reviewed-by: Moshe Shemesh
---
 drivers/net/ethernet/mellanox/mlx5/core/cmd.c | 21 ++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
index 65ae6ef2039e..4b54c9241fd7 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
@@ -883,6 +883,25 @@ static bool opcode_allowed(struct mlx5_cmd *cmd, u16 opcode)
 	return cmd->allowed_opcode == opcode;
 }
 
+static int cmd_alloc_index_retry(struct mlx5_cmd *cmd)
+{
+	unsigned long alloc_end = jiffies + msecs_to_jiffies(1000);
+	int idx;
+
+retry:
+	idx = cmd_alloc_index(cmd);
+	if (idx < 0 && time_before(jiffies, alloc_end)) {
+		/* Index allocation can fail on heavy load of commands. This is a temporary
+		 * situation as the current command already holds the semaphore, meaning that
+		 * another command completion is being handled and it is expected to release
+		 * the entry index soon.
+		 */
+		cond_resched();
+		goto retry;
+	}
+	return idx;
+}
+
 static void cmd_work_handler(struct work_struct *work)
 {
 	struct mlx5_cmd_work_ent *ent = container_of(work, struct mlx5_cmd_work_ent, work);
@@ -900,7 +919,7 @@ static void cmd_work_handler(struct work_struct *work)
 	sem = ent->page_queue ? &cmd->pages_sem : &cmd->sem;
 	down(sem);
 	if (!ent->page_queue) {
-		alloc_ret = cmd_alloc_index(cmd);
+		alloc_ret = cmd_alloc_index_retry(cmd);
 		if (alloc_ret < 0) {
 			mlx5_core_err_rl(dev, "failed to allocate command entry\n");
 			if (ent->callback) {