From patchwork Wed Dec 9 21:13:04 2020
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 341717
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org
Cc: Bruce Allan, netdev@vger.kernel.org, sassmann@redhat.com,
	anthony.l.nguyen@intel.com, Harikumar Bokkena
Subject: [net-next v4 1/9] ice: cleanup stack hog
Date: Wed, 9 Dec 2020 13:13:04 -0800
Message-Id: <20201209211312.3850588-2-anthony.l.nguyen@intel.com>
In-Reply-To: <20201209211312.3850588-1-anthony.l.nguyen@intel.com>
References: <20201209211312.3850588-1-anthony.l.nguyen@intel.com>
List-ID: <netdev.vger.kernel.org>

From: Bruce Allan

In ice_flow_add_prof_sync(), struct ice_flow_prof_params has recently
grown in size, so allocating it on the stack now hogs stack space.
Hogging stack space should be avoided. Change the allocation to be on
the heap when needed.
Signed-off-by: Bruce Allan
Tested-by: Harikumar Bokkena
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/ice/ice_flow.c | 44 +++++++++++++----------
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_flow.c b/drivers/net/ethernet/intel/ice/ice_flow.c
index eadc85aee389..2a92071bd7d1 100644
--- a/drivers/net/ethernet/intel/ice/ice_flow.c
+++ b/drivers/net/ethernet/intel/ice/ice_flow.c
@@ -708,37 +708,42 @@ ice_flow_add_prof_sync(struct ice_hw *hw, enum ice_block blk,
 		       struct ice_flow_seg_info *segs, u8 segs_cnt,
 		       struct ice_flow_prof **prof)
 {
-	struct ice_flow_prof_params params;
+	struct ice_flow_prof_params *params;
 	enum ice_status status;
 	u8 i;
 
 	if (!prof)
 		return ICE_ERR_BAD_PTR;
 
-	memset(&params, 0, sizeof(params));
-	params.prof = devm_kzalloc(ice_hw_to_dev(hw), sizeof(*params.prof),
-				   GFP_KERNEL);
-	if (!params.prof)
+	params = kzalloc(sizeof(*params), GFP_KERNEL);
+	if (!params)
 		return ICE_ERR_NO_MEMORY;
 
+	params->prof = devm_kzalloc(ice_hw_to_dev(hw), sizeof(*params->prof),
+				    GFP_KERNEL);
+	if (!params->prof) {
+		status = ICE_ERR_NO_MEMORY;
+		goto free_params;
+	}
+
 	/* initialize extraction sequence to all invalid (0xff) */
 	for (i = 0; i < ICE_MAX_FV_WORDS; i++) {
-		params.es[i].prot_id = ICE_PROT_INVALID;
-		params.es[i].off = ICE_FV_OFFSET_INVAL;
+		params->es[i].prot_id = ICE_PROT_INVALID;
+		params->es[i].off = ICE_FV_OFFSET_INVAL;
 	}
 
-	params.blk = blk;
-	params.prof->id = prof_id;
-	params.prof->dir = dir;
-	params.prof->segs_cnt = segs_cnt;
+	params->blk = blk;
+	params->prof->id = prof_id;
+	params->prof->dir = dir;
+	params->prof->segs_cnt = segs_cnt;
 
 	/* Make a copy of the segments that need to be persistent in the flow
 	 * profile instance
 	 */
 	for (i = 0; i < segs_cnt; i++)
-		memcpy(&params.prof->segs[i], &segs[i], sizeof(*segs));
+		memcpy(&params->prof->segs[i], &segs[i], sizeof(*segs));
 
-	status = ice_flow_proc_segs(hw, &params);
+	status = ice_flow_proc_segs(hw, params);
 	if (status) {
 		ice_debug(hw, ICE_DBG_FLOW,
 			  "Error processing a flow's packet segments\n");
@@ -746,19 +751,22 @@ ice_flow_add_prof_sync(struct ice_hw *hw, enum ice_block blk,
 	}
 
 	/* Add a HW profile for this flow profile */
-	status = ice_add_prof(hw, blk, prof_id, (u8 *)params.ptypes, params.es);
+	status = ice_add_prof(hw, blk, prof_id, (u8 *)params->ptypes,
+			      params->es);
 	if (status) {
 		ice_debug(hw, ICE_DBG_FLOW, "Error adding a HW flow profile\n");
 		goto out;
 	}
 
-	INIT_LIST_HEAD(&params.prof->entries);
-	mutex_init(&params.prof->entries_lock);
-	*prof = params.prof;
+	INIT_LIST_HEAD(&params->prof->entries);
+	mutex_init(&params->prof->entries_lock);
+	*prof = params->prof;
 
 out:
 	if (status)
-		devm_kfree(ice_hw_to_dev(hw), params.prof);
+		devm_kfree(ice_hw_to_dev(hw), params->prof);
+free_params:
+	kfree(params);
 
 	return status;
 }
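
For readers less familiar with the pattern used here: the change avoids
keeping a large structure on the kernel stack by allocating it on the
heap for the duration of the call, with a goto-based unwind path so
every early exit still frees it. Below is a minimal userspace sketch of
the same idea, not part of the driver; it assumes calloc()/free() as
stand-ins for kzalloc()/kfree(), and the struct and function names
(big_params, process) are illustrative only.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative stand-in for a structure too large to keep on the stack. */
struct big_params {
	unsigned char scratch[8192];
	int count;
};

/* Allocate the working state on the heap, do the work, and release it on
 * every exit path through a single cleanup label, mirroring the
 * goto free_params unwind in the patch above.
 */
static int process(int count)
{
	struct big_params *params;
	int err = 0;

	params = calloc(1, sizeof(*params));	/* zeroed, like kzalloc() */
	if (!params)
		return -1;

	if (count < 0 || (size_t)count > sizeof(params->scratch)) {
		err = -2;
		goto free_params;	/* error: unwind through the label */
	}

	params->count = count;
	memset(params->scratch, 0xff, (size_t)count);
	printf("processed %d bytes\n", params->count);

free_params:
	free(params);	/* single release point for all paths */
	return err;
}

int main(void)
{
	return process(128) == 0 ? 0 : 1;
}

The benefit is the same as in the patch: sizeof(struct big_params) no
longer counts against the caller's stack frame, and the single cleanup
label keeps the error paths from leaking the allocation.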