From patchwork Sun Apr 26 02:13:40 2020
X-Patchwork-Submitter: Huazhong Tan
X-Patchwork-Id: 220558
From: Huazhong Tan
CC: Jian Shen, Huazhong Tan
Subject: [PATCH V2 net-next 1/9] net: hns3: refine for unicast MAC VLAN space management
Date: Sun, 26 Apr 2020 10:13:40 +0800
Message-ID: <1587867228-9955-2-git-send-email-tanhuazhong@huawei.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1587867228-9955-1-git-send-email-tanhuazhong@huawei.com>
References: <1587867228-9955-1-git-send-email-tanhuazhong@huawei.com>
X-Mailing-List: netdev@vger.kernel.org

From: Jian Shen

Currently, the firmware manages the unicast MAC VLAN table space for each
PF. The PF only needs to tell the firmware how much space it wants when
initializing, and does not need to free it when uninitializing. So this
patch removes the UMV space free handling, and removes the forward
declaration of hclge_set_umv_space() by defining hclge_init_umv_space()
after it.
Signed-off-by: Jian Shen
Signed-off-by: Huazhong Tan
---
 .../ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 72 ++++++++--------
 1 file changed, 24 insertions(+), 48 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index 0618f22..ccf269a 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -62,8 +62,6 @@ static int hclge_init_vlan_config(struct hclge_dev *hdev);
 static void hclge_sync_vlan_filter(struct hclge_dev *hdev);
 static int hclge_reset_ae_dev(struct hnae3_ae_dev *ae_dev);
 static bool hclge_get_hw_reset_stat(struct hnae3_handle *handle);
-static int hclge_set_umv_space(struct hclge_dev *hdev, u16 space_size,
-			       u16 *allocated_size, bool is_alloc);
 static void hclge_rfs_filter_expire(struct hclge_dev *hdev);
 static void hclge_clear_arfs_rules(struct hnae3_handle *handle);
 static enum hnae3_reset_type hclge_get_reset_level(struct hnae3_ae_dev *ae_dev,
@@ -7196,50 +7194,6 @@ static int hclge_add_mac_vlan_tbl(struct hclge_vport *vport,
 	return cfg_status;
 }
 
-static int hclge_init_umv_space(struct hclge_dev *hdev)
-{
-	u16 allocated_size = 0;
-	int ret;
-
-	ret = hclge_set_umv_space(hdev, hdev->wanted_umv_size, &allocated_size,
-				  true);
-	if (ret)
-		return ret;
-
-	if (allocated_size < hdev->wanted_umv_size)
-		dev_warn(&hdev->pdev->dev,
-			 "Alloc umv space failed, want %u, get %u\n",
-			 hdev->wanted_umv_size, allocated_size);
-
-	mutex_init(&hdev->umv_mutex);
-	hdev->max_umv_size = allocated_size;
-	/* divide max_umv_size by (hdev->num_req_vfs + 2), in order to
-	 * preserve some unicast mac vlan table entries shared by pf
-	 * and its vfs.
-	 */
-	hdev->priv_umv_size = hdev->max_umv_size / (hdev->num_req_vfs + 2);
-	hdev->share_umv_size = hdev->priv_umv_size +
-			hdev->max_umv_size % (hdev->num_req_vfs + 2);
-
-	return 0;
-}
-
-static int hclge_uninit_umv_space(struct hclge_dev *hdev)
-{
-	int ret;
-
-	if (hdev->max_umv_size > 0) {
-		ret = hclge_set_umv_space(hdev, hdev->max_umv_size, NULL,
-					  false);
-		if (ret)
-			return ret;
-		hdev->max_umv_size = 0;
-	}
-	mutex_destroy(&hdev->umv_mutex);
-
-	return 0;
-}
-
 static int hclge_set_umv_space(struct hclge_dev *hdev, u16 space_size,
 			       u16 *allocated_size, bool is_alloc)
 {
@@ -7268,6 +7222,30 @@ static int hclge_set_umv_space(struct hclge_dev *hdev, u16 space_size,
 	return 0;
 }
 
+static int hclge_init_umv_space(struct hclge_dev *hdev)
+{
+	u16 allocated_size = 0;
+	int ret;
+
+	ret = hclge_set_umv_space(hdev, hdev->wanted_umv_size, &allocated_size,
+				  true);
+	if (ret)
+		return ret;
+
+	if (allocated_size < hdev->wanted_umv_size)
+		dev_warn(&hdev->pdev->dev,
+			 "failed to alloc umv space, want %u, get %u\n",
+			 hdev->wanted_umv_size, allocated_size);
+
+	mutex_init(&hdev->umv_mutex);
+	hdev->max_umv_size = allocated_size;
+	hdev->priv_umv_size = hdev->max_umv_size / (hdev->num_alloc_vport + 1);
+	hdev->share_umv_size = hdev->priv_umv_size +
+			hdev->max_umv_size % (hdev->num_alloc_vport + 1);
+
+	return 0;
+}
+
 static void hclge_reset_umv_space(struct hclge_dev *hdev)
 {
 	struct hclge_vport *vport;
@@ -10041,8 +10019,6 @@ static void hclge_uninit_ae_dev(struct hnae3_ae_dev *ae_dev)
 	if (mac->phydev)
 		mdiobus_unregister(mac->mdio_bus);
 
-	hclge_uninit_umv_space(hdev);
-
 	/* Disable MISC vector(vector0) */
 	hclge_enable_vector(&hdev->misc_vector, false);
 	synchronize_irq(hdev->misc_vector.vector_irq);
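
Editor's note: a minimal standalone sketch (not part of the patch) of how the
reordered hclge_init_umv_space() splits the firmware-allocated UMV space into
per-vport private entries plus a shared pool. The values of max_umv_size and
num_alloc_vport below are made-up examples, not taken from the driver.

	#include <stdio.h>

	int main(void)
	{
		/* assumed example values: entries granted by firmware and
		 * number of allocated vports (PF vport + VF vports)
		 */
		unsigned int max_umv_size = 256;
		unsigned int num_alloc_vport = 8;

		/* same arithmetic as the new hclge_init_umv_space(): each
		 * vport gets an equal private share, and one extra share
		 * plus the remainder forms the pool shared by all vports
		 */
		unsigned int priv_umv_size = max_umv_size / (num_alloc_vport + 1);
		unsigned int share_umv_size = priv_umv_size +
					      max_umv_size % (num_alloc_vport + 1);

		printf("each vport: %u private entries, shared pool: %u entries\n",
		       priv_umv_size, share_umv_size);
		return 0;
	}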