From patchwork Fri Mar 31 11:20:22 2017
X-Patchwork-Submitter: Salil Mehta
X-Patchwork-Id: 96372
From: Salil Mehta
To: 
CC: , , , , , , lipeng
Subject: [PATCH V2 net-next 08/18] net: hns: Replace netif_tx_lock to ring spin lock
Date: Fri, 31 Mar 2017 12:20:22 +0100
Message-ID: <20170331112032.4692-9-salil.mehta@huawei.com>
X-Mailer: git-send-email 2.8.3
In-Reply-To: <20170331112032.4692-1-salil.mehta@huawei.com>
References: <20170331112032.4692-1-salil.mehta@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: lipeng

netif_tx_lock is a global spin lock that takes effect on all rings of the
netdevice. The tx_poll_one path only needs to lock the ring it is currently
cleaning, so define a per-ring spin lock in the hnae_ring struct and use it
there instead. (A standalone sketch of this per-ring locking pattern follows
after the diff.)

Signed-off-by: lipeng
Reviewed-by: Yisen Zhuang
Signed-off-by: Salil Mehta
---
 drivers/net/ethernet/hisilicon/hns/hnae.c     |  1 +
 drivers/net/ethernet/hisilicon/hns/hnae.h     |  3 +++
 drivers/net/ethernet/hisilicon/hns/hns_enet.c | 21 +++++++++++----------
 3 files changed, 15 insertions(+), 10 deletions(-)

--
2.7.4

diff --git a/drivers/net/ethernet/hisilicon/hns/hnae.c b/drivers/net/ethernet/hisilicon/hns/hnae.c
index 78af663..513c257 100644
--- a/drivers/net/ethernet/hisilicon/hns/hnae.c
+++ b/drivers/net/ethernet/hisilicon/hns/hnae.c
@@ -196,6 +196,7 @@ hnae_init_ring(struct hnae_queue *q, struct hnae_ring *ring, int flags)
 
 	ring->q = q;
 	ring->flags = flags;
+	spin_lock_init(&ring->lock);
 	assert(!ring->desc && !ring->desc_cb && !ring->desc_dma_addr);
 
 	/* not matter for tx or rx ring, the ntc and ntc start from 0 */
diff --git a/drivers/net/ethernet/hisilicon/hns/hnae.h b/drivers/net/ethernet/hisilicon/hns/hnae.h
index c66581d..859c536 100644
--- a/drivers/net/ethernet/hisilicon/hns/hnae.h
+++ b/drivers/net/ethernet/hisilicon/hns/hnae.h
@@ -275,6 +275,9 @@ struct hnae_ring {
 	/* statistic */
 	struct ring_stats stats;
 
+	/* ring lock for poll one */
+	spinlock_t lock;
+
 	dma_addr_t desc_dma_addr;
 	u32 buf_size;       /* size for hnae_desc->addr, preset by AE */
 	u16 desc_num;       /* total number of desc */
diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.c b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
index 3449a18..3634366 100644
--- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
@@ -922,12 +922,13 @@ static int is_valid_clean_head(struct hnae_ring *ring, int h)
 
 /* netif_tx_lock will turn down the performance, set only when necessary */
 #ifdef CONFIG_NET_POLL_CONTROLLER
-#define NETIF_TX_LOCK(ndev) netif_tx_lock(ndev)
-#define NETIF_TX_UNLOCK(ndev) netif_tx_unlock(ndev)
+#define NETIF_TX_LOCK(ring) spin_lock(&ring->lock)
+#define NETIF_TX_UNLOCK(ring) spin_unlock(&ring->lock)
 #else
-#define NETIF_TX_LOCK(ndev)
-#define NETIF_TX_UNLOCK(ndev)
+#define NETIF_TX_LOCK(ring)
+#define NETIF_TX_UNLOCK(ring)
 #endif
+
 /* reclaim all desc in one budget
  * return error or number of desc left
  */
@@ -941,13 +942,13 @@ static int hns_nic_tx_poll_one(struct hns_nic_ring_data *ring_data,
 	int head;
 	int bytes, pkts;
 
-	NETIF_TX_LOCK(ndev);
+	NETIF_TX_LOCK(ring);
 
 	head = readl_relaxed(ring->io_base + RCB_REG_HEAD);
 	rmb(); /* make sure head is ready before touch any data */
 
 	if (is_ring_empty(ring) || head == ring->next_to_clean) {
-		NETIF_TX_UNLOCK(ndev);
+		NETIF_TX_UNLOCK(ring);
 		return 0; /* no data to poll */
 	}
 
@@ -955,7 +956,7 @@ static int hns_nic_tx_poll_one(struct hns_nic_ring_data *ring_data,
 		netdev_err(ndev, "wrong head (%d, %d-%d)\n", head,
 			   ring->next_to_use, ring->next_to_clean);
 		ring->stats.io_err_cnt++;
-		NETIF_TX_UNLOCK(ndev);
+		NETIF_TX_UNLOCK(ring);
 		return -EIO;
 	}
 
@@ -967,7 +968,7 @@ static int hns_nic_tx_poll_one(struct hns_nic_ring_data *ring_data,
 		prefetch(&ring->desc_cb[ring->next_to_clean]);
 	}
 
-	NETIF_TX_UNLOCK(ndev);
+	NETIF_TX_UNLOCK(ring);
 
 	dev_queue = netdev_get_tx_queue(ndev, ring_data->queue_index);
 	netdev_tx_completed_queue(dev_queue, pkts, bytes);
@@ -1028,7 +1029,7 @@ static void hns_nic_tx_clr_all_bufs(struct hns_nic_ring_data *ring_data)
 	int head;
 	int bytes, pkts;
 
-	NETIF_TX_LOCK(ndev);
+	NETIF_TX_LOCK(ring);
 
 	head = ring->next_to_use; /* ntu :soft setted ring position*/
 	bytes = 0;
@@ -1036,7 +1037,7 @@ static void hns_nic_tx_clr_all_bufs(struct hns_nic_ring_data *ring_data)
 	while (head != ring->next_to_clean)
 		hns_nic_reclaim_one_desc(ring, &bytes, &pkts);
 
-	NETIF_TX_UNLOCK(ndev);
+	NETIF_TX_UNLOCK(ring);
 
 	dev_queue = netdev_get_tx_queue(ndev, ring_data->queue_index);
 	netdev_tx_reset_queue(dev_queue);
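
For readers less familiar with the pattern, the sketch below shows in userspace
why a per-ring lock is preferable to one device-wide lock: each lock serializes
only its own ring's index bookkeeping, so cleanup of different rings can proceed
in parallel. This is a standalone illustration only, not driver code: pthread
mutexes stand in for kernel spinlocks, and the names (demo_ring,
demo_tx_poll_one, RING_SIZE) are invented for the example.

/*
 * Standalone illustration (not hns driver code): one lock per ring, taken
 * only around that ring's next_to_clean/next_to_use bookkeeping, mirrors the
 * patch's move away from the device-wide netif_tx_lock().
 * Kernel spinlocks are approximated with pthread mutexes.
 */
#include <pthread.h>
#include <stdio.h>

#define RING_SIZE 64

struct demo_ring {
	pthread_mutex_t lock;   /* per-ring lock (cf. hnae_ring->lock) */
	int next_to_use;        /* producer index */
	int next_to_clean;      /* consumer index */
};

static void demo_ring_init(struct demo_ring *ring)
{
	pthread_mutex_init(&ring->lock, NULL);   /* cf. spin_lock_init() */
	ring->next_to_use = 0;
	ring->next_to_clean = 0;
}

/* Reclaim completed descriptors of ONE ring; only that ring's lock is held. */
static int demo_tx_poll_one(struct demo_ring *ring, int hw_head)
{
	int reclaimed = 0;

	pthread_mutex_lock(&ring->lock);         /* cf. NETIF_TX_LOCK(ring) */
	while (ring->next_to_clean != hw_head) {
		ring->next_to_clean = (ring->next_to_clean + 1) % RING_SIZE;
		reclaimed++;
	}
	pthread_mutex_unlock(&ring->lock);       /* cf. NETIF_TX_UNLOCK(ring) */

	return reclaimed;
}

int main(void)
{
	struct demo_ring rings[2];

	demo_ring_init(&rings[0]);
	demo_ring_init(&rings[1]);

	/* Pretend hardware completed 5 descriptors on ring 0 and 3 on ring 1.
	 * Because each call takes only its own ring's lock, the two cleanups
	 * could run on different CPUs at the same time; a single device-wide
	 * lock such as netif_tx_lock() would serialize them.
	 */
	printf("ring0 reclaimed %d\n", demo_tx_poll_one(&rings[0], 5));
	printf("ring1 reclaimed %d\n", demo_tx_poll_one(&rings[1], 3));
	return 0;
}

Note that the patch keeps the NETIF_TX_LOCK()/NETIF_TX_UNLOCK() macro names and
simply redirects them to the per-ring spinlock; when CONFIG_NET_POLL_CONTROLLER
is not set the macros stay empty, so the fast path still compiles without any
locking.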