From patchwork Tue Apr 26 08:21:08 2022
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 566576
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Tomas Melin,
	Claudiu Beznea, Jakub Kicinski, Sasha Levin
Subject: [PATCH 5.15 067/124] net: macb: Restart tx only if queue pointer is lagging
Date: Tue, 26 Apr 2022 10:21:08 +0200
Message-Id: <20220426081749.205909953@linuxfoundation.org>
In-Reply-To: <20220426081747.286685339@linuxfoundation.org>
References: <20220426081747.286685339@linuxfoundation.org>
X-Mailing-List: stable@vger.kernel.org

From: Tomas Melin

[ Upstream commit 5ad7f18cd82cee8e773d40cc7a1465a526f2615c ]

commit 4298388574da ("net: macb: restart tx after tx used bit read")
added support for restarting transmission. Restarting tx does not work
when the controller asserts the TXUBR interrupt while TBQP is already
at the end of the tx queue: restarting tx then immediately raises
another TXUBR interrupt, and the driver ends up in an infinite
interrupt loop it cannot break out of.

For the case where TBQP is at the end of the tx queue, only clear the
TX_USED interrupt instead. Transmission resumes as more data gets
pushed to the queue.

This issue was observed on a Xilinx Zynq-7000 based board. During a
stress test of the network interface, the driver would get stuck in
the interrupt loop within seconds or minutes, causing the CPU to stall.
Signed-off-by: Tomas Melin
Tested-by: Claudiu Beznea
Reviewed-by: Claudiu Beznea
Link: https://lore.kernel.org/r/20220407161659.14532-1-tomas.melin@vaisala.com
Signed-off-by: Jakub Kicinski
Signed-off-by: Sasha Levin
---
 drivers/net/ethernet/cadence/macb_main.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 9705c49655ad..217c1a0f8940 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -1689,6 +1689,7 @@ static void macb_tx_restart(struct macb_queue *queue)
 	unsigned int head = queue->tx_head;
 	unsigned int tail = queue->tx_tail;
 	struct macb *bp = queue->bp;
+	unsigned int head_idx, tbqp;
 
 	if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
 		queue_writel(queue, ISR, MACB_BIT(TXUBR));
@@ -1696,6 +1697,13 @@
 	if (head == tail)
 		return;
 
+	tbqp = queue_readl(queue, TBQP) / macb_dma_desc_get_size(bp);
+	tbqp = macb_adj_dma_desc_idx(bp, macb_tx_ring_wrap(bp, tbqp));
+	head_idx = macb_adj_dma_desc_idx(bp, macb_tx_ring_wrap(bp, head));
+
+	if (tbqp == head_idx)
+		return;
+
 	macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART));
 }
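
[Editorial note, not part of the submitted change: the sketch below shows
roughly what macb_tx_restart() looks like with this patch applied. It is
reconstructed from the hunks above; the comments are annotations added for
clarity.]

/* Annotated reconstruction of macb_tx_restart() after this patch. */
static void macb_tx_restart(struct macb_queue *queue)
{
	unsigned int head = queue->tx_head;
	unsigned int tail = queue->tx_tail;
	struct macb *bp = queue->bp;
	unsigned int head_idx, tbqp;

	/* Acknowledge the TX_USED (TXUBR) interrupt on controllers that
	 * require an explicit write to clear ISR bits.
	 */
	if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
		queue_writel(queue, ISR, MACB_BIT(TXUBR));

	/* Queue is empty: nothing to restart. */
	if (head == tail)
		return;

	/* Convert the hardware queue pointer (TBQP) and the software head
	 * into comparable descriptor ring indices.
	 */
	tbqp = queue_readl(queue, TBQP) / macb_dma_desc_get_size(bp);
	tbqp = macb_adj_dma_desc_idx(bp, macb_tx_ring_wrap(bp, tbqp));
	head_idx = macb_adj_dma_desc_idx(bp, macb_tx_ring_wrap(bp, head));

	/* Hardware has already caught up with the queue head: setting
	 * TSTART now would immediately raise another TXUBR interrupt, so
	 * return without restarting. Transmission resumes once new
	 * descriptors are queued.
	 */
	if (tbqp == head_idx)
		return;

	/* Hardware is lagging behind the head: restart transmission. */
	macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART));
}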