From patchwork Tue Dec 29 06:24:27 2020
X-Patchwork-Submitter: Ikjoon Jang
X-Patchwork-Id: 354153
From: Ikjoon Jang
To: linux-mediatek@lists.infradead.org, linux-usb@vger.kernel.org
Cc: Chunfeng Yun, Tianping Fang, Zhanyong Wang, Ikjoon Jang,
    Greg Kroah-Hartman, Mathias Nyman, Matthias Brugger,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v5] usb: xhci-mtk: fix unreleased bandwidth data
Date: Tue, 29 Dec 2020 14:24:27 +0800
Message-Id: <20201229142406.v5.1.Id0d31b5f3ddf5e734d2ab11161ac5821921b1e1e@changeid>
X-Mailer: git-send-email 2.29.2.729.g45daf8777d-goog

xhci-mtk has hooks on add_endpoint() and drop_endpoint() from xhci to
handle its own software bandwidth management, and it stores bandwidth
data into an internal table every time add_endpoint() is called. So
when bandwidth allocation fails at one endpoint, all earlier
allocations from the same interface could still remain in the table.

This patch adds two more hooks, from check_bandwidth() and
reset_bandwidth(), and makes xhci-mtk release the pending endpoints
in reset_bandwidth().

Fixes: 08e469de87a2 ("usb: xhci-mtk: supports bandwidth scheduling with multi-TT")
Signed-off-by: Ikjoon Jang
---

Changes in v5:
- Fix a wrong commit id in the Fixes tag

Changes in v4:
- Fix a bug in v3: check_bandwidth() returned an uninitialized value
  when no new endpoints were added
- Change the Fixes tag to keep the dependency

Changes in v3:
- Drop unrelated code cleanups
- Change the Fixes tag to keep the dependency

Changes in v2:
- Fix a 0-day warning about an unused variable
- Split one big patch into three patches
- Fix a wrong offset in the MediaTek hw flags

 drivers/usb/host/xhci-mtk-sch.c | 124 ++++++++++++++++++++++----------
 drivers/usb/host/xhci-mtk.h     |  13 ++++
 drivers/usb/host/xhci.c         |   9 +++
 3 files changed, 109 insertions(+), 37 deletions(-)

diff --git a/drivers/usb/host/xhci-mtk-sch.c b/drivers/usb/host/xhci-mtk-sch.c
index 45c54d56ecbd..95d20de9fd1f 100644
--- a/drivers/usb/host/xhci-mtk-sch.c
+++ b/drivers/usb/host/xhci-mtk-sch.c
@@ -200,6 +200,7 @@ static struct mu3h_sch_ep_info *create_sch_ep(struct usb_device *udev,
 
 	sch_ep->sch_tt = tt;
 	sch_ep->ep = ep;
+	INIT_LIST_HEAD(&sch_ep->tt_endpoint);
 
 	return sch_ep;
 }
@@ -583,6 +584,8 @@ int xhci_mtk_sch_init(struct xhci_hcd_mtk *mtk)
 
 	mtk->sch_array = sch_array;
 
+	INIT_LIST_HEAD(&mtk->bw_ep_list_new);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(xhci_mtk_sch_init);
@@ -601,19 +604,14 @@ int xhci_mtk_add_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
 	struct xhci_ep_ctx *ep_ctx;
 	struct xhci_slot_ctx *slot_ctx;
 	struct xhci_virt_device *virt_dev;
-	struct mu3h_sch_bw_info *sch_bw;
 	struct mu3h_sch_ep_info *sch_ep;
-	struct mu3h_sch_bw_info *sch_array;
 	unsigned int ep_index;
-	int bw_index;
-	int ret = 0;
 
 	xhci = hcd_to_xhci(hcd);
 	virt_dev = xhci->devs[udev->slot_id];
 	ep_index = xhci_get_endpoint_index(&ep->desc);
 	slot_ctx = xhci_get_slot_ctx(xhci, virt_dev->in_ctx);
 	ep_ctx = xhci_get_ep_ctx(xhci, virt_dev->in_ctx, ep_index);
-	sch_array = mtk->sch_array;
 
 	xhci_dbg(xhci, "%s() type:%d, speed:%d, mpkt:%d, dir:%d, ep:%p\n",
 		__func__, usb_endpoint_type(&ep->desc), udev->speed,
@@ -632,39 +630,34 @@ int xhci_mtk_add_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
 		return 0;
 	}
 
-	bw_index = get_bw_index(xhci, udev, ep);
-	sch_bw = &sch_array[bw_index];
-
 	sch_ep = create_sch_ep(udev, ep, ep_ctx);
 	if (IS_ERR_OR_NULL(sch_ep))
 		return -ENOMEM;
 
 	setup_sch_info(udev, ep_ctx, sch_ep);
 
-	ret = check_sch_bw(udev, sch_bw, sch_ep);
-	if (ret) {
-		xhci_err(xhci, "Not enough bandwidth!\n");
-		if (is_fs_or_ls(udev->speed))
-			drop_tt(udev);
-
-		kfree(sch_ep);
-		return -ENOSPC;
-	}
+	list_add_tail(&sch_ep->endpoint, &mtk->bw_ep_list_new);
 
-	list_add_tail(&sch_ep->endpoint, &sch_bw->bw_ep_list);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(xhci_mtk_add_ep_quirk);
 
-	ep_ctx->reserved[0] |= cpu_to_le32(EP_BPKTS(sch_ep->pkts)
-		| EP_BCSCOUNT(sch_ep->cs_count) | EP_BBM(sch_ep->burst_mode));
-	ep_ctx->reserved[1] |= cpu_to_le32(EP_BOFFSET(sch_ep->offset)
-		| EP_BREPEAT(sch_ep->repeat));
+static void xhci_mtk_drop_ep(struct xhci_hcd_mtk *mtk, struct usb_device *udev,
+		struct mu3h_sch_ep_info *sch_ep)
+{
+	struct xhci_hcd *xhci = hcd_to_xhci(mtk->hcd);
+	int bw_index = get_bw_index(xhci, udev, sch_ep->ep);
+	struct mu3h_sch_bw_info *sch_bw = &mtk->sch_array[bw_index];
 
-	xhci_dbg(xhci, " PKTS:%x, CSCOUNT:%x, BM:%x, OFFSET:%x, REPEAT:%x\n",
-		sch_ep->pkts, sch_ep->cs_count, sch_ep->burst_mode,
-		sch_ep->offset, sch_ep->repeat);
+	update_bus_bw(sch_bw, sch_ep, 0);
+	list_del(&sch_ep->endpoint);
 
-	return 0;
+	if (sch_ep->sch_tt) {
+		list_del(&sch_ep->tt_endpoint);
+		drop_tt(udev);
+	}
+	kfree(sch_ep);
 }
-EXPORT_SYMBOL_GPL(xhci_mtk_add_ep_quirk);
 
 void xhci_mtk_drop_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
 		struct usb_host_endpoint *ep)
@@ -675,7 +668,7 @@ void xhci_mtk_drop_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
 	struct xhci_virt_device *virt_dev;
 	struct mu3h_sch_bw_info *sch_array;
 	struct mu3h_sch_bw_info *sch_bw;
-	struct mu3h_sch_ep_info *sch_ep;
+	struct mu3h_sch_ep_info *sch_ep, *tmp;
 	int bw_index;
 
 	xhci = hcd_to_xhci(hcd);
@@ -694,17 +687,74 @@ void xhci_mtk_drop_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
 	bw_index = get_bw_index(xhci, udev, ep);
 	sch_bw = &sch_array[bw_index];
 
-	list_for_each_entry(sch_ep, &sch_bw->bw_ep_list, endpoint) {
+	list_for_each_entry_safe(sch_ep, tmp, &sch_bw->bw_ep_list, endpoint) {
 		if (sch_ep->ep == ep) {
-			update_bus_bw(sch_bw, sch_ep, 0);
-			list_del(&sch_ep->endpoint);
-			if (is_fs_or_ls(udev->speed)) {
-				list_del(&sch_ep->tt_endpoint);
-				drop_tt(udev);
-			}
-			kfree(sch_ep);
-			break;
+			xhci_mtk_drop_ep(mtk, udev, sch_ep);
 		}
 	}
 }
 EXPORT_SYMBOL_GPL(xhci_mtk_drop_ep_quirk);
+
+int xhci_mtk_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
+{
+	struct xhci_hcd_mtk *mtk = hcd_to_mtk(hcd);
+	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+	struct xhci_virt_device *virt_dev = xhci->devs[udev->slot_id];
+	struct mu3h_sch_bw_info *sch_bw;
+	struct mu3h_sch_ep_info *sch_ep, *tmp;
+	int bw_index, ret;
+
+	dev_dbg(&udev->dev, "%s\n", __func__);
+
+	if (list_empty(&mtk->bw_ep_list_new))
+		return 0;
+
+	list_for_each_entry(sch_ep, &mtk->bw_ep_list_new, endpoint) {
+		bw_index = get_bw_index(xhci, udev, sch_ep->ep);
+		sch_bw = &mtk->sch_array[bw_index];
+
+		ret = check_sch_bw(udev, sch_bw, sch_ep);
+		if (ret) {
+			xhci_err(xhci, "Not enough bandwidth!\n");
+			return -ENOSPC;
+		}
+	}
+
+	list_for_each_entry_safe(sch_ep, tmp, &mtk->bw_ep_list_new, endpoint) {
+		struct xhci_ep_ctx *ep_ctx;
+		struct usb_host_endpoint *ep = sch_ep->ep;
+		unsigned int ep_index = xhci_get_endpoint_index(&ep->desc);
+
+		bw_index = get_bw_index(xhci, udev, ep);
+		sch_bw = &mtk->sch_array[bw_index];
+
+		list_move_tail(&sch_ep->endpoint, &sch_bw->bw_ep_list);
+
+		ep_ctx = xhci_get_ep_ctx(xhci, virt_dev->in_ctx, ep_index);
+		ep_ctx->reserved[0] |= cpu_to_le32(EP_BPKTS(sch_ep->pkts)
+			| EP_BCSCOUNT(sch_ep->cs_count)
+			| EP_BBM(sch_ep->burst_mode));
+		ep_ctx->reserved[1] |= cpu_to_le32(EP_BOFFSET(sch_ep->offset)
+			| EP_BREPEAT(sch_ep->repeat));
+
+		xhci_dbg(xhci, " PKTS:%x, CSCOUNT:%x, BM:%x, OFFSET:%x, REPEAT:%x\n",
+			sch_ep->pkts, sch_ep->cs_count, sch_ep->burst_mode,
+			sch_ep->offset, sch_ep->repeat);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(xhci_mtk_check_bandwidth);
+
+void xhci_mtk_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
+{
+	struct xhci_hcd_mtk *mtk = hcd_to_mtk(hcd);
+	struct mu3h_sch_ep_info *sch_ep, *tmp;
+
+	dev_dbg(&udev->dev, "%s\n", __func__);
+
+	list_for_each_entry_safe(sch_ep, tmp, &mtk->bw_ep_list_new, endpoint) {
+		xhci_mtk_drop_ep(mtk, udev, sch_ep);
+	}
+}
+EXPORT_SYMBOL_GPL(xhci_mtk_reset_bandwidth);
diff --git a/drivers/usb/host/xhci-mtk.h b/drivers/usb/host/xhci-mtk.h
index a93cfe817904..577f431c5c93 100644
--- a/drivers/usb/host/xhci-mtk.h
+++ b/drivers/usb/host/xhci-mtk.h
@@ -130,6 +130,7 @@ struct mu3c_ippc_regs {
 struct xhci_hcd_mtk {
 	struct device *dev;
 	struct usb_hcd *hcd;
+	struct list_head bw_ep_list_new;
 	struct mu3h_sch_bw_info *sch_array;
 	struct mu3c_ippc_regs __iomem *ippc_regs;
 	bool has_ippc;
@@ -166,6 +167,8 @@ int xhci_mtk_add_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
 		struct usb_host_endpoint *ep);
 void xhci_mtk_drop_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
 		struct usb_host_endpoint *ep);
+int xhci_mtk_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev);
+void xhci_mtk_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev);
 
 #else
 static inline int xhci_mtk_add_ep_quirk(struct usb_hcd *hcd,
@@ -179,6 +182,16 @@ static inline void xhci_mtk_drop_ep_quirk(struct usb_hcd *hcd,
 {
 }
 
+static inline int xhci_mtk_check_bandwidth(struct usb_hcd *hcd,
+		struct usb_device *udev)
+{
+	return 0;
+}
+
+static inline void xhci_mtk_reset_bandwidth(struct usb_hcd *hcd,
+		struct usb_device *udev)
+{
+}
 #endif
 
 #endif		/* _XHCI_MTK_H_ */
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index d4a8d0efbbc4..e1fcd3cf723f 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -2882,6 +2882,12 @@ static int xhci_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
 	xhci_dbg(xhci, "%s called for udev %p\n", __func__, udev);
 	virt_dev = xhci->devs[udev->slot_id];
 
+	if (xhci->quirks & XHCI_MTK_HOST) {
+		ret = xhci_mtk_check_bandwidth(hcd, udev);
+		if (ret < 0)
+			return ret;
+	}
+
 	command = xhci_alloc_command(xhci, true, GFP_KERNEL);
 	if (!command)
 		return -ENOMEM;
@@ -2970,6 +2976,9 @@ static void xhci_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
 		return;
 	xhci = hcd_to_xhci(hcd);
 
+	if (xhci->quirks & XHCI_MTK_HOST)
+		xhci_mtk_reset_bandwidth(hcd, udev);
+
 	xhci_dbg(xhci, "%s called for udev %p\n", __func__, udev);
 	virt_dev = xhci->devs[udev->slot_id];
 	/* Free any rings allocated for added endpoints */
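
The diff above is the authoritative change; as a reading aid only, below is a minimal,
self-contained userspace sketch of the deferred-commit pattern the patch implements:
add_ep() merely stages an endpoint on a pending list, commit_bw() accounts for the whole
staged set at once or rejects it, and reset_bw() discards whatever is still pending so a
failed set leaves nothing behind in the table. All names here (pending_ep, add_ep,
commit_bw, reset_bw, BUS_BW_LIMIT) are hypothetical and do not exist in the kernel; the
sketch only mirrors the flow of xhci_mtk_add_ep_quirk(), xhci_mtk_check_bandwidth() and
xhci_mtk_reset_bandwidth().

/* Userspace sketch of stage-then-commit-or-rollback bandwidth bookkeeping. */
#include <stdio.h>
#include <stdlib.h>

#define BUS_BW_LIMIT	100	/* pretend per-bus bandwidth budget */

struct pending_ep {
	int bw;			/* bandwidth this endpoint would consume */
	struct pending_ep *next;
};

static struct pending_ep *pending;	/* staged by add_ep(), not yet counted */
static int committed_bw;		/* bandwidth already reserved on the bus */

/* Stage an endpoint; nothing is reserved yet (mirrors add_ep_quirk()). */
static void add_ep(int bw)
{
	struct pending_ep *ep = malloc(sizeof(*ep));

	if (!ep)
		exit(1);
	ep->bw = bw;
	ep->next = pending;
	pending = ep;
}

/* Drop all staged endpoints without touching committed_bw (reset_bandwidth()). */
static void reset_bw(void)
{
	while (pending) {
		struct pending_ep *ep = pending;

		pending = ep->next;
		free(ep);
	}
}

/*
 * Commit every staged endpoint, or none of them (check_bandwidth()).
 * Returns 0 on success, -1 if the whole set does not fit.
 */
static int commit_bw(void)
{
	struct pending_ep *ep;
	int needed = 0;

	for (ep = pending; ep; ep = ep->next)
		needed += ep->bw;

	if (committed_bw + needed > BUS_BW_LIMIT)
		return -1;	/* caller is expected to call reset_bw() */

	committed_bw += needed;
	reset_bw();		/* staged entries are now accounted for */
	return 0;
}

int main(void)
{
	add_ep(60);
	add_ep(30);
	if (commit_bw())
		reset_bw();
	printf("committed: %d\n", committed_bw);	/* 90 */

	add_ep(40);		/* would exceed the budget */
	if (commit_bw())
		reset_bw();	/* nothing from the failed set leaks into the table */
	printf("committed: %d\n", committed_bw);	/* still 90 */
	return 0;
}

The design point the sketch makes is the same one the patch makes: bandwidth is only
recorded once the whole set of added endpoints has been validated, so a mid-interface
failure cannot leave stale entries behind.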