From patchwork Thu Dec 10 11:31:32 2020
X-Patchwork-Submitter: Loic Poulain
X-Patchwork-Id: 340990
Delivered-To: patch@linaro.org
From: Loic Poulain
To: manivannan.sadhasivam@linaro.org, hemantk@codeaurora.org
Cc: linux-arm-msm@vger.kernel.org, Loic Poulain
Subject: [PATCH] mhi: core: Factorize mhi queuing
Date: Thu, 10 Dec 2020 12:31:32 +0100
Message-Id: <1607599892-6229-1-git-send-email-loic.poulain@linaro.org>
X-Mailer: git-send-email 2.7.4
X-Mailing-List: linux-arm-msm@vger.kernel.org

Instead of duplicating the queuing procedure in mhi_queue_dma(),
mhi_queue_buf() and mhi_queue_skb(), add a new generic mhi_queue()
as a common helper.
Signed-off-by: Loic Poulain
---
 drivers/bus/mhi/core/main.c | 160 +++++++++++---------------------------------
 1 file changed, 38 insertions(+), 122 deletions(-)

-- 
2.7.4

diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
index 3871ef0..4fa4c88 100644
--- a/drivers/bus/mhi/core/main.c
+++ b/drivers/bus/mhi/core/main.c
@@ -963,118 +963,78 @@ static bool mhi_is_ring_full(struct mhi_controller *mhi_cntrl,
 	return (tmp == ring->rp);
 }
 
-int mhi_queue_skb(struct mhi_device *mhi_dev, enum dma_data_direction dir,
-		  struct sk_buff *skb, size_t len, enum mhi_flags mflags)
+static int mhi_queue(struct mhi_device *mhi_dev, struct mhi_buf_info *buf_info,
+		     enum dma_data_direction dir, enum mhi_flags mflags)
 {
 	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
 	struct mhi_chan *mhi_chan = (dir == DMA_TO_DEVICE) ? mhi_dev->ul_chan :
							     mhi_dev->dl_chan;
 	struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
-	struct mhi_buf_info buf_info = { };
+	unsigned long flags;
 	int ret;
 
-	/* If MHI host pre-allocates buffers then client drivers cannot queue */
-	if (mhi_chan->pre_alloc)
-		return -EINVAL;
+	if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)))
+		return -EIO;
 
-	if (mhi_is_ring_full(mhi_cntrl, tre_ring))
-		return -ENOMEM;
+	read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
 
-	read_lock_bh(&mhi_cntrl->pm_lock);
-	if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))) {
-		read_unlock_bh(&mhi_cntrl->pm_lock);
-		return -EIO;
+	ret = mhi_is_ring_full(mhi_cntrl, tre_ring);
+	if (unlikely(ret)) {
+		ret = -ENOMEM;
+		goto exit_unlock;
 	}
 
-	/* we're in M3 or transitioning to M3 */
+	ret = mhi_gen_tre(mhi_cntrl, mhi_chan, buf_info, mflags);
+	if (unlikely(ret))
+		goto exit_unlock;
+
+	/* trigger M3 exit if necessary */
 	if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
 		mhi_trigger_resume(mhi_cntrl);
 
-	/* Toggle wake to exit out of M2 */
+	/* Assert dev_wake (to exit/prevent M1/M2)*/
 	mhi_cntrl->wake_toggle(mhi_cntrl);
 
-	buf_info.v_addr = skb->data;
-	buf_info.cb_buf = skb;
-	buf_info.len = len;
-
-	ret = mhi_gen_tre(mhi_cntrl, mhi_chan, &buf_info, mflags);
-	if (unlikely(ret)) {
-		read_unlock_bh(&mhi_cntrl->pm_lock);
-		return ret;
-	}
-
 	if (mhi_chan->dir == DMA_TO_DEVICE)
 		atomic_inc(&mhi_cntrl->pending_pkts);
 
-	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) {
-		read_lock_bh(&mhi_chan->lock);
-		mhi_ring_chan_db(mhi_cntrl, mhi_chan);
-		read_unlock_bh(&mhi_chan->lock);
+	if (unlikely(!MHI_DB_ACCESS_VALID(mhi_cntrl))) {
+		ret = -EIO;
+		goto exit_unlock;
 	}
 
-	read_unlock_bh(&mhi_cntrl->pm_lock);
+	mhi_ring_chan_db(mhi_cntrl, mhi_chan);
 
-	return 0;
+exit_unlock:
+	read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
+
+	return ret;
 }
-EXPORT_SYMBOL_GPL(mhi_queue_skb);
 
-int mhi_queue_dma(struct mhi_device *mhi_dev, enum dma_data_direction dir,
-		  struct mhi_buf *mhi_buf, size_t len, enum mhi_flags mflags)
+int mhi_queue_skb(struct mhi_device *mhi_dev, enum dma_data_direction dir,
+		  struct sk_buff *skb, size_t len, enum mhi_flags mflags)
 {
-	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
-	struct mhi_chan *mhi_chan = (dir == DMA_TO_DEVICE) ? mhi_dev->ul_chan :
-							     mhi_dev->dl_chan;
-	struct device *dev = &mhi_cntrl->mhi_dev->dev;
-	struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
 	struct mhi_buf_info buf_info = { };
-	int ret;
 
-	/* If MHI host pre-allocates buffers then client drivers cannot queue */
-	if (mhi_chan->pre_alloc)
-		return -EINVAL;
-
-	if (mhi_is_ring_full(mhi_cntrl, tre_ring))
-		return -ENOMEM;
-
-	read_lock_bh(&mhi_cntrl->pm_lock);
-	if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))) {
-		dev_err(dev, "MHI is not in activate state, PM state: %s\n",
-			to_mhi_pm_state_str(mhi_cntrl->pm_state));
-		read_unlock_bh(&mhi_cntrl->pm_lock);
-
-		return -EIO;
-	}
+	buf_info.v_addr = skb->data;
+	buf_info.cb_buf = skb;
+	buf_info.len = len;
 
-	/* we're in M3 or transitioning to M3 */
-	if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
-		mhi_trigger_resume(mhi_cntrl);
+	return mhi_queue(mhi_dev, &buf_info, dir, mflags);
+}
+EXPORT_SYMBOL_GPL(mhi_queue_skb);
 
-	/* Toggle wake to exit out of M2 */
-	mhi_cntrl->wake_toggle(mhi_cntrl);
+int mhi_queue_dma(struct mhi_device *mhi_dev, enum dma_data_direction dir,
+		  struct mhi_buf *mhi_buf, size_t len, enum mhi_flags mflags)
+{
+	struct mhi_buf_info buf_info = { };
 
 	buf_info.p_addr = mhi_buf->dma_addr;
 	buf_info.cb_buf = mhi_buf;
 	buf_info.pre_mapped = true;
 	buf_info.len = len;
 
-	ret = mhi_gen_tre(mhi_cntrl, mhi_chan, &buf_info, mflags);
-	if (unlikely(ret)) {
-		read_unlock_bh(&mhi_cntrl->pm_lock);
-		return ret;
-	}
-
-	if (mhi_chan->dir == DMA_TO_DEVICE)
-		atomic_inc(&mhi_cntrl->pending_pkts);
-
-	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) {
-		read_lock_bh(&mhi_chan->lock);
-		mhi_ring_chan_db(mhi_cntrl, mhi_chan);
-		read_unlock_bh(&mhi_chan->lock);
-	}
-
-	read_unlock_bh(&mhi_cntrl->pm_lock);
-
-	return 0;
+	return mhi_queue(mhi_dev, &buf_info, dir, mflags);
 }
 EXPORT_SYMBOL_GPL(mhi_queue_dma);
 
@@ -1128,57 +1088,13 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
 int mhi_queue_buf(struct mhi_device *mhi_dev, enum dma_data_direction dir,
 		  void *buf, size_t len, enum mhi_flags mflags)
 {
-	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
-	struct mhi_chan *mhi_chan = (dir == DMA_TO_DEVICE) ? mhi_dev->ul_chan :
-							     mhi_dev->dl_chan;
-	struct mhi_ring *tre_ring;
 	struct mhi_buf_info buf_info = { };
-	unsigned long flags;
-	int ret;
-
-	/*
-	 * this check here only as a guard, it's always
-	 * possible mhi can enter error while executing rest of function,
-	 * which is not fatal so we do not need to hold pm_lock
-	 */
-	if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)))
-		return -EIO;
-
-	tre_ring = &mhi_chan->tre_ring;
-	if (mhi_is_ring_full(mhi_cntrl, tre_ring))
-		return -ENOMEM;
 
 	buf_info.v_addr = buf;
 	buf_info.cb_buf = buf;
 	buf_info.len = len;
 
-	ret = mhi_gen_tre(mhi_cntrl, mhi_chan, &buf_info, mflags);
-	if (unlikely(ret))
-		return ret;
-
-	read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
-
-	/* we're in M3 or transitioning to M3 */
-	if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
-		mhi_trigger_resume(mhi_cntrl);
-
-	/* Toggle wake to exit out of M2 */
-	mhi_cntrl->wake_toggle(mhi_cntrl);
-
-	if (mhi_chan->dir == DMA_TO_DEVICE)
-		atomic_inc(&mhi_cntrl->pending_pkts);
-
-	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) {
-		unsigned long flags;
-
-		read_lock_irqsave(&mhi_chan->lock, flags);
-		mhi_ring_chan_db(mhi_cntrl, mhi_chan);
-		read_unlock_irqrestore(&mhi_chan->lock, flags);
-	}
-
-	read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
-
-	return 0;
+	return mhi_queue(mhi_dev, &buf_info, dir, mflags);
 }
 EXPORT_SYMBOL_GPL(mhi_queue_buf);