
[v3,1/2] PCI: dwc: Implement general suspend/resume functionality for L2/L3 transitions

Message ID 20230419164118.596300-1-Frank.Li@nxp.com
State Superseded
Series [v3,1/2] PCI: dwc: Implement general suspend/resume functionality for L2/L3 transitions

Commit Message

Frank Li April 19, 2023, 4:41 p.m. UTC
Introduced helper function dw_pcie_get_ltssm to retrieve SMLH_LTSS_STATE.
Added API pme_turn_off and exit_from_l2 for managing L2/L3 state transitions.

Typical L2 entry workflow:

1. Transmit PME turn off signal to PCI devices.
2. Await link entering L2_IDLE state.
3. Transition Root complex to D3 state.

Typical L2 exit workflow:

1. Transition Root complex to D0 state.
2. Issue exit from L2 command.
3. Reinitialize PCI host.
4. Wait for link to become active.

Signed-off-by: Frank Li <Frank.Li@nxp.com>
---
Change from v2 to v3:
- Basically rewrote the whole patch according to Rob Herring's suggestion:
  put the common functions into dwc so more SoCs can share the same logic.
  
 .../pci/controller/dwc/pcie-designware-host.c | 80 +++++++++++++++++++
 drivers/pci/controller/dwc/pcie-designware.h  | 28 +++++++
 2 files changed, 108 insertions(+)

Comments

Manivannan Sadhasivam July 20, 2023, 2:20 p.m. UTC | #1
On Wed, Jul 19, 2023 at 03:16:48PM -0400, Frank Li wrote:
> On Tue, Jul 18, 2023 at 03:34:00PM +0530, Manivannan Sadhasivam wrote:
> > On Mon, Jul 17, 2023 at 02:36:19PM -0400, Frank Li wrote:
> > 
> > Fine then. But can we check for PM_LINKST_IN_L2 via the System Information
> > Interface (SII) instead of the LTSSM state?
> >
> Where is PM_LINKST_IN_L2 defined? I just found one at
> 
> drivers/pci/controller/dwc/pcie-qcom.c:#define PARF_DEBUG_CNT_PM_LINKST_IN_L2           0xc04
> 

Yes, this register is not exposed by the DWC core itself, but vendors such as Qcom
may expose it, as you pointed out. If it is not available on all platforms, then
feel free to ignore my advice.

- Mani

> Frank
>
Frank Li July 20, 2023, 2:37 p.m. UTC | #2
On Thu, Jul 20, 2023 at 07:55:09PM +0530, Manivannan Sadhasivam wrote:
> On Tue, Jul 18, 2023 at 03:34:26PM +0530, Manivannan Sadhasivam wrote:
> > On Mon, Jul 17, 2023 at 02:36:19PM -0400, Frank Li wrote:
> > > On Mon, Jul 17, 2023 at 10:15:26PM +0530, Manivannan Sadhasivam wrote:
> > > > On Wed, Apr 19, 2023 at 12:41:17PM -0400, Frank Li wrote:
> > > > > Introduced helper function dw_pcie_get_ltssm to retrieve SMLH_LTSS_STATE.
> > > > > Added API pme_turn_off and exit_from_l2 for managing L2/L3 state transitions.
> > > > > 
> > > > > Typical L2 entry workflow:
> > > > > 
> > > > > 1. Transmit PME turn off signal to PCI devices.
> > > > > 2. Await link entering L2_IDLE state.
> > > > 
> > > > AFAIK, typical workflow is to wait for PME_To_Ack.
> > > 
> > > 1. We already wait for PME_to_ACK; 2. this just waits for the link to actually
> > > enter L2. I think the PCI RC needs some time to make the link enter L2 after
> > > getting the ACK for the PME.
> > > 
> 
> One more comment. If you transition the device to L2/L3, then it can lose power
> if Vaux was not provided. In that case, can all the devices work after resume?
> Most notably NVMe?

I have no hardware to do such a test. The NVMe driver will reinit everything after
resume if there is no L1.1/L1.2 support. If L1.1/L1.2 is available, NVMe expects to
be left in L1.2 at suspend to get better resume latency.

This API helps remove duplicated code, and it can be improved gradually.


> 
> - Mani
> 
> 
> -- 
> மணிவண்ணன் சதாசிவம்
Manivannan Sadhasivam July 20, 2023, 4:07 p.m. UTC | #3
On Thu, Jul 20, 2023 at 10:37:36AM -0400, Frank Li wrote:
> On Thu, Jul 20, 2023 at 07:55:09PM +0530, Manivannan Sadhasivam wrote:
> > On Tue, Jul 18, 2023 at 03:34:26PM +0530, Manivannan Sadhasivam wrote:
> > > On Mon, Jul 17, 2023 at 02:36:19PM -0400, Frank Li wrote:
> > > > On Mon, Jul 17, 2023 at 10:15:26PM +0530, Manivannan Sadhasivam wrote:
> > > > > On Wed, Apr 19, 2023 at 12:41:17PM -0400, Frank Li wrote:
> > > > > > Introduced helper function dw_pcie_get_ltssm to retrieve SMLH_LTSS_STATE.
> > > > > > Added API pme_turn_off and exit_from_l2 for managing L2/L3 state transitions.
> > > > > > 
> > > > > > Typical L2 entry workflow:
> > > > > > 
> > > > > > 1. Transmit PME turn off signal to PCI devices.
> > > > > > 2. Await link entering L2_IDLE state.
> > > > > 
> > > > > AFAIK, typical workflow is to wait for PME_To_Ack.
> > > > 
> > > > 1 Already wait for PME_to_ACK,  2, just wait for link actual enter L2.
> > > > I think PCI RC needs some time to set link enter L2 after get ACK from
> > > > PME.
> > > > 
> > 
> > One more comment. If you transition the device to L2/L3, then it can loose power
> > if Vaux was not provided. In that case, can all the devices work after resume?
> > Most notably NVMe?
> 
> I have not hardware to do such test, NVMe driver will reinit everything after
> resume if no L1.1\L1.2 support. If there are L1.1\L1.2, NVME expect it leave
> at L1.2 at suspend to get better resume latency.
> 

To be precise, the NVMe driver will shut down the device if there is no ASPM support
and keep it in low power mode otherwise (there are other cases as well, but we do
not need to worry about them).

But here you are not checking for the ASPM state in the suspend path; you are just
forcing the link to be in L2/L3 (thereby D3cold) even though the NVMe driver may
expect it to be in a low power state like ASPM/APST.

So you should only put the link into L2/L3 if there is no ASPM support. Otherwise,
you'll end up with bug reports when users connect NVMe devices.

- Mani

> This API help remove duplicate codes and it can be improved gradually.
> 
> 
> > 
> > - Mani
> > 
> > 
> > -- 
> > மணிவண்ணன் சதாசிவம்
Frank Li July 20, 2023, 4:26 p.m. UTC | #4
On Thu, Jul 20, 2023 at 09:37:38PM +0530, Manivannan Sadhasivam wrote:
> On Thu, Jul 20, 2023 at 10:37:36AM -0400, Frank Li wrote:
> > On Thu, Jul 20, 2023 at 07:55:09PM +0530, Manivannan Sadhasivam wrote:
> > > On Tue, Jul 18, 2023 at 03:34:26PM +0530, Manivannan Sadhasivam wrote:
> > > > On Mon, Jul 17, 2023 at 02:36:19PM -0400, Frank Li wrote:
> > > > > On Mon, Jul 17, 2023 at 10:15:26PM +0530, Manivannan Sadhasivam wrote:
> > > > > > On Wed, Apr 19, 2023 at 12:41:17PM -0400, Frank Li wrote:
> > > > > > > Introduced helper function dw_pcie_get_ltssm to retrieve SMLH_LTSS_STATE.
> > > > > > > Added API pme_turn_off and exit_from_l2 for managing L2/L3 state transitions.
> > > > > > > 
> > > > > > > Typical L2 entry workflow:
> > > > > > > 
> > > > > > > 1. Transmit PME turn off signal to PCI devices.
> > > > > > > 2. Await link entering L2_IDLE state.
> > > > > > 
> > > > > > AFAIK, typical workflow is to wait for PME_To_Ack.
> > > > > 
> > > > > 1 Already wait for PME_to_ACK,  2, just wait for link actual enter L2.
> > > > > I think PCI RC needs some time to set link enter L2 after get ACK from
> > > > > PME.
> > > > > 
> > > 
> > > One more comment. If you transition the device to L2/L3, then it can loose power
> > > if Vaux was not provided. In that case, can all the devices work after resume?
> > > Most notably NVMe?
> > 
> > I have not hardware to do such test, NVMe driver will reinit everything after
> > resume if no L1.1\L1.2 support. If there are L1.1\L1.2, NVME expect it leave
> > at L1.2 at suspend to get better resume latency.
> > 
> 
> To be precise, NVMe driver will shutdown the device if there is no ASPM support
> and keep it in low power mode otherwise (there are other cases as well but we do
> not need to worry).

I suppose this should work, but I have no hardware to test it now. NVMe already
sends a shutdown command to the SSD, which can then safely turn off. After resume,
that is most likely a cold reset.

> 
> But here you are not checking for ASPM state in the suspend path, and just
> forcing the link to be in L2/L3 (thereby D3Cold) even though NVMe driver may
> expect it to be in low power state like ASPM/APST.

This function is not called by default; a platform driver needs to call it as needed.
Actually, I think the PCI framework should handle both the L1.2 and L2 cases: some
devices or use cases want L1.2 for better resume latency, while others want L2 for
better power saving. That is out of scope of these patches.

This patch just handles the L2 case; I recall L1.2 is not expected to send PME at all.

> 
> So you should only put the link to L2/L3 if there is no ASPM support. Otherwise,
> you'll ending up with bug reports when users connect NVMe to it.

Platforms should choose whether or not to call this function according to their
use case. So far, I have not found a good method to let ASPM affect
suspend/resume.

> 
> - Mani
> 
> > This API help remove duplicate codes and it can be improved gradually.
> > 
> > 
> > > 
> > > - Mani
> > > 
> > > 
> > > -- 
> > > மணிவண்ணன் சதாசிவம்
> 
> -- 
> மணிவண்ணன் சதாசிவம்
Manivannan Sadhasivam July 20, 2023, 4:43 p.m. UTC | #5
On Thu, Jul 20, 2023 at 12:26:38PM -0400, Frank Li wrote:
> On Thu, Jul 20, 2023 at 09:37:38PM +0530, Manivannan Sadhasivam wrote:
> > On Thu, Jul 20, 2023 at 10:37:36AM -0400, Frank Li wrote:
> > > On Thu, Jul 20, 2023 at 07:55:09PM +0530, Manivannan Sadhasivam wrote:
> > > > On Tue, Jul 18, 2023 at 03:34:26PM +0530, Manivannan Sadhasivam wrote:
> > > > > On Mon, Jul 17, 2023 at 02:36:19PM -0400, Frank Li wrote:
> > > > > > On Mon, Jul 17, 2023 at 10:15:26PM +0530, Manivannan Sadhasivam wrote:
> > > > > > > On Wed, Apr 19, 2023 at 12:41:17PM -0400, Frank Li wrote:
> > > > > > > > Introduced helper function dw_pcie_get_ltssm to retrieve SMLH_LTSS_STATE.
> > > > > > > > Added API pme_turn_off and exit_from_l2 for managing L2/L3 state transitions.
> > > > > > > > 
> > > > > > > > Typical L2 entry workflow:
> > > > > > > > 
> > > > > > > > 1. Transmit PME turn off signal to PCI devices.
> > > > > > > > 2. Await link entering L2_IDLE state.
> > > > > > > 
> > > > > > > AFAIK, typical workflow is to wait for PME_To_Ack.
> > > > > > 
> > > > > > 1 Already wait for PME_to_ACK,  2, just wait for link actual enter L2.
> > > > > > I think PCI RC needs some time to set link enter L2 after get ACK from
> > > > > > PME.
> > > > > > 
> > > > 
> > > > One more comment. If you transition the device to L2/L3, then it can loose power
> > > > if Vaux was not provided. In that case, can all the devices work after resume?
> > > > Most notably NVMe?
> > > 
> > > I have not hardware to do such test, NVMe driver will reinit everything after
> > > resume if no L1.1\L1.2 support. If there are L1.1\L1.2, NVME expect it leave
> > > at L1.2 at suspend to get better resume latency.
> > > 
> > 
> > To be precise, NVMe driver will shutdown the device if there is no ASPM support
> > and keep it in low power mode otherwise (there are other cases as well but we do
> > not need to worry).
> 
> I supposed this should work. but I have not hardware to test it now. NMVE already
> sent shut down command to SSD, which can safely turn off. after resume, that most
> likely a cold reset.
> 

NO, it won't work, and that's the reason the Qcom platforms are not transitioning
the link to the L2/L3 state during suspend. This applies to other platforms,
including layerscape, as well.

> > 
> > But here you are not checking for ASPM state in the suspend path, and just
> > forcing the link to be in L2/L3 (thereby D3Cold) even though NVMe driver may
> > expect it to be in low power state like ASPM/APST.
> 
> This function is not called defaultly and need platform driver to call it as need.
> Actually, I think PCI framework should handle L1.2 and L2 case, some devices
> or user case want to L1.2 to get better resume latency, some devices want to
> L2 to get better power saving, which out of scope of this patches.
> 

I'm referring to the platform where these helper functions are being used, which
is layerscape. It doesn't matter whether you test this series with NVMe or not;
it will not work unless you disable ASPM.

> This patch just handle L2 case, I remember L1.2 don't expect send PME at all.
> 
> > 
> > So you should only put the link to L2/L3 if there is no ASPM support. Otherwise,
> > you'll ending up with bug reports when users connect NVMe to it.
> 
> Platform should choose call or no call this function according to their
> user case. So far, I have not found a good mathod to let ASPM to affect
> suspend/resume. 
> 

You are missing my point here. If any platform decides to use these helper
functions, it will face problems with NVMe. So please add a check for the ASPM
state before doing any L2/L3 handling.

I agree that it may not be optimal w.r.t power savings, but the PCIe controller
driver should work for all devices.

- Mani

> > 
> > - Mani
> > 
> > > This API help remove duplicate codes and it can be improved gradually.
> > > 
> > > 
> > > > 
> > > > - Mani
> > > > 
> > > > 
> > > > -- 
> > > > மணிவண்ணன் சதாசிவம்
> > 
> > -- 
> > மணிவண்ணன் சதாசிவம்
Frank Li July 20, 2023, 4:59 p.m. UTC | #6
On Thu, Jul 20, 2023 at 10:13:18PM +0530, Manivannan Sadhasivam wrote:
> On Thu, Jul 20, 2023 at 12:26:38PM -0400, Frank Li wrote:
> > On Thu, Jul 20, 2023 at 09:37:38PM +0530, Manivannan Sadhasivam wrote:
> > > On Thu, Jul 20, 2023 at 10:37:36AM -0400, Frank Li wrote:
> > > > On Thu, Jul 20, 2023 at 07:55:09PM +0530, Manivannan Sadhasivam wrote:
> > > > > On Tue, Jul 18, 2023 at 03:34:26PM +0530, Manivannan Sadhasivam wrote:
> > > > > > On Mon, Jul 17, 2023 at 02:36:19PM -0400, Frank Li wrote:
> > > > > > > On Mon, Jul 17, 2023 at 10:15:26PM +0530, Manivannan Sadhasivam wrote:
> > > > > > > > On Wed, Apr 19, 2023 at 12:41:17PM -0400, Frank Li wrote:
> > > > > > > > > Introduced helper function dw_pcie_get_ltssm to retrieve SMLH_LTSS_STATE.
> > > > > > > > > Added API pme_turn_off and exit_from_l2 for managing L2/L3 state transitions.
> > > > > > > > > 
> > > > > > > > > Typical L2 entry workflow:
> > > > > > > > > 
> > > > > > > > > 1. Transmit PME turn off signal to PCI devices.
> > > > > > > > > 2. Await link entering L2_IDLE state.
> > > > > > > > 
> > > > > > > > AFAIK, typical workflow is to wait for PME_To_Ack.
> > > > > > > 
> > > > > > > 1 Already wait for PME_to_ACK,  2, just wait for link actual enter L2.
> > > > > > > I think PCI RC needs some time to set link enter L2 after get ACK from
> > > > > > > PME.
> > > > > > > 
> > > > > 
> > > > > One more comment. If you transition the device to L2/L3, then it can loose power
> > > > > if Vaux was not provided. In that case, can all the devices work after resume?
> > > > > Most notably NVMe?
> > > > 
> > > > I have not hardware to do such test, NVMe driver will reinit everything after
> > > > resume if no L1.1\L1.2 support. If there are L1.1\L1.2, NVME expect it leave
> > > > at L1.2 at suspend to get better resume latency.
> > > > 
> > > 
> > > To be precise, NVMe driver will shutdown the device if there is no ASPM support
> > > and keep it in low power mode otherwise (there are other cases as well but we do
> > > not need to worry).
> > 
> > I supposed this should work. but I have not hardware to test it now. NMVE already
> > sent shut down command to SSD, which can safely turn off. after resume, that most
> > likely a cold reset.
> > 
> 
> NO, it won't work and that's the reason the Qcom platforms are not transitioning
> the link to L2/L3 state during suspend. This applies to other platforms
> including layerscape as well.
> 
> > > 
> > > But here you are not checking for ASPM state in the suspend path, and just
> > > forcing the link to be in L2/L3 (thereby D3Cold) even though NVMe driver may
> > > expect it to be in low power state like ASPM/APST.
> > 
> > This function is not called defaultly and need platform driver to call it as need.
> > Actually, I think PCI framework should handle L1.2 and L2 case, some devices
> > or user case want to L1.2 to get better resume latency, some devices want to
> > L2 to get better power saving, which out of scope of this patches.
> > 
> 
> I'm referring to the platform where these helper functions are being used which
> is layerscape. It doesn't matter whether you test this series with NVMe or not,
> it will not work unless you disable ASPM.

I tested with NVMe. You are right, ASPM is disabled on the layerscape platform.

> 
> > This patch just handle L2 case, I remember L1.2 don't expect send PME at all.
> > 
> > > 
> > > So you should only put the link to L2/L3 if there is no ASPM support. Otherwise,
> > > you'll ending up with bug reports when users connect NVMe to it.
> > 
> > Platform should choose call or no call this function according to their
> > user case. So far, I have not found a good mathod to let ASPM to affect
> > suspend/resume. 
> > 
> 
> You are missing my point here. If any platform decides to use these helper
> functions, they will face problems with NVMe. So please add a check for ASPM
> state before doing any L2/L3 handling.
> 
> I agree that it may not be optimal w.r.t power savings, but the PCIe controller
> driver should work for all devices.

I can add an ASPM check here, but I hope there will be a flag in the future that
indicates whether the PCI controller is expected to enter L2.

Frank

> 
> - Mani
> 
> > > 
> > > - Mani
> > > 
> > > > This API help remove duplicate codes and it can be improved gradually.
> > > > 
> > > > 
> > > > > 
> > > > > - Mani
> > > > > 
> > > > > 
> > > > > -- 
> > > > > மணிவண்ணன் சதாசிவம்
> > > 
> > > -- 
> > > மணிவண்ணன் சதாசிவம்
> 
> -- 
> மணிவண்ணன் சதாசிவம்

Patch

diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
index 9952057c8819..ef6869488bde 100644
--- a/drivers/pci/controller/dwc/pcie-designware-host.c
+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
@@ -8,6 +8,7 @@ 
  * Author: Jingoo Han <jg1.han@samsung.com>
  */
 
+#include <linux/iopoll.h>
 #include <linux/irqchip/chained_irq.h>
 #include <linux/irqdomain.h>
 #include <linux/msi.h>
@@ -807,3 +808,82 @@  int dw_pcie_setup_rc(struct dw_pcie_rp *pp)
 	return 0;
 }
 EXPORT_SYMBOL_GPL(dw_pcie_setup_rc);
+
+/*
+ * These are for configuring host controllers, which are bridges *to* PCI devices
+ * but are not PCI devices themselves.
+ */
+static void dw_pcie_set_dstate(struct dw_pcie *pci, u32 dstate)
+{
+	u8 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_PM);
+	u32 val;
+
+	val = dw_pcie_readw_dbi(pci, offset + PCI_PM_CTRL);
+	val &= ~PCI_PM_CTRL_STATE_MASK;
+	val |= dstate;
+	dw_pcie_writew_dbi(pci, offset + PCI_PM_CTRL, val);
+}
+
+int dw_pcie_suspend_noirq(struct dw_pcie *pci)
+{
+	u32 val;
+	int ret;
+
+	if (dw_pcie_get_ltssm(pci) <= DW_PCIE_LTSSM_DETECT_ACT)
+		return 0;
+
+	pci->pp.ops->pme_turn_off(&pci->pp);
+
+	/*
+	 * PCI Express Base Specification Rev 4.0
+	 * 5.3.3.2.1 PME Synchronization
+	 * Recommends a 1 ms to 10 ms timeout to check L2 ready
+	 */
+	ret = read_poll_timeout(dw_pcie_get_ltssm, val, val == DW_PCIE_LTSSM_L2_IDLE,
+				100, 10000, false, pci);
+	if (ret) {
+		dev_err(pci->dev, "PCIe link enter L2 timeout! ltssm = 0x%x\n", val);
+		return ret;
+	}
+
+	dw_pcie_set_dstate(pci, 0x3);
+
+	pci->suspended = true;
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(dw_pcie_suspend_noirq);
+
+int dw_pcie_resume_noirq(struct dw_pcie *pci)
+{
+	int ret;
+
+	if (!pci->suspended)
+		return 0;
+
+	pci->suspended = false;
+
+	dw_pcie_set_dstate(pci, 0x0);
+
+	pci->pp.ops->exit_from_l2(&pci->pp);
+
+	/* Wait 10 ms before accessing the EP */
+	mdelay(10);
+
+	ret = pci->pp.ops->host_init(&pci->pp);
+	if (ret) {
+		dev_err(pci->dev, "host_init failed! ret = 0x%x\n", ret);
+		return ret;
+	}
+
+	dw_pcie_setup_rc(&pci->pp);
+
+	ret = dw_pcie_wait_for_link(pci);
+	if (ret) {
+		dev_err(pci->dev, "wait link up timeout! ret = 0x%x\n", ret);
+		return ret;
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(dw_pcie_resume_noirq);
diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
index 79713ce075cc..effb07a506e4 100644
--- a/drivers/pci/controller/dwc/pcie-designware.h
+++ b/drivers/pci/controller/dwc/pcie-designware.h
@@ -288,10 +288,21 @@  enum dw_pcie_core_rst {
 	DW_PCIE_NUM_CORE_RSTS
 };
 
+enum dw_pcie_ltssm {
+	DW_PCIE_LTSSM_UNKNOWN = 0xFFFFFFFF,
+	/* Must match PCIE_PORT_DEBUG0 bits [5:0] */
+	DW_PCIE_LTSSM_DETECT_QUIET = 0x0,
+	DW_PCIE_LTSSM_DETECT_ACT = 0x1,
+	DW_PCIE_LTSSM_L0 = 0x11,
+	DW_PCIE_LTSSM_L2_IDLE = 0x15,
+};
+
 struct dw_pcie_host_ops {
 	int (*host_init)(struct dw_pcie_rp *pp);
 	void (*host_deinit)(struct dw_pcie_rp *pp);
 	int (*msi_host_init)(struct dw_pcie_rp *pp);
+	void (*pme_turn_off)(struct dw_pcie_rp *pp);
+	void (*exit_from_l2)(struct dw_pcie_rp *pp);
 };
 
 struct dw_pcie_rp {
@@ -364,6 +375,7 @@  struct dw_pcie_ops {
 	void    (*write_dbi2)(struct dw_pcie *pcie, void __iomem *base, u32 reg,
 			      size_t size, u32 val);
 	int	(*link_up)(struct dw_pcie *pcie);
+	enum dw_pcie_ltssm (*get_ltssm)(struct dw_pcie *pcie);
 	int	(*start_link)(struct dw_pcie *pcie);
 	void	(*stop_link)(struct dw_pcie *pcie);
 };
@@ -393,6 +405,7 @@  struct dw_pcie {
 	struct reset_control_bulk_data	app_rsts[DW_PCIE_NUM_APP_RSTS];
 	struct reset_control_bulk_data	core_rsts[DW_PCIE_NUM_CORE_RSTS];
 	struct gpio_desc		*pe_rst;
+	bool			suspended;
 };
 
 #define to_dw_pcie_from_pp(port) container_of((port), struct dw_pcie, pp)
@@ -430,6 +443,9 @@  void dw_pcie_iatu_detect(struct dw_pcie *pci);
 int dw_pcie_edma_detect(struct dw_pcie *pci);
 void dw_pcie_edma_remove(struct dw_pcie *pci);
 
+int dw_pcie_suspend_noirq(struct dw_pcie *pci);
+int dw_pcie_resume_noirq(struct dw_pcie *pci);
+
 static inline void dw_pcie_writel_dbi(struct dw_pcie *pci, u32 reg, u32 val)
 {
 	dw_pcie_write_dbi(pci, reg, 0x4, val);
@@ -501,6 +517,18 @@  static inline void dw_pcie_stop_link(struct dw_pcie *pci)
 		pci->ops->stop_link(pci);
 }
 
+static inline enum dw_pcie_ltssm dw_pcie_get_ltssm(struct dw_pcie *pci)
+{
+	u32 val;
+
+	if (pci->ops && pci->ops->get_ltssm)
+		return pci->ops->get_ltssm(pci);
+
+	val = dw_pcie_readl_dbi(pci, PCIE_PORT_DEBUG0);
+
+	return (enum dw_pcie_ltssm)FIELD_GET(PORT_LOGIC_LTSSM_STATE_MASK, val);
+}
+
 #ifdef CONFIG_PCIE_DW_HOST
 irqreturn_t dw_handle_msi_irq(struct dw_pcie_rp *pp);
 int dw_pcie_setup_rc(struct dw_pcie_rp *pp);