diff mbox series

[V4,2/5] mailbox: Add support for QTI CPUCP mailbox controller

Message ID 20240422164035.1045501-3-quic_sibis@quicinc.com
State New
Headers show
Series qcom: x1e80100: Enable CPUFreq | expand

Commit Message

Sibi Sankar April 22, 2024, 4:40 p.m. UTC
Add support for the CPUSS Control Processor (CPUCP) mailbox controller.
This driver enables communication between the AP and CPUCP by acting as
a doorbell between them.

Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com>
---

V3:
* Fix copyright info in cpucp driver. [Bjorn]
* Drop unused APSS_CPUCP_TX_MBOX_IDR, value init and drv_data. [Bjorn/Dmitry]
* Convert to lower case hex. [Bjorn]
* Convert irq and dev to local variables. [Bjorn]
* Replace for and if with for_each_set_bit. [Bjorn]
* Document the need for spinlock. [Bjorn]
* Add space after " for aesthetics. [Bjorn]
* Include io.h and re-order platform_device.h
* Use GENMASK_ULL to generate APSS_CPUCP_RX_MBOX_CMD_MASK.
* Pick Rb.

 drivers/mailbox/Kconfig           |   8 ++
 drivers/mailbox/Makefile          |   2 +
 drivers/mailbox/qcom-cpucp-mbox.c | 179 ++++++++++++++++++++++++++++++
 3 files changed, 189 insertions(+)
 create mode 100644 drivers/mailbox/qcom-cpucp-mbox.c

Comments

Konrad Dybcio April 22, 2024, 11:17 p.m. UTC | #1
On 4/22/24 18:40, Sibi Sankar wrote:
> Add support for CPUSS Control Processor (CPUCP) mailbox controller,
> this driver enables communication between AP and CPUCP by acting as
> a doorbell between them.
> 
> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com>
> ---

[...]

> +
> +static int qcom_cpucp_mbox_send_data(struct mbox_chan *chan, void *data)
> +{
> +	struct qcom_cpucp_mbox *cpucp = container_of(chan->mbox, struct qcom_cpucp_mbox, mbox);
> +	unsigned long chan_id = channel_number(chan);
> +	u32 *val = data;
> +
> +	writel(*val, cpucp->tx_base + APSS_CPUCP_TX_MBOX_CMD(chan_id) + APSS_CPUCP_MBOX_CMD_OFF);

Just checking in, is *this access only* supposed to be 32b instead of 64 like others?

[...]

> +
> +	writeq(0, cpucp->rx_base + APSS_CPUCP_RX_MBOX_EN);
> +	writeq(0, cpucp->rx_base + APSS_CPUCP_RX_MBOX_CLEAR);
> +	writeq(0, cpucp->rx_base + APSS_CPUCP_RX_MBOX_MAP);

If these writes are here to prevent a possible interrupt storm type tragedy,
you need to read back these registers to ensure the writes have left the CPU
complex and reached the observer at the other end of the bus (not to be
confused with barriers which only ensure that such accesses are ordered
*when still possibly within the CPU complex*).

Moreover, if the order of them arriving (en/clear/mask) doesn't matter, you
can add _relaxed for a possible nanosecond-order perf gain
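
Something along these lines would do (just a sketch, reusing the register
macros from this patch):

	writeq(0, cpucp->rx_base + APSS_CPUCP_RX_MBOX_EN);
	writeq(0, cpucp->rx_base + APSS_CPUCP_RX_MBOX_CLEAR);
	writeq(0, cpucp->rx_base + APSS_CPUCP_RX_MBOX_MAP);
	/* Read back one of the registers so the posted writes reach the mailbox block */
	readq(cpucp->rx_base + APSS_CPUCP_RX_MBOX_MAP);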

> +
> +	irq = platform_get_irq(pdev, 0);
> +	if (irq < 0)
> +		return irq;
> +
> +	ret = devm_request_irq(dev, irq, qcom_cpucp_mbox_irq_fn,
> +			       IRQF_TRIGGER_HIGH, "apss_cpucp_mbox", cpucp);
> +	if (ret < 0)
> +		return dev_err_probe(dev, ret, "Failed to register irq: %d\n", irq);
> +
> +	writeq(APSS_CPUCP_RX_MBOX_CMD_MASK, cpucp->rx_base + APSS_CPUCP_RX_MBOX_MAP);

Similarly here, unless read back, we may potentially miss some interrupts if
e.g. a channel is opened and that write "is decided" (by the silicon) to leave
the internal buffer first


> +
> +	mbox = &cpucp->mbox;
> +	mbox->dev = dev;
> +	mbox->num_chans = APSS_CPUCP_IPC_CHAN_SUPPORTED;
> +	mbox->chans = cpucp->chans;
> +	mbox->ops = &qcom_cpucp_mbox_chan_ops;
> +	mbox->txdone_irq = false;
> +	mbox->txdone_poll = false;

"false" == 0 is the default value (as you're using k*z*alloc)


> +
> +	ret = devm_mbox_controller_register(dev, mbox);
> +	if (ret)
> +		return dev_err_probe(dev, ret, "Failed to create mailbox\n");
> +
> +	return 0;
> +}
> +
> +static const struct of_device_id qcom_cpucp_mbox_of_match[] = {
> +	{ .compatible = "qcom,x1e80100-cpucp-mbox" },
> +	{}
> +};
> +MODULE_DEVICE_TABLE(of, qcom_cpucp_mbox_of_match);
> +
> +static struct platform_driver qcom_cpucp_mbox_driver = {
> +	.probe = qcom_cpucp_mbox_probe,
> +	.driver = {
> +		.name = "qcom_cpucp_mbox",
> +		.of_match_table = qcom_cpucp_mbox_of_match,
> +	},
> +};
> +module_platform_driver(qcom_cpucp_mbox_driver);

That's turbo late. Go core_initcall.
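
i.e. roughly (a sketch, replacing module_platform_driver() with explicit
registration):

static int __init qcom_cpucp_mbox_init(void)
{
	return platform_driver_register(&qcom_cpucp_mbox_driver);
}
core_initcall(qcom_cpucp_mbox_init);

static void __exit qcom_cpucp_mbox_exit(void)
{
	platform_driver_unregister(&qcom_cpucp_mbox_driver);
}
module_exit(qcom_cpucp_mbox_exit);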

Konrad
Jassi Brar May 1, 2024, 2:14 a.m. UTC | #2
On Mon, Apr 22, 2024 at 11:41 AM Sibi Sankar <quic_sibis@quicinc.com> wrote:
>
> Add support for CPUSS Control Processor (CPUCP) mailbox controller,
> this driver enables communication between AP and CPUCP by acting as
> a doorbell between them.
>
> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com>
> ---

Do you want to add an entry in the MAINTAINERS ?

> diff --git a/drivers/mailbox/qcom-cpucp-mbox.c b/drivers/mailbox/qcom-cpucp-mbox.c
 .....
> +static irqreturn_t qcom_cpucp_mbox_irq_fn(int irq, void *data)
> +{
> +       struct qcom_cpucp_mbox *cpucp = data;
> +       struct mbox_chan *chan;
> +       unsigned long flags;
> +       u64 status;
> +       u32 val;
> +       int i;
> +
The variables flags, val and chan are better inside the for loop below.
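
Something like this (just a sketch of the same loop with the declarations
moved into the iteration scope):

	for_each_set_bit(i, (unsigned long *)&status, APSS_CPUCP_IPC_CHAN_SUPPORTED) {
		struct mbox_chan *chan = &cpucp->chans[i];
		unsigned long flags;
		u32 val;

		val = readl(cpucp->rx_base + APSS_CPUCP_RX_MBOX_CMD(i) + APSS_CPUCP_MBOX_CMD_OFF);
		/* Provide mutual exclusion with changes to chan->cl */
		spin_lock_irqsave(&chan->lock, flags);
		if (chan->cl)
			mbox_chan_received_data(chan, &val);
		writeq(BIT(i), cpucp->rx_base + APSS_CPUCP_RX_MBOX_CLEAR);
		spin_unlock_irqrestore(&chan->lock, flags);
	}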

> +       status = readq(cpucp->rx_base + APSS_CPUCP_RX_MBOX_STAT);
> +
> +       for_each_set_bit(i, (unsigned long *)&status, APSS_CPUCP_IPC_CHAN_SUPPORTED) {
> +               val = readl(cpucp->rx_base + APSS_CPUCP_RX_MBOX_CMD(i) + APSS_CPUCP_MBOX_CMD_OFF);
> +               chan = &cpucp->chans[i];
> +               /* Provide mutual exclusion with changes to chan->cl */
> +               spin_lock_irqsave(&chan->lock, flags);
> +               if (chan->cl)
> +                       mbox_chan_received_data(chan, &val);
> +               writeq(BIT(i), cpucp->rx_base + APSS_CPUCP_RX_MBOX_CLEAR);
> +               spin_unlock_irqrestore(&chan->lock, flags);
> +       }
> +
> +       return IRQ_HANDLED;
> +}
> +

Thanks
Jassi
Cristian Marussi May 3, 2024, 12:48 p.m. UTC | #3
On Mon, Apr 22, 2024 at 10:10:32PM +0530, Sibi Sankar wrote:
> Add support for CPUSS Control Processor (CPUCP) mailbox controller,
> this driver enables communication between AP and CPUCP by acting as
> a doorbell between them.
> 

Hi Sibi,

one small reflection about locking on the RX path down below...

> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com>
> ---
>

[snip]

> +struct qcom_cpucp_mbox {
> +	struct mbox_chan chans[APSS_CPUCP_IPC_CHAN_SUPPORTED];
> +	struct mbox_controller mbox;
> +	void __iomem *tx_base;
> +	void __iomem *rx_base;
> +};
> +
> +static inline int channel_number(struct mbox_chan *chan)
> +{
> +	return chan - chan->mbox->chans;
> +}
> +
> +static irqreturn_t qcom_cpucp_mbox_irq_fn(int irq, void *data)
> +{
> +	struct qcom_cpucp_mbox *cpucp = data;
> +	struct mbox_chan *chan;
> +	unsigned long flags;
> +	u64 status;
> +	u32 val;
> +	int i;
> +
> +	status = readq(cpucp->rx_base + APSS_CPUCP_RX_MBOX_STAT);
> +
> +	for_each_set_bit(i, (unsigned long *)&status, APSS_CPUCP_IPC_CHAN_SUPPORTED) {
> +		val = readl(cpucp->rx_base + APSS_CPUCP_RX_MBOX_CMD(i) + APSS_CPUCP_MBOX_CMD_OFF);
> +		chan = &cpucp->chans[i];
> +		/* Provide mutual exclusion with changes to chan->cl */
> +		spin_lock_irqsave(&chan->lock, flags);
> +		if (chan->cl)

So the spinlock here is needed to properly check for races on chan->cl
being NULLified by concurrent calls to mbox_free_channel()...the end
result, though, is that you disable IRQs here on each single data
processed on the RX path, while calling mbox_chan_received_data(), in order
to avoid the remote (but real) possibility that the mbox users could free
the channel while some traffic is still in-flight and processed by this ISR.

Note, though, that mbox_free_channel() calls your controller-provided
qcom_cpucp_mbox_shutdown() method straight away at the start, where you disable
the IRQ at the HW level in the chip: this means that the only race which could
then happen between the call to .shutdown and chan->cl = NULL would be in an
already executing qcom_cpucp_mbox_irq_fn() ISR...

So, I was thinking, what if you add a

	synchronize_irq(cpucp->irq);

in your shutdown right after having disabled the HW IRQs.

This would mean waiting for the termination of any IRQ handlers pending on your
cpucp->irq (field that does not exist as of now :D), right after having
disabled such irq and so just before NULLifying chan->cl...in this way you
should be able to safely drop this spinlock call from the host RX path,
because once you chan->cl = NULL is executed, the IRQs are disabled and
any ongoing ISR would have been terminated.

synchronize_irq() is blocking of course, potentially, but the shutdown
method in mbox_chan_ops is allowed to be blocking looking at the comments.
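
A rough sketch of the idea (assuming the irq number gets stored in
struct qcom_cpucp_mbox):

static void qcom_cpucp_mbox_shutdown(struct mbox_chan *chan)
{
	struct qcom_cpucp_mbox *cpucp = container_of(chan->mbox, struct qcom_cpucp_mbox, mbox);
	unsigned long chan_id = channel_number(chan);
	u64 val;

	val = readq(cpucp->rx_base + APSS_CPUCP_RX_MBOX_EN);
	val &= ~BIT(chan_id);
	writeq(val, cpucp->rx_base + APSS_CPUCP_RX_MBOX_EN);

	/* Wait for any in-flight ISR before the core NULLifies chan->cl */
	synchronize_irq(cpucp->irq);
}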

...not sure if all of this is worth it just to avoid running this small section
of code with IRQs disabled....note though that mbox_chan_received_data() calls
straight away into the client provided cl->callback....so the real length of this
code path is uncertain ....

...just an idea to reason about...

Thanks,
Cristian
Sibi Sankar May 14, 2024, 9:36 a.m. UTC | #4
On 5/1/24 07:44, Jassi Brar wrote:
> On Mon, Apr 22, 2024 at 11:41 AM Sibi Sankar <quic_sibis@quicinc.com> wrote:
>>
>> Add support for CPUSS Control Processor (CPUCP) mailbox controller,
>> this driver enables communication between AP and CPUCP by acting as
>> a doorbell between them.
>>

Hey Jassi,

Thanks for taking time to review the series :).

>> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
>> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com>
>> ---
> 
> Do you want to add an entry in the MAINTAINERS ?

Thanks will add in the next re-spin.

> 
>> diff --git a/drivers/mailbox/qcom-cpucp-mbox.c b/drivers/mailbox/qcom-cpucp-mbox.c
>   .....
>> +static irqreturn_t qcom_cpucp_mbox_irq_fn(int irq, void *data)
>> +{
>> +       struct qcom_cpucp_mbox *cpucp = data;
>> +       struct mbox_chan *chan;
>> +       unsigned long flags;
>> +       u64 status;
>> +       u32 val;
>> +       int i;
>> +
> The variables flags, val and chan are better inside the for loop below.

Ack.

-Sibi

> 
>> +       status = readq(cpucp->rx_base + APSS_CPUCP_RX_MBOX_STAT);
>> +
>> +       for_each_set_bit(i, (unsigned long *)&status, APSS_CPUCP_IPC_CHAN_SUPPORTED) {
>> +               val = readl(cpucp->rx_base + APSS_CPUCP_RX_MBOX_CMD(i) + APSS_CPUCP_MBOX_CMD_OFF);
>> +               chan = &cpucp->chans[i];
>> +               /* Provide mutual exclusion with changes to chan->cl */
>> +               spin_lock_irqsave(&chan->lock, flags);
>> +               if (chan->cl)
>> +                       mbox_chan_received_data(chan, &val);
>> +               writeq(BIT(i), cpucp->rx_base + APSS_CPUCP_RX_MBOX_CLEAR);
>> +               spin_unlock_irqrestore(&chan->lock, flags);
>> +       }
>> +
>> +       return IRQ_HANDLED;
>> +}
>> +
> 
> Thanks
> Jassi
Sibi Sankar May 14, 2024, 10:54 a.m. UTC | #5
On 5/3/24 18:18, Cristian Marussi wrote:
> On Mon, Apr 22, 2024 at 10:10:32PM +0530, Sibi Sankar wrote:
>> Add support for CPUSS Control Processor (CPUCP) mailbox controller,
>> this driver enables communication between AP and CPUCP by acting as
>> a doorbell between them.
>>
> 
> Hi Sibi,
> 
> one small reflection about locking on the RX path down below...
> 
>> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
>> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com>
>> ---
>>

Hey Cristian,
Thanks for taking the time to review the series :)

> 
> [snip]
> 
>> +struct qcom_cpucp_mbox {
>> +	struct mbox_chan chans[APSS_CPUCP_IPC_CHAN_SUPPORTED];
>> +	struct mbox_controller mbox;
>> +	void __iomem *tx_base;
>> +	void __iomem *rx_base;
>> +};
>> +
>> +static inline int channel_number(struct mbox_chan *chan)
>> +{
>> +	return chan - chan->mbox->chans;
>> +}
>> +
>> +static irqreturn_t qcom_cpucp_mbox_irq_fn(int irq, void *data)
>> +{
>> +	struct qcom_cpucp_mbox *cpucp = data;
>> +	struct mbox_chan *chan;
>> +	unsigned long flags;
>> +	u64 status;
>> +	u32 val;
>> +	int i;
>> +
>> +	status = readq(cpucp->rx_base + APSS_CPUCP_RX_MBOX_STAT);
>> +
>> +	for_each_set_bit(i, (unsigned long *)&status, APSS_CPUCP_IPC_CHAN_SUPPORTED) {
>> +		val = readl(cpucp->rx_base + APSS_CPUCP_RX_MBOX_CMD(i) + APSS_CPUCP_MBOX_CMD_OFF);
>> +		chan = &cpucp->chans[i];
>> +		/* Provide mutual exclusion with changes to chan->cl */
>> +		spin_lock_irqsave(&chan->lock, flags);
>> +		if (chan->cl)
> 
> So the spinlock here is needed to properly check for races on chan->cl
> being NULLified by concurrent calls to mbox_free_channel()...the end
> result, though, is that you disable IRQs here on each single data
> processed on the RX path, while calling mbox_chan_received_data(), in order
> to avoid the remote (but real) possibility that the mbox users could free
> the channel while some traffic is still in-flight and processed by this ISR.
> 
> Note, though, that mbox_free_channel() calls straight away at start
> your controller provided qcom_cpucp_mbox_shutdown() method, where you disable
> the IRQ at the HW level in the chip: this means that the only race which could
> then happen between the call to .shutdown and chan->cl = NULL, would happen in
> any already executing qcom_cpucp_mbox_irq_fn() ISR...
> 
> So, I was thinking, what if you add a
> 
> 	synchronize_irq(cpucp->irq);
> 
> in your shutdown right after having disabled the HW IRQs.
> 
> This would mean waiting for the termination of any IRQ handlers pending on your
> cpucp->irq (field that does not exist as of now :D), right after having
> disabled such irq and so just before NULLifying chan->cl...in this way you
> should be able to safely drop this spinlock call from the host RX path,
> because once you chan->cl = NULL is executed, the IRQs are disabled and
> any ongoing ISR would have been terminated.
> 
> synchronize_irq() is blocking of course, potentially, but the shutdown
> method in mbox_chan_ops is allowed to be blocking looking at the comments.
> 
> ...not sure if all of this is worth to avoid this small section of code to be
> run with IRQs disabled....note though that the mbox_chan_received_data() calls
> straight away into the client provided cl->callback....so the real length of this
> code path is uncertain ....
> 
> ...just an idea to reason about...

In addition to shutdown, Bjorn was worried about handling the setup
scenario as well. Since there are multiple channels, the irq handler could
end up dereferencing a half-baked cl of the second channel while it's
still being set up. Of course this could only happen if the status bits
aren't cleared by the bootloader though. If it's just the shutdown path,
your recommendation should work fine :)

-Sibi

> 
> Thanks,
> Cristian
Sibi Sankar May 14, 2024, 11:19 a.m. UTC | #6
On 4/23/24 22:40, Sibi Sankar wrote:
> 
> 
> On 4/23/24 04:47, Konrad Dybcio wrote:
>>
>>
>> On 4/22/24 18:40, Sibi Sankar wrote:
>>> Add support for CPUSS Control Processor (CPUCP) mailbox controller,
>>> this driver enables communication between AP and CPUCP by acting as
>>> a doorbell between them.
>>>
>>> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
>>> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com>
>>> ---
>>
>> [...]
>>
>>> +
>>> +static int qcom_cpucp_mbox_send_data(struct mbox_chan *chan, void *data)
>>> +{
>>> +    struct qcom_cpucp_mbox *cpucp = container_of(chan->mbox, struct qcom_cpucp_mbox, mbox);
>>> +    unsigned long chan_id = channel_number(chan);
>>> +    u32 *val = data;
>>> +
>>> +    writel(*val, cpucp->tx_base + APSS_CPUCP_TX_MBOX_CMD(chan_id) + APSS_CPUCP_MBOX_CMD_OFF);
>>
> 
> Hey Konrad,
> 
> Thanks for taking time to review the series.
> 
>> Just checking in, is *this access only* supposed to be 32b instead of 
>> 64 like others?
> 
> yeah, the readl and writel in the driver were used intentionally.
> 
>>
>> [...]
>>
>>> +
>>> +    writeq(0, cpucp->rx_base + APSS_CPUCP_RX_MBOX_EN);
>>> +    writeq(0, cpucp->rx_base + APSS_CPUCP_RX_MBOX_CLEAR);
>>> +    writeq(0, cpucp->rx_base + APSS_CPUCP_RX_MBOX_MAP);
>>
>> If these writes are here to prevent a possible interrupt storm type 
>> tragedy,
>> you need to read back these registers to ensure the writes have left 
>> the CPU
>> complex and reached the observer at the other end of the bus (not to be
>> confused with barriers which only ensure that such accesses are ordered
>> *when still possibly within the CPU complex*).
> 
> I couldn't find anything alluding to ^^. This sequence was just
> meant to reset the mailbox. Looks like we do need to preserve the
> ordering so relaxed read/writes aren't an option.
> 
> -Sibi
> 
>>
>> Moreover, if the order of them arriving (en/clear/mask) doesn't 
>> matter, you
>> can add _relaxed for a possible nanosecond-order perf gain
>>
>>> +
>>> +    irq = platform_get_irq(pdev, 0);
>>> +    if (irq < 0)
>>> +        return irq;
>>> +
>>> +    ret = devm_request_irq(dev, irq, qcom_cpucp_mbox_irq_fn,
>>> +                   IRQF_TRIGGER_HIGH, "apss_cpucp_mbox", cpucp);
>>> +    if (ret < 0)
>>> +        return dev_err_probe(dev, ret, "Failed to register irq: %d\n", irq);
>>> +
>>> +    writeq(APSS_CPUCP_RX_MBOX_CMD_MASK, cpucp->rx_base + APSS_CPUCP_RX_MBOX_MAP);
>>
>> Similarly here, unless read back, we may potentially miss some 
>> interrupts if
>> e.g. a channel is opened and that write "is decided" (by the silicon) 
>> to leave
>> the internal buffer first
> 
> At this point in time we don't expect any interrupts. They are expected
> only after channel activation. There was also no recommendation for
> reading it back here.
> 
> -Sibi
> 
>>
>>
>>> +
>>> +    mbox = &cpucp->mbox;
>>> +    mbox->dev = dev;
>>> +    mbox->num_chans = APSS_CPUCP_IPC_CHAN_SUPPORTED;
>>> +    mbox->chans = cpucp->chans;
>>> +    mbox->ops = &qcom_cpucp_mbox_chan_ops;
>>> +    mbox->txdone_irq = false;
>>> +    mbox->txdone_poll = false;
>>
>> "false" == 0 is the default value (as you're using k*z*alloc)
>>
>>
>>> +
>>> +    ret = devm_mbox_controller_register(dev, mbox);
>>> +    if (ret)
>>> +        return dev_err_probe(dev, ret, "Failed to create mailbox\n");
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> +static const struct of_device_id qcom_cpucp_mbox_of_match[] = {
>>> +    { .compatible = "qcom,x1e80100-cpucp-mbox" },
>>> +    {}
>>> +};
>>> +MODULE_DEVICE_TABLE(of, qcom_cpucp_mbox_of_match);
>>> +
>>> +static struct platform_driver qcom_cpucp_mbox_driver = {
>>> +    .probe = qcom_cpucp_mbox_probe,
>>> +    .driver = {
>>> +        .name = "qcom_cpucp_mbox",
>>> +        .of_match_table = qcom_cpucp_mbox_of_match,
>>> +    },
>>> +};
>>> +module_platform_driver(qcom_cpucp_mbox_driver);
>>
>> That's turbo late. Go core_initcall.

Cristian/Sudeep,

Looks like making the cpucp mbox part of core_initcall and having
the vendor protocol as a module_scmi_driver causes a race as follows:

scmi_core: SCMI protocol bus registered
scmi_core: Requesting SCMI device (clocks) for protocol 14
scmi_core: Registered new scmi driver scmi-clocks
scmi_core: Requesting SCMI device (qcom_scmi_vendor_protocol) for protocol 80
scmi_core: Registered new scmi driver qcom-scmi-driver
scmi_core: Requesting SCMI device (perf) for protocol 13
scmi_core: Registered new scmi driver scmi-perf-domain
scmi_core: Requesting SCMI device (genpd) for protocol 11
scmi_core: Registered new scmi driver scmi-power-domain
scmi_core: Requesting SCMI device (reset) for protocol 16
scmi_core: Registered new scmi driver scmi-reset
scmi_core: Requesting SCMI device (hwmon) for protocol 15
scmi_core: Registered new scmi driver scmi-hwmon
scmi_core: Requesting SCMI device (cpufreq) for protocol 13
scmi_core: Registered new scmi driver scmi-cpufreq
scmi_module: Registered SCMI Protocol 0x10
scmi_module: Registered SCMI Protocol 0x14
scmi_module: Registered SCMI Protocol 0x13
scmi_module: Registered SCMI Protocol 0x11
scmi_module: Registered SCMI Protocol 0x16
scmi_module: Registered SCMI Protocol 0x15
scmi_module: Registered SCMI Protocol 0x17
scmi_module: Registered SCMI Protocol 0x12
scmi_module: Registered SCMI Protocol 0x18
scmi_module: Registered SCMI Protocol 0x19
scmi_core: (scmi) Created SCMI device 'scmi_dev.1' for protocol 0x10 (__scmi_transport_device_tx_10)
scmi_core: (scmi) Created SCMI device 'scmi_dev.2' for protocol 0x10 (__scmi_transport_device_rx_10)
arm-scmi firmware:scmi: SCMI Notifications - Core Enabled.
scmi_module: Found SCMI Protocol 0x10
arm-scmi firmware:scmi: SCMI Protocol v2.0 'Qualcomm:' Firmware version 0x20000
scmi_module: Found SCMI Protocol 0x13
scmi_core: (scmi) Created SCMI device 'scmi_dev.3' for protocol 0x13 (cpufreq)
scmi-perf-domain scmi_dev.4: Initialized 3 performance domains
scmi_core: (scmi) Created SCMI device 'scmi_dev.4' for protocol 0x13 (perf)
scmi_module: SCMI Protocol 0x80 not found!
scmi_core: (scmi) Created SCMI device 'scmi_dev.5' for protocol 0x80 (qcom_scmi_vendor_protocol)
scmi_module: Registered SCMI Protocol 0x80

By the time the vendor protocol gets registered, it's already reported
as not found.

-Sibi

>>
>> Konrad
>
diff mbox series

Patch

diff --git a/drivers/mailbox/Kconfig b/drivers/mailbox/Kconfig
index 42940108a187..23741a6f054e 100644
--- a/drivers/mailbox/Kconfig
+++ b/drivers/mailbox/Kconfig
@@ -273,6 +273,14 @@  config SPRD_MBOX
 	  to send message between application processors and MCU. Say Y here if
 	  you want to build the Spreatrum mailbox controller driver.
 
+config QCOM_CPUCP_MBOX
+	tristate "Qualcomm Technologies, Inc. CPUCP mailbox driver"
+	depends on ARCH_QCOM || COMPILE_TEST
+	help
+	  Qualcomm Technologies, Inc. CPUSS Control Processor (CPUCP) mailbox
+	  controller driver enables communication between AP and CPUCP. Say
+	  Y here if you want to build this driver.
+
 config QCOM_IPCC
 	tristate "Qualcomm Technologies, Inc. IPCC driver"
 	depends on ARCH_QCOM || COMPILE_TEST
diff --git a/drivers/mailbox/Makefile b/drivers/mailbox/Makefile
index 18793e6caa2f..53b512800bde 100644
--- a/drivers/mailbox/Makefile
+++ b/drivers/mailbox/Makefile
@@ -59,4 +59,6 @@  obj-$(CONFIG_SUN6I_MSGBOX)	+= sun6i-msgbox.o
 
 obj-$(CONFIG_SPRD_MBOX)		+= sprd-mailbox.o
 
+obj-$(CONFIG_QCOM_CPUCP_MBOX)	+= qcom-cpucp-mbox.o
+
 obj-$(CONFIG_QCOM_IPCC)		+= qcom-ipcc.o
diff --git a/drivers/mailbox/qcom-cpucp-mbox.c b/drivers/mailbox/qcom-cpucp-mbox.c
new file mode 100644
index 000000000000..78d0872a0e33
--- /dev/null
+++ b/drivers/mailbox/qcom-cpucp-mbox.c
@@ -0,0 +1,179 @@ 
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2024, Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include <linux/bitops.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/irq.h>
+#include <linux/irqdomain.h>
+#include <linux/mailbox_controller.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+
+#define APSS_CPUCP_IPC_CHAN_SUPPORTED		3
+#define APSS_CPUCP_MBOX_CMD_OFF			0x4
+
+/* Tx Registers */
+#define APSS_CPUCP_TX_MBOX_CMD(i)		(0x100 + ((i) * 8))
+
+/* Rx Registers */
+#define APSS_CPUCP_RX_MBOX_CMD(i)		(0x100 + ((i) * 8))
+#define APSS_CPUCP_RX_MBOX_MAP			0x4000
+#define APSS_CPUCP_RX_MBOX_STAT			0x4400
+#define APSS_CPUCP_RX_MBOX_CLEAR		0x4800
+#define APSS_CPUCP_RX_MBOX_EN			0x4c00
+#define APSS_CPUCP_RX_MBOX_CMD_MASK		GENMASK_ULL(63, 0)
+
+/**
+ * struct qcom_cpucp_mbox - Holder for the mailbox driver
+ * @chans:			The mailbox channel
+ * @mbox:			The mailbox controller
+ * @tx_base:			Base address of the CPUCP tx registers
+ * @rx_base:			Base address of the CPUCP rx registers
+ */
+struct qcom_cpucp_mbox {
+	struct mbox_chan chans[APSS_CPUCP_IPC_CHAN_SUPPORTED];
+	struct mbox_controller mbox;
+	void __iomem *tx_base;
+	void __iomem *rx_base;
+};
+
+static inline int channel_number(struct mbox_chan *chan)
+{
+	return chan - chan->mbox->chans;
+}
+
+static irqreturn_t qcom_cpucp_mbox_irq_fn(int irq, void *data)
+{
+	struct qcom_cpucp_mbox *cpucp = data;
+	struct mbox_chan *chan;
+	unsigned long flags;
+	u64 status;
+	u32 val;
+	int i;
+
+	status = readq(cpucp->rx_base + APSS_CPUCP_RX_MBOX_STAT);
+
+	for_each_set_bit(i, (unsigned long *)&status, APSS_CPUCP_IPC_CHAN_SUPPORTED) {
+		val = readl(cpucp->rx_base + APSS_CPUCP_RX_MBOX_CMD(i) + APSS_CPUCP_MBOX_CMD_OFF);
+		chan = &cpucp->chans[i];
+		/* Provide mutual exclusion with changes to chan->cl */
+		spin_lock_irqsave(&chan->lock, flags);
+		if (chan->cl)
+			mbox_chan_received_data(chan, &val);
+		writeq(BIT(i), cpucp->rx_base + APSS_CPUCP_RX_MBOX_CLEAR);
+		spin_unlock_irqrestore(&chan->lock, flags);
+	}
+
+	return IRQ_HANDLED;
+}
+
+static int qcom_cpucp_mbox_startup(struct mbox_chan *chan)
+{
+	struct qcom_cpucp_mbox *cpucp = container_of(chan->mbox, struct qcom_cpucp_mbox, mbox);
+	unsigned long chan_id = channel_number(chan);
+	u64 val;
+
+	val = readq(cpucp->rx_base + APSS_CPUCP_RX_MBOX_EN);
+	val |= BIT(chan_id);
+	writeq(val, cpucp->rx_base + APSS_CPUCP_RX_MBOX_EN);
+
+	return 0;
+}
+
+static void qcom_cpucp_mbox_shutdown(struct mbox_chan *chan)
+{
+	struct qcom_cpucp_mbox *cpucp = container_of(chan->mbox, struct qcom_cpucp_mbox, mbox);
+	unsigned long chan_id = channel_number(chan);
+	u64 val;
+
+	val = readq(cpucp->rx_base + APSS_CPUCP_RX_MBOX_EN);
+	val &= ~BIT(chan_id);
+	writeq(val, cpucp->rx_base + APSS_CPUCP_RX_MBOX_EN);
+}
+
+static int qcom_cpucp_mbox_send_data(struct mbox_chan *chan, void *data)
+{
+	struct qcom_cpucp_mbox *cpucp = container_of(chan->mbox, struct qcom_cpucp_mbox, mbox);
+	unsigned long chan_id = channel_number(chan);
+	u32 *val = data;
+
+	writel(*val, cpucp->tx_base + APSS_CPUCP_TX_MBOX_CMD(chan_id) + APSS_CPUCP_MBOX_CMD_OFF);
+
+	return 0;
+}
+
+static const struct mbox_chan_ops qcom_cpucp_mbox_chan_ops = {
+	.startup = qcom_cpucp_mbox_startup,
+	.send_data = qcom_cpucp_mbox_send_data,
+	.shutdown = qcom_cpucp_mbox_shutdown
+};
+
+static int qcom_cpucp_mbox_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct qcom_cpucp_mbox *cpucp;
+	struct mbox_controller *mbox;
+	int irq, ret;
+
+	cpucp = devm_kzalloc(dev, sizeof(*cpucp), GFP_KERNEL);
+	if (!cpucp)
+		return -ENOMEM;
+
+	cpucp->rx_base = devm_of_iomap(dev, dev->of_node, 0, NULL);
+	if (IS_ERR(cpucp->rx_base))
+		return PTR_ERR(cpucp->rx_base);
+
+	cpucp->tx_base = devm_of_iomap(dev, dev->of_node, 1, NULL);
+	if (IS_ERR(cpucp->tx_base))
+		return PTR_ERR(cpucp->tx_base);
+
+	writeq(0, cpucp->rx_base + APSS_CPUCP_RX_MBOX_EN);
+	writeq(0, cpucp->rx_base + APSS_CPUCP_RX_MBOX_CLEAR);
+	writeq(0, cpucp->rx_base + APSS_CPUCP_RX_MBOX_MAP);
+
+	irq = platform_get_irq(pdev, 0);
+	if (irq < 0)
+		return irq;
+
+	ret = devm_request_irq(dev, irq, qcom_cpucp_mbox_irq_fn,
+			       IRQF_TRIGGER_HIGH, "apss_cpucp_mbox", cpucp);
+	if (ret < 0)
+		return dev_err_probe(dev, ret, "Failed to register irq: %d\n", irq);
+
+	writeq(APSS_CPUCP_RX_MBOX_CMD_MASK, cpucp->rx_base + APSS_CPUCP_RX_MBOX_MAP);
+
+	mbox = &cpucp->mbox;
+	mbox->dev = dev;
+	mbox->num_chans = APSS_CPUCP_IPC_CHAN_SUPPORTED;
+	mbox->chans = cpucp->chans;
+	mbox->ops = &qcom_cpucp_mbox_chan_ops;
+	mbox->txdone_irq = false;
+	mbox->txdone_poll = false;
+
+	ret = devm_mbox_controller_register(dev, mbox);
+	if (ret)
+		return dev_err_probe(dev, ret, "Failed to create mailbox\n");
+
+	return 0;
+}
+
+static const struct of_device_id qcom_cpucp_mbox_of_match[] = {
+	{ .compatible = "qcom,x1e80100-cpucp-mbox" },
+	{}
+};
+MODULE_DEVICE_TABLE(of, qcom_cpucp_mbox_of_match);
+
+static struct platform_driver qcom_cpucp_mbox_driver = {
+	.probe = qcom_cpucp_mbox_probe,
+	.driver = {
+		.name = "qcom_cpucp_mbox",
+		.of_match_table = qcom_cpucp_mbox_of_match,
+	},
+};
+module_platform_driver(qcom_cpucp_mbox_driver);
+
+MODULE_DESCRIPTION("QTI CPUCP MBOX Driver");
+MODULE_LICENSE("GPL");