Message ID: 1514904162-11201-1-git-send-email-sudeep.holla@arm.com
Series: firmware: ARM System Control and Management Interface (SCMI) support
Hi Sudeep, thank you for working on this.

On Tue, Jan 2, 2018 at 2:42 PM, Sudeep Holla <sudeep.holla@arm.com> wrote:

[...]

> diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
> new file mode 100644
> index 000000000000..58d8f88893e6
> --- /dev/null
> +++ b/drivers/firmware/arm_scmi/driver.c

[..]

> + * Return: 0 is successfully released
> + *	if null was passed, it returns -EINVAL;
> + */
> +int scmi_handle_put(const struct scmi_handle *handle)
> +{
> +	struct scmi_info *info;
> +
> +	if (!handle)
> +		return -EINVAL;
> +
> +	info = handle_to_scmi_info(handle);
> +	mutex_lock(&scmi_list_mutex);
> +	if (!WARN_ON(!info->users))
> +		info->users--;
> +	mutex_unlock(&scmi_list_mutex);
> +
> +	return 0;
> +}
> +
> +static const struct scmi_desc scmi_generic_desc = {
> +	.max_rx_timeout_ms = 30,	/* we may increase this if required */

What are your thoughts about making it a module parameter? IIRC, this
may need to be increased when someone uses a debugging version of the
firmware, for example. In such a case someone might need to recompile
the kernel in order to boot with SCMI enabled and initialized.

Also, there is a chance that another transport will be used that needs
a delay larger than 5 * 30 ms (such a transport may be kinda useless, I
know, but it can help with development). With a module parameter you
can still boot by passing a larger timeout to the module from the
cmdline.

> +	.max_msg = 20,		/* Limited by MBOX_TX_QUEUE_LEN */
> +	.max_msg_size = 128,
> +};

Best regards,
Alexey
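As an illustration of the suggestion, a module parameter could look roughly
like the sketch below; the parameter name, the "0 means use the transport
default" convention and the helper are assumptions for illustration, not
anything from the posted series:

#include <linux/moduleparam.h>

/* Hypothetical override; 0 means "use the transport default". */
static unsigned int rx_timeout_ms;
module_param(rx_timeout_ms, uint, 0444);
MODULE_PARM_DESC(rx_timeout_ms, "Max RX timeout in ms (0 = transport default)");

static unsigned int scmi_rx_timeout_ms(const struct scmi_desc *desc)
{
	/* A value given on the command line wins over the per-transport default. */
	return rx_timeout_ms ?: desc->max_rx_timeout_ms;
}

With something like that, a built-in driver could be handed e.g.
arm_scmi.rx_timeout_ms=500 on the kernel command line (the exact prefix
depends on what the module ends up being called) without recompiling.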
On 02/01/18 14:42, Sudeep Holla wrote:
> The SCMI is intended to allow OSPM to manage various functions that are
> provided by the hardware platform it is running on, including power and
> performance functions. SCMI provides two levels of abstraction, protocols
> and transports. Protocols define individual groups of system control and
> management messages. A protocol specification describes the messages
> that it supports. Transports describe the method by which protocol
> messages are communicated between agents and the platform.
>
> This patch adds basic infrastructure to manage the message allocation,
> initialisation, packing/unpacking and shared memory management.
>

Any chance you could review patches 3-14? All the drivers (15-20) are
already acked by the maintainer. I know it's late for v4.16; I want it
to be ready for v4.17 ASAP. It's been on the list without much progress
for a few months now :(, hence the push. Sorry for the nag.

--
Regards,
Sudeep
On Tue, Jan 2, 2018 at 2:42 PM, Sudeep Holla <sudeep.holla@arm.com> wrote: > The performance protocol is intended for the performance management of > group(s) of device(s) that run in the same performance domain. It > includes even the CPUs. A performance domain is defined by a set of > devices that always have to run at the same performance level. > For example, a set of CPUs that share a voltage domain, and have a > common frequency control, is said to be in the same performance domain. > > The commands in this protocol provide functionality to describe the > protocol version, describe various attribute flags, set and get the > performance level of a domain. It also supports discovery of the list > of performance levels supported by a performance domain, and the > properties of each performance level. > > This patch adds basic support for the performance protocol. > > Cc: Arnd Bergmann <arnd@arndb.de> > Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> > --- > drivers/firmware/arm_scmi/Makefile | 2 +- > drivers/firmware/arm_scmi/common.h | 1 + > drivers/firmware/arm_scmi/perf.c | 527 +++++++++++++++++++++++++++++++++++++ > include/linux/scmi_protocol.h | 34 +++ > 4 files changed, 563 insertions(+), 1 deletion(-) [...] > diff --git a/drivers/firmware/arm_scmi/perf.c b/drivers/firmware/arm_scmi/perf.c > new file mode 100644 > index 000000000000..a1f5cf136748 > --- /dev/null > +++ b/drivers/firmware/arm_scmi/perf.c > @@ -0,0 +1,527 @@ > +/* > + * System Control and Management Interface (SCMI) Performance Protocol > + * > + * Copyright (C) 2017 ARM Ltd. > + * > + * This program is free software; you can redistribute it and/or modify it > + * under the terms and conditions of the GNU General Public License, > + * version 2, as published by the Free Software Foundation. > + * > + * This program is distributed in the hope it will be useful, but WITHOUT > + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or > + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for > + * more details. > + * > + * You should have received a copy of the GNU General Public License along > + * with this program. If not, see <http://www.gnu.org/licenses/>. 
> + */ > + > +#include <linux/of.h> > +#include <linux/platform_device.h> > +#include <linux/pm_opp.h> > +#include <linux/sort.h> > + > +#include "common.h" > + > +enum scmi_performance_protocol_cmd { > + PERF_DOMAIN_ATTRIBUTES = 0x3, > + PERF_DESCRIBE_LEVELS = 0x4, > + PERF_LIMITS_SET = 0x5, > + PERF_LIMITS_GET = 0x6, > + PERF_LEVEL_SET = 0x7, > + PERF_LEVEL_GET = 0x8, > + PERF_NOTIFY_LIMITS = 0x9, > + PERF_NOTIFY_LEVEL = 0xa, > +}; > + > +struct scmi_opp { > + u32 perf; > + u32 power; > + u32 trans_latency_us; > +}; > + > +struct scmi_msg_resp_perf_attributes { > + __le16 num_domains; > + __le16 flags; > +#define POWER_SCALE_IN_MILLIWATT(x) ((x) & BIT(0)) > + __le32 stats_addr_low; > + __le32 stats_addr_high; > + __le32 stats_size; > +}; > + > +struct scmi_msg_resp_perf_domain_attributes { > + __le32 flags; > +#define SUPPORTS_SET_LIMITS(x) ((x) & BIT(31)) > +#define SUPPORTS_SET_PERF_LVL(x) ((x) & BIT(30)) > +#define SUPPORTS_PERF_LIMIT_NOTIFY(x) ((x) & BIT(29)) > +#define SUPPORTS_PERF_LEVEL_NOTIFY(x) ((x) & BIT(28)) > + __le32 rate_limit_us; > + __le32 sustained_freq_khz; > + __le32 sustained_perf_level; > + u8 name[SCMI_MAX_STR_SIZE]; > +}; > + > +struct scmi_msg_perf_describe_levels { > + __le32 domain; > + __le32 level_index; > +}; > + > +struct scmi_perf_set_limits { > + __le32 domain; > + __le32 max_level; > + __le32 min_level; > +}; > + > +struct scmi_perf_get_limits { > + __le32 max_level; > + __le32 min_level; > +}; > + > +struct scmi_perf_set_level { > + __le32 domain; > + __le32 level; > +}; > + > +struct scmi_perf_notify_level_or_limits { > + __le32 domain; > + __le32 notify_enable; > +}; > + > +struct scmi_msg_resp_perf_describe_levels { > + __le16 num_returned; > + __le16 num_remaining; > + struct { > + __le32 perf_val; > + __le32 power; > + __le16 transition_latency_us; > + __le16 reserved; > + } opp[0]; > +}; > + > +struct perf_dom_info { > + bool set_limits; > + bool set_perf; > + bool perf_limit_notify; > + bool perf_level_notify; > + u32 opp_count; > + u32 sustained_freq_khz; > + u32 sustained_perf_level; > + u32 mult_factor; > + char name[SCMI_MAX_STR_SIZE]; > + struct scmi_opp opp[MAX_OPPS]; > +}; > + > +struct scmi_perf_info { > + int num_domains; > + bool power_scale_mw; > + u64 stats_addr; > + u32 stats_size; > + struct perf_dom_info *dom_info; > +}; > + > +static int scmi_perf_attributes_get(const struct scmi_handle *handle, > + struct scmi_perf_info *pi) > +{ > + int ret; > + struct scmi_xfer *t; > + struct scmi_msg_resp_perf_attributes *attr; > + > + ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES, > + SCMI_PROTOCOL_PERF, 0, sizeof(*attr), &t); > + if (ret) > + return ret; > + > + attr = t->rx.buf; > + > + ret = scmi_do_xfer(handle, t); > + if (!ret) { > + u16 flags = le16_to_cpu(attr->flags); > + > + pi->num_domains = le16_to_cpu(attr->num_domains); > + pi->power_scale_mw = POWER_SCALE_IN_MILLIWATT(flags); > + pi->stats_addr = le32_to_cpu(attr->stats_addr_low) | > + (u64)le32_to_cpu(attr->stats_addr_high) << 32; > + pi->stats_size = le32_to_cpu(attr->stats_size); > + } > + > + scmi_one_xfer_put(handle, t); > + return ret; > +} > + > +static int > +scmi_perf_domain_attributes_get(const struct scmi_handle *handle, u32 domain, > + struct perf_dom_info *dom_info) > +{ > + int ret; > + struct scmi_xfer *t; > + struct scmi_msg_resp_perf_domain_attributes *attr; > + > + ret = scmi_one_xfer_init(handle, PERF_DOMAIN_ATTRIBUTES, > + SCMI_PROTOCOL_PERF, sizeof(domain), > + sizeof(*attr), &t); > + if (ret) > + return ret; > + > + *(__le32 *)t->tx.buf = 
cpu_to_le32(domain); > + attr = t->rx.buf; > + > + ret = scmi_do_xfer(handle, t); > + if (!ret) { > + u32 flags = le32_to_cpu(attr->flags); > + > + dom_info->set_limits = SUPPORTS_SET_LIMITS(flags); > + dom_info->set_perf = SUPPORTS_SET_PERF_LVL(flags); > + dom_info->perf_limit_notify = SUPPORTS_PERF_LIMIT_NOTIFY(flags); > + dom_info->perf_level_notify = SUPPORTS_PERF_LEVEL_NOTIFY(flags); > + dom_info->sustained_freq_khz = > + le32_to_cpu(attr->sustained_freq_khz); > + dom_info->sustained_perf_level = > + le32_to_cpu(attr->sustained_perf_level); > + dom_info->mult_factor = (dom_info->sustained_freq_khz * 1000) / > + dom_info->sustained_perf_level; > + memcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE); > + } > + > + scmi_one_xfer_put(handle, t); > + return ret; > +} > + > +static int opp_cmp_func(const void *opp1, const void *opp2) > +{ > + const struct scmi_opp *t1 = opp1, *t2 = opp2; > + > + return t1->perf - t2->perf; > +} > + > +static int > +scmi_perf_describe_levels_get(const struct scmi_handle *handle, u32 domain, > + struct perf_dom_info *perf_dom) > +{ > + int ret, cnt; > + u32 tot_opp_cnt = 0; > + u16 num_returned, num_remaining; > + struct scmi_xfer *t; > + struct scmi_opp *opp; > + struct scmi_msg_perf_describe_levels *dom_info; > + struct scmi_msg_resp_perf_describe_levels *level_info; > + > + ret = scmi_one_xfer_init(handle, PERF_DESCRIBE_LEVELS, > + SCMI_PROTOCOL_PERF, sizeof(*dom_info), 0, &t); > + if (ret) > + return ret; > + > + dom_info = t->tx.buf; > + level_info = t->rx.buf; > + > + do { > + dom_info->domain = cpu_to_le32(domain); > + /* Set the number of OPPs to be skipped/already read */ > + dom_info->level_index = cpu_to_le32(tot_opp_cnt); > + > + ret = scmi_do_xfer(handle, t); > + if (ret) > + break; > + > + num_returned = le16_to_cpu(level_info->num_returned); > + num_remaining = le16_to_cpu(level_info->num_remaining); > + if (tot_opp_cnt + num_returned > MAX_OPPS) { > + dev_err(handle->dev, "No. 
of OPPs exceeded MAX_OPPS"); > + break; > + } > + > + opp = &perf_dom->opp[tot_opp_cnt]; > + for (cnt = 0; cnt < num_returned; cnt++, opp++) { > + opp->perf = le32_to_cpu(level_info->opp[cnt].perf_val); > + opp->power = le32_to_cpu(level_info->opp[cnt].power); > + opp->trans_latency_us = le16_to_cpu( > + level_info->opp[cnt].transition_latency_us); > + > + dev_dbg(handle->dev, "Level %d Power %d Latency %dus\n", > + opp->perf, opp->power, opp->trans_latency_us); > + } > + > + tot_opp_cnt += num_returned; > + /* > + * check for both returned and remaining to avoid infinite > + * loop due to buggy firmware > + */ > + } while (num_returned && num_remaining); > + > + perf_dom->opp_count = tot_opp_cnt; > + scmi_one_xfer_put(handle, t); > + > + sort(perf_dom->opp, tot_opp_cnt, sizeof(*opp), opp_cmp_func, NULL); > + return ret; > +} > + > +static int scmi_perf_limits_set(const struct scmi_handle *handle, u32 domain, > + u32 max_perf, u32 min_perf) > +{ > + int ret; > + struct scmi_xfer *t; > + struct scmi_perf_set_limits *limits; > + > + ret = scmi_one_xfer_init(handle, PERF_LIMITS_SET, SCMI_PROTOCOL_PERF, > + sizeof(*limits), 0, &t); > + if (ret) > + return ret; > + > + limits = t->tx.buf; > + limits->domain = cpu_to_le32(domain); > + limits->max_level = cpu_to_le32(max_perf); > + limits->min_level = cpu_to_le32(min_perf); > + > + ret = scmi_do_xfer(handle, t); > + > + scmi_one_xfer_put(handle, t); > + return ret; > +} > + > +static int scmi_perf_limits_get(const struct scmi_handle *handle, u32 domain, > + u32 *max_perf, u32 *min_perf) > +{ > + int ret; > + struct scmi_xfer *t; > + struct scmi_perf_get_limits *limits; > + > + ret = scmi_one_xfer_init(handle, PERF_LIMITS_GET, SCMI_PROTOCOL_PERF, > + sizeof(__le32), 0, &t); > + if (ret) > + return ret; > + > + *(__le32 *)t->tx.buf = cpu_to_le32(domain); > + > + ret = scmi_do_xfer(handle, t); > + if (!ret) { > + limits = t->rx.buf; > + > + *max_perf = le32_to_cpu(limits->max_level); > + *min_perf = le32_to_cpu(limits->min_level); > + } > + > + scmi_one_xfer_put(handle, t); > + return ret; > +} > + > +static int > +scmi_perf_level_set(const struct scmi_handle *handle, u32 domain, u32 level) > +{ > + int ret; > + struct scmi_xfer *t; > + struct scmi_perf_set_level *lvl; > + > + ret = scmi_one_xfer_init(handle, PERF_LEVEL_SET, SCMI_PROTOCOL_PERF, > + sizeof(*lvl), 0, &t); > + if (ret) > + return ret; > + > + lvl = t->tx.buf; > + lvl->domain = cpu_to_le32(domain); > + lvl->level = cpu_to_le32(level); > + > + ret = scmi_do_xfer(handle, t); > + > + scmi_one_xfer_put(handle, t); > + return ret; > +} > + > +static int > +scmi_perf_level_get(const struct scmi_handle *handle, u32 domain, u32 *level) > +{ > + int ret; > + struct scmi_xfer *t; > + > + ret = scmi_one_xfer_init(handle, PERF_LEVEL_GET, SCMI_PROTOCOL_PERF, > + sizeof(u32), sizeof(u32), &t); > + if (ret) > + return ret; > + > + *(__le32 *)t->tx.buf = cpu_to_le32(domain); > + > + ret = scmi_do_xfer(handle, t); > + if (!ret) > + *level = le32_to_cpu(*(__le32 *)t->rx.buf); > + > + scmi_one_xfer_put(handle, t); > + return ret; > +} > + > +static int __scmi_perf_notify_enable(const struct scmi_handle *handle, u32 cmd, > + u32 domain, bool enable) > +{ > + int ret; > + struct scmi_xfer *t; > + struct scmi_perf_notify_level_or_limits *notify; > + > + ret = scmi_one_xfer_init(handle, cmd, SCMI_PROTOCOL_PERF, > + sizeof(*notify), 0, &t); > + if (ret) > + return ret; > + > + notify = t->tx.buf; > + notify->domain = cpu_to_le32(domain); > + notify->notify_enable = cpu_to_le32(enable & BIT(0)); > + > + ret = 
scmi_do_xfer(handle, t); > + > + scmi_one_xfer_put(handle, t); > + return ret; > +} > + > +static int scmi_perf_limits_notify_enable(const struct scmi_handle *handle, > + u32 domain, bool enable) > +{ > + return __scmi_perf_notify_enable(handle, PERF_NOTIFY_LIMITS, > + domain, enable); > +} > + > +static int scmi_perf_level_notify_enable(const struct scmi_handle *handle, > + u32 domain, bool enable) > +{ > + return __scmi_perf_notify_enable(handle, PERF_NOTIFY_LEVEL, > + domain, enable); > +} > +

Do you have any support to correctly handle notifications without
errors/warnings?

It looks like these two functions are accessible to users through
perf_ops. But are you sure that notifications will be correctly handled
by the transport, the mailbox framework and the SCMI protocol?

The reason I ask is that it looks like it's better to return
-EOPNOTSUPP or -ENODEV, maybe -EINVAL, here. When you add notification
support you can allow these operations once it's safe to do so.

[..]

Best regards,
Alexey Klimov
On 12/01/18 14:55, Alexey Klimov wrote:
> On Tue, Jan 2, 2018 at 2:42 PM, Sudeep Holla <sudeep.holla@arm.com> wrote:
>> The performance protocol is intended for the performance management of
>> group(s) of device(s) that run in the same performance domain. It
>> includes even the CPUs. A performance domain is defined by a set of
>> devices that always have to run at the same performance level.
>> For example, a set of CPUs that share a voltage domain, and have a
>> common frequency control, is said to be in the same performance domain.
>>
>> The commands in this protocol provide functionality to describe the
>> protocol version, describe various attribute flags, set and get the
>> performance level of a domain. It also supports discovery of the list
>> of performance levels supported by a performance domain, and the
>> properties of each performance level.
>>
>> This patch adds basic support for the performance protocol.
>>
>> Cc: Arnd Bergmann <arnd@arndb.de>
>> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
>> ---
>>  drivers/firmware/arm_scmi/Makefile |   2 +-
>>  drivers/firmware/arm_scmi/common.h |   1 +
>>  drivers/firmware/arm_scmi/perf.c   | 527 +++++++++++++++++++++++++++++++++++++
>>  include/linux/scmi_protocol.h      |  34 +++
>>  4 files changed, 563 insertions(+), 1 deletion(-)
>
> [...]

[..]

>> +
>> +static int scmi_perf_limits_notify_enable(const struct scmi_handle *handle,
>> +					  u32 domain, bool enable)
>> +{
>> +	return __scmi_perf_notify_enable(handle, PERF_NOTIFY_LIMITS,
>> +					 domain, enable);
>> +}
>> +
>> +static int scmi_perf_level_notify_enable(const struct scmi_handle *handle,
>> +					 u32 domain, bool enable)
>> +{
>> +	return __scmi_perf_notify_enable(handle, PERF_NOTIFY_LEVEL,
>> +					 domain, enable);
>> +}
>> +
>
> Do you have any support to correctly handle notifications without
> errors/warnings?

Good catch.

> It looks like these two functions are accessible to users through
> perf_ops. But are you sure that notifications will be correctly handled
> by the transport, the mailbox framework and the SCMI protocol?

Indeed, it slipped through the cracks. I have some rudimentary notifier
support which I have not put in this series due to lack of firmware to
test it against.

> The reason I ask is that it looks like it's better to return
> -EOPNOTSUPP or -ENODEV, maybe -EINVAL, here.

I agree, will change it.

> When you add notification support you can allow these operations once
> it's safe to do so.

Yes, that sounds like a good plan.

--
Regards,
Sudeep
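For reference, the interim change being agreed on here would presumably look
something like the sketch below; it is not taken from a posted revision, and
picking -EOPNOTSUPP over -ENODEV/-EINVAL is simply the first of the options
mentioned above:

static int scmi_perf_limits_notify_enable(const struct scmi_handle *handle,
					  u32 domain, bool enable)
{
	/* Notification delivery is not wired up yet; refuse the request. */
	return -EOPNOTSUPP;
}

static int scmi_perf_level_notify_enable(const struct scmi_handle *handle,
					 u32 domain, bool enable)
{
	/* Same as above until notification support lands. */
	return -EOPNOTSUPP;
}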
On Tue, Jan 2, 2018 at 3:42 PM, Sudeep Holla <sudeep.holla@arm.com> wrote:

> +#define SCMI_MAX_POLLING_TIMEOUT_NS	(100 * NSEC_PER_USEC)

> /**
>  * scmi_do_xfer() - Do one transfer
>  *
> @@ -389,14 +406,30 @@ int scmi_do_xfer(const struct scmi_handle *handle, struct scmi_xfer *xfer)

> +	if (xfer->hdr.poll_completion) {
> +		ktime_t stop, cur;
> +
> +		stop = ktime_add_ns(ktime_get(), SCMI_MAX_POLLING_TIMEOUT_NS);
> +		do {
> +			udelay(5);
> +			cur = ktime_get();
> +		} while (!scmi_xfer_poll_done(info, xfer) &&
> +			 ktime_before(cur, stop));

The 5 microsecond back-off isn't that much smaller than the 100 microsecond
timeout, given that udelay() often waits much longer than the specified time.

How did you come up with those two numbers? Are you sure this is better
than just using a cpu_relax() instead of the udelay()?

        Arnd
On 19/02/18 11:32, Arnd Bergmann wrote:
> On Tue, Jan 2, 2018 at 3:42 PM, Sudeep Holla <sudeep.holla@arm.com> wrote:
>
>> +#define SCMI_MAX_POLLING_TIMEOUT_NS	(100 * NSEC_PER_USEC)
>> /**
>>  * scmi_do_xfer() - Do one transfer
>>  *
>> @@ -389,14 +406,30 @@ int scmi_do_xfer(const struct scmi_handle *handle, struct scmi_xfer *xfer)
>
>> +	if (xfer->hdr.poll_completion) {
>> +		ktime_t stop, cur;
>> +
>> +		stop = ktime_add_ns(ktime_get(), SCMI_MAX_POLLING_TIMEOUT_NS);
>> +		do {
>> +			udelay(5);
>> +			cur = ktime_get();
>> +		} while (!scmi_xfer_poll_done(info, xfer) &&
>> +			 ktime_before(cur, stop));
>
> The 5 microsecond back-off isn't that much smaller than the 100 microsecond
> timeout, given that udelay() often waits much longer than the specified time.
>
> How did you come up with those two numbers? Are you sure this is better
> than just using a cpu_relax() instead of the udelay()?
>

Somehow I assumed that cpu_relax() would schedule out, and since this is
called in the fast-switching path I can't do that. But now I see that
it's just a hint, so I can use it.

Sorry for missing it earlier; you did point this out in the previous
version and I retained the udelay() based on my wrong assumption.
Thanks.

--
Regards,
Sudeep
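For illustration, the quoted polling loop with the udelay(5) dropped in
favour of cpu_relax(), roughly as suggested, might look like the sketch
below; it is based only on the fragment quoted above, not on a posted
revision:

	if (xfer->hdr.poll_completion) {
		ktime_t stop = ktime_add_ns(ktime_get(),
					    SCMI_MAX_POLLING_TIMEOUT_NS);

		/*
		 * cpu_relax() is only an architectural hint (e.g. a
		 * pause/yield instruction); it never sleeps or schedules,
		 * so it remains safe in the fast-switch path.
		 */
		while (!scmi_xfer_poll_done(info, xfer) &&
		       ktime_before(ktime_get(), stop))
			cpu_relax();
	}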