Message ID: 20180731161340.13000-1-georgi.djakov@linaro.org
Series: Introduce on-chip interconnect API
On 2018-07-31 09:13, Georgi Djakov wrote:
> Currently we support only platform data for specifying the interconnect
> endpoints. As now the endpoints are hard-coded into the consumer driver
> this may lead to complications when a single driver is used by multiple
> SoCs, which may have different interconnect topology.
> To avoid cluttering the consumer drivers, introduce a translation function
> to help us get the board specific interconnect data from device-tree.
>
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> ---
>  drivers/interconnect/core.c  | 62 ++++++++++++++++++++++++++++++++++++
>  include/linux/interconnect.h |  7 ++++
>  2 files changed, 69 insertions(+)
>
> diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
> index 9fef180cf77e..d1b6adff0a3d 100644
> --- a/drivers/interconnect/core.c
> +++ b/drivers/interconnect/core.c
> @@ -16,6 +16,7 @@
>  #include <linux/module.h>
>  #include <linux/mutex.h>
>  #include <linux/slab.h>
> +#include <linux/of.h>
>  #include <linux/overflow.h>
>  #include <linux/uaccess.h>
>
> @@ -251,6 +252,67 @@ static int apply_constraints(struct icc_path *path)
>  	return ret;
>  }
>
> +struct icc_path *of_icc_get(struct device *dev, const char *name)
> +{
> +	struct device_node *np = NULL;
> +	struct of_phandle_args src_args, dst_args;
> +	u32 src_id, dst_id;
> +	int idx = 0;
> +	int ret;
> +
> +	if (!dev || !dev->of_node)
> +		return ERR_PTR(-ENODEV);
> +
> +	np = dev->of_node;
> +
> +	/*
> +	 * When the consumer DT node does not have an "interconnects"
> +	 * property, return a NULL path to skip setting constraints.
> +	 */
> +	if (!of_find_property(np, "interconnects", NULL))
> +		return NULL;
> +
> +	/*
> +	 * We use a combination of phandle and specifier for endpoint. For now
> +	 * let's support only global ids and extend this in the future if
> +	 * needed without breaking DT compatibility.
> +	 */
> +	if (name) {
> +		idx = of_property_match_string(np, "interconnect-names", name);
> +		if (idx < 0)
> +			return ERR_PTR(idx);
> +	}
> +
> +	ret = of_parse_phandle_with_args(np, "interconnects",
> +					 "#interconnect-cells", idx * 2,
> +					 &src_args);
> +	if (ret)
> +		return ERR_PTR(ret);
> +
> +	of_node_put(src_args.np);
> +
> +	if (!src_args.args_count || src_args.args_count > 1)
> +		return ERR_PTR(-EINVAL);
> +
> +	src_id = src_args.args[0];
> +
> +	ret = of_parse_phandle_with_args(np, "interconnects",
> +					 "#interconnect-cells", idx * 2 + 1,
> +					 &dst_args);
> +	if (ret)
> +		return ERR_PTR(ret);
> +
> +	of_node_put(dst_args.np);
> +
> +	if (!dst_args.args_count || dst_args.args_count > 1)
> +		return ERR_PTR(-EINVAL);
> +
> +	dst_id = dst_args.args[0];
> +
> +	return icc_get(dev, src_id, dst_id);
> +}
> +EXPORT_SYMBOL_GPL(of_icc_get);
> +
>  /**
>   * icc_set() - set constraints on an interconnect path between two endpoints
>   * @path: reference to the path returned by icc_get()
> diff --git a/include/linux/interconnect.h b/include/linux/interconnect.h
> index 593215371fd6..ae6744da9bc2 100644
> --- a/include/linux/interconnect.h
> +++ b/include/linux/interconnect.h
> @@ -17,6 +17,7 @@ struct device;
>
>  struct icc_path *icc_get(struct device *dev, const int src_id,
>  			 const int dst_id);
> +struct icc_path *of_icc_get(struct device *dev, const char *name);
>  void icc_put(struct icc_path *path);
>  int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw);
>
> @@ -28,6 +29,12 @@ static inline struct icc_path *icc_get(struct device *dev, const int src_id,
>  	return NULL;
>  }
>
> +static inline struct icc_path *of_icc_get(struct device *dev,
> +					   const char *name)
> +{
> +	return NULL;
> +}
> +

Might want to return ERR_PTR(-ENODEV) or some other error code so that the
client doesn't have to do a NULL check AND an error check?

-Saravana
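A minimal sketch of the consumer-side flow implied by the patch above, using
only the signatures quoted in it; the "ddr" path name, the driver name and
the bandwidth numbers are hypothetical:

  #include <linux/err.h>
  #include <linux/interconnect.h>
  #include <linux/platform_device.h>

  static int foo_probe(struct platform_device *pdev)
  {
  	struct icc_path *path;
  	int ret;

  	/* Translate this device's "interconnects" DT property to a path */
  	path = of_icc_get(&pdev->dev, "ddr");
  	if (IS_ERR(path))
  		return PTR_ERR(path);

  	/* NULL means no "interconnects" property: constraints are optional */
  	if (path) {
  		ret = icc_set(path, 1000, 2000);	/* avg_bw, peak_bw */
  		if (ret) {
  			icc_put(path);
  			return ret;
  		}
  	}

  	return 0;
  }

Note how the caller has to perform both the IS_ERR() check and the NULL
check, which is exactly the ergonomic point raised above.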
On 07/31/2018 09:13 AM, Georgi Djakov wrote:
> This patch introduces a new API to get requirements and configure the
> interconnect buses across the entire chipset to fit with the current
> demand.
>
> The API is using a consumer/provider-based model, where the providers are
> the interconnect buses and the consumers could be various drivers.
> The consumers request interconnect resources (path) between endpoints and
> set the desired constraints on this data flow path. The providers receive
> requests from consumers and aggregate these requests for all master-slave
> pairs on that path. Then the providers configure each node participating
> in the topology according to the requested data flow path, physical links
> and constraints. The topology could be complicated and multi-tiered and is
> SoC specific.
>
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> ---
>  Documentation/interconnect/interconnect.rst |  96 ++++
>  drivers/Kconfig                             |   2 +
>  drivers/Makefile                            |   1 +
>  drivers/interconnect/Kconfig                |  10 +
>  drivers/interconnect/Makefile               |   2 +
>  drivers/interconnect/core.c                 | 569 ++++++++++++++++++++
>  include/linux/interconnect-provider.h       | 125 +++++
>  include/linux/interconnect.h                |  42 ++
>  8 files changed, 847 insertions(+)
>  create mode 100644 Documentation/interconnect/interconnect.rst
>  create mode 100644 drivers/interconnect/Kconfig
>  create mode 100644 drivers/interconnect/Makefile
>  create mode 100644 drivers/interconnect/core.c
>  create mode 100644 include/linux/interconnect-provider.h
>  create mode 100644 include/linux/interconnect.h
>
> diff --git a/Documentation/interconnect/interconnect.rst b/Documentation/interconnect/interconnect.rst
> new file mode 100644
> index 000000000000..e628881ee218
> --- /dev/null
> +++ b/Documentation/interconnect/interconnect.rst
> @@ -0,0 +1,96 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +=====================================
> +GENERIC SYSTEM INTERCONNECT SUBSYSTEM
> +=====================================
> +
> +Introduction
> +------------
> +
> +This framework is designed to provide a standard kernel interface to control
> +the settings of the interconnects on a SoC. These settings can be throughput,

I would say: on an SoC.
Do you pronounce that as "sock" or the letters S.O.C.?

> +latency and priority between multiple interconnected devices or functional
> +blocks. This can be controlled dynamically in order to save power or provide
> +maximum performance.
> +
> +The interconnect bus is a hardware with configurable parameters, which can be

   bus is hardware

> +set on a data path according to the requests received from various drivers.
> +An example of interconnect buses are the interconnects between various
> +components or functional blocks in chipsets. There can be multiple interconnects
> +on a SoC that can be multi-tiered.

   an SoC

> +
> +Below is a simplified diagram of a real-world SoC interconnect bus topology.
> +
> +::
> +
> + +----------------+    +----------------+
> + | HW Accelerator |--->|      M NoC     |<---------------+
> + +----------------+    +----------------+                |
> +                         |      |                    +------------+
> +  +-----+  +-------------+      V       +------+     |            |
> +  | DDR |  |                +--------+  | PCIe |     |            |
> +  +-----+  |                | Slaves |  +------+     |            |
> +    ^ ^    |                +--------+               |   C NoC    |
> +    | |    V                                         |            |
> + +------------------+   +------------------------+   |            |   +-----+
> + |                  |-->|                        |-->|            |-->| CPU |
> + |                  |-->|                        |<--|            |   +-----+
> + |     Mem NoC      |   |         S NoC          |   +------------+
> + |                  |<--|                        |---------+    |
> + |                  |<--|                        |<------+ |    |   +--------+
> + +------------------+   +------------------------+       | |    +-->| Slaves |
> +   ^  ^    ^    ^          ^                             | |        +--------+
> +   |  |    |    |          |                             | V
> + +------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
> + | CPUs |  |  | GPU |   | DSP |  | Masters |-->|     P NoC      |-->| Slaves |
> + +------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
> +           |
> +       +-------+
> +       | Modem |
> +       +-------+
> +
> +Terminology
> +-----------
> +
> +Interconnect provider is the software definition of the interconnect hardware.
> +The interconnect providers on the above diagram are M NoC, S NoC, C NoC, P NoC
> +and Mem NoC.
> +
> +Interconnect node is the software definition of the interconnect hardware
> +port. Each interconnect provider consists of multiple interconnect nodes,
> +which are connected to other SoC components including other interconnect
> +providers. The point on the diagram where the CPUs connect to the memory is
> +called an interconnect node, which belongs to the Mem NoC interconnect provider.
> +
> +Interconnect endpoints are the first or the last element of the path. Every
> +endpoint is a node, but not every node is an endpoint.
> +
> +Interconnect path is everything between two endpoints including all the nodes
> +that have to be traversed to reach from a source to destination node. It may
> +include multiple master-slave pairs across several interconnect providers.
> +
> +Interconnect consumers are the entities which make use of the data paths exposed
> +by the providers. The consumers send requests to providers requesting various
> +throughput, latency and priority. Usually the consumers are device drivers, that
> +send request based on their needs. An example for a consumer is a video decoder
> +that supports various formats and image sizes.
> +
> +Interconnect providers
> +----------------------
> +
> +Interconnect provider is an entity that implements methods to initialize and
> +configure a interconnect bus hardware. The interconnect provider drivers should

   configure interconnect bus hardware. (i.e., drop the "a")

> +be registered with the interconnect provider core.
> +
> +The interconnect framework provider API functions are documented in
> +.. kernel-doc:: include/linux/interconnect-provider.h

What do you want that to do? and does that happen?
The .. kernel-doc:: line won't be printed. It will just be expanded to the
contents of that header file, so the preceding sentence fragment will
look/sound odd.

> +
> +Interconnect consumers
> +----------------------
> +
> +Interconnect consumers are the clients which use the interconnect APIs to
> +get paths between endpoints and set their bandwidth/latency/QoS requirements
> +for these interconnect paths.
> +
> +The interconnect framework consumer API functions are documented in
> +.. kernel-doc:: include/linux/interconnect.h

same as above.

--
~Randy
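To make the consumer/provider split in the quoted documentation concrete,
here is a short hypothetical consumer sketch against the API quoted above
(icc_get(), icc_set(), icc_put()); the endpoint IDs, names and bandwidth
values are invented for illustration, and units are whatever the framework
defines:

  #include <linux/device.h>
  #include <linux/err.h>
  #include <linux/interconnect.h>

  /* Hypothetical platform-defined global endpoint IDs */
  #define MASTER_VDEC	32
  #define SLAVE_EBI	512

  struct vdec {
  	struct device *dev;
  	struct icc_path *path;
  };

  static int vdec_enable_bw(struct vdec *v)
  {
  	/* Request a path between the two endpoints */
  	v->path = icc_get(v->dev, MASTER_VDEC, SLAVE_EBI);
  	if (IS_ERR(v->path))
  		return PTR_ERR(v->path);

  	/* Express average/peak bandwidth needs on the whole path */
  	return icc_set(v->path, 1000, 2000);
  }

  static void vdec_disable_bw(struct vdec *v)
  {
  	icc_set(v->path, 0, 0);	/* drop our bandwidth request */
  	icc_put(v->path);
  }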
On 2018-08-02 05:07, Georgi Djakov wrote:
> Hi Saravana,
>
> On 08/02/2018 01:57 AM, skannan@codeaurora.org wrote:
>> On 2018-07-31 09:13, Georgi Djakov wrote:
>>> Currently we support only platform data for specifying the interconnect
>>> endpoints. As now the endpoints are hard-coded into the consumer driver
>>> this may lead to complications when a single driver is used by multiple
>>> SoCs, which may have different interconnect topology.
>>> To avoid cluttering the consumer drivers, introduce a translation
>>> function to help us get the board specific interconnect data from
>>> device-tree.
>>>
>>> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
>>> ---
>>>  drivers/interconnect/core.c  | 62 ++++++++++++++++++++++++++++++++++++
>>>  include/linux/interconnect.h |  7 ++++
>>>  2 files changed, 69 insertions(+)
>>>
>>> diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
>>> index 9fef180cf77e..d1b6adff0a3d 100644
>>> --- a/drivers/interconnect/core.c
>>> +++ b/drivers/interconnect/core.c
> [..]
>>> --- a/include/linux/interconnect.h
>>> +++ b/include/linux/interconnect.h
>>> @@ -17,6 +17,7 @@ struct device;
>>>
>>>  struct icc_path *icc_get(struct device *dev, const int src_id,
>>>  			 const int dst_id);
>>> +struct icc_path *of_icc_get(struct device *dev, const char *name);
>>>  void icc_put(struct icc_path *path);
>>>  int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw);
>>>
>>> @@ -28,6 +29,12 @@ static inline struct icc_path *icc_get(struct device *dev, const int src_id,
>>>  	return NULL;
>>>  }
>>>
>>> +static inline struct icc_path *of_icc_get(struct device *dev,
>>> +					   const char *name)
>>> +{
>>> +	return NULL;
>>> +}
>>> +
>>
>> Might want to return ERR_PTR(-ENODEV) or some other error code so that
>> the client doesn't have to do a NULL check AND an error check?
>>
>> -Saravana
>
> NULL is returned when CONFIG_INTERCONNECT=n. Configuration of
> interconnects by consumer drivers could be optional and that's why NULL
> is returned instead of an error. The consumer drivers decide how to
> proceed in this case, and if there is a hard requirement for interconnect
> support, then I would suggest expressing it as a dependency in Kconfig.

Ehh... you could make the same argument with an error. If it's not
mandatory for functioning, they can ignore a specific error and continue?

At a minimum, these stub functions returning NULL don't match the
documentation, which says these APIs will only ever return ERR_PTR().

-Saravana
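For reference, the Kconfig dependency Georgi suggests for consumers with a
hard requirement would look something like this (the symbol name and help
text are hypothetical):

  config FOO_VIDEO
  	tristate "Foo video driver"
  	depends on INTERCONNECT
  	help
  	  This driver cannot operate without interconnect bandwidth
  	  control, so build it only when the interconnect framework
  	  is available.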
Hi Maxime,

On 08/20/2018 06:32 PM, Maxime Ripard wrote:
> Hi Georgi,
>
> On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
>>> There is also a patch series from Maxime Ripard that's addressing the
>>> same general area. See "dt-bindings: Add a dma-parent property". We
>>> don't need multiple ways to address describing the device to memory
>>> paths, so you all had better work out a common solution.
>>
>> Looks like this fits exactly into the interconnect API concept. I see
>> MBUS as interconnect provider and display/camera as consumers, that
>> report their bandwidth needs. I am also planning to add support for
>> priority.
>
> Thanks for working on this. After looking at your series, the one thing
> I'm a bit uncertain about (and the most important one to us) is how we
> would be able to tell through which interconnect the DMA are done.
>
> This is important to us since our topology is actually quite simple as
> you've seen, but the RAM is not mapped on that bus and on the CPU's,
> so we need to apply an offset to each buffer being DMA'd.

OK, I see - your problem is not about bandwidth scaling, but about using
different memory ranges in the driver to access the same location, so it
is really a different problem. Also, the interconnect bindings describe a
path and endpoints. However, I am open to any ideas.

Thanks,
Georgi
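For background on the offset Maxime describes: the generic DT mechanism for
translating bus-master addresses to CPU addresses is the standard dma-ranges
property, which is a different tool than the interconnect bindings. A rough
sketch (the addresses and node are illustrative, not Allwinner's actual map):

  soc {
  	compatible = "simple-bus";
  	#address-cells = <1>;
  	#size-cells = <1>;
  	ranges;

  	/*
  	 * <bus-address cpu-address size>: masters on this bus see
  	 * DRAM at address 0x0 while the CPU sees it at 0x40000000.
  	 */
  	dma-ranges = <0x0 0x40000000 0x40000000>;
  };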
On Fri, Aug 24, 2018 at 9:51 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>
> Hi Maxime,
>
> On 08/20/2018 06:32 PM, Maxime Ripard wrote:
> > Hi Georgi,
> >
> > On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
> >>> There is also a patch series from Maxime Ripard that's addressing the
> >>> same general area. See "dt-bindings: Add a dma-parent property". We
> >>> don't need multiple ways to address describing the device to memory
> >>> paths, so you all had better work out a common solution.
> >>
> >> Looks like this fits exactly into the interconnect API concept. I see
> >> MBUS as interconnect provider and display/camera as consumers, that
> >> report their bandwidth needs. I am also planning to add support for
> >> priority.
> >
> > Thanks for working on this. After looking at your series, the one thing
> > I'm a bit uncertain about (and the most important one to us) is how we
> > would be able to tell through which interconnect the DMA are done.
> >
> > This is important to us since our topology is actually quite simple as
> > you've seen, but the RAM is not mapped on that bus and on the CPU's,
> > so we need to apply an offset to each buffer being DMA'd.
>
> OK, I see - your problem is not about bandwidth scaling, but about using
> different memory ranges in the driver to access the same location, so it
> is really a different problem. Also, the interconnect bindings describe a
> path and endpoints. However, I am open to any ideas.

It may be different things you need, but both are related to the path
between a bus master and memory. We can't have each 'problem' described
in a different way. Well, we could as long as each platform has different
problems, but that's unlikely.

It could turn out that the only commonality is property naming convention,
but that's still better than 2 independent solutions.

I know you each want to just fix your issues, but the fact that DT doesn't
model the DMA side of the bus structure has been an issue at least since
the start of DT on ARM. Either we should address this in a flexible way or
we can just continue to manage without. So I'm not inclined to take
something that only addresses one SoC family.

Rob
Hi Rob and Maxime,

On 08/27/2018 06:11 PM, Maxime Ripard wrote:
> On Fri, Aug 24, 2018 at 10:35:23AM -0500, Rob Herring wrote:
>> On Fri, Aug 24, 2018 at 9:51 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>>
>>> Hi Maxime,
>>>
>>> On 08/20/2018 06:32 PM, Maxime Ripard wrote:
>>>> Hi Georgi,
>>>>
>>>> On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
>>>>>> There is also a patch series from Maxime Ripard that's addressing the
>>>>>> same general area. See "dt-bindings: Add a dma-parent property". We
>>>>>> don't need multiple ways to address describing the device to memory
>>>>>> paths, so you all had better work out a common solution.
>>>>>
>>>>> Looks like this fits exactly into the interconnect API concept. I see
>>>>> MBUS as interconnect provider and display/camera as consumers, that
>>>>> report their bandwidth needs. I am also planning to add support for
>>>>> priority.
>>>>
>>>> Thanks for working on this. After looking at your series, the one thing
>>>> I'm a bit uncertain about (and the most important one to us) is how we
>>>> would be able to tell through which interconnect the DMA are done.
>>>>
>>>> This is important to us since our topology is actually quite simple as
>>>> you've seen, but the RAM is not mapped on that bus and on the CPU's,
>>>> so we need to apply an offset to each buffer being DMA'd.
>>>
>>> OK, I see - your problem is not about bandwidth scaling, but about using
>>> different memory ranges in the driver to access the same location, so it
>>> is really a different problem. Also, the interconnect bindings describe a
>>> path and endpoints. However, I am open to any ideas.
>>
>> It may be different things you need, but both are related to the path
>> between a bus master and memory. We can't have each 'problem' described
>> in a different way. Well, we could as long as each platform has different
>> problems, but that's unlikely.
>>
>> It could turn out that the only commonality is property naming
>> convention, but that's still better than 2 independent solutions.
>
> Yeah, I really don't think the two issues are unrelated. Can we maybe
> have a particular interconnect-names value to mark the interconnect
> being used to perform DMA?

We can call one of the paths "dma" and use it to perform DMA for the
current device. I don't see a problem with this. The name of the path is
descriptive and makes sense, and by doing so we avoid adding more DT
properties, which would be another option.

This also makes me think that it might be a good idea to have a standard
name for the path to memory, as I expect some people will call it "mem",
others "ddr", etc.

Thanks,
Georgi

>> I know you each want to just fix your issues, but the fact that DT
>> doesn't model the DMA side of the bus structure has been an issue at
>> least since the start of DT on ARM. Either we should address this in a
>> flexible way or we can just continue to manage without. So I'm not
>> inclined to take something that only addresses one SoC family.
>
> I'd really like to have it addressed. We're getting bit by this, and
> the hacks we have don't work well anymore.
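In binding terms, the convention discussed here would look roughly like this
for a consumer node; the device, provider phandles and endpoint macros are
hypothetical, and each interconnects entry is a source/destination pair of
<provider-phandle node-id> cells:

  sdhci@7864000 {
  	compatible = "vendor,example-sdhci";
  	reg = <0x7864000 0x1000>;
  	/* One path from the controller's master port to memory */
  	interconnects = <&pnoc MASTER_SDCC_1 &bimc SLAVE_EBI_CH0>;
  	interconnect-names = "dma";
  };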
Hi,

On Wed, Aug 29, 2018 at 03:33:29PM +0300, Georgi Djakov wrote:
> On 08/27/2018 06:11 PM, Maxime Ripard wrote:
> > On Fri, Aug 24, 2018 at 10:35:23AM -0500, Rob Herring wrote:
> >> On Fri, Aug 24, 2018 at 9:51 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
> >>>
> >>> Hi Maxime,
> >>>
> >>> On 08/20/2018 06:32 PM, Maxime Ripard wrote:
> >>>> Hi Georgi,
> >>>>
> >>>> On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
> >>>>>> There is also a patch series from Maxime Ripard that's addressing the
> >>>>>> same general area. See "dt-bindings: Add a dma-parent property". We
> >>>>>> don't need multiple ways to address describing the device to memory
> >>>>>> paths, so you all had better work out a common solution.
> >>>>>
> >>>>> Looks like this fits exactly into the interconnect API concept. I see
> >>>>> MBUS as interconnect provider and display/camera as consumers, that
> >>>>> report their bandwidth needs. I am also planning to add support for
> >>>>> priority.
> >>>>
> >>>> Thanks for working on this. After looking at your series, the one thing
> >>>> I'm a bit uncertain about (and the most important one to us) is how we
> >>>> would be able to tell through which interconnect the DMA are done.
> >>>>
> >>>> This is important to us since our topology is actually quite simple as
> >>>> you've seen, but the RAM is not mapped on that bus and on the CPU's,
> >>>> so we need to apply an offset to each buffer being DMA'd.
> >>>
> >>> OK, I see - your problem is not about bandwidth scaling, but about using
> >>> different memory ranges in the driver to access the same location, so it
> >>> is really a different problem. Also, the interconnect bindings describe a
> >>> path and endpoints. However, I am open to any ideas.
> >>
> >> It may be different things you need, but both are related to the path
> >> between a bus master and memory. We can't have each 'problem' described
> >> in a different way. Well, we could as long as each platform has different
> >> problems, but that's unlikely.
> >>
> >> It could turn out that the only commonality is property naming
> >> convention, but that's still better than 2 independent solutions.
> >
> > Yeah, I really don't think the two issues are unrelated. Can we maybe
> > have a particular interconnect-names value to mark the interconnect
> > being used to perform DMA?
>
> We can call one of the paths "dma" and use it to perform DMA for the
> current device. I don't see a problem with this. The name of the path is
> descriptive and makes sense, and by doing so we avoid adding more DT
> properties, which would be another option.

That works for me. If Rob is fine with it too, I'll send an updated
version of my series based on yours.

Thanks!
Maxime

--
Maxime Ripard, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com