[RFC,v2,0/8] nvmem: add block device NVMEM provider

Message ID: cover.1709667858.git.daniel@makrotopia.org

Message

Daniel Golle March 5, 2024, 8:23 p.m. UTC
On embedded devices using an eMMC it is common that one or more (hw/sw)
partitions on the eMMC are used to store MAC addresses and Wi-Fi
calibration EEPROM data.

Implement an NVMEM provider backed by block devices, as the NVMEM
framework is typically used to let kernel drivers read and use binary
data from EEPROMs, efuses, flash memory (MTD), and so on.

In order to be able to reference hardware partitions on an eMMC, add code
to bind each hardware partition to a specific firmware subnode.

This series is meant to open the discussion on what exactly the device
tree schema for block devices and partitions should look like, and even
whether using the block layer to back the NVMEM device is the way to go
at all -- to me it seemed like a good solution because it will be
reusable e.g. for (regular, software GPT or MBR) partitions of an NVMe
SSD.
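
To make the discussion concrete, here is a rough sketch of the kind of
description these bindings would enable (a hypothetical example; node
and cell names are mine, see patch 5/8 for the actual mmc-card one):

/* Hypothetical sketch: an eMMC card whose user area holds a GPT
 * partition matched by PARTNAME, exposing an NVMEM cell. */
mmc {
	card@0 {
		compatible = "mmc-card";
		reg = <0>;

		block {
			partitions {
				block-partition-factory {
					partname = "factory";

					nvmem-layout {
						compatible = "fixed-layout";
						#address-cells = <1>;
						#size-cells = <1>;

						macaddr_wan: mac-address@24 {
							reg = <0x24 0x6>;
						};
					};
				};
			};
		};
	};
};

A consumer would then reference the cell as usual, e.g. via
nvmem-cells = <&macaddr_wan>; in its device node.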

This series was previously submitted on July 19th, 2023[1], and the
basic idea has not changed much since then.

However, the recent introduction of bdev_file_open_by_dev() allows us
to get rid of most uses of block layer internals, which supposedly was
the main objection raised by Christoph Hellwig back then.
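
For illustration, a minimal sketch (with my own naming, not necessarily
what the patch does verbatim) of how the NVMEM read callback can now be
a plain file read instead of poking at block layer internals:

#include <linux/blkdev.h>
#include <linux/file.h>
#include <linux/fs.h>

static int blk_nvmem_reg_read(void *priv, unsigned int from,
			      void *val, size_t bytes)
{
	struct block_device *bdev = priv;	/* assumed priv layout */
	struct file *bdev_file;
	loff_t pos = from;
	ssize_t ret;

	/* Open the block device read-only via the new file-based API. */
	bdev_file = bdev_file_open_by_dev(bdev->bd_dev, BLK_OPEN_READ,
					  NULL, NULL);
	if (IS_ERR(bdev_file))
		return PTR_ERR(bdev_file);

	/* An ordinary buffered read; no block layer internals involved. */
	ret = kernel_read(bdev_file, val, bytes, &pos);
	fput(bdev_file);

	if (ret < 0)
		return ret;
	return ret == bytes ? 0 : -EIO;
}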

Most of the other comments received on the first RFC have also been
addressed; however, what remains is the use of class_interface (lacking
an alternative way to get notified about the addition or removal of
block devices). As this has been criticized in the past, I'm
specifically interested in suggestions on how to solve this differently
-- ideally without having to implement a whole new mechanism for
in-kernel notifications of appearing or disappearing block
devices...
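
Concretely, the class_interface use in question boils down to something
like the following sketch (function names illustrative, logic heavily
condensed; block_class itself is internal to the block layer, which is
part of why this lives in block/):

#include <linux/blkdev.h>
#include <linux/device.h>
#include <linux/init.h>

static int blk_nvmem_add_dev(struct device *dev)
{
	/*
	 * Called for every device added to block_class (disks and
	 * partitions); a real implementation filters by device type
	 * and only registers an NVMEM provider for opted-in devices.
	 */
	return 0;
}

static void blk_nvmem_remove_dev(struct device *dev)
{
	/* Unregister the NVMEM device created in blk_nvmem_add_dev(). */
}

static struct class_interface blk_nvmem_class_interface = {
	.class		= &block_class,
	.add_dev	= blk_nvmem_add_dev,
	.remove_dev	= blk_nvmem_remove_dev,
};

static int __init blk_nvmem_init(void)
{
	return class_interface_register(&blk_nvmem_class_interface);
}
device_initcall(blk_nvmem_init);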

And, just like in the case of MTD and UBI, I believe acting as an
NVMEM provider *is* functionality which belongs to the block layer
itself and, unlike e.g. filesystems, is inconvenient to implement
elsewhere.

[1]: https://patchwork.kernel.org/project/linux-block/list/?series=767565

Daniel Golle (8):
  dt-bindings: block: add basic bindings for block devices
  block: partitions: populate fwnode
  block: add new genhd flag GENHD_FL_NVMEM
  block: implement NVMEM provider
  dt-bindings: mmc: mmc-card: add block device nodes
  mmc: core: set card fwnode_handle
  mmc: block: set fwnode of disk devices
  mmc: block: set GENHD_FL_NVMEM

 .../bindings/block/block-device.yaml          |  22 +++
 .../devicetree/bindings/block/partition.yaml  |  51 +++++
 .../devicetree/bindings/block/partitions.yaml |  20 ++
 .../devicetree/bindings/mmc/mmc-card.yaml     |  45 +++++
 block/Kconfig                                 |   9 +
 block/Makefile                                |   1 +
 block/blk-nvmem.c                             | 175 ++++++++++++++++++
 block/partitions/core.c                       |  41 ++++
 drivers/mmc/core/block.c                      |   8 +
 drivers/mmc/core/bus.c                        |   2 +
 include/linux/blkdev.h                        |   2 +
 11 files changed, 376 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/block/block-device.yaml
 create mode 100644 Documentation/devicetree/bindings/block/partition.yaml
 create mode 100644 Documentation/devicetree/bindings/block/partitions.yaml
 create mode 100644 block/blk-nvmem.c

Comments

Sascha Hauer March 6, 2024, 7:32 a.m. UTC | #1
On Wed, Mar 06, 2024 at 08:22:59AM +0100, Sascha Hauer wrote:
> Hi Daniel,
> 
> On Tue, Mar 05, 2024 at 08:23:20PM +0000, Daniel Golle wrote:
> > diff --git a/Documentation/devicetree/bindings/block/partition.yaml b/Documentation/devicetree/bindings/block/partition.yaml
> > new file mode 100644
> > index 0000000000000..df561dd33cbc9
> > --- /dev/null
> > +++ b/Documentation/devicetree/bindings/block/partition.yaml
> > @@ -0,0 +1,51 @@
> > +# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
> > +%YAML 1.2
> > +---
> > +$id: http://devicetree.org/schemas/block/partition.yaml#
> > +$schema: http://devicetree.org/meta-schemas/core.yaml#
> > +
> > +title: Partition on a block device
> > +
> > +description: |
> > +  This binding describes a partition on a block storage device.
> > +  Partitions may be matched by a combination of partition number, name,
> > +  and UUID.
> > +
> > +maintainers:
> > +  - Daniel Golle <daniel@makrotopia.org>
> > +
> > +properties:
> > +  $nodename:
> > +    pattern: '^block-partition-.+$'
> > +
> > +  partnum:
> > +    $ref: /schemas/types.yaml#/definitions/uint32
> > +    description:
> > +      Matches partition by number if present.
> > +
> > +  partname:
> > +    $ref: /schemas/types.yaml#/definitions/string
> > +    description:
> > +      Matches partition by PARTNAME if present.
> 
> In the mtd world we originally had the partition nodes directly under
> the hardware device node as well. That was changed to put a
> partitions subnode between the hardware device node and the partitions.
> 
> From fe2585e9c29a ("doc: dt: mtd: support partitions in a special
> 'partitions' subnode"):
> 
>     To avoid conflict with other drivers using subnodes of the mtd device
>     create only one ofpart-specific node rather than any number of
>     arbitrary partition subnodes.
> 
> Does it make sense to do the same for block devices?

Hm, looking at the example in 5/8 it seems you've already done that. I
think I have misread the binding.

Sascha
Chad Monroe March 11, 2024, 7:40 p.m. UTC | #2
> -----Original Message-----
> From: Rob Herring <robh@kernel.org>
> Sent: Thursday, March 7, 2024 6:51 AM
> To: Daniel Golle <daniel@makrotopia.org>
> Subject: Re: [RFC PATCH v2 1/8] dt-bindings: block: add basic bindings
> for block devices
> 
> On Tue, Mar 05, 2024 at 08:23:20PM +0000, Daniel Golle wrote:
> > Add bindings for block devices which are used to allow referencing
> > nvmem bits on them.
> >
> > Signed-off-by: Daniel Golle <daniel@makrotopia.org>
> > ---
> >  .../bindings/block/block-device.yaml          |  22 +++
> >  .../devicetree/bindings/block/partition.yaml  |  51 +++++
> >  .../devicetree/bindings/block/partitions.yaml |  20 ++
> >  3 files changed, 93 insertions(+)
> >  create mode 100644 Documentation/devicetree/bindings/block/block-device.yaml
> >  create mode 100644 Documentation/devicetree/bindings/block/partition.yaml
> >  create mode 100644 Documentation/devicetree/bindings/block/partitions.yaml
> >
> > diff --git a/Documentation/devicetree/bindings/block/block-device.yaml b/Documentation/devicetree/bindings/block/block-device.yaml
> > new file mode 100644
> > index 0000000000000..c83ea525650ba
> > --- /dev/null
> > +++ b/Documentation/devicetree/bindings/block/block-device.yaml
> > @@ -0,0 +1,22 @@
> > +# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
> > +%YAML 1.2
> > +---
> > +$id: http://devicetree.org/schemas/block/block-device.yaml#
> > +$schema: http://devicetree.org/meta-schemas/core.yaml#
> > +
> > +title: block storage device
> > +
> > +description: |
> > +  This binding is generic and describes a block-oriented storage device.
> > +
> > +maintainers:
> > +  - Daniel Golle <daniel@makrotopia.org>
> > +
> > +properties:
> > +  partitions:
> > +    $ref: /schemas/block/partitions.yaml
> > +
> > +  nvmem-layout:
> > +    $ref: /schemas/nvmem/layouts/nvmem-layout.yaml#
> > +
> > +unevaluatedProperties: false
> > diff --git a/Documentation/devicetree/bindings/block/partition.yaml b/Documentation/devicetree/bindings/block/partition.yaml
> > new file mode 100644
> > index 0000000000000..df561dd33cbc9
> > --- /dev/null
> > +++ b/Documentation/devicetree/bindings/block/partition.yaml
> > @@ -0,0 +1,51 @@
> > +# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
> > +%YAML 1.2
> > +---
> > +$id: http://devicetree.org/schemas/block/partition.yaml#
> > +$schema: http://devicetree.org/meta-schemas/core.yaml#
> > +
> > +title: Partition on a block device
> > +
> > +description: |
> > +  This binding describes a partition on a block storage device.
> > +  Partitions may be matched by a combination of partition number, name,
> > +  and UUID.
> > +
> > +maintainers:
> > +  - Daniel Golle <daniel@makrotopia.org>
> > +
> > +properties:
> > +  $nodename:
> > +    pattern: '^block-partition-.+$'
> > +
> > +  partnum:
> > +    $ref: /schemas/types.yaml#/definitions/uint32
> > +    description:
> > +      Matches partition by number if present.
> > +
> > +  partname:
> > +    $ref: /schemas/types.yaml#/definitions/string
> > +    description:
> > +      Matches partition by PARTNAME if present.
> 
> Why do we need something new here? The existing fixed-partitions can
> already define block device partitions. It just matches by
> address/offset which works whether it's MBR or GPT. Also, in DT we
> always have an address when there is an address.
> 
> I'm sure you want to statically define this and have it work even if the
> partitions move, but sorry...

The partitions which hold this data are typically defined as MBR or GPT
partitions and referenced by PARTNAME, PARTUUID or PARTNO. The data is
then referenced as an offset within that partition. It's possible for
the offset of the RF/calibration partition to change if the size of the
eMMC chip changes between builds of the same device, for example.
Within the RF partition, the data is always available at the same
offsets.

Based on this, we don't always know the offset of the RF partition and
simply want to use the partition table to point us at the right location.
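
(To illustrate with a made-up snippet based on the bindings in this
series -- names and sizes are hypothetical:)

/* Hypothetical: identify the RF partition via the partition table
 * rather than via a fixed offset on the disk; the schema requires at
 * least one of the matchers below. */
block-partition-rf {
	partname = "rf";	/* match by GPT PARTNAME */
	/* partnum = <3>;	   ...or by partition number */
	/* uuid = "...";	   ...or by PARTUUID */

	nvmem-layout {
		compatible = "fixed-layout";
		#address-cells = <1>;
		#size-cells = <1>;

		/* calibration data at a fixed offset inside the partition */
		precal: precal@0 {
			reg = <0x0 0x4000>;
		};
	};
};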

> 
> > +
> > + uuid:
> > + $ref: /schemas/types.yaml#/definitions/string
> > + description:
> > + Matches partition by PARTUUID if present.
> 
> If this remains it will need some work in the dtschema tools. The reason
> is json-schema already has support for UUIDs as a defined 'format' key
> value and we should use that.
> 
> > +
> > + nvmem-layout:
> > + $ref: /schemas/nvmem/layouts/nvmem-layout.yaml#
> > + description:
> > + This container may reference an NVMEM layout parser.
> > +
> > +anyOf:
> > + - required:
> > + - partnum
> > +
> > + - required:
> > + - partname
> > +
> > + - required:
> > + - uuid
> > +
> > +unevaluatedProperties: false
Ulf Hansson March 12, 2024, 12:22 p.m. UTC | #3
On Tue, 5 Mar 2024 at 21:23, Daniel Golle <daniel@makrotopia.org> wrote:
>
> [...]
>
> And, just like in the case of MTD and UBI, I believe acting as an
> NVMEM provider *is* functionality which belongs to the block layer
> itself and, unlike e.g. filesystems, is inconvenient to implement
> elsewhere.

I don't object to the above, however to keep things scalable at the
block device driver level, such as the MMC subsystem, I think we
should avoid having *any* knowledge about the binary format at these
kinds of lower levels.

Even if most of the NVMEM format is managed elsewhere, the support for
NVMEM partitions seems to be dealt with from the MMC subsystem too.
Why can't NVMEM partitions be managed the usual way via the MBR/GPT?

[...]

Kind regards
Uffe
Daniel Golle March 12, 2024, 12:30 p.m. UTC | #4
Hi Ulf,

On Tue, Mar 12, 2024 at 01:22:49PM +0100, Ulf Hansson wrote:
> On Tue, 5 Mar 2024 at 21:23, Daniel Golle <daniel@makrotopia.org> wrote:
> >
> > [...]
> 
> I don't object to the above, however to keep things scalable at the
> block device driver level, such as the MMC subsystem, I think we
> should avoid having *any* knowledge about the binary format at these
> kinds of lower levels.
> 
> Even if most of the NVMEM format is managed elsewhere, the support for
> NVMEM partitions seems to be dealt with from the MMC subsystem too.

In an earlier iteration of this RFC it was requested to make NVMEM
support opt-in (instead of opt-out for mtdblock and ubiblock, which
already got their own NVMEM provider implementation).
Hence at least a change to opt-in for NVMEM support is required in the
MMC subsystem, together with making sure that MMC devices have their
fwnode assigned.
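
(As a rough sketch -- helper and struct names assumed, not taken from
the patches -- the opt-in in the MMC block driver amounts to something
like:)

#include <linux/blkdev.h>
#include <linux/device.h>

/* GENHD_FL_NVMEM is the flag added in patch 3/8; the fwnode is the one
 * bound to the card/partition in DT (patches 6-7/8). */
static void mmc_blk_setup_nvmem(struct gendisk *disk,
				struct fwnode_handle *fwnode)
{
	device_set_node(disk_to_dev(disk), fwnode);
	disk->flags |= GENHD_FL_NVMEM;
}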

> Why can't NVMEM partitions be managed the usual way via the MBR/GPT?

Absolutely, maybe my wording was not clear, but that's exactly what
I'm suggesting here. There are no added parsers nor any knowledge
about binary formats in this patchset.

Or did I misunderstand your comment?
Ulf Hansson March 12, 2024, 12:57 p.m. UTC | #5
On Tue, 12 Mar 2024 at 13:30, Daniel Golle <daniel@makrotopia.org> wrote:
>
> Hi Ulf,
>
> On Tue, Mar 12, 2024 at 01:22:49PM +0100, Ulf Hansson wrote:
> > On Tue, 5 Mar 2024 at 21:23, Daniel Golle <daniel@makrotopia.org> wrote:
> > >
> > > [...]
> >
> > I don't object to the above, however to keep things scalable at the
> > block device driver level, such as the MMC subsystem, I think we
> > should avoid having *any* knowledge about the binary format at these
> > kinds of lower levels.
> >
> > Even if most of the NVMEM format is managed elsewhere, the support for
> > NVMEM partitions seems to be dealt with from the MMC subsystem too.
>
> In an earlier iteration of this RFC it was requested to make NVMEM
> support opt-in (instead of opt-out for mtdblock and ubiblock, which
> already got their own NVMEM provider implementation).
> Hence at least a change to opt-in for NVMEM support is required in the
> MMC subsystem, together with making sure that MMC devices have their
> fwnode assigned.

So, the NVMEM support needs to be turned on (opt-in) for each and
every block device driver?

It's not a big deal for me - and I would be happy to apply such a
change. On the other hand, it is just some binary data that is stored
on the flash, why should MMC have to opt-in or opt-out at all? It
should be the upper layers who decide what to store on the flash, not
the MMC subsystem, if you get my point.

>
> > Why can't NVMEM partitions be managed the usual way via the MBR/GPT?
>
> Absolutely, maybe my wording was not clear, but that's exactly what
> I'm suggesting here. There are no added parsers nor any knowledge
> about binary formats in this patchset.

Right, but there are new DT bindings added in the $subject series that
allows us to describe NVMEM partitions for an eMMC. Why isn't that
parsed from the MBR/GPT, etc, rather than encoded in DT?

>
> Or did I misunderstand your comment?

Maybe. I am just trying to understand this, so apologies if you find
my questions silly. :-)

Kind regards
Uffe
Daniel Golle March 12, 2024, 1:12 p.m. UTC | #6
On Tue, Mar 12, 2024 at 01:57:39PM +0100, Ulf Hansson wrote:
> On Tue, 12 Mar 2024 at 13:30, Daniel Golle <daniel@makrotopia.org> wrote:
> >
> > Hi Ulf,
> >
> > On Tue, Mar 12, 2024 at 01:22:49PM +0100, Ulf Hansson wrote:
> > > On Tue, 5 Mar 2024 at 21:23, Daniel Golle <daniel@makrotopia.org> wrote:
> > > >
> > > > [...]
> > >
> > > I don't object to the above, however to keep things scalable at the
> > > block device driver level, such as the MMC subsystem, I think we
> > > should avoid having *any* knowledge about the binary format at these
> > > kinds of lower levels.
> > >
> > > Even if most of the NVMEM format is managed elsewhere, the support for
> > > NVMEM partitions seems to be dealt with from the MMC subsystem too.
> >
> > In an earlier iteration of this RFC it was requested to make NVMEM
> > support opt-in (instead of opt-out for mtdblock and ubiblock, which
> > already got their own NVMEM provider implementation).
> > Hence at least a change to opt-in for NVMEM support is required in the
> > MMC subsystem, together with making sure that MMC devices have their
> > fwnode assigned.
> 
> So, the NVMEM support needs to be turned on (opt-in) for each and
> every block device driver?
> 
> It's not a big deal for me - and I would be happy to apply such a
> change. On the other hand, it is just some binary data that is stored
> on the flash, why should MMC have to opt-in or opt-out at all? It
> should be the upper layers who decide what to store on the flash, not
> the MMC subsystem, if you get my point.
> 

I agree, and that's exactly how I originally wrote it. However, in the
first round of review it was requested to be done that way (i.e. opt-in
for each subsystem, rather than opt-out for subsystems already
providing NVMEM in another way, such as MTD or UBI), see here:

https://patchwork.kernel.org/comment/25432948/

> >
> > > Why can't NVMEM partitions be managed the usual way via the MBR/GPT?
> >
> > Absolutely, maybe my wording was not clear, but that's exactly what
> > I'm suggesting here. There are no added parsers nor any knowledge
> > about binary formats in this patchset.
> 
> Right, but there are new DT bindings added in the $subject series that
> allows us to describe NVMEM partitions for an eMMC. Why isn't that
> parsed from the MBR/GPT, etc, rather than encoded in DT?

The added dt-bindings merely allow us to **identify** the partition by
its PARTNAME, PARTNO or PARTUUID, so we can reference it in DT. We'd
still rely on MBR or GPT to do the actual parsing of the on-disk
format.

> 
> >
> > Or did I misunderstand your comment?
> 
> Maybe. I am just trying to understand this, so apologize if you find
> my questions silly. :-)

Let's make sure we're all on the same page and that everything is fully
understood by everyone. Everyone has to bear the noise, but I guess
that's ok ;)


Cheers


Daniel
Ulf Hansson March 13, 2024, 10:19 a.m. UTC | #7
On Tue, 12 Mar 2024 at 14:12, Daniel Golle <daniel@makrotopia.org> wrote:
>
> On Tue, Mar 12, 2024 at 01:57:39PM +0100, Ulf Hansson wrote:
> > On Tue, 12 Mar 2024 at 13:30, Daniel Golle <daniel@makrotopia.org> wrote:
> > >
> > > Hi Ulf,
> > >
> > > On Tue, Mar 12, 2024 at 01:22:49PM +0100, Ulf Hansson wrote:
> > > > On Tue, 5 Mar 2024 at 21:23, Daniel Golle <daniel@makrotopia.org> wrote:
> > > > >
> > > > > [...]
> > > >
> > > > I don't object to the above, however to keep things scalable at the
> > > > block device driver level, such as the MMC subsystem, I think we
> > > > should avoid having *any* knowledge about the binary format at these
> > > > kinds of lower levels.
> > > >
> > > > Even if most of the NVMEM format is managed elsewhere, the support for
> > > > NVMEM partitions seems to be dealt with from the MMC subsystem too.
> > >
> > > In an earlier iteration of this RFC it was requested to make NVMEM
> > > support opt-in (instead of opt-out for mtdblock and ubiblock, which
> > > already got their own NVMEM provider implementation).
> > > Hence at least a change to opt-in for NVMEM support is required in the
> > > MMC subsystem, together with making sure that MMC devices have their
> > > fwnode assigned.
> >
> > So, the NVMEM support needs to be turned on (opt-in) for each and
> > every block device driver?
> >
> > It's not a big deal for me - and I would be happy to apply such a
> > change. On the other hand, it is just some binary data that is stored
> > on the flash, why should MMC have to opt-in or opt-out at all? It
> > should be the upper layers who decide what to store on the flash, not
> > the MMC subsystem, if you get my point.
> >
>
> I agree, and that's exactly how I originally wrote it. However, in the
> first round of review it was requested to be done that way (i.e. opt-in
> for each subsystem, rather than opt-out for subsystems already
> providing NVMEM in another way, such as MTD or UBI), see here:
>
> https://patchwork.kernel.org/comment/25432948/

Okay, got it, thanks!

>
> > >
> > > > Why can't NVMEM partitions be managed the usual way via the MBR/GPT?
> > >
> > > Absolutely, maybe my wording was not clear, but that's exactly what
> > > I'm suggesting here. There are no added parsers nor any knowledge
> > > about binary formats in this patchset.
> >
> > Right, but there are new DT bindings added in the $subject series that
> > allows us to describe NVMEM partitions for an eMMC. Why isn't that
> > parsed from the MBR/GPT, etc, rather than encoded in DT?
>
> The added dt-bindings merely allow us to **identify** the partition by
> its PARTNAME, PARTNO or PARTUUID, so we can reference it in DT. We'd
> still rely on MBR or GPT to do the actual parsing of the on-disk
> format.

Thanks for clarifying!

So, it looks like this all relies on what DT maintainers think then.

[...]

Kind regards
Uffe