[v2,0/4] rpmb subsystem, uapi and virtio-rpmb driver

Message ID: 20220405093759.1126835-1-alex.bennee@linaro.org

Alex Bennée April 5, 2022, 9:37 a.m. UTC
Hi,

This is another attempt to come up with an RPMB API for the kernel.
The last discussion of this was in the thread:

  Subject: [RFC PATCH  0/5] RPMB internal and user-space API + WIP virtio-rpmb frontend
  Date: Wed,  3 Mar 2021 13:54:55 +0000
  Message-Id: <20210303135500.24673-1-alex.bennee@linaro.org>

The series provides for the RPMB sub-system, a new chardev API driven
by ioctls and a full multi-block capable virtio-rpmb driver. You can
find a working vhost-user backend in my QEMU branch here:

  https://github.com/stsquad/qemu/commits/virtio/vhost-user-rpmb-v2

The branch is a little messy but I'll be posting a cleaned up version
in the following weeks. The only real changes to the backend are the
multi-block awareness and some tweaks to deal with QEMU internals
handling VirtIO config space messages which weren't previously
exercised. The test.sh script in tools/rpmb works through the various
transactions but isn't comprehensive.

Changes since the last posting:

  - frame construction is mostly back in userspace

  The previous discussion showed there wasn't any appetite for using
  the kernel's keyctl() interface, so userspace once again takes
  responsibility for constructing most* frames. Currently these are
  all pure virtio-rpmb frames but the code is written so we can plug
  in additional frame types. The virtio-rpmb driver does some
  validation and in some cases (* read-blocks) constructs the request
  frame in the driver. It would take someone implementing a driver for
  another RPMB device type to see if this makes sense.

  - user-space interface is still split across several ioctls

  Although three of the ioctls share the common rpmb_ioc_reqresp_cmd
  structure, this split means commands like capacity, write_count and
  read_blocks can each have their own structure associated with the
  command.

As before, I shall follow up with the QEMU-based vhost-user backend and
hopefully a rust-vmm re-implementation. However, I have no direct
interest in implementing the interfaces to real hardware; I leave that
to people who have access to such things and are willing to take up
the maintainer burden if this is merged.

Regards,

Alex
    

Alex Bennée (4):
  rpmb: add Replay Protected Memory Block (RPMB) subsystem
  char: rpmb: provide a user space interface
  rpmb: create virtio rpmb frontend driver
  tools rpmb: add RPMB access tool

 .../userspace-api/ioctl/ioctl-number.rst      |    1 +
 MAINTAINERS                                   |    9 +
 drivers/Kconfig                               |    2 +
 drivers/Makefile                              |    1 +
 drivers/rpmb/Kconfig                          |   28 +
 drivers/rpmb/Makefile                         |    9 +
 drivers/rpmb/cdev.c                           |  309 +++++
 drivers/rpmb/core.c                           |  439 +++++++
 drivers/rpmb/rpmb-cdev.h                      |   17 +
 drivers/rpmb/virtio_rpmb.c                    |  518 ++++++++
 include/linux/rpmb.h                          |  182 +++
 include/uapi/linux/rpmb.h                     |   99 ++
 include/uapi/linux/virtio_rpmb.h              |   54 +
 tools/Makefile                                |   16 +-
 tools/rpmb/.gitignore                         |    2 +
 tools/rpmb/Makefile                           |   41 +
 tools/rpmb/key                                |    1 +
 tools/rpmb/rpmb.c                             | 1083 +++++++++++++++++
 tools/rpmb/test.sh                            |   22 +
 19 files changed, 2828 insertions(+), 5 deletions(-)
 create mode 100644 drivers/rpmb/Kconfig
 create mode 100644 drivers/rpmb/Makefile
 create mode 100644 drivers/rpmb/cdev.c
 create mode 100644 drivers/rpmb/core.c
 create mode 100644 drivers/rpmb/rpmb-cdev.h
 create mode 100644 drivers/rpmb/virtio_rpmb.c
 create mode 100644 include/linux/rpmb.h
 create mode 100644 include/uapi/linux/rpmb.h
 create mode 100644 include/uapi/linux/virtio_rpmb.h
 create mode 100644 tools/rpmb/.gitignore
 create mode 100644 tools/rpmb/Makefile
 create mode 100644 tools/rpmb/key
 create mode 100644 tools/rpmb/rpmb.c
 create mode 100755 tools/rpmb/test.sh

Comments

Bean Huo April 5, 2022, 2:54 p.m. UTC | #1
Hi Alex,

Thanks for this unified RPMB interface, I wanted to verify this on our
UFS, it seems you didn't add the UFS access interface in this version 
from your userspace tools, right?


Kind regards,
Bean
Alex Bennée April 5, 2022, 3:43 p.m. UTC | #2
Bean Huo <huobean@gmail.com> writes:

> Hi Alex,
>
> Thanks for this unified RPMB interface, I wanted to verify this on our
> UFS, it seems you didn't add the UFS access interface in this version 
> from your userspace tools, right?

No I didn't but it should be easy enough to add some function pointer
redirection everywhere one of the op_* functions calls a vrpmb_*
function. Do you already have a UFS RPMB device driver?
Bean Huo April 5, 2022, 5:03 p.m. UTC | #3
On Tue, 2022-04-05 at 16:43 +0100, Alex Bennée wrote:
> 
> Bean Huo <huobean@gmail.com> writes:
> 
> > Hi Alex,
> > 
> > Thanks for this unified RPMB interface, I wanted to verify this on
> > our
> > UFS, it seems you didn't add the UFS access interface in this
> > version 
> > from your userspace tools, right?
> 
> No I didn't but it should be easy enough to add some function pointer
> redirection everywhere one of the op_* functions calls a vrpmb_*
> function. Do you already have a UFS RPMB device driver?
> 

Hi Alex,
Thanks for your feedback.

We now access UFS RPMB through the RPMB LUN BSG device, RPMB is a well-
known LU and we have a userspace tool to access it.

I see that if we're going to use your interface, a static struct
rpmb_ops would have to be registered from a lower-level driver, for
example the UFS driver. There should be no problem with that
registration, but I don't know what the advantages of adding such a
driver are compared with the current approach. Maybe the main advantage
is that we would have a unified user space tool for RPMB, right?

Kind regards,
Bean
Bart Van Assche April 5, 2022, 9:59 p.m. UTC | #4
On 4/5/22 02:37, Alex Bennée wrote:
> +int rpmb_get_write_count(struct rpmb_dev *rdev, int len, u8 *request, int rlen, u8 *resp)
> +{
> +	int err;
> +
> +	if (!rdev)
> +		return -EINVAL;
> +
> +	mutex_lock(&rdev->lock);
> +	err = -EOPNOTSUPP;
> +	if (rdev->ops && rdev->ops->get_write_count)
> +		err = rdev->ops->get_write_count(rdev->dev.parent, rdev->target,
> +						 len, request, rlen, resp);
> +	mutex_unlock(&rdev->lock);
> +
> +	return err;
> +}

The names rpmb_get_write_count() and get_write_count() look confusing to 
me since these functions query the write counter. How about adding "er" 
at the end of both function names?

Are there any plans to add an implementation of struct rpmb_ops for UFS 
devices?

Thanks,

Bart.
Alex Bennée April 6, 2022, 11:21 a.m. UTC | #5
Bart Van Assche <bvanassche@acm.org> writes:

> On 4/5/22 02:37, Alex Bennée wrote:
>> +int rpmb_get_write_count(struct rpmb_dev *rdev, int len, u8 *request, int rlen, u8 *resp)
>> +{
>> +	int err;
>> +
>> +	if (!rdev)
>> +		return -EINVAL;
>> +
>> +	mutex_lock(&rdev->lock);
>> +	err = -EOPNOTSUPP;
>> +	if (rdev->ops && rdev->ops->get_write_count)
>> +		err = rdev->ops->get_write_count(rdev->dev.parent, rdev->target,
>> +						 len, request, rlen, resp);
>> +	mutex_unlock(&rdev->lock);
>> +
>> +	return err;
>> +}
>
> The names rpmb_get_write_count() and get_write_count() look confusing
> to me since these functions query the write counter. How about adding
> "er" at the end of both function names?
>
> Are there any plans to add an implementation of struct rpmb_ops for
> UFS devices?

Not by me but I agree it would be a useful exercise to see if a unified
API makes sense.
Alex Bennée April 6, 2022, 11:22 a.m. UTC | #6
Bean Huo <huobean@gmail.com> writes:

> On Tue, 2022-04-05 at 16:43 +0100, Alex Bennée wrote:
>> 
>> Bean Huo <huobean@gmail.com> writes:
>> 
>> > Hi Alex,
>> > 
>> > Thanks for this unified RPMB interface, I wanted to verify this on
>> > our
>> > UFS, it seems you didn't add the UFS access interface in this
>> > version 
>> > from your userspace tools, right?
>> 
>> No I didn't but it should be easy enough to add some function pointer
>> redirection everywhere one of the op_* functions calls a vrpmb_*
>> function. Do you already have a UFS RPMB device driver?
>> 
>
> Hi Alex,
> Thanks for your feedback.
>
> We now access UFS RPMB through the RPMB LUN BSG device, RPMB is a well-
> known LU and we have a userspace tool to access it.
>
> I see that if we're going to use your interface, "static struct
> rpmb_ops" should be registered from a lower-level driver, for example
> in a UFS driver, yes there should be no problem with this registration,
> but I don't know with the current way Compared, what are the advantages
> to add a driver. maybe the main advantage is that we will have an
> unified user space tool for RPMB. right?

Pretty much. The main issue for virtio-rpmb is that it doesn't fit
neatly into the block stack, because all it does is the RPMB part, so a
non-block-oriented API makes sense.

Can you point me to where the UFS driver does its current RPMB stuff?

Bean Huo April 6, 2022, 5:19 p.m. UTC | #7
On Wed, 2022-04-06 at 12:22 +0100, Alex Bennée wrote:
> 
> Bean Huo <huobean@gmail.com> writes:
> 
> > On Tue, 2022-04-05 at 16:43 +0100, Alex Bennée wrote:
> > > 
> > > Bean Huo <huobean@gmail.com> writes:
> > > 
> > > > Hi Alex,
> > > > 
> > > > Thanks for this unified RPMB interface, I wanted to verify this
> > > > on
> > > > our
> > > > UFS, it seems you didn't add the UFS access interface in this
> > > > version 
> > > > from your userspace tools, right?
> > > 
> > > No I didn't but it should be easy enough to add some function
> > > pointer
> > > redirection everywhere one of the op_* functions calls a vrpmb_*
> > > function. Do you already have a UFS RPMB device driver?
> > > 
> > 
> > Hi Alex,
> > Thanks for your feedback.
> > 
> > We now access UFS RPMB through the RPMB LUN BSG device, RPMB is a
> > well-
> > known LU and we have a userspace tool to access it.
> > 
> > I see that if we're going to use your interface, "static struct
> > rpmb_ops" should be registered from a lower-level driver, for
> > example
> > in a UFS driver, yes there should be no problem with this
> > registration,
> > but I don't know with the current way Compared, what are the
> > advantages
> > to add a driver. maybe the main advantage is that we will have an
> > unified user space tool for RPMB. right?
> 
> Pretty much. The main issue for virtio-rpmb is it doesn't really fit
> neatly into the block stack because all it does is the RPMB part so a
> non-block orientate API makes sense.
> 
> Can you point be to where the UFS driver does it's current RPMB
> stuff?
> 

It's the SCSI BSG driver; in fact, we don't have a dedicated UFS RPMB
driver in the kernel. RPMB is a well-known LU, and we use userspace
tools to issue SCSI commands directly to the UFS RPMB LU via ioctl()
on the BSG device node in the /dev/bsg/ directory.

Here is the BSG part of the code in the userspace tools:

        io_hdr_v4.guard = 'Q';
        io_hdr_v4.protocol = BSG_PROTOCOL_SCSI;
        io_hdr_v4.subprotocol = BSG_SUB_PROTOCOL_SCSI_CMD;
        io_hdr_v4.response = (__u64)sense_buffer;
        io_hdr_v4.max_response_len = SENSE_BUFF_LEN;
        io_hdr_v4.request_len = cmd_len;
        io_hdr_v4.request = (__u64)cdb;

        ioctl(fd, SG_IO, &io_hdr_v4);
...


Bean Huo April 6, 2022, 5:27 p.m. UTC | #8
> 
> Can you point be to where the UFS driver does it's current RPMB
> stuff?
> 

If you want to understand the UFS RPMB functionality/command sequence,
you can refer to the ufs-utils tool; it is much like mmc-utils.

https://github.com/westerndigitalcorporation/ufs-utils/blob/dev/ufs_rpmb.c

Bart Van Assche April 6, 2022, 5:32 p.m. UTC | #9
On 4/6/22 10:19, Bean Huo wrote:
> On Wed, 2022-04-06 at 12:22 +0100, Alex Bennée wrote:
>>
>> Bean Huo <huobean@gmail.com> writes:
>>
>>> On Tue, 2022-04-05 at 16:43 +0100, Alex Bennée wrote:
>>>>
>>>> Bean Huo <huobean@gmail.com> writes:
>>>>
>>>>> Hi Alex,
>>>>>
>>>>> Thanks for this unified RPMB interface, I wanted to verify this
>>>>> on
>>>>> our
>>>>> UFS, it seems you didn't add the UFS access interface in this
>>>>> version
>>>>> from your userspace tools, right?
>>>>
>>>> No I didn't but it should be easy enough to add some function
>>>> pointer
>>>> redirection everywhere one of the op_* functions calls a vrpmb_*
>>>> function. Do you already have a UFS RPMB device driver?
>>>>
>>>
>>> Hi Alex,
>>> Thanks for your feedback.
>>>
>>> We now access UFS RPMB through the RPMB LUN BSG device, RPMB is a
>>> well-
>>> known LU and we have a userspace tool to access it.
>>>
>>> I see that if we're going to use your interface, "static struct
>>> rpmb_ops" should be registered from a lower-level driver, for
>>> example
>>> in a UFS driver, yes there should be no problem with this
>>> registration,
>>> but I don't know with the current way Compared, what are the
>>> advantages
>>> to add a driver. maybe the main advantage is that we will have an
>>> unified user space tool for RPMB. right?
>>
>> Pretty much. The main issue for virtio-rpmb is it doesn't really fit
>> neatly into the block stack because all it does is the RPMB part so a
>> non-block orientate API makes sense.
>>
>> Can you point be to where the UFS driver does it's current RPMB
>> stuff?
>>
> 
> It's the SCSI BSG driver, in fact, we don't have a dedicated UFS RPMB
> driver in the kernel. RPMB is a well known LU, we are using userspace
> tools to issue SCSI commands directly to the UFS RPMB LU via ioctl()
> from the BSG device node in the /dev/sg/ folder.
> 
> Here is the BSG part of the code in the userspace tools:
> 
>          io_hdr_v4.guard = 'Q';
>          io_hdr_v4.protocol = BSG_PROTOCOL_SCSI;
>          io_hdr_v4.subprotocol = BSG_SUB_PROTOCOL_SCSI_CMD;
>          io_hdr_v4.response = (__u64)sense_buffer;
>          io_hdr_v4.max_response_len = SENSE_BUFF_LEN;
>          io_hdr_v4.request_len = cmd_len;
>          io_hdr_v4.request = (__u64)cdb;
>
>          ioctl(fd, SG_IO, &io_hdr_v4))

Hi Bean,

I'm not sure where the above comes from; the Android RPMB client uses 
slightly different code. Additionally, the retry loop around the 
submission of SG/IO commands is very important. See also the 
check_sg_io_hdr() call in send_ufs_rpmb_req(). See also 
https://cs.android.com/android/platform/superproject/+/master:system/core/trusty/storage/proxy/rpmb.c

Thanks,

Bart.
Bean Huo April 6, 2022, 6:12 p.m. UTC | #10
On Wed, 2022-04-06 at 10:32 -0700, Bart Van Assche wrote:
> > It's the SCSI BSG driver, in fact, we don't have a dedicated UFS
> > RPMB
> > driver in the kernel. RPMB is a well known LU, we are using
> > userspace
> > tools to issue SCSI commands directly to the UFS RPMB LU via
> > ioctl()
> > from the BSG device node in the /dev/sg/ folder.
> > 
> > Here is the BSG part of the code in the userspace tools:
> > 
> >           io_hdr_v4.guard = 'Q';
> >           io_hdr_v4.protocol = BSG_PROTOCOL_SCSI;
> >           io_hdr_v4.subprotocol = BSG_SUB_PROTOCOL_SCSI_CMD;
> >           io_hdr_v4.response = (__u64)sense_buffer;
> >           io_hdr_v4.max_response_len = SENSE_BUFF_LEN;
> >           io_hdr_v4.request_len = cmd_len;
> >           io_hdr_v4.request = (__u64)cdb;
> >
> >           ioctl(fd, SG_IO, &io_hdr_v4))
> 
> Hi Bean,
> 
> I'm not sure where the above comes from? The Android RPMB client uses
> slightly different code. Additionally, the retry loop around the 
> submission of SG/IO commands is very important. See also the 
> check_sg_io_hdr() call in send_ufs_rpmb_req(). See also 
> https://cs.android.com/android/platform/superproject/+/master:system/core/trusty/storage/proxy/rpmb.c
> 
> 
Bart,

It is from the ufs-utils.

So, do you vote to add the UFS RPMB driver based on this new framework
to resolve this conflict?


Kind regards,
Bean

Bart Van Assche April 6, 2022, 8:20 p.m. UTC | #11
On 4/6/22 11:12, Bean Huo wrote:
> It is from the ufs-utils.
> 
> So, do you vote to add the UFS RPMB driver based on this new framework
> to resolve this conflict?

Are any applications using the RPMB code from ufs-utils? It seems to me 
that the ufs-utils code does not handle SCSI unit attentions correctly. 
If a POWER ON unit attention is received as the reply to a SECURITY 
PROTOCOL OUT transaction, the write counter should be reread instead of 
retrying the SECURITY PROTOCOL OUT command with the same write counter.

Regarding adding a UFS RPMB driver: that seems useful to me since 
multiple applications make use of the UFS RPMB functionality. My 
understanding is that currently storageproxyd multiplexes UFS RPMB 
accesses in Android.

Thanks,

Bart.
Bean Huo April 7, 2022, 4:28 p.m. UTC | #12
On Wed, 2022-04-06 at 13:20 -0700, Bart Van Assche wrote:
> On 4/6/22 11:12, Bean Huo wrote:
> > It is from the ufs-utils.
> > 
> > So, do you vote to add the UFS RPMB driver based on this new
> > framework
> > to resolve this conflict?
> 
> Are any applications using the RPMB code from ufs-utils? It seems to
> me 
> that the ufs-utils code does not handle SCSI unit attentions
> correctly. 
> If a POWER ON unit attention is received as reply to a SECURITY
> PROTOCOL 
> OUT transaction, the write counter should be reread instead of
> retrying 
> the SECURITY PROTOCOL OUT command with the same write counter.
> 

I'm not sure how customers use this tool; based on the little
information I have from the field, customers develop their own RPMB code
in their applications. The utils code here is a good example for them to
study and verify RPMB functionality.

> Regarding adding a UFS RPMB driver: that seems useful to me since 
> multiple applications make use of the UFS RPMB functionality. My 
> understanding is that currently storageproxyd multiplexes UFS RPMB 
> accesses in Android.
> 

I share your opinion: if we have a unified RPMB access interface and add
an RPMB driver in the kernel, it becomes easier to manage RPMB.

Kind regards,
Bean

Alex Bennée April 22, 2022, 2:21 p.m. UTC | #13
Alex Bennée <alex.bennee@linaro.org> writes:

> Hi,
>
> This is another attempt to come up with an RPMB API for the kernel.
> The last discussion of this was in the thread:

Ping?

Any other comments or reviews? Is there a desire to make other devices
that provide RPMB functionality visible via a common API?

Harald Mommer June 16, 2022, 1:13 p.m. UTC | #14
Hello,

On 05.04.22 11:37, Alex Bennée wrote:
> +/*
> + * Utility functions
> + */
> +static int open_dev_file(const char *devfile, struct rpmb_ioc_cap_cmd *cap)
> +{
> ...
> +	rpmb_dbg("RPMB rpmb_target = %d\n", cap->target);
> +	rpmb_dbg("RPMB capacity    = %d\n", cap->capacity);
> +	rpmb_dbg("RPMB block_size  = %d\n", cap->block_size);
> +	rpmb_dbg("RPMB wr_cnt_max  = %d\n", cap->wr_cnt_max);
> +	rpmb_dbg("RPMB rd_cnt_max  = %d\n", cap->rd_cnt_max);
> +	rpmb_dbg("RPMB auth_method = %d\n", cap->auth_method);

The previously present device_type has already gone. Do we lose an 
opportunity to inform the user about anything which may not be virtio 
RPMB in the future, or is this intentional? How about a new device type 
VIRTIO_RPMB instead, keeping the door open for UFS, NVMe, eMMC etc.?

And if the removal was intentional: rpmb_target, block_size, 
auth_method: are they still needed, and if so, what are they good for?

> ...
> +}
> +
> +static struct virtio_rpmb_frame * vrpmb_alloc_frames(int n)
> +{
> +	struct virtio_rpmb_frame *frames;
> +
> +	frames = calloc(n, sizeof(struct virtio_rpmb_frame));
> +	if (frames)
> +		memset(frames, 0, n *  sizeof(struct virtio_rpmb_frame));
Very minor: calloc() already zeroes.
> +	return frames;
> +}
> +

/*
+ * To read blocks we receive a bunch of frames from the vrpmb device
+ * which we need to validate and extract the actual data into
+ * requested buffer.
+ */
+static int vrpmb_read_blocks(int dev_fd, void *key, int addr, int count, void *data, int len)
+{
+...
+	ret = ioctl(dev_fd, RPMB_IOC_RBLOCKS_CMD, &cmd);
+	if (ret < 0) {
+		rpmb_err("rblocks ioctl failure %d: %s.\n", ret,
+			 strerror(errno));
+		goto out;
+	}

The (older) rpmb tool always puzzled me by complaining about a MAC
mismatch when there was also a result != VIRTIO_RPMB_RES_OK. I guess the
ioctl() returns 0 if the ioctl() as such succeeded, but an error code is
still indicated in the frame. Consider printing the result if it
indicates a problem.

I remember the old tool used the result of the last frame for this
purpose, which was probably not a good choice; I'm not sure why it was
done that way. The first frame would probably be a better choice.

+	for (i = 0; i < count; i++) {
+		struct virtio_rpmb_frame *f = &frames[i];
+
+		vrpmb_dump_frame("block data", f);
+
+		if (key) {
+			ret = vrpmb_check_mac(key, f, 1);
+			if (ret) {
+				rpmb_err("%s: check mac error frame %d/%d\n", __func__, i, ret);
+				break;
+			}
+		}
+
+		memcpy(data, &f->data, RPMB_BLOCK_SIZE);
+		data += RPMB_BLOCK_SIZE;
+	}
+	ret = 0;
+
+ out:
+	free(frames);
+	return ret;
+}
+
+
+static void *read_key(const char *path)
+{
+	int key_fd = open_rd_file(path, "key file");
+	void *key;
+
+	if (key_fd < 0)
+		return NULL;
+
+	key = malloc(RPMB_KEY_SIZE);

Very minor: There's not a single free() for key in the code. It plays no 
role here as the program only runs for a fraction of a second; I just 
saw it. You are making your life harder than necessary by using dynamic 
memory when it can be avoided, with all those NULL pointer checks and 
the free()s which have to be done...
> +
> +	if (!key) {
> +		rpmb_err("out of memory for key\n");
> +		return NULL;
> +	}
> +
> +	if (read(key_fd, key, RPMB_KEY_SIZE) != RPMB_KEY_SIZE) {
> +		rpmb_err("couldn't read key (%s)\n", strerror(errno));
> +		return NULL;
> +	}
> +
> +	close(key_fd);
> +	return key;
> +}
> +
>
> +
> +static const struct rpmb_cmd cmds[] = {
> +	{
> +		"get-info",
> +		op_get_info,
> +		"<RPMB_DEVICE>",
> +		"    Get RPMB device info\n",
> +	},
> +	{
> +		"program-key",
> +		op_rpmb_program_key,
> +		"<RPMB_DEVICE> <KEYFILE>",
> +		"    Program authentication KEYFILE\n"
> +		"    NOTE: This is a one-time programmable irreversible change.\n",
> +	},
> +	{
> +		"write-counter",
> +		op_rpmb_get_write_counter,
> +		"<RPMB_DEVICE>",
> +		"    Rertrive write counter value from the <RPMB_DEVICE> to stdout.\n"

Optional parameter explanation [KEYFILE] got lost in write-counter.

> +	},
> +	{
> +		"write-blocks",
> +		op_rpmb_write_blocks,
> +		"<RPMB_DEVICE> <address> <block_count> <DATA_FILE> <KEYID>",
> +		"    <block count> of 256 bytes will be written from the DATA_FILE\n"
> +		"    to the <RPMB_DEVICE> at block offset <address>.\n"
> +		"    When DATA_FILE is -, read from standard input.\n",
Puzzling naming change. The KEYID is indeed the <KEYFILE>.
> +	},
> +	{
> +		"read-blocks",
> +		op_rpmb_read_blocks,
> +		"<RPMB_DEVICE> <address> <blocks count> <OUTPUT_FILE>",
> +		"    <block count> of 256 bytes will be read from <RPMB_DEVICE>\n"
> +		"    to the OUTPUT_FILE\n"
> +		"    When OUTPUT_FILE is -, write to standard output\n",

Here also the optional parameter explanation [KEYFILE] got lost in 
read-blocks.

For people not knowing the tool trying to get into RPMB a complete and 
consistent help is helpful I can still remember when I was in the 
situation to understand more playing around with the (older) tool.

> +	},
> +
> +	{ NULL, NULL, NULL, NULL }
> +};
Zhu, Bing June 1, 2023, 1:02 a.m. UTC | #15
As an alternative, is it possible to change the ftpm design so that it does not depend on RPMB access at the earlier/boot stage? To my understanding, PCRs typically don't require persistent/NV storage (for example, before RPMB or tee-supplicant is ready, TEE memory could be used as temporary storage).

Bing

IPAS Security Brown Belt (https://www.credly.com/badges/69ea809f-3a96-4bc7-bb2f-442c1b17af26)
System Software Engineering
Software and Advanced Technology Group
Zizhu Science Park, Shanghai, China

-----Original Message-----
From: Shyam Saini <shyamsaini@linux.microsoft.com> 
Sent: Thursday, June 1, 2023 3:10 AM
To: alex.bennee@linaro.org
Cc: code@tyhicks.com; Matti.Moell@opensynergy.com; arnd@linaro.org; Zhu, Bing <bing.zhu@intel.com>; hmo@opensynergy.com; ilias.apalodimas@linaro.org; joakim.bech@linaro.org; linux-kernel@vger.kernel.org; linux-mmc@vger.kernel.org; linux-scsi@vger.kernel.org; maxim.uvarov@linaro.org; ruchika.gupta@linaro.org; Winkler, Tomas <tomas.winkler@intel.com>; ulf.hansson@linaro.org; Huang, Yang <yang.huang@intel.com>; sumit.garg@linaro.org; jens.wiklander@linaro.org; op-tee@lists.trustedfirmware.org
Subject: [PATCH v2 0/4] rpmb subsystem, uapi and virtio-rpmb driver

Hi Alex,

[ Resending, Sorry for the noise ]
 
Are you still working on this series, or planning to resubmit it?

[1] The current OP-TEE kernel driver implementation doesn't work when IMA is used with an OP-TEE-implemented fTPM.

The fTPM has a dependency on tee-supplicant, which only comes up once user space is running, while IMA attestation happens at boot time and requires extending the fTPM PCRs.

But IMA can't use the PCRs if the fTPM uses the secure eMMC RPMB partition, as OP-TEE can only access RPMB via tee-supplicant (user space). So there should be a fast path that allows the OP-TEE OS to access the RPMB partition without waiting for the user-space tee-supplicant.

To achieve this fast path, the Linux OP-TEE driver and the MMC driver need some work, and finally they will need the RPMB driver which you posted.

Please let me know what your plan is on this.
 
[1] https://optee.readthedocs.io/en/latest/architecture/secure_storage.html

Best Regards,
Shyam
Ilias Apalodimas June 1, 2023, 5:31 a.m. UTC | #16
Hi Bing

On Thu, 1 Jun 2023 at 04:03, Zhu, Bing <bing.zhu@intel.com> wrote:
>
> As an alternative, Is it possible to change ftpm design not to depend on RPMB access at the earlier/boot stage? Because to my understanding, typically PCRs don't require persistent/NV storage (for example, before RPMB or tee-supplicant is ready, use TEE memory instead as temporary storage)

I am not entirely sure this will solve our problem here.  You are
right that we shouldn't depend on the supplicant to extend PCRs. But
what happens if an object is sealed against certain PCR values?  We
are back to the same problem

Thanks
/Ilias
Sumit Garg June 1, 2023, 5:48 a.m. UTC | #17
On Thu, 1 Jun 2023 at 11:02, Ilias Apalodimas
<ilias.apalodimas@linaro.org> wrote:
>
> Hi Bing
>
> On Thu, 1 Jun 2023 at 04:03, Zhu, Bing <bing.zhu@intel.com> wrote:
> >
> > As an alternative, Is it possible to change ftpm design not to depend on RPMB access at the earlier/boot stage? Because to my understanding, typically PCRs don't require persistent/NV storage (for example, before RPMB or tee-supplicant is ready, use TEE memory instead as temporary storage)
>
> I am not entirely sure this will solve our problem here.  You are
> right that we shouldn't depend on the supplicant to extend PCRs. But
> what happens if an object is sealed against certain PCR values?  We
> are back to the same problem

+1

Temporary storage may be a stopgap solution for some use cases, but
having fast-path access to RPMB via the kernel should be our final goal.
I would suggest we start small with the MMC subsystem exposing RPMB
access APIs for the OP-TEE driver rather than a complete RPMB subsystem.

-Sumit
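A minimal shape for the API Sumit suggests could be a single frame-routing hook that the transport driver registers and in-kernel users call. The sketch below is a hypothetical user-space model of that idea, not the actual kernel interface; the names (`rpmb_route_frames`, the fake backend) are assumptions for illustration:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical routing hook: the transport driver (eMMC, UFS, NVMe,
 * virtio) fills in ->route_frames, and in-kernel users such as the
 * OP-TEE driver call it instead of round-tripping via tee-supplicant. */
struct rpmb_dev {
    int (*route_frames)(struct rpmb_dev *dev,
                        const uint8_t *req, size_t req_len,
                        uint8_t *resp, size_t resp_len);
};

static int rpmb_route_frames(struct rpmb_dev *dev,
                             const uint8_t *req, size_t req_len,
                             uint8_t *resp, size_t resp_len)
{
    /* Every RPMB frame is 512 bytes; reject anything else up front. */
    if (!req_len || !resp_len || req_len % 512 || resp_len % 512)
        return -1;
    return dev->route_frames(dev, req, req_len, resp, resp_len);
}

/* Fake backend standing in for a real transport driver: it only
 * answers a "read write counter" request (type 0x0002 in the last two
 * bytes of the frame) with the matching response type 0x0200. */
static int fake_route(struct rpmb_dev *dev, const uint8_t *req,
                      size_t req_len, uint8_t *resp, size_t resp_len)
{
    (void)dev; (void)req_len;
    memset(resp, 0, resp_len);
    if (req[510] == 0x00 && req[511] == 0x02) { /* READ_COUNTER */
        resp[510] = 0x02;                       /* resp 0x0200  */
        resp[511] = 0x00;
        return 0;
    }
    return -1;
}
```

The point of the model is that the OP-TEE driver only needs this one entry point; everything device-specific stays behind the hook.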

> [...]
Ilias Apalodimas June 2, 2023, 8:25 a.m. UTC | #18
On Thu, 1 Jun 2023 at 08:49, Sumit Garg <sumit.garg@linaro.org> wrote:
>
> On Thu, 1 Jun 2023 at 11:02, Ilias Apalodimas
> <ilias.apalodimas@linaro.org> wrote:
> >
> > Hi Bing
> >
> > On Thu, 1 Jun 2023 at 04:03, Zhu, Bing <bing.zhu@intel.com> wrote:
> > >
> > > As an alternative, Is it possible to change ftpm design not to depend on RPMB access at the earlier/boot stage? Because to my understanding, typically PCRs don't require persistent/NV storage (for example, before RPMB or tee-supplicant is ready, use TEE memory instead as temporary storage)
> >
> > I am not entirely sure this will solve our problem here.  You are
> > right that we shouldn't depend on the supplicant to extend PCRs. But
> > what happens if an object is sealed against certain PCR values?  We
> > are back to the same problem
>
> +1
>
> Temporary storage may be a stop gap solution for some use-cases but
> having a fast path access to RPMB via kernel should be our final goal.
> I would suggest we start small with the MMC subsystem to expose RPMB
> access APIs for OP-TEE driver rather than a complete RPMB subsystem.

I discussed with the OP-TEE maintainers adding parts of the
supplicant to the kernel.  The supplicant 'just' sends an ioctl to
store/read stuff anyway, so it would make sense to have a closer look
and see if that seems reasonable.
Thanks

/Ilias
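For context, the supplicant's RPMB path really is thin: it opens the RPMB block device and issues one MMC_IOC_MULTI_CMD ioctl carrying a fixed CMD23/CMD25/CMD23/CMD18 sequence (set block count, write the request frames, set block count, read the response frames). A sketch of just the command-sequence construction, with the opcodes spelled out locally rather than pulled from `<linux/mmc/ioctl.h>` and no actual device I/O:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Standard MMC opcodes used for RPMB access; the real supplicant
 * packs these into struct mmc_ioc_multi_cmd for the ioctl. */
#define MMC_SET_BLOCK_COUNT      23
#define MMC_WRITE_MULTIPLE_BLOCK 25
#define MMC_READ_MULTIPLE_BLOCK  18
#define MMC_RELIABLE_WRITE_FLAG  (1u << 31) /* CMD23 arg bit for writes */

struct mmc_cmd { uint32_t opcode; uint32_t arg; int is_write; };

/* Build the four-command sequence for one RPMB round-trip:
 * n_req request frames out, n_resp response frames back. */
static size_t rpmb_build_seq(struct mmc_cmd *seq, uint32_t n_req,
                             uint32_t n_resp, int reliable_write)
{
    uint32_t arg = n_req | (reliable_write ? MMC_RELIABLE_WRITE_FLAG : 0);
    seq[0] = (struct mmc_cmd){ MMC_SET_BLOCK_COUNT,      arg,    0 };
    seq[1] = (struct mmc_cmd){ MMC_WRITE_MULTIPLE_BLOCK, 0,      1 };
    seq[2] = (struct mmc_cmd){ MMC_SET_BLOCK_COUNT,      n_resp, 0 };
    seq[3] = (struct mmc_cmd){ MMC_READ_MULTIPLE_BLOCK,  0,      0 };
    return 4;
}
```

An in-kernel version would build essentially the same sequence and hand it to the MMC core directly, skipping the ioctl boundary.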

> [...]
Jens Wiklander June 12, 2023, 5:06 p.m. UTC | #19
Hi Alex,

On Fri, Jun 2, 2023 at 10:26 AM Ilias Apalodimas
<ilias.apalodimas@linaro.org> wrote:
>
> On Thu, 1 Jun 2023 at 08:49, Sumit Garg <sumit.garg@linaro.org> wrote:
> >
> > On Thu, 1 Jun 2023 at 11:02, Ilias Apalodimas
> > <ilias.apalodimas@linaro.org> wrote:
> > >
> > > Hi Bing
> > >
> > > On Thu, 1 Jun 2023 at 04:03, Zhu, Bing <bing.zhu@intel.com> wrote:
> > > >
> > > > As an alternative, Is it possible to change ftpm design not to depend on RPMB access at the earlier/boot stage? Because to my understanding, typically PCRs don't require persistent/NV storage (for example, before RPMB or tee-supplicant is ready, use TEE memory instead as temporary storage)
> > >
> > > I am not entirely sure this will solve our problem here.  You are
> > > right that we shouldn't depend on the supplicant to extend PCRs. But
> > > what happens if an object is sealed against certain PCR values?  We
> > > are back to the same problem
> >
> > +1
> >
> > Temporary storage may be a stop gap solution for some use-cases but
> > having a fast path access to RPMB via kernel should be our final goal.
> > I would suggest we start small with the MMC subsystem to expose RPMB
> > access APIs for OP-TEE driver rather than a complete RPMB subsystem.
>
> I discussed with the OP-TEE maintainers about adding parts of the
> supplicant in the kernel.  The supplicant 'just' sends an ioctl to
> store/read stuff anyway.  So it would make sense to have a closer and
> see if that looks reasonable.
> Thanks

I was trying to create a setup to test this. I've added the kernel
patches on top of https://github.com/linaro-swg/linux/tree/optee. The
QEMU branch is a bit dated and I had to add

  3a845a214b42 target/arm: allow setting SCR_EL3.EnTP2 when FEAT_SME is implemented
  d4a7b0ef1a03 hw/arm/boot: set CPTR_EL3.ESM and SCR_EL3.EnTP2 when booting Linux with EL3
  9745a003f878 hw/intc/arm_gicv3: fix prio masking on pmr write
  beeec926d24a target/arm: mark SP_EL1 with ARM_CP_EL3_NO_EL2_KEEP

on top of that branch to be able to boot to the Linux kernel.

I have the vhost-user-rpmb process running and connected with QEMU,
but around (guessing really) when the RPMB subsystem is initializing
the process dies with:
================ Vhost user message ================
Request: VHOST_USER_SET_VRING_ADDR (9)
Flags:   0x1
Size:    40
vhost-user-rpmb-INFO: 18:58:08.312: vrpmb_process_msg: msg
VHOST_USER_SET_VRING_ADDR(9)
vhost_vring_addr:
    index:  0
    flags:  0
    desc_user_addr:   0x00007ff15fa91000
    used_user_addr:   0x00007ff15fa91080
    avail_user_addr:  0x00007ff15fa91040
    log_guest_addr:   0x0000000041c91080
Setting virtq addresses:
    vring_desc  at (nil)
    vring_used  at (nil)
    vring_avail at (nil)

(vhost-user-rpmb:3236474): vhost-user-rpmb-CRITICAL **: 18:58:08.312:
Invalid vring_addr message
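That check fires when the backend fails to translate the QEMU user-space ring addresses from VHOST_USER_SET_VRING_ADDR through its memory-region table, typically because the matching VHOST_USER_SET_MEM_TABLE regions were never recorded or were mapped differently; all three rings then come out as (nil), exactly as in the log. A minimal model of that translate-and-validate step, assuming a libvhost-user-style region table:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One mapped guest memory region, as a vhost-user backend records it
 * while handling VHOST_USER_SET_MEM_TABLE. */
struct mem_region {
    uint64_t qva;       /* QEMU user-space address of the region */
    uint64_t size;
    void    *mmap_base; /* where the backend mmap'ed it          */
};

/* Translate a QEMU user-space address (desc/avail/used ring address)
 * into a backend pointer; NULL when the address falls outside every
 * known region, which is what "Invalid vring_addr" then rejects. */
static void *qva_to_va(const struct mem_region *r, size_t n, uint64_t qva)
{
    for (size_t i = 0; i < n; i++) {
        if (qva >= r[i].qva && qva < r[i].qva + r[i].size)
            return (char *)r[i].mmap_base + (qva - r[i].qva);
    }
    return NULL;
}
```

So a first thing to check would be whether the backend ever received and mapped a sane SET_MEM_TABLE before the SET_VRING_ADDR above arrived.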

Among other options, I'm starting QEMU with -machine
virt,secure=on,mte=off,gic-version=3,virtualization=false to enable
the secure world.

Do you have an idea of what might be wrong? Where should I start looking?

Thanks,
Jens

> [...]
Shyam Saini June 13, 2023, 12:49 a.m. UTC | #20
Thank you, everyone, for your valuable feedback.

Alex, are you planning to submit this patch series?
Please let me know.

> [...]