[00/18] net: iosm: PCIe Driver for Intel M.2 Modem

Message ID 20210107170523.26531-1-m.chetan.kumar@intel.com

Message

Kumar, M Chetan Jan. 7, 2021, 5:05 p.m. UTC
The IOSM (IPC over Shared Memory) driver is a PCIe host driver, implemented
for Linux and Chrome platforms, for data exchange over the PCIe interface
between the host platform and an Intel M.2 modem. The driver exposes an
interface conforming to the MBIM protocol [1]. Any front-end application
(e.g. ModemManager) can use the MBIM interface to manage data communication
towards the WWAN.

The Intel M.2 modem uses two BAR regions. The first region is dedicated to
the doorbell registers for IRQs, and the second is used as a scratchpad
area for bookkeeping the modem's execution-stage details along with the
context of the shared memory region on the host. The upper edge of the
driver exposes control and data channels for user-space application
interaction. At the lower edge, these data and control channels are
associated with pipes, the lowest-level interfaces used over PCIe as
logical channels for message exchange. A single channel maps to one UL
and one DL pipe, which are initialized on device open.

On the UL path, the driver copies application data into SKBs, associates
each SKB with a transfer descriptor, and places it on the ring buffer for
DMA transfer. Once this information has been updated in the shared memory
region, the host rings a doorbell so that the modem performs the DMA, and
the modem uses an MSI to signal back to the host. For receiving data on
the DL path, SKBs are pre-allocated during pipe open, and their transfer
descriptors are handed to the modem for DMA transfer.

The driver exposes two types of ports: "wwanctrl", a char device node
used for MBIM control operations, and "INMx" (x = 0,1,2..7), network
interfaces for IP data communication.
1) MBIM Control Interface:
This node exposes an interface between the modem and applications; the
char device exposed by the "IOSM" driver is used to establish and manage
MBIM data communication with PCIe-based Intel M.2 modems.

Apart from the read and write methods, the node also supports an IOCTL
command. Applications can issue the "IOCTL_WDM_MAX_COMMAND" IOCTL to
fetch the maximum command buffer length supported by the driver, which
is restricted to 4096 bytes.

2) MBIM Data Interface:
The IOSM driver represents the MBIM data channel as a single root network
device of the "wwan0" type, which is mapped to the default IP session 0.
Several IP sessions (INMx) can be multiplexed over the single data channel
using sub-devices of the master wwanY device. The driver models such IP
sessions as 802.1Q VLAN devices, each mapped to a unique VLAN ID.
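Concretely, with this model a user could bring up an additional IP session
from the shell roughly as follows. This is a configuration sketch: the
session-ID-to-VLAN-ID mapping follows this series, while the interface
names are illustrative, and the commands require the modem and root
privileges.

```shell
# Map MBIM IP session 5 onto a VLAN sub-device of the root wwan0 netdev
ip link add link wwan0 name wwan0.5 type vlan id 5
ip link set dev wwan0.5 up

# Tear the session down again
ip link del dev wwan0.5
```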

M Chetan Kumar (18):
  net: iosm: entry point
  net: iosm: irq handling
  net: iosm: mmio scratchpad
  net: iosm: shared memory IPC interface
  net: iosm: shared memory I/O operations
  net: iosm: channel configuration
  net: iosm: char device for FW flash & coredump
  net: iosm: MBIM control device
  net: iosm: bottom half
  net: iosm: multiplex IP sessions
  net: iosm: encode or decode datagram
  net: iosm: power management
  net: iosm: shared memory protocol
  net: iosm: protocol operations
  net: iosm: uevent support
  net: iosm: net driver
  net: iosm: readme file
  net: iosm: infrastructure

 MAINTAINERS                                   |    7 +
 drivers/net/Kconfig                           |    1 +
 drivers/net/Makefile                          |    1 +
 drivers/net/wwan/Kconfig                      |   13 +
 drivers/net/wwan/Makefile                     |    5 +
 drivers/net/wwan/iosm/Kconfig                 |   10 +
 drivers/net/wwan/iosm/Makefile                |   27 +
 drivers/net/wwan/iosm/README                  |  126 +++
 drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.c     |   86 ++
 drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.h     |   55 +
 drivers/net/wwan/iosm/iosm_ipc_imem.c         | 1487 +++++++++++++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_imem.h         |  601 ++++++++++
 drivers/net/wwan/iosm/iosm_ipc_imem_ops.c     |  768 +++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_imem_ops.h     |  129 +++
 drivers/net/wwan/iosm/iosm_ipc_irq.c          |   89 ++
 drivers/net/wwan/iosm/iosm_ipc_irq.h          |   35 +
 drivers/net/wwan/iosm/iosm_ipc_mbim.c         |  286 +++++
 drivers/net/wwan/iosm/iosm_ipc_mbim.h         |   25 +
 drivers/net/wwan/iosm/iosm_ipc_mmio.c         |  223 ++++
 drivers/net/wwan/iosm/iosm_ipc_mmio.h         |  193 ++++
 drivers/net/wwan/iosm/iosm_ipc_mux.c          |  458 ++++++++
 drivers/net/wwan/iosm/iosm_ipc_mux.h          |  345 ++++++
 drivers/net/wwan/iosm/iosm_ipc_mux_codec.c    |  901 +++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_mux_codec.h    |  194 ++++
 drivers/net/wwan/iosm/iosm_ipc_pcie.c         |  561 ++++++++++
 drivers/net/wwan/iosm/iosm_ipc_pcie.h         |  210 ++++
 drivers/net/wwan/iosm/iosm_ipc_pm.c           |  326 ++++++
 drivers/net/wwan/iosm/iosm_ipc_pm.h           |  228 ++++
 drivers/net/wwan/iosm/iosm_ipc_protocol.c     |  287 +++++
 drivers/net/wwan/iosm/iosm_ipc_protocol.h     |  239 ++++
 drivers/net/wwan/iosm/iosm_ipc_protocol_ops.c |  547 +++++++++
 drivers/net/wwan/iosm/iosm_ipc_protocol_ops.h |  456 ++++++++
 drivers/net/wwan/iosm/iosm_ipc_sio.c          |  266 +++++
 drivers/net/wwan/iosm/iosm_ipc_sio.h          |   78 ++
 drivers/net/wwan/iosm/iosm_ipc_task_queue.c   |  247 ++++
 drivers/net/wwan/iosm/iosm_ipc_task_queue.h   |   46 +
 drivers/net/wwan/iosm/iosm_ipc_uevent.c       |   43 +
 drivers/net/wwan/iosm/iosm_ipc_uevent.h       |   41 +
 drivers/net/wwan/iosm/iosm_ipc_wwan.c         |  649 +++++++++++
 drivers/net/wwan/iosm/iosm_ipc_wwan.h         |   72 ++
 40 files changed, 10361 insertions(+)
 create mode 100644 drivers/net/wwan/Kconfig
 create mode 100644 drivers/net/wwan/Makefile
 create mode 100644 drivers/net/wwan/iosm/Kconfig
 create mode 100644 drivers/net/wwan/iosm/Makefile
 create mode 100644 drivers/net/wwan/iosm/README
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_imem.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_imem.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_imem_ops.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_imem_ops.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_irq.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_irq.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mbim.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mbim.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mmio.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mmio.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux_codec.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux_codec.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_pcie.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_pcie.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_pm.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_pm.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_protocol.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_protocol.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_protocol_ops.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_protocol_ops.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_sio.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_sio.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_task_queue.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_task_queue.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_uevent.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_uevent.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_wwan.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_wwan.h

Comments

Andrew Lunn Jan. 7, 2021, 7:35 p.m. UTC | #1
On Thu, Jan 07, 2021 at 10:35:12PM +0530, M Chetan Kumar wrote:
> Implements a char device for flashing Modem FW image while Device
> is in boot rom phase and for collecting traces on modem crash.

Since this is a network device, you might want to take a look at
devlink support for flashing devices.

https://www.kernel.org/doc/html/latest/networking/devlink/devlink-flash.html

And for collecting crashes and other health information, consider
devlink region and devlink health.
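For reference, the devlink-based flow being suggested looks roughly like
this. The PCI address, firmware file name, and region name are
placeholders; whether the driver exposes such regions is hypothetical.

```shell
# Flash modem firmware through the common devlink interface
devlink dev flash pci/0000:02:00.0 file intel/modem-fw.bin

# Inspect crash/coredump data via devlink regions instead of a char device
devlink region show
devlink region dump pci/0000:02:00.0/<region> snapshot 1
```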

It is much better to reuse existing infrastructure than do something
proprietary with a char dev.

	    Andrew
Johannes Berg Jan. 15, 2021, 9:15 a.m. UTC | #2
Hi Andrew, all,

> > +For example, adding a link for a MBIM IP session with SessionId 5:
> > +
> > +  ip link add link wwan0 name wwan0.<name> type vlan id 5
>
> So, this is what all the Ethernet nonsense is all about. You have a
> session ID you need to somehow represent to user space. And you
> decided to use VLANs. But to use VLANs, you need an Ethernet
> header. So you added a bogus Ethernet header.


So yeah, I don't think anyone likes that. I had half-heartedly started
working on a replacement framework (*1), but then things happened and I
didn't really have much time. You also reviewed it and had some comments,
but when I looked, the component framework really didn't seem appropriate,
and I didn't really have time to do anything on this either.

(*1) https://lore.kernel.org/netdev/20200225100053.16385-1-johannes@sipsolutions.net/


In the mean time, the team doing this driver (I'm not directly involved,
just helping them out with upstream processes) really needed/wanted to
continue on this, and this is what they had already, more or less.

Now, the question here at this point of course is they already had it
that way. But that's easily explained - that's how it works upstream
today, unfortunately, cf. for example drivers/net/usb/cdc_mbim.c.

Now, granted, some of the newer ones such as drivers/net/ipa/ _don't_ do
things that way and come out with ARPHRD_RAWIP, but that requires
userspace to actually be aware of this, and to know how to create the
necessary channels etc. For IPA this is handled by 'rmnet', but rmnet is
just Qualcomm's proprietary protocol exposed as an rtnetlink type, so it
is rather unsuitable for this driver.


Hence originally the thought we could come up with a generic framework
to handle this all. Unfortunately, I never had the time to follow up on
everything there.

To be honest, I also lost interest when IPA got merged without any
thought given to unifying this, despite my involvement in the reviews
and the time spent trying to make a suitable framework that would serve
both IPA and this IOSM driver.


> Is any of this VLAN stuff required by MBIM?


Yes and no. It's not required to do _VLAN_ stuff, but that's one of the
few ways that userspace currently knows of. Note that as far as I can
tell Qualcomm (with rmnet/IPA etc.) has basically "reinvented" the world
here - requiring the use of either their proprietary modem stack, or
libqmi that knows specifically how to drive their modems.

This was something we wanted to avoid (unless perhaps we could arrive at
a standardised solution, see above) - thus being left with the VLAN
method that's used elsewhere in the kernel.

> Linux allows you to dynamically create/destroy network
> interfaces. So you want to do something like
>
> ip link add link wwan0 name wwan42 type mbim id 42
>
> Which will create a new mbim netdev interface using session id 42 on
> top of the device which provides wwan0. I don't actually like this
> last bit, but you somehow need to indicate on which MBIM transport you
> want to create the new session, since you could have multiple bits of
> hardware providing MBIM services.


I don't even like the fact that 'wwan0' exists there in the first place
(or how it exists in this driver), because it cannot ever actually
transport traffic since it's just the root device of sorts.

Hence the proposal to have - similar to what we do in wifi - a separate
abstraction of what a modem device is, and then just allow channels to
be created on it, with those channels exposed as netdevs.



In any case - I'm not sure how we resolve this.

On the one hand, as a technical person going for the most technically
correct solution, I'd say you're completely right and this should expose
pure IP netdevs, and have a (custom or not) way to configure channels.
That still leaves the "dead" wwan0 interface that can't do anything, but
at least it's better for the channel netdevs.
Perhaps like with the framework I was trying to do. We could even
initially side-step the issue with the component framework and simply
not allow that in the framework from the start.

However, I'm not sure of the value of this. Qualcomm's newer stuff is
already locked in to their custom APIs in rmnet and IPA, with QMI etc.

If we're honest with ourselves, older stuff that exists in the kernel
today is highly unlikely to be converted since it works now and very few
people really care about anything else.


Which basically leaves only this driver
 - either doing some old-fashioned way like it is now, or
 - doing its own custom way like rmnet/IPA, or
 - coming with a framework that pretends to be more general than rmnet
   but really is only used for this driver.

The latter two choices both require significant investment on the
userspace side, so I don't think it's any wonder the first is what the
driver chose, especially after my more or less failed attempt at getting
traction for the common framework (before IPA got merged, after all.)


Also, non-technically speaking, I'm really not sure as to what we can
and should require from a single driver like this in terms of "cleaning
up the ecosystem". Yes, having a common framework would be nice, but if
nobody's going to use it, what's the point? And we didn't require such
from IPA. Now, granted, IPA already ships with a slightly better way of
doing things than ethernet+802.1q, but there's precedent for that as
well...

johannes
Bjørn Mork Jan. 17, 2021, 5:26 p.m. UTC | #3
Sorry about being much too late into this discussion.  I don't have the
bandwidth to read netdev anymore, and just stumbled across this now.

Andrew Lunn <andrew@lunn.ch> writes:

> So, this is what all the Ethernet nonsense is all about. You have a
> session ID you need to somehow represent to user space. And you
> decided to use VLANs. But to use VLANs, you need an Ethernet
> header. So you added a bogus Ethernet header.


Actually, the original reasoning was the other way around.

The bogus ethernet header was added because I had seen the 3G modem
vendors do that for a few years already, in the modem firmware.  And I
didn't think enough about it to realize that it was a really bad idea,
or even that it was something I could change.  Or should change.

I cannot blame the MBIM session to VLAN mapping idea on anyone else.  As
far as I can remember, that was just something that popped up in my head
while working on the cdc_mbim driver. But it came as a consequence of
already having the bogus ethernet header.  And I didn't really
understand that I could define a new wwan subsystem with new device
types. I thought I had to use whatever was there already.

I was young and stupid. Now I'm not that young anymore ;-)

Never ever imagined that this would be replicated in another driver,
though.  That doesn't really make much sense.  We have learned by now,
haven't we?  This subject has been discussed a few times in the past,
and Johannes' summary is my understanding as well:
"I don't think anyone likes that"

The DSS mapping sucks even more than the IPS mapping, BTW.  I don't
think there are any real users?  Not that I know of, at least.  DSS is
much better implemented as some per-session character device, as
requested by numerous people for years.  Sorry for not listening. Looks
like it is too late now.

> Is any of this VLAN stuff required by MBIM?


No.  It's my fault and mine alone.

> I suggest you throw away the pretence this is an Ethernet device. It
> is not.


I completely agree.  I wish I had gone for simple raw-ip devices both in
the qmi_wwan and cdc_mbim.  But qmi_wwan got them later, so there is
already support for such things in wwan userspace.


Bjørn
Andrew Lunn Jan. 20, 2021, 7:34 p.m. UTC | #4
On Sun, Jan 17, 2021 at 06:26:54PM +0100, Bjørn Mork wrote:
> I was young and stupid. Now I'm not that young anymore ;-)


We all make mistakes, when we don't have the knowledge there are other
ways. That is partially what code review is about.

> Never ever imagined that this would be replicated in another driver,
> though.  That doesn't really make much sense.  We have learned by now,
> haven't we?  This subject has been discussed a few times in the past,
> and Johannes' summary is my understanding as well:
> "I don't think anyone likes that"


So there seems to be agreement there. But what is not clear is whether
anybody is willing to do the work to fix this, and whether there is
enough ROI.

Do we expect more devices like this? Will 6G, 7G modems look very
different? Be real network devices and not need any of this odd stuff?
Or will they just be incrementally better but mostly the same?

I went into the review thinking it was an Ethernet driver, and kept
having WTF moments. Now I know it is not an Ethernet driver, I can say
it is not my domain; I don't know the field well enough to say if all
these hacks are acceptable or not.

It probably needs David and Jakub to set the direction to be followed.

   Andrew
Jakub Kicinski Jan. 20, 2021, 11:32 p.m. UTC | #5
On Wed, 20 Jan 2021 20:34:51 +0100 Andrew Lunn wrote:
> On Sun, Jan 17, 2021 at 06:26:54PM +0100, Bjørn Mork wrote:
> > I was young and stupid. Now I'm not that young anymore ;-)
>
> We all make mistakes, when we don't have the knowledge there are other
> ways. That is partially what code review is about.
>
> > Never ever imagined that this would be replicated in another driver,
> > though.  That doesn't really make much sense.  We have learned by now,
> > haven't we?  This subject has been discussed a few times in the past,
> > and Johannes' summary is my understanding as well:
> > "I don't think anyone likes that"
>
> So there seems to be agreement there. But what is not clear is whether
> anybody is willing to do the work to fix this, and whether there is
> enough ROI.
>
> Do we expect more devices like this? Will 6G, 7G modems look very
> different?

Didn't Intel sell its 5G stuff off to Apple?

> Be real network devices and not need any of this odd stuff?
> Or will they just be incrementally better but mostly the same?
>
> I went into the review thinking it was an Ethernet driver, and kept
> having WTF moments. Now I know it is not an Ethernet driver, I can say
> it is not my domain; I don't know the field well enough to say if all
> these hacks are acceptable or not.
>
> It probably needs David and Jakub to set the direction to be followed.


AFAIU all those cellular modems are relatively slow and FW-heavy, so the
ideal solution IMO is not even a common kernel interface but actually
a common device interface, like NVMe (or virtio, for lack of better
examples).
Dan Williams Jan. 21, 2021, 1:34 a.m. UTC | #6
On Wed, 2021-01-20 at 15:32 -0800, Jakub Kicinski wrote:
> On Wed, 20 Jan 2021 20:34:51 +0100 Andrew Lunn wrote:

> > On Sun, Jan 17, 2021 at 06:26:54PM +0100, Bjørn Mork wrote:

> > > I was young and stupid. Now I'm not that young anymore ;-)  

> > 

> > We all make mistakes, when we don't have the knowledge there are

> > other

> > ways. That is partially what code review is about.

> > 

> > > Never ever imagined that this would be replicated in another

> > > driver,

> > > though.  That doesn't really make much sense.  We have learned by

> > > now,

> > > haven't we?  This subject has been discussed a few times in the

> > > past,

> > > and Johannes summary is my understanding as well:

> > > "I don't think anyone likes that"  

> > 

> > So there seems to be agreement there. But what is not clear, is

> > anybody willing to do the work to fix this, and is there enough

> > ROI.

> > 

> > Do we expect more devices like this? Will 6G, 7G modems look very

> > different? 

> 

> Didn't Intel sell its 5G stuff off to Apple?


Yes, but they kept the ability to continue with 3G/4G hardware and
other stuff.

> > Be real network devices and not need any of this odd stuff?

> > Or will they be just be incrementally better but mostly the same?

> > 

> > I went into the review thinking it was an Ethernet driver, and kept

> > having WTF moments. Now i know it is not an Ethernet driver, i can

> > say

> > it is not my domain, i don't know the field well enough to say if

> > all

> > these hacks are acceptable or not.

> > 

> > It probably needs David and Jakub to set the direction to be

> > followed.

> 

> AFAIU all those cellar modems are relatively slow and FW-heavy, so

> the

> ideal solution IMO is not even a common kernel interface but actually

> a common device interface, like NVMe (or virtio for lack of better

> examples).


That was supposed to be MBIM, but unfortunately those involved didn't
iterate and MBIM got stuck. I don't think we'll see a standard as long
as some vendors are dominant and see no need for it.

Dan
Andrew Lunn Jan. 22, 2021, 11:45 p.m. UTC | #7
On Wed, Jan 20, 2021 at 07:34:48PM -0600, Dan Williams wrote:
> On Wed, 2021-01-20 at 15:32 -0800, Jakub Kicinski wrote:
> > On Wed, 20 Jan 2021 20:34:51 +0100 Andrew Lunn wrote:
> > > On Sun, Jan 17, 2021 at 06:26:54PM +0100, Bjørn Mork wrote:
> > > > I was young and stupid. Now I'm not that young anymore ;-)
> > >
> > > We all make mistakes, when we don't have the knowledge there are
> > > other ways. That is partially what code review is about.
> > >
> > > > Never ever imagined that this would be replicated in another
> > > > driver, though.  That doesn't really make much sense.  We have
> > > > learned by now, haven't we?  This subject has been discussed a
> > > > few times in the past, and Johannes' summary is my understanding
> > > > as well: "I don't think anyone likes that"
> > >
> > > So there seems to be agreement there. But what is not clear is
> > > whether anybody is willing to do the work to fix this, and whether
> > > there is enough ROI.
> > >
> > > Do we expect more devices like this? Will 6G, 7G modems look very
> > > different?
> >
> > Didn't Intel sell its 5G stuff off to Apple?
>
> Yes, but they kept the ability to continue with 3G/4G hardware and
> other stuff.

But we can expect 6G in what, 2030? And 7G in 2040? Are they going to
look different? Or is it going to be more of the same: meaningless
Ethernet headers, and VLANs where VLANs make little sense?

> > > Be real network devices and not need any of this odd stuff?
> > > Or will they just be incrementally better but mostly the same?
> > >
> > > I went into the review thinking it was an Ethernet driver, and kept
> > > having WTF moments. Now I know it is not an Ethernet driver, I can
> > > say it is not my domain; I don't know the field well enough to say
> > > if all these hacks are acceptable or not.
> > >
> > > It probably needs David and Jakub to set the direction to be
> > > followed.
> >
> > AFAIU all those cellular modems are relatively slow and FW-heavy, so
> > the ideal solution IMO is not even a common kernel interface but
> > actually a common device interface, like NVMe (or virtio, for lack of
> > better examples).
>
> That was supposed to be MBIM, but unfortunately those involved didn't
> iterate and MBIM got stuck. I don't think we'll see a standard as long
> as some vendors are dominant and see no need for it.


We, the kernel community, need to decide: either we are happy for this
broken architecture to live on, in which case we should suggest how to
make this submission better; or we push back and say that, for the
long-term good, this driver is not going to be accepted and a more
sensible architecture is needed.

	Andrew