Message ID: 1534328582-17664-2-git-send-email-thunder.leizhen@huawei.com
State:      New
Series:     bugfix and optimization about CMD_SYNC
On 15/08/18 11:23, Zhen Lei wrote:
> The condition "(int)(VAL - sync_idx) >= 0" to break loop in function
> __arm_smmu_sync_poll_msi requires that sync_idx must be increased
> monotonously according to the sequence of the CMDs in the cmdq.
>
> But ".msidata = atomic_inc_return_relaxed(&smmu->sync_nr)" is not protected
> by spinlock, so the following scenarios may appear:
> cpu0                    cpu1
> msidata=0
>                         msidata=1
>                         insert cmd1
> insert cmd0
>                         smmu execute cmd1
> smmu execute cmd0
>                         poll timeout, because msidata=1 is overridden by
>                         cmd0, that means VAL=0, sync_idx=1.
>
> This is not a functional problem, just make the caller wait for a long
> time until TIMEOUT. It's rare to happen, because any other CMD_SYNCs
> during the waiting period will break it.
>
> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
> ---
>  drivers/iommu/arm-smmu-v3.c | 12 ++++++++----
>  1 file changed, 8 insertions(+), 4 deletions(-)
>
> [...]
>
> +static inline void arm_smmu_cmdq_sync_set_msidata(u64 *cmd, u32 msidata)

If we *are* going to go down this route then I think it would make sense to
move the msiaddr and CMDQ_SYNC_0_CS_MSI logic here as well; i.e.
arm_smmu_cmdq_build_cmd() always generates a "normal" SEV-based sync command,
then calling this guy would convert it to an MSI-based one. As-is, having
bits of mutually-dependent data handled across two separate places just seems
too messy and error-prone.

That said, I still don't think that just building the whole command under the
lock is really all that bad - even when it doesn't get optimised into one of
the assignments, that memset you call out is only a single "stp xzr, xzr, ...",
and a couple of extra branches doesn't seem a huge deal compared to the DSB
and MMIO accesses (and potentially polling) that we're about to do anyway.
I've tried hacking things up enough to convince GCC to inline a specialisation
of the relevant switch case when ent->opcode is known, and that reduces the
"overhead" down to just a handful of ALU instructions. I still need to try
cleaning said hack up and double-check that it doesn't have any adverse impact
on all the other SMMUv3 stuff in development, but watch this space...

Robin.
> +{
> +	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA, msidata);
> +}
> [...]
On Wed, Aug 15, 2018 at 01:26:31PM +0100, Robin Murphy wrote:
> On 15/08/18 11:23, Zhen Lei wrote:
> > [...]
> > +static inline void arm_smmu_cmdq_sync_set_msidata(u64 *cmd, u32 msidata)
>
> If we *are* going to go down this route then I think it would make sense to
> move the msiaddr and CMDQ_SYNC_0_CS_MSI logic here as well; i.e.
> arm_smmu_cmdq_build_cmd() always generates a "normal" SEV-based sync
> command, then calling this guy would convert it to an MSI-based one. As-is,
> having bits of mutually-dependent data handled across two separate places
> just seems too messy and error-prone.

Yeah, but I'd first like to see some number showing that doing all of this
under the lock actually has an impact.

Will
On 15/08/2018 14:00, Will Deacon wrote:
> On Wed, Aug 15, 2018 at 01:26:31PM +0100, Robin Murphy wrote:
>> On 15/08/18 11:23, Zhen Lei wrote:
>>> [...]
>>> +static inline void arm_smmu_cmdq_sync_set_msidata(u64 *cmd, u32 msidata)
>>
>> If we *are* going to go down this route then I think it would make sense to
>> move the msiaddr and CMDQ_SYNC_0_CS_MSI logic here as well; i.e.
>> arm_smmu_cmdq_build_cmd() always generates a "normal" SEV-based sync
>> command, then calling this guy would convert it to an MSI-based one. As-is,
>> having bits of mutually-dependent data handled across two separate places
>> just seems too messy and error-prone.
>
> Yeah, but I'd first like to see some number showing that doing all of this
> under the lock actually has an impact.

Update:

I tested this patch versus a modified version which builds the command under
the queue spinlock (* below). From my testing there is a small difference:

Setup:
  Single NVMe card
  fio, 15 processes
  No process pinning

Average Results:
  v3 patch                  read/r,w/write (IOPS): 301K / 149K,149K / 307K
  Build under lock version  read/r,w/write (IOPS): 304K / 150K,150K / 311K

I don't know why it's better to build under the lock. We can test more.

I suppose there is no justification to build the command outside the spinlock
based on these results alone...

Cheers,
John

* Modified version:
static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
{
	u64 cmd[CMDQ_ENT_DWORDS];
	unsigned long flags;
	struct arm_smmu_cmdq_ent ent = {
		.opcode = CMDQ_OP_CMD_SYNC,
		.sync	= {
			.msiaddr = virt_to_phys(&smmu->sync_count),
		},
	};

	spin_lock_irqsave(&smmu->cmdq.lock, flags);
	ent.sync.msidata = ++smmu->sync_nr;
	arm_smmu_cmdq_build_cmd(cmd, &ent);
	arm_smmu_cmdq_insert_cmd(smmu, cmd);
	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);

	return __arm_smmu_sync_poll_msi(smmu, ent.sync.msidata);
}

> Will
On 2018/8/16 2:08, John Garry wrote:
> On 15/08/2018 14:00, Will Deacon wrote:
>> On Wed, Aug 15, 2018 at 01:26:31PM +0100, Robin Murphy wrote:
>>> On 15/08/18 11:23, Zhen Lei wrote:
>>>> [...]
>>>> +static inline void arm_smmu_cmdq_sync_set_msidata(u64 *cmd, u32 msidata)
>>>
>>> If we *are* going to go down this route then I think it would make sense to
>>> move the msiaddr and CMDQ_SYNC_0_CS_MSI logic here as well; i.e.
>>> arm_smmu_cmdq_build_cmd() always generates a "normal" SEV-based sync
>>> command, then calling this guy would convert it to an MSI-based one. As-is,
>>> having bits of mutually-dependent data handled across two separate places
>>> just seems too messy and error-prone.
>>
>> Yeah, but I'd first like to see some number showing that doing all of this
>> under the lock actually has an impact.
>
> Update:
>
> I tested this patch versus a modified version which builds the command under
> the queue spinlock (* below). From my testing there is a small difference:
>
> Setup:
>   Single NVMe card
>   fio, 15 processes
>   No process pinning
>
> Average Results:
>   v3 patch                  read/r,w/write (IOPS): 301K / 149K,149K / 307K
>   Build under lock version  read/r,w/write (IOPS): 304K / 150K,150K / 311K
>
> I don't know why it's better to build under the lock. We can test more.
I have analysed the assembly code: the memset is optimised, as Robin said,
into a single "stp xzr, xzr, [x0]", and the switch..case looks like this:

ffff0000085e5744 <arm_smmu_cmdq_build_cmd>:
ffff0000085e5744:  a9007c1f  stp  xzr, xzr, [x0]           //memset
ffff0000085e5748:  39400023  ldrb w3, [x1]
ffff0000085e574c:  f9400002  ldr  x2, [x0]
ffff0000085e5750:  aa020062  orr  x2, x3, x2
ffff0000085e5754:  f9000002  str  x2, [x0]
ffff0000085e5758:  39400023  ldrb w3, [x1]                 //ent->opcode
ffff0000085e575c:  51000463  sub  w3, w3, #0x1
ffff0000085e5760:  7101147f  cmp  w3, #0x45
ffff0000085e5764:  54000069  b.ls ffff0000085e5770
ffff0000085e5768:  12800023  mov  w3, #0xfffffffe
ffff0000085e576c:  1400000e  b    ffff0000085e57a4
ffff0000085e5770:  b0003024  adrp x4, ffff000008bea000
ffff0000085e5774:  91096084  add  x4, x4, #0x258           //static table in rodata
ffff0000085e5778:  38634883  ldrb w3, [x4,w3,uxtw]         //use ent->opcode as index
ffff0000085e577c:  10000064  adr  x4, ffff0000085e5788
ffff0000085e5780:  8b238883  add  x3, x4, w3, sxtb #2
ffff0000085e5784:  d61f0060  br   x3                       //jump to "case xxx:"

After applying the "inline arm_smmu_cmdq_build_cmd" patch sent by Robin, the
memset and the static table are removed:

ffff0000085e68a8:  94123207  bl   ffff000008a730c4 <_raw_spin_lock_irqsave>
ffff0000085e68ac:  b9410ad5  ldr  w21, [x22,#264]
ffff0000085e68b0:  aa0003fa  mov  x26, x0
ffff0000085e68b4:  110006b5  add  w21, w21, #0x1           //++smmu->sync_nr
ffff0000085e68b8:  b9010ad5  str  w21, [x22,#264]
ffff0000085e68bc:  b50005f3  cbnz x19, ffff0000085e6978    //if (ent->sync.msiaddr)
ffff0000085e68c0:  d28408c2  mov  x2, #0x2046
ffff0000085e68c4:  f2a1f802  movk x2, #0xfc0, lsl #16      //the constant part of CMD_SYNC
ffff0000085e68c8:  aa158042  orr  x2, x2, x21, lsl #32     //or msidata
ffff0000085e68cc:  aa1603e0  mov  x0, x22                  //x0 = x22 = smmu
ffff0000085e68d0:  910163a1  add  x1, x29, #0x58           //x1 = the address of local variable "cmd"
ffff0000085e68d4:  f9002fa2  str  x2, [x29,#88]            //save cmd[0]
ffff0000085e68d8:  927ec673  and  x19, x19, #0xffffffffffffc
ffff0000085e68dc:  f90033b3  str  x19, [x29,#96]           //save cmd[1]
ffff0000085e68e0:  97fffd0d  bl   ffff0000085e5d14 <arm_smmu_cmdq_insert_cmd>

So my patch v2 plus Robin's "inline arm_smmu_cmdq_build_cmd()" is a good
choice. But the assembly code of my patch v3 still seems shorter than the
above:

ffff0000085e695c:  9412320a  bl   ffff000008a73184 <_raw_spin_lock_irqsave>
ffff0000085e6960:  aa0003f6  mov  x22, x0
ffff0000085e6964:  b9410a62  ldr  w2, [x19,#264]
ffff0000085e6968:  aa1303e0  mov  x0, x19
ffff0000085e696c:  f94023a3  ldr  x3, [x29,#64]
ffff0000085e6970:  910103a1  add  x1, x29, #0x40
ffff0000085e6974:  11000442  add  w2, w2, #0x1             //++smmu->sync_nr
ffff0000085e6978:  b9010a62  str  w2, [x19,#264]
ffff0000085e697c:  b9005ba2  str  w2, [x29,#88]
ffff0000085e6980:  aa028062  orr  x2, x3, x2, lsl #32
ffff0000085e6984:  f90023a2  str  x2, [x29,#64]
ffff0000085e6988:  97fffd58  bl   ffff0000085e5ee8 <arm_smmu_cmdq_insert_cmd>

>
> I suppose there is no justification to build the command outside the spinlock
> based on these results alone...
> Cheers,
> John
>
> * Modified version:
> [...]

--
Thanks!
BestRegards
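[Editor's note: the inlining effect measured in the previous message can be
reproduced with the toy example below. All names here are invented for
illustration; this is not driver code. When a command-building helper is
"static inline" and called with a compile-time-constant opcode, the compiler
can fold away the opcode switch, which is the specialisation Robin described.]

#include <stdint.h>
#include <string.h>

#define OP_SYNC	0x46
#define OP_INV	0x01

static inline void build_cmd(uint64_t *cmd, unsigned int opcode, uint32_t data)
{
	/* At -O2 this memset typically becomes a single pair of stores. */
	memset(cmd, 0, 2 * sizeof(*cmd));

	switch (opcode) {
	case OP_SYNC:
		cmd[0] = opcode | ((uint64_t)data << 32);
		break;
	case OP_INV:
		cmd[0] = opcode;
		cmd[1] = data;
		break;
	}
}

void issue_sync(uint64_t *cmd, uint32_t msidata)
{
	/*
	 * opcode is a constant here, so after inlining only the OP_SYNC
	 * assignments remain: no comparison and no jump table.
	 */
	build_cmd(cmd, OP_SYNC, msidata);
}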
On 2018/8/15 20:26, Robin Murphy wrote:
> On 15/08/18 11:23, Zhen Lei wrote:
>> [...]
>> +static inline void arm_smmu_cmdq_sync_set_msidata(u64 *cmd, u32 msidata)
>
> If we *are* going to go down this route then I think it would make sense to
> move the msiaddr and CMDQ_SYNC_0_CS_MSI logic here as well; i.e.
> arm_smmu_cmdq_build_cmd() always generates a "normal" SEV-based sync command,
> then calling this guy would convert it to an MSI-based one. As-is, having
> bits of mutually-dependent data handled across two separate places just
> seems too messy and error-prone.

Yes. How about creating a new function "arm_smmu_cmdq_build_sync_msi_cmd"?

static inline
void arm_smmu_cmdq_build_sync_msi_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
{
	cmd[0]  = FIELD_PREP(CMDQ_0_OP, ent->opcode);
	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
	cmd[1]  = ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
}

> That said, I still don't think that just building the whole command under
> the lock is really all that bad - even when it doesn't get optimised into
> one of the assignments that memset you call out is only a single
> "stp xzr, xzr, ...", and a couple of extra branches doesn't seem a huge deal
> compared to the DSB and MMIO accesses (and potentially polling) that we're
> about to do anyway. I've tried hacking things up enough to convince GCC to
> inline a specialisation of the relevant switch case when ent->opcode is
> known, and that reduces the "overhead" down to just a handful of ALU
> instructions. I still need to try cleaning said hack up and double-check
> that it doesn't have any adverse impact on all the other SMMUv3 stuff in
> development, but watch this space...
>
> Robin.
>
>> +{
>> +	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA, msidata);
>> +}
>> [...]

--
Thanks!
BestRegards
On Thu, Aug 16, 2018 at 04:21:17PM +0800, Leizhen (ThunderTown) wrote:
> On 2018/8/15 20:26, Robin Murphy wrote:
> > On 15/08/18 11:23, Zhen Lei wrote:
> >> [...]
> >> +static inline void arm_smmu_cmdq_sync_set_msidata(u64 *cmd, u32 msidata)
> >
> > If we *are* going to go down this route then I think it would make sense
> > to move the msiaddr and CMDQ_SYNC_0_CS_MSI logic here as well; i.e.
> > arm_smmu_cmdq_build_cmd() always generates a "normal" SEV-based sync
> > command, then calling this guy would convert it to an MSI-based one.
> > As-is, having bits of mutually-dependent data handled across two
> > separate places just seems too messy and error-prone.
>
> Yes, How about create a new function "arm_smmu_cmdq_build_sync_msi_cmd"?
>
> static inline
> void arm_smmu_cmdq_build_sync_msi_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
> {
> 	cmd[0]  = FIELD_PREP(CMDQ_0_OP, ent->opcode);
> 	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
> 	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
> 	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
> 	cmd[1]  = ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
> }

None of this seems justified given the numbers from John, so please just do
the simple thing and build the command with the lock held.

Will
On 2018-08-16 10:18 AM, Will Deacon wrote:
> On Thu, Aug 16, 2018 at 04:21:17PM +0800, Leizhen (ThunderTown) wrote:
>> On 2018/8/15 20:26, Robin Murphy wrote:
>>> On 15/08/18 11:23, Zhen Lei wrote:
>>>> [...]
>>>> +static inline void arm_smmu_cmdq_sync_set_msidata(u64 *cmd, u32 msidata)
>>>
>>> If we *are* going to go down this route then I think it would make sense
>>> to move the msiaddr and CMDQ_SYNC_0_CS_MSI logic here as well; i.e.
>>> arm_smmu_cmdq_build_cmd() always generates a "normal" SEV-based sync
>>> command, then calling this guy would convert it to an MSI-based one.
>>> As-is, having bits of mutually-dependent data handled across two
>>> separate places just seems too messy and error-prone.
>>
>> Yes, How about create a new function "arm_smmu_cmdq_build_sync_msi_cmd"?
>>
>> static inline
>> void arm_smmu_cmdq_build_sync_msi_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
>> {
>> 	cmd[0]  = FIELD_PREP(CMDQ_0_OP, ent->opcode);
>> 	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
>> 	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
>> 	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
>> 	cmd[1]  = ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
>> }
>
> None of this seems justified given the numbers from John, so please just do
> the simple thing and build the command with the lock held.

Agreed - sorry if my wording was unclear, but that suggestion was only for the
possibility of it proving genuinely worthwhile to build the command outside
the lock. Since that isn't the case, I definitely prefer the simpler approach
too.

Robin.
On 2018/8/16 17:27, Robin Murphy wrote:
> On 2018-08-16 10:18 AM, Will Deacon wrote:
>> On Thu, Aug 16, 2018 at 04:21:17PM +0800, Leizhen (ThunderTown) wrote:
>>> On 2018/8/15 20:26, Robin Murphy wrote:
>>>> On 15/08/18 11:23, Zhen Lei wrote:
>>>>> [...]
>>>>> +static inline void arm_smmu_cmdq_sync_set_msidata(u64 *cmd, u32 msidata)
>>>>
>>>> If we *are* going to go down this route then I think it would make sense
>>>> to move the msiaddr and CMDQ_SYNC_0_CS_MSI logic here as well; i.e.
>>>> arm_smmu_cmdq_build_cmd() always generates a "normal" SEV-based sync
>>>> command, then calling this guy would convert it to an MSI-based one.
>>>> As-is, having bits of mutually-dependent data handled across two
>>>> separate places just seems too messy and error-prone.
>>>
>>> Yes, How about create a new function "arm_smmu_cmdq_build_sync_msi_cmd"?
>>>
>>> static inline
>>> void arm_smmu_cmdq_build_sync_msi_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
>>> {
>>> 	cmd[0]  = FIELD_PREP(CMDQ_0_OP, ent->opcode);
>>> 	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
>>> 	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
>>> 	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);

miss: cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA, ent->sync.msidata);

>>> 	cmd[1]  = ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
>>> }
>>
>> None of this seems justified given the numbers from John, so please just do
>> the simple thing and build the command with the lock held.

In order to observe the optimization effect, I conducted 5 tests for each
case. Although the test results fluctuate, we can still tell which case is
good or bad, and it accords with our theoretical analysis.

Test command: fio -numjobs=8 -rw=randread -runtime=30 ... -bs=4k
Test result: IOPS, for example:
  read : io=86790MB, bw=2892.1MB/s, iops=740586, runt= 30001msec

Case 1: (without these patches)
675480
672055
665275
648610
661146

Case 2: (move arm_smmu_cmdq_build_cmd into lock)
688714
697355
632951
700540
678459

Case 3: (based on case 2, replace arm_smmu_cmdq_build_cmd with
arm_smmu_cmdq_build_sync_msi_cmd)
721582
729226
689574
679710
727770

Case 4: (based on case 3, plus patch 2)
734077
742868
738194
682544
740586

Case 2 is better than case 1; I think the main reason is that the
atomic_inc_return_relaxed(&smmu->sync_nr) has been removed. Case 3 is better
than case 2 because the assembly code is reduced, see below.

> Agreed - sorry if my wording was unclear, but that suggestion was only for
> the possibility of it proving genuinely worthwhile to build the command
> outside the lock. Since that isn't the case, I definitely prefer the simpler
> approach too.

Yes, I mean replacing arm_smmu_cmdq_build_cmd with
arm_smmu_cmdq_build_sync_msi_cmd to build the command inside the lock:

	spin_lock_irqsave(&smmu->cmdq.lock, flags);
+	ent.sync.msidata = ++smmu->sync_nr;
+	arm_smmu_cmdq_build_sync_msi_cmd(cmd, &ent);
	arm_smmu_cmdq_insert_cmd(smmu, cmd);
	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);

The assembly code showed me that it's very nice.
ffff0000085e6928:  94123207  bl   ffff000008a73144 <_raw_spin_lock_irqsave>
ffff0000085e692c:  b9410ad5  ldr  w21, [x22,#264]
ffff0000085e6930:  d28208c2  mov  x2, #0x1046                // #4166
ffff0000085e6934:  aa0003fa  mov  x26, x0
ffff0000085e6938:  110006b5  add  w21, w21, #0x1
ffff0000085e693c:  f2a1f802  movk x2, #0xfc0, lsl #16
ffff0000085e6940:  aa1603e0  mov  x0, x22
ffff0000085e6944:  910163a1  add  x1, x29, #0x58
ffff0000085e6948:  aa158042  orr  x2, x2, x21, lsl #32
ffff0000085e694c:  b9010ad5  str  w21, [x22,#264]
ffff0000085e6950:  f9002fa2  str  x2, [x29,#88]
ffff0000085e6954:  d2994016  mov  x22, #0xca00               // #51712
ffff0000085e6958:  f90033b3  str  x19, [x29,#96]
ffff0000085e695c:  97fffd5b  bl   ffff0000085e5ec8 <arm_smmu_cmdq_insert_cmd>
ffff0000085e6960:  aa1903e0  mov  x0, x25
ffff0000085e6964:  aa1a03e1  mov  x1, x26
ffff0000085e6968:  f2a77356  movk x22, #0x3b9a, lsl #16
ffff0000085e696c:  94123145  bl   ffff000008a72e80 <_raw_spin_unlock_irqrestore>

> Robin.

--
Thanks!
BestRegards
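[Editor's note: for reference, the "case 3" variant discussed above can be
pieced together from the arm_smmu_cmdq_build_sync_msi_cmd() helper proposed
earlier in the thread and the locked region quoted in the previous message.
The sketch below is an editor's reconstruction from those snippets, assuming
the helper also sets CMDQ_SYNC_0_MSIDATA per the "miss:" correction above; it
is not the exact hunk posted as v4.]

static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
{
	u64 cmd[CMDQ_ENT_DWORDS];
	unsigned long flags;
	struct arm_smmu_cmdq_ent ent = {
		.opcode = CMDQ_OP_CMD_SYNC,
		.sync	= {
			.msiaddr = virt_to_phys(&smmu->sync_count),
		},
	};

	spin_lock_irqsave(&smmu->cmdq.lock, flags);
	ent.sync.msidata = ++smmu->sync_nr;		/* now serialised by cmdq.lock */
	arm_smmu_cmdq_build_sync_msi_cmd(cmd, &ent);	/* build the whole CMD_SYNC under the lock */
	arm_smmu_cmdq_insert_cmd(smmu, cmd);
	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);

	return __arm_smmu_sync_poll_msi(smmu, ent.sync.msidata);
}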
On 2018/8/19 15:02, Leizhen (ThunderTown) wrote:
> On 2018/8/16 17:27, Robin Murphy wrote:
>> On 2018-08-16 10:18 AM, Will Deacon wrote:
>>> [...]
>>>
>>> None of this seems justified given the numbers from John, so please just
>>> do the simple thing and build the command with the lock held.
>
> In order to observe the optimization effect, I conducted 5 tests for each
> case. Although the test result is volatility, but we can still get which
> case is good or bad. It accords with our theoretical analysis.
>
> Test command: fio -numjobs=8 -rw=randread -runtime=30 ... -bs=4k
> Test Result: IOPS, for example:
>   read : io=86790MB, bw=2892.1MB/s, iops=740586, runt= 30001msec
>
> Case 1: (without these patches)
> 675480
> 672055
> 665275
> 648610
> 661146
>
> Case 2: (move arm_smmu_cmdq_build_cmd into lock)

https://lore.kernel.org/patchwork/patch/973121/
[v2,1/2] iommu/arm-smmu-v3: fix unexpected CMD_SYNC timeout

> 688714
> 697355
> 632951
> 700540
> 678459
>
> Case 3: (base on case 2, replace arm_smmu_cmdq_build_cmd with
> arm_smmu_cmdq_build_sync_msi_cmd)

https://patchwork.kernel.org/patch/10569675/
[v4,1/2] iommu/arm-smmu-v3: fix unexpected CMD_SYNC timeout

> 721582
> 729226
> 689574
> 679710
> 727770
>
> Case 4: (base on case 3, plus patch 2)
> 734077
> 742868
> 738194
> 682544
> 740586
>
> Case 2 is better than case 1, I think the main reason is the
> atomic_inc_return_relaxed(&smmu->sync_nr) has been removed. Case 3 is better
> than case 2, because the assembly code is reduced, see below.

Hi, Will

Have you received this email? Which case do you prefer? Supposing we don't
consider patch 2, then according to the test results maybe we should choose
case 3.
Because John Garry wants patch 2 to cover the non-MSI branch also, this may
take some time. So can you decide and apply patch 1 first?

>> Agreed - sorry if my wording was unclear, but that suggestion was only for
>> the possibility of it proving genuinely worthwhile to build the command
>> outside the lock. Since that isn't the case, I definitely prefer the
>> simpler approach too.
>
> Yes, I mean replace arm_smmu_cmdq_build_cmd with
> arm_smmu_cmdq_build_sync_msi_cmd to build the command inside the lock.
> 	spin_lock_irqsave(&smmu->cmdq.lock, flags);
> +	ent.sync.msidata = ++smmu->sync_nr;
> +	arm_smmu_cmdq_build_sync_msi_cmd(cmd, &ent);
> 	arm_smmu_cmdq_insert_cmd(smmu, cmd);
> 	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
>
> The assembly code showed me that it's very nice.
> [...]

--
Thanks!
BestRegards
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 1d64710..3f5c236 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -566,7 +566,7 @@ struct arm_smmu_device {
 
 	int				gerr_irq;
 	int				combined_irq;
-	atomic_t			sync_nr;
+	u32				sync_nr;
 
 	unsigned long			ias; /* IPA */
 	unsigned long			oas; /* PA */
@@ -775,6 +775,11 @@ static int queue_remove_raw(struct arm_smmu_queue *q, u64 *ent)
 	return 0;
 }
 
+static inline void arm_smmu_cmdq_sync_set_msidata(u64 *cmd, u32 msidata)
+{
+	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA, msidata);
+}
+
 /* High-level queue accessors */
 static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 {
@@ -836,7 +841,6 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
 		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
 		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
-		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA, ent->sync.msidata);
 		cmd[1] |= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
 		break;
 	default:
@@ -947,7 +951,6 @@ static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
 	struct arm_smmu_cmdq_ent ent = {
 		.opcode = CMDQ_OP_CMD_SYNC,
 		.sync	= {
-			.msidata = atomic_inc_return_relaxed(&smmu->sync_nr),
 			.msiaddr = virt_to_phys(&smmu->sync_count),
 		},
 	};
@@ -955,6 +958,8 @@ static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
 	arm_smmu_cmdq_build_cmd(cmd, &ent);
 
 	spin_lock_irqsave(&smmu->cmdq.lock, flags);
+	ent.sync.msidata = ++smmu->sync_nr;
+	arm_smmu_cmdq_sync_set_msidata(cmd, ent.sync.msidata);
 	arm_smmu_cmdq_insert_cmd(smmu, cmd);
 	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
 
@@ -2179,7 +2184,6 @@ static int arm_smmu_init_structures(struct arm_smmu_device *smmu)
 {
 	int ret;
 
-	atomic_set(&smmu->sync_nr, 0);
 	ret = arm_smmu_init_queues(smmu);
 	if (ret)
 		return ret;
The condition "(int)(VAL - sync_idx) >= 0" to break loop in function __arm_smmu_sync_poll_msi requires that sync_idx must be increased monotonously according to the sequence of the CMDs in the cmdq. But ".msidata = atomic_inc_return_relaxed(&smmu->sync_nr)" is not protected by spinlock, so the following scenarios may appear: cpu0 cpu1 msidata=0 msidata=1 insert cmd1 insert cmd0 smmu execute cmd1 smmu execute cmd0 poll timeout, because msidata=1 is overridden by cmd0, that means VAL=0, sync_idx=1. This is not a functional problem, just make the caller wait for a long time until TIMEOUT. It's rare to happen, because any other CMD_SYNCs during the waiting period will break it. Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> --- drivers/iommu/arm-smmu-v3.c | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) -- 1.8.3