Message ID | 20171123183210.12045-8-julien.grall@linaro.org |
---|---|
State | Superseded |
Series | xen/arm: Stage-2 handling cleanup |
On 23/11/17 18:32, Julien Grall wrote: > This new function will be used in a follow-up patch to copy data to the guest > using the IPA (aka guest physical address) and then clean the cache. > > Signed-off-by: Julien Grall <julien.grall@linaro.org> > --- > xen/arch/arm/guestcopy.c | 10 ++++++++++ > xen/include/asm-arm/guest_access.h | 6 ++++++ > 2 files changed, 16 insertions(+) > > diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c > index be53bee559..7958663970 100644 > --- a/xen/arch/arm/guestcopy.c > +++ b/xen/arch/arm/guestcopy.c > @@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le > COPY_from_guest | COPY_linear); > } > > +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d, > + paddr_t gpa, > + void *buf, > + unsigned int len) > +{ > + /* P2M is shared between all vCPUs, so the vCPU used does not matter. */ Be very careful with this line of thinking. It only works after DOMCTL_max_vcpus has succeeded, and before that point, it is a latent NULL pointer dereference. Also, what about vcpus configured with alternative views? ~Andrew > + return copy_guest(buf, gpa, len, d->vcpu[0], > + COPY_to_guest | COPY_ipa | COPY_flush_dcache); > +} > + > int access_guest_memory_by_ipa(struct domain *d, paddr_t gpa, void *buf, > uint32_t size, bool is_write) > { > diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h > index 6796801cfe..224d2a033b 100644 > --- a/xen/include/asm-arm/guest_access.h > +++ b/xen/include/asm-arm/guest_access.h > @@ -11,6 +11,12 @@ unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from, > unsigned long raw_copy_from_guest(void *to, const void *from, unsigned len); > unsigned long raw_clear_guest(void *to, unsigned len); > > +/* Copy data to guest physical address, then clean the region. */ > +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d, > + paddr_t phys, > + void *buf, > + unsigned int len); > + > int access_guest_memory_by_ipa(struct domain *d, paddr_t ipa, void *buf, > uint32_t size, bool is_write); >
Hi Andrew, On 23/11/17 18:49, Andrew Cooper wrote: > On 23/11/17 18:32, Julien Grall wrote: >> This new function will be used in a follow-up patch to copy data to the guest >> using the IPA (aka guest physical address) and then clean the cache. >> >> Signed-off-by: Julien Grall <julien.grall@linaro.org> >> --- >> xen/arch/arm/guestcopy.c | 10 ++++++++++ >> xen/include/asm-arm/guest_access.h | 6 ++++++ >> 2 files changed, 16 insertions(+) >> >> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c >> index be53bee559..7958663970 100644 >> --- a/xen/arch/arm/guestcopy.c >> +++ b/xen/arch/arm/guestcopy.c >> @@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le >> COPY_from_guest | COPY_linear); >> } >> >> +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d, >> + paddr_t gpa, >> + void *buf, >> + unsigned int len) >> +{ >> + /* P2M is shared between all vCPUs, so the vCPU used does not matter. */ > > Be very careful with this line of thinking. It is only works after > DOMCTL_max_vcpus has succeeded, and before that point, it is a latent > NULL pointer dereference. I really don't expect that function been used before DOMCT_max_vcpus is set. It is only used for hardware emulation or Xen loading image into the hardware domain memory. I could add a check d->vcpus to be safe. > > Also, what about vcpus configured with alternative views? It is not important because the underlying call is get_page_from_gfn that does not care about the alternative view (that function take a domain in parameter). I can update the comment. Cheers,
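As an aside, a minimal sketch in C of the d->vcpu check mentioned above. The exact shape of the guard and the convention of reporting the whole buffer as uncopied on failure are illustrative assumptions, not part of the posted patch:

    unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
                                                  paddr_t gpa,
                                                  void *buf,
                                                  unsigned int len)
    {
        /*
         * Guard against callers running before DOMCTL_max_vcpus has
         * allocated the vCPU array (illustrative; report the whole
         * buffer as not copied, as the other copy helpers do).
         */
        if ( d->max_vcpus == 0 || d->vcpu == NULL || d->vcpu[0] == NULL )
            return len;

        /* The P2M is per-domain, so any vCPU of the domain will do. */
        return copy_guest(buf, gpa, len, d->vcpu[0],
                          COPY_to_guest | COPY_ipa | COPY_flush_dcache);
    }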
On Thu, 23 Nov 2017, Julien Grall wrote: > Hi Andrew, > > On 23/11/17 18:49, Andrew Cooper wrote: > > On 23/11/17 18:32, Julien Grall wrote: > > > This new function will be used in a follow-up patch to copy data to the > > > guest > > > using the IPA (aka guest physical address) and then clean the cache. > > > > > > Signed-off-by: Julien Grall <julien.grall@linaro.org> > > > --- > > > xen/arch/arm/guestcopy.c | 10 ++++++++++ > > > xen/include/asm-arm/guest_access.h | 6 ++++++ > > > 2 files changed, 16 insertions(+) > > > > > > diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c > > > index be53bee559..7958663970 100644 > > > --- a/xen/arch/arm/guestcopy.c > > > +++ b/xen/arch/arm/guestcopy.c > > > @@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to, const > > > void __user *from, unsigned le > > > COPY_from_guest | COPY_linear); > > > } > > > +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d, > > > + paddr_t gpa, > > > + void *buf, > > > + unsigned int len) > > > +{ > > > + /* P2M is shared between all vCPUs, so the vCPU used does not matter. > > > */ > > > > Be very careful with this line of thinking. It is only works after > > DOMCTL_max_vcpus has succeeded, and before that point, it is a latent > > NULL pointer dereference. > > I really don't expect that function been used before DOMCT_max_vcpus is set. > It is only used for hardware emulation or Xen loading image into the hardware > domain memory. I could add a check d->vcpus to be safe. > > > > > Also, what about vcpus configured with alternative views? > > It is not important because the underlying call is get_page_from_gfn that does > not care about the alternative view (that function take a domain in > parameter). I can update the comment. Since this is a new function, would it make sense to take a struct vcpu* as parameter, instead of a struct domain* ?
Hi Stefano, On 12/06/2017 01:26 AM, Stefano Stabellini wrote: > On Thu, 23 Nov 2017, Julien Grall wrote: >> Hi Andrew, >> >> On 23/11/17 18:49, Andrew Cooper wrote: >>> On 23/11/17 18:32, Julien Grall wrote: >>>> This new function will be used in a follow-up patch to copy data to the >>>> guest >>>> using the IPA (aka guest physical address) and then clean the cache. >>>> >>>> Signed-off-by: Julien Grall <julien.grall@linaro.org> >>>> --- >>>> xen/arch/arm/guestcopy.c | 10 ++++++++++ >>>> xen/include/asm-arm/guest_access.h | 6 ++++++ >>>> 2 files changed, 16 insertions(+) >>>> >>>> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c >>>> index be53bee559..7958663970 100644 >>>> --- a/xen/arch/arm/guestcopy.c >>>> +++ b/xen/arch/arm/guestcopy.c >>>> @@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to, const >>>> void __user *from, unsigned le >>>> COPY_from_guest | COPY_linear); >>>> } >>>> +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d, >>>> + paddr_t gpa, >>>> + void *buf, >>>> + unsigned int len) >>>> +{ >>>> + /* P2M is shared between all vCPUs, so the vCPU used does not matter. >>>> */ >>> >>> Be very careful with this line of thinking. It is only works after >>> DOMCTL_max_vcpus has succeeded, and before that point, it is a latent >>> NULL pointer dereference. >> >> I really don't expect that function been used before DOMCT_max_vcpus is set. >> It is only used for hardware emulation or Xen loading image into the hardware >> domain memory. I could add a check d->vcpus to be safe. >> >>> >>> Also, what about vcpus configured with alternative views? >> >> It is not important because the underlying call is get_page_from_gfn that does >> not care about the alternative view (that function take a domain in >> parameter). I can update the comment. > > Since this is a new function, would it make sense to take a struct > vcpu* as parameter, instead of a struct domain* ? Well, I suggested this patch this way because likely everyone will use with d->vcpus[0]. And then you would have to wonder why d->vcpus[0] and not d->vcpus[1]... Cheers,
Hi, On 06/12/17 12:27, Julien Grall wrote: > On 12/06/2017 01:26 AM, Stefano Stabellini wrote: >> On Thu, 23 Nov 2017, Julien Grall wrote: >>> Hi Andrew, >>> >>> On 23/11/17 18:49, Andrew Cooper wrote: >>>> On 23/11/17 18:32, Julien Grall wrote: >>>>> This new function will be used in a follow-up patch to copy data to >>>>> the >>>>> guest >>>>> using the IPA (aka guest physical address) and then clean the cache. >>>>> >>>>> Signed-off-by: Julien Grall <julien.grall@linaro.org> >>>>> --- >>>>> xen/arch/arm/guestcopy.c | 10 ++++++++++ >>>>> xen/include/asm-arm/guest_access.h | 6 ++++++ >>>>> 2 files changed, 16 insertions(+) >>>>> >>>>> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c >>>>> index be53bee559..7958663970 100644 >>>>> --- a/xen/arch/arm/guestcopy.c >>>>> +++ b/xen/arch/arm/guestcopy.c >>>>> @@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to, const >>>>> void __user *from, unsigned le >>>>> COPY_from_guest | COPY_linear); >>>>> } >>>>> +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d, >>>>> + paddr_t gpa, >>>>> + void *buf, >>>>> + unsigned int len) >>>>> +{ >>>>> + /* P2M is shared between all vCPUs, so the vCPU used does not >>>>> matter. >>>>> */ >>>> >>>> Be very careful with this line of thinking. It is only works after >>>> DOMCTL_max_vcpus has succeeded, and before that point, it is a latent >>>> NULL pointer dereference. >>> >>> I really don't expect that function been used before DOMCT_max_vcpus >>> is set. >>> It is only used for hardware emulation or Xen loading image into the >>> hardware >>> domain memory. I could add a check d->vcpus to be safe. >>> >>>> >>>> Also, what about vcpus configured with alternative views? >>> >>> It is not important because the underlying call is get_page_from_gfn >>> that does >>> not care about the alternative view (that function take a domain in >>> parameter). I can update the comment. >> Since this is a new function, would it make sense to take a struct >> vcpu* as parameter, instead of a struct domain* ? > > Well, I suggested this patch this way because likely everyone will use > with d->vcpus[0]. And then you would have to wonder why d->vcpus[0] and > not d->vcpus[1]... Thinking a bit more to this, it might be better/safer to pass either a domain or a vCPU to copy_guest. I can see 2 solutions: 1# Introduce a union that use the same parameter: union { struct { struct domain *d; } ipa; struct { struct vcpu *v; } gva; } The structure here would be to ensure that it is clear that only domain (resp. vcpu) should be used with ipa (resp. gva). 2# Have 2 parameters, vcpu and domain. Any opinions? Cheers,
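For illustration, option 1 might look roughly like the sketch below. The type name copy_info_t, the initialiser macros and the field layout are assumptions made here to make the idea concrete; they are not taken from the posted series:

    /*
     * Translation context for copy_guest(): a domain is enough to walk
     * the P2M for an IPA, while a vCPU is needed to resolve a guest
     * virtual address.
     */
    typedef union
    {
        struct
        {
            struct domain *d;   /* Used with an IPA (guest physical address). */
        } ipa;
        struct
        {
            struct vcpu *v;     /* Used with a GVA (guest virtual address). */
        } gva;
    } copy_info_t;

    #define COPY_INFO_IPA(_d) ((copy_info_t) { .ipa = { .d = (_d) } })
    #define COPY_INFO_GVA(_v) ((copy_info_t) { .gva = { .v = (_v) } })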
On Fri, 8 Dec 2017, Julien Grall wrote: > On 06/12/17 12:27, Julien Grall wrote: > > On 12/06/2017 01:26 AM, Stefano Stabellini wrote: > > > On Thu, 23 Nov 2017, Julien Grall wrote: > > > > Hi Andrew, > > > > > > > > On 23/11/17 18:49, Andrew Cooper wrote: > > > > > On 23/11/17 18:32, Julien Grall wrote: > > > > > > This new function will be used in a follow-up patch to copy data to > > > > > > the > > > > > > guest > > > > > > using the IPA (aka guest physical address) and then clean the cache. > > > > > > > > > > > > Signed-off-by: Julien Grall <julien.grall@linaro.org> > > > > > > --- > > > > > > xen/arch/arm/guestcopy.c | 10 ++++++++++ > > > > > > xen/include/asm-arm/guest_access.h | 6 ++++++ > > > > > > 2 files changed, 16 insertions(+) > > > > > > > > > > > > diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c > > > > > > index be53bee559..7958663970 100644 > > > > > > --- a/xen/arch/arm/guestcopy.c > > > > > > +++ b/xen/arch/arm/guestcopy.c > > > > > > @@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to, > > > > > > const > > > > > > void __user *from, unsigned le > > > > > > COPY_from_guest | COPY_linear); > > > > > > } > > > > > > +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d, > > > > > > + paddr_t gpa, > > > > > > + void *buf, > > > > > > + unsigned int len) > > > > > > +{ > > > > > > + /* P2M is shared between all vCPUs, so the vCPU used does not > > > > > > matter. > > > > > > */ > > > > > > > > > > Be very careful with this line of thinking. It is only works after > > > > > DOMCTL_max_vcpus has succeeded, and before that point, it is a latent > > > > > NULL pointer dereference. > > > > > > > > I really don't expect that function been used before DOMCT_max_vcpus is > > > > set. > > > > It is only used for hardware emulation or Xen loading image into the > > > > hardware > > > > domain memory. I could add a check d->vcpus to be safe. > > > > > > > > > > > > > > Also, what about vcpus configured with alternative views? > > > > > > > > It is not important because the underlying call is get_page_from_gfn > > > > that does > > > > not care about the alternative view (that function take a domain in > > > > parameter). I can update the comment. > > > Since this is a new function, would it make sense to take a struct > > > vcpu* as parameter, instead of a struct domain* ? > > > > Well, I suggested this patch this way because likely everyone will use with > > d->vcpus[0]. And then you would have to wonder why d->vcpus[0] and not > > d->vcpus[1]... > > Thinking a bit more to this, it might be better/safer to pass either a domain > or a vCPU to copy_guest. I can see 2 solutions: > 1# Introduce a union that use the same parameter: > union > { > struct > { > struct domain *d; > } ipa; > struct > { > struct vcpu *v; > } gva; > } > The structure here would be to ensure that it is clear that only > domain (resp. vcpu) should be used with ipa (resp. gva). > > 2# Have 2 parameters, vcpu and domain. > > Any opinions? I think that would be clearer. You could also add a paddr_t/vaddr_t correspondingly inside the union maybe.
On 8 Dec 2017 22:26, "Stefano Stabellini" <sstabellini@kernel.org> wrote: On Fri, 8 Dec 2017, Julien Grall wrote: > On 06/12/17 12:27, Julien Grall wrote: > > On 12/06/2017 01:26 AM, Stefano Stabellini wrote: > > > On Thu, 23 Nov 2017, Julien Grall wrote: > > > > Hi Andrew, > > > > > > > > On 23/11/17 18:49, Andrew Cooper wrote: > > > > > On 23/11/17 18:32, Julien Grall wrote: > > > > > > This new function will be used in a follow-up patch to copy data to > > > > > > the > > > > > > guest > > > > > > using the IPA (aka guest physical address) and then clean the cache. > > > > > > > > > > > > Signed-off-by: Julien Grall <julien.grall@linaro.org> > > > > > > --- > > > > > > xen/arch/arm/guestcopy.c | 10 ++++++++++ > > > > > > xen/include/asm-arm/guest_access.h | 6 ++++++ > > > > > > 2 files changed, 16 insertions(+) > > > > > > > > > > > > diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c > > > > > > index be53bee559..7958663970 100644 > > > > > > --- a/xen/arch/arm/guestcopy.c > > > > > > +++ b/xen/arch/arm/guestcopy.c > > > > > > @@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to, > > > > > > const > > > > > > void __user *from, unsigned le > > > > > > COPY_from_guest | COPY_linear); > > > > > > } > > > > > > +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d, > > > > > > + paddr_t gpa, > > > > > > + void *buf, > > > > > > + unsigned int len) > > > > > > +{ > > > > > > + /* P2M is shared between all vCPUs, so the vCPU used does not > > > > > > matter. > > > > > > */ > > > > > > > > > > Be very careful with this line of thinking. It is only works after > > > > > DOMCTL_max_vcpus has succeeded, and before that point, it is a latent > > > > > NULL pointer dereference. > > > > > > > > I really don't expect that function been used before DOMCT_max_vcpus is > > > > set. > > > > It is only used for hardware emulation or Xen loading image into the > > > > hardware > > > > domain memory. I could add a check d->vcpus to be safe. > > > > > > > > > > > > > > Also, what about vcpus configured with alternative views? > > > > > > > > It is not important because the underlying call is get_page_from_gfn > > > > that does > > > > not care about the alternative view (that function take a domain in > > > > parameter). I can update the comment. > > > Since this is a new function, would it make sense to take a struct > > > vcpu* as parameter, instead of a struct domain* ? > > > > Well, I suggested this patch this way because likely everyone will use with > > d->vcpus[0]. And then you would have to wonder why d->vcpus[0] and not > > d->vcpus[1]... > > Thinking a bit more to this, it might be better/safer to pass either a domain > or a vCPU to copy_guest. I can see 2 solutions: > 1# Introduce a union that use the same parameter: > union > { > struct > { > struct domain *d; > } ipa; > struct > { > struct vcpu *v; > } gva; > } > The structure here would be to ensure that it is clear that only > domain (resp. vcpu) should be used with ipa (resp. gva). > > 2# Have 2 parameters, vcpu and domain. > > Any opinions? I think that would be clearer. You could also add a paddr_t/vaddr_t correspondingly inside the union maybe. Well, you will have nameclash happening I think. And vaddr_t and paddr_t are confusing because you don't know which address you speak about (guest vs hypervisor). At least ipa/gpa, gva are known naming. 
Cheers,
On Fri, 8 Dec 2017, Julien Grall wrote: > On 8 Dec 2017 22:26, "Stefano Stabellini" <sstabellini@kernel.org> wrote: > On Fri, 8 Dec 2017, Julien Grall wrote: > > On 06/12/17 12:27, Julien Grall wrote: > > > On 12/06/2017 01:26 AM, Stefano Stabellini wrote: > > > > On Thu, 23 Nov 2017, Julien Grall wrote: > > > > > Hi Andrew, > > > > > > > > > > On 23/11/17 18:49, Andrew Cooper wrote: > > > > > > On 23/11/17 18:32, Julien Grall wrote: > > > > > > > This new function will be used in a follow-up patch to copy data to > > > > > > > the > > > > > > > guest > > > > > > > using the IPA (aka guest physical address) and then clean the cache. > > > > > > > > > > > > > > Signed-off-by: Julien Grall <julien.grall@linaro.org> > > > > > > > --- > > > > > > > xen/arch/arm/guestcopy.c | 10 ++++++++++ > > > > > > > xen/include/asm-arm/guest_access.h | 6 ++++++ > > > > > > > 2 files changed, 16 insertions(+) > > > > > > > > > > > > > > diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c > > > > > > > index be53bee559..7958663970 100644 > > > > > > > --- a/xen/arch/arm/guestcopy.c > > > > > > > +++ b/xen/arch/arm/guestcopy.c > > > > > > > @@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to, > > > > > > > const > > > > > > > void __user *from, unsigned le > > > > > > > COPY_from_guest | COPY_linear); > > > > > > > } > > > > > > > +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d, > > > > > > > + paddr_t gpa, > > > > > > > + void *buf, > > > > > > > + unsigned int len) > > > > > > > +{ > > > > > > > + /* P2M is shared between all vCPUs, so the vCPU used does not > > > > > > > matter. > > > > > > > */ > > > > > > > > > > > > Be very careful with this line of thinking. It is only works after > > > > > > DOMCTL_max_vcpus has succeeded, and before that point, it is a latent > > > > > > NULL pointer dereference. > > > > > > > > > > I really don't expect that function been used before DOMCT_max_vcpus is > > > > > set. > > > > > It is only used for hardware emulation or Xen loading image into the > > > > > hardware > > > > > domain memory. I could add a check d->vcpus to be safe. > > > > > > > > > > > > > > > > > Also, what about vcpus configured with alternative views? > > > > > > > > > > It is not important because the underlying call is get_page_from_gfn > > > > > that does > > > > > not care about the alternative view (that function take a domain in > > > > > parameter). I can update the comment. > > > > Since this is a new function, would it make sense to take a struct > > > > vcpu* as parameter, instead of a struct domain* ? > > > > > > Well, I suggested this patch this way because likely everyone will use with > > > d->vcpus[0]. And then you would have to wonder why d->vcpus[0] and not > > > d->vcpus[1]... > > > > Thinking a bit more to this, it might be better/safer to pass either a domain > > or a vCPU to copy_guest. I can see 2 solutions: > > 1# Introduce a union that use the same parameter: > > union > > { > > struct > > { > > struct domain *d; > > } ipa; > > struct > > { > > struct vcpu *v; > > } gva; > > } > > The structure here would be to ensure that it is clear that only > > domain (resp. vcpu) should be used with ipa (resp. gva). > > > > 2# Have 2 parameters, vcpu and domain. > > > > Any opinions? > > I think that would be clearer. You could also add a paddr_t/vaddr_t > correspondingly inside the union maybe. > > > Well, you will have nameclash happening I think. 
> > > And vaddr_t and paddr_t are confusing because you don't know which address you speak about (guest vs hypervisor). At least ipa/gpa, gva are known naming. That's not what I meant. ipa and gva are good names. I was suggesting to put an additional address field inside the union to avoid the issue with paddr_t and vaddr_t discussed elsewhere (alpine.DEB.2.10.1712081313070.8052@sstabellini-ThinkPad-X260). I am happy either way, it was just a suggestion.
Hi Stefano, On 12/08/2017 10:43 PM, Stefano Stabellini wrote: > On Fri, 8 Dec 2017, Julien Grall wrote: >> On 8 Dec 2017 22:26, "Stefano Stabellini" <sstabellini@kernel.org> wrote: >> On Fri, 8 Dec 2017, Julien Grall wrote: >> > On 06/12/17 12:27, Julien Grall wrote: >> > > On 12/06/2017 01:26 AM, Stefano Stabellini wrote: >> > > > On Thu, 23 Nov 2017, Julien Grall wrote: >> > > > > Hi Andrew, >> > > > > >> > > > > On 23/11/17 18:49, Andrew Cooper wrote: >> > > > > > On 23/11/17 18:32, Julien Grall wrote: >> > > > > > > This new function will be used in a follow-up patch to copy data to >> > > > > > > the >> > > > > > > guest >> > > > > > > using the IPA (aka guest physical address) and then clean the cache. >> > > > > > > >> > > > > > > Signed-off-by: Julien Grall <julien.grall@linaro.org> >> > > > > > > --- >> > > > > > > xen/arch/arm/guestcopy.c | 10 ++++++++++ >> > > > > > > xen/include/asm-arm/guest_access.h | 6 ++++++ >> > > > > > > 2 files changed, 16 insertions(+) >> > > > > > > >> > > > > > > diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c >> > > > > > > index be53bee559..7958663970 100644 >> > > > > > > --- a/xen/arch/arm/guestcopy.c >> > > > > > > +++ b/xen/arch/arm/guestcopy.c >> > > > > > > @@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to, >> > > > > > > const >> > > > > > > void __user *from, unsigned le >> > > > > > > COPY_from_guest | COPY_linear); >> > > > > > > } >> > > > > > > +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d, >> > > > > > > + paddr_t gpa, >> > > > > > > + void *buf, >> > > > > > > + unsigned int len) >> > > > > > > +{ >> > > > > > > + /* P2M is shared between all vCPUs, so the vCPU used does not >> > > > > > > matter. >> > > > > > > */ >> > > > > > >> > > > > > Be very careful with this line of thinking. It is only works after >> > > > > > DOMCTL_max_vcpus has succeeded, and before that point, it is a latent >> > > > > > NULL pointer dereference. >> > > > > >> > > > > I really don't expect that function been used before DOMCT_max_vcpus is >> > > > > set. >> > > > > It is only used for hardware emulation or Xen loading image into the >> > > > > hardware >> > > > > domain memory. I could add a check d->vcpus to be safe. >> > > > > >> > > > > > >> > > > > > Also, what about vcpus configured with alternative views? >> > > > > >> > > > > It is not important because the underlying call is get_page_from_gfn >> > > > > that does >> > > > > not care about the alternative view (that function take a domain in >> > > > > parameter). I can update the comment. >> > > > Since this is a new function, would it make sense to take a struct >> > > > vcpu* as parameter, instead of a struct domain* ? >> > > >> > > Well, I suggested this patch this way because likely everyone will use with >> > > d->vcpus[0]. And then you would have to wonder why d->vcpus[0] and not >> > > d->vcpus[1]... >> > >> > Thinking a bit more to this, it might be better/safer to pass either a domain >> > or a vCPU to copy_guest. I can see 2 solutions: >> > 1# Introduce a union that use the same parameter: >> > union >> > { >> > struct >> > { >> > struct domain *d; >> > } ipa; >> > struct >> > { >> > struct vcpu *v; >> > } gva; >> > } >> > The structure here would be to ensure that it is clear that only >> > domain (resp. vcpu) should be used with ipa (resp. gva). >> > >> > 2# Have 2 parameters, vcpu and domain. >> > >> > Any opinions? >> >> I think that would be clearer. You could also add a paddr_t/vaddr_t >> correspondingly inside the union maybe. 
>> >> >> Well, you will have nameclash happening I think. >> >> >> And vaddr_t and paddr_t are confusing because you don't know which address you speak about (guest vs hypervisor). At least ipa/gpa, gva are known naming. > > That's not what I meant. ipa and gva are good names. > > I was suggesting to put an additional address field inside the union to > avoid the issue with paddr_t and vaddr_t discussed elsewhere > (alpine.DEB.2.10.1712081313070.8052@sstabellini-ThinkPad-X260). > > I am happy either way, it was just a suggestion. Actually looking at it. It will not be that handy to have the paddr_t/vaddr_t inside the union. Mostly because of the common code using the address parameter. I could add another union for the address, but I don't much like it. As you are happy with either way, I will use uint64_t. Cheers,
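Putting the two sub-threads together, the resulting internals might look like the sketch below, reusing the hypothetical copy_info_t/COPY_INFO_IPA names from the earlier sketch. This is only one plausible reading of the discussion; the actual follow-up series may differ:

    /*
     * The guest address is passed as a plain uint64_t so the same
     * parameter can carry either an IPA or a guest virtual address,
     * with the flags saying which interpretation applies.
     */
    static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
                                    copy_info_t info, unsigned int flags);

    unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
                                                  paddr_t gpa,
                                                  void *buf,
                                                  unsigned int len)
    {
        /* Only the domain is needed to translate an IPA through the P2M. */
        return copy_guest(buf, gpa, len, COPY_INFO_IPA(d),
                          COPY_to_guest | COPY_ipa | COPY_flush_dcache);
    }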
On Tue, 12 Dec 2017, Julien Grall wrote: > Hi Stefano, > > On 12/08/2017 10:43 PM, Stefano Stabellini wrote: > > On Fri, 8 Dec 2017, Julien Grall wrote: > > > On 8 Dec 2017 22:26, "Stefano Stabellini" <sstabellini@kernel.org> wrote: > > > On Fri, 8 Dec 2017, Julien Grall wrote: > > > > On 06/12/17 12:27, Julien Grall wrote: > > > > > On 12/06/2017 01:26 AM, Stefano Stabellini wrote: > > > > > > On Thu, 23 Nov 2017, Julien Grall wrote: > > > > > > > Hi Andrew, > > > > > > > > > > > > > > On 23/11/17 18:49, Andrew Cooper wrote: > > > > > > > > On 23/11/17 18:32, Julien Grall wrote: > > > > > > > > > This new function will be used in a follow-up patch to > > > copy data to > > > > > > > > > the > > > > > > > > > guest > > > > > > > > > using the IPA (aka guest physical address) and then > > > clean the cache. > > > > > > > > > > > > > > > > > > Signed-off-by: Julien Grall <julien.grall@linaro.org> > > > > > > > > > --- > > > > > > > > > xen/arch/arm/guestcopy.c | 10 ++++++++++ > > > > > > > > > xen/include/asm-arm/guest_access.h | 6 ++++++ > > > > > > > > > 2 files changed, 16 insertions(+) > > > > > > > > > > > > > > > > > > diff --git a/xen/arch/arm/guestcopy.c > > > b/xen/arch/arm/guestcopy.c > > > > > > > > > index be53bee559..7958663970 100644 > > > > > > > > > --- a/xen/arch/arm/guestcopy.c > > > > > > > > > +++ b/xen/arch/arm/guestcopy.c > > > > > > > > > @@ -110,6 +110,16 @@ unsigned long > > > raw_copy_from_guest(void *to, > > > > > > > > > const > > > > > > > > > void __user *from, unsigned le > > > > > > > > > COPY_from_guest | > > > COPY_linear); > > > > > > > > > } > > > > > > > > > +unsigned long > > > copy_to_guest_phys_flush_dcache(struct domain *d, > > > > > > > > > + paddr_t > > > gpa, > > > > > > > > > + void > > > *buf, > > > > > > > > > + unsigned > > > int len) > > > > > > > > > +{ > > > > > > > > > + /* P2M is shared between all vCPUs, so the vCPU > > > used does not > > > > > > > > > matter. > > > > > > > > > */ > > > > > > > > > > > > > > > > Be very careful with this line of thinking. It is only > > > works after > > > > > > > > DOMCTL_max_vcpus has succeeded, and before that point, it > > > is a latent > > > > > > > > NULL pointer dereference. > > > > > > > > > > > > > > I really don't expect that function been used before > > > DOMCT_max_vcpus is > > > > > > > set. > > > > > > > It is only used for hardware emulation or Xen loading image > > > into the > > > > > > > hardware > > > > > > > domain memory. I could add a check d->vcpus to be safe. > > > > > > > > > > > > > > > > > > > > > > > Also, what about vcpus configured with alternative views? > > > > > > > > > > > > > > It is not important because the underlying call is > > > get_page_from_gfn > > > > > > > that does > > > > > > > not care about the alternative view (that function take a > > > domain in > > > > > > > parameter). I can update the comment. > > > > > > Since this is a new function, would it make sense to take a > > > struct > > > > > > vcpu* as parameter, instead of a struct domain* ? > > > > > > > > > > Well, I suggested this patch this way because likely everyone > > > will use with > > > > > d->vcpus[0]. And then you would have to wonder why d->vcpus[0] > > > and not > > > > > d->vcpus[1]... > > > > > > > > Thinking a bit more to this, it might be better/safer to pass > > > either a domain > > > > or a vCPU to copy_guest. 
I can see 2 solutions: > > > > 1# Introduce a union that use the same parameter: > > > > union > > > > { > > > > struct > > > > { > > > > struct domain *d; > > > > } ipa; > > > > struct > > > > { > > > > struct vcpu *v; > > > > } gva; > > > > } > > > > The structure here would be to ensure that it is clear > > > that only > > > > domain (resp. vcpu) should be used with ipa (resp. gva). > > > > > > > > 2# Have 2 parameters, vcpu and domain. > > > > > > > > Any opinions? > > > > > > I think that would be clearer. You could also add a paddr_t/vaddr_t > > > correspondingly inside the union maybe. > > > > > > > > > Well, you will have nameclash happening I think. > > > > > > > > > And vaddr_t and paddr_t are confusing because you don't know which address > > > you speak about (guest vs hypervisor). At least ipa/gpa, gva are known > > > naming. > > > > That's not what I meant. ipa and gva are good names. > > > > I was suggesting to put an additional address field inside the union to > > avoid the issue with paddr_t and vaddr_t discussed elsewhere > > (alpine.DEB.2.10.1712081313070.8052@sstabellini-ThinkPad-X260). > > > > I am happy either way, it was just a suggestion. > > Actually looking at it. It will not be that handy to have the paddr_t/vaddr_t > inside the union. Mostly because of the common code using the address > parameter. > > I could add another union for the address, but I don't much like it. > As you are happy with either way, I will use uint64_t. Sounds good
diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index be53bee559..7958663970 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le
                       COPY_from_guest | COPY_linear);
 }
 
+unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
+                                              paddr_t gpa,
+                                              void *buf,
+                                              unsigned int len)
+{
+    /* P2M is shared between all vCPUs, so the vCPU used does not matter. */
+    return copy_guest(buf, gpa, len, d->vcpu[0],
+                      COPY_to_guest | COPY_ipa | COPY_flush_dcache);
+}
+
 int access_guest_memory_by_ipa(struct domain *d, paddr_t gpa, void *buf,
                                uint32_t size, bool is_write)
 {
diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
index 6796801cfe..224d2a033b 100644
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -11,6 +11,12 @@ unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
 unsigned long raw_copy_from_guest(void *to, const void *from, unsigned len);
 unsigned long raw_clear_guest(void *to, unsigned len);
 
+/* Copy data to guest physical address, then clean the region. */
+unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
+                                              paddr_t phys,
+                                              void *buf,
+                                              unsigned int len);
+
 int access_guest_memory_by_ipa(struct domain *d, paddr_t ipa, void *buf,
                                uint32_t size, bool is_write);
This new function will be used in a follow-up patch to copy data to the guest
using the IPA (aka guest physical address) and then clean the cache.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/guestcopy.c           | 10 ++++++++++
 xen/include/asm-arm/guest_access.h |  6 ++++++
 2 files changed, 16 insertions(+)
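For context, a minimal sketch of how a caller might use the new helper once the patch is applied. The helper name, its return convention and the flags come from the patch above; the wrapper load_blob, its arguments and the use of -EFAULT are illustrative assumptions:

    #include <xen/errno.h>
    #include <xen/sched.h>
    #include <asm/guest_access.h>

    /*
     * Copy a blob into the guest at a known guest physical address and
     * clean the dcache so the data is visible even before the guest
     * enables its MMU and caches.
     */
    static int load_blob(struct domain *d, paddr_t dst_gpa,
                         void *blob, unsigned int size)
    {
        unsigned long left;

        left = copy_to_guest_phys_flush_dcache(d, dst_gpa, blob, size);

        /* A non-zero return is the number of bytes that were not copied. */
        return left ? -EFAULT : 0;
    }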