Message ID: 1464670986-10256-4-git-send-email-zhaoshenglong@huawei.com
State: New
On 06/06/16 20:16, Wei Liu wrote:
> On Mon, Jun 06, 2016 at 11:00:37AM +0100, Stefano Stabellini wrote:
>> On Tue, 31 May 2016, Shannon Zhao wrote:
>>> From: Shannon Zhao <shannon.zhao@linaro.org>
>>>
>>> Currently it only needs ACPI table RSDP, XSDT, GTDT, MADT, FADT, DSDT
>>> for ARM VM. So only add placeholders for them here.
>>>
>>> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>>
>> If we are going to make this ARM only, then maybe we should consider
>> moving these structs to an ARM-specific header?
>>
> Agreed. Or at least have some ifdefs around the fields.

Ok, will change. Thanks.
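For illustration only, a minimal sketch of what "ifdefs around the fields" could look like; the architecture guard macros below are an assumption, not something from the posted series (the fields could equally move to an ARM-specific header instead):

#include <stddef.h>

/* Sketch only: compile the ACPI placeholder for ARM guests alone. */
#if defined(__aarch64__) || defined(__arm__)
struct acpitable {
    void  *table;   /* buffer holding one ACPI table */
    size_t size;    /* size of that table in bytes */
};
#endif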
Hello Shannon,

On 31/05/16 06:02, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
>
> Currently it only needs ACPI table RSDP, XSDT, GTDT, MADT, FADT, DSDT
> for ARM VM. So only add placeholders for them here.
>
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>   tools/libxc/include/xc_dom.h | 17 +++++++++++++++++
>   1 file changed, 17 insertions(+)
>
> diff --git a/tools/libxc/include/xc_dom.h b/tools/libxc/include/xc_dom.h
> index 6cb10c4..0fe54dd 100644
> --- a/tools/libxc/include/xc_dom.h
> +++ b/tools/libxc/include/xc_dom.h
> @@ -56,6 +56,20 @@ struct xc_dom_phys {
>       xen_pfn_t count;
>   };
>
> +struct acpitable {
> +    void *table;
> +    size_t size;
> +};
> +
> +struct acpitable_blob {
> +    struct acpitable rsdp;
> +    struct acpitable xsdt;
> +    struct acpitable gtdt;
> +    struct acpitable madt;
> +    struct acpitable fadt;
> +    struct acpitable dsdt;
> +};

Is there any particular reason to expose the list of the tables outside
of the building code?

I would provide a single buffer with all the tables inside. Similar to
what you did for building the tables in the hypervisor.

> +
>   struct xc_dom_image {
>       /* files */
>       void *kernel_blob;
> @@ -64,6 +78,8 @@ struct xc_dom_image {
>       size_t ramdisk_size;
>       void *devicetree_blob;
>       size_t devicetree_size;
> +    struct acpitable_blob *acpitable_blob;
> +    size_t acpitable_size;
>
>       size_t max_kernel_size;
>       size_t max_ramdisk_size;
> @@ -92,6 +108,7 @@ struct xc_dom_image {
>       struct xc_dom_seg p2m_seg;
>       struct xc_dom_seg pgtables_seg;
>       struct xc_dom_seg devicetree_seg;
> +    struct xc_dom_seg acpi_seg;
>       struct xc_dom_seg start_info_seg; /* HVMlite only */
>       xen_pfn_t start_info_pfn;
>       xen_pfn_t console_pfn;

Regards,
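As a rough sketch of the single-buffer alternative suggested above, assuming the builder lays out all tables contiguously itself; the names below are illustrative and not part of the posted patch:

#include <stddef.h>

/* Sketch only: libxc would then carry one opaque blob instead of six
 * per-table placeholders. */
struct acpi_blob_sketch {
    void  *acpi_blob;   /* RSDP, XSDT, GTDT, MADT, FADT and DSDT already
                         * laid out contiguously by the builder */
    size_t acpi_size;   /* total size of the blob in bytes */
};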
On 07/06/16 12:13, Shannon Zhao wrote:
>
> On 2016/6/7 19:05, Julien Grall wrote:
>> Hello Shannon,
>>
>> On 31/05/16 06:02, Shannon Zhao wrote:
>>> From: Shannon Zhao <shannon.zhao@linaro.org>
>>>
>>> Currently it only needs ACPI table RSDP, XSDT, GTDT, MADT, FADT, DSDT
>>> for ARM VM. So only add placeholders for them here.
>>>
>>> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>>> ---
>>>   tools/libxc/include/xc_dom.h | 17 +++++++++++++++++
>>>   1 file changed, 17 insertions(+)
>>>
>>> diff --git a/tools/libxc/include/xc_dom.h b/tools/libxc/include/xc_dom.h
>>> index 6cb10c4..0fe54dd 100644
>>> --- a/tools/libxc/include/xc_dom.h
>>> +++ b/tools/libxc/include/xc_dom.h
>>> @@ -56,6 +56,20 @@ struct xc_dom_phys {
>>>       xen_pfn_t count;
>>>   };
>>>
>>> +struct acpitable {
>>> +    void *table;
>>> +    size_t size;
>>> +};
>>> +
>>> +struct acpitable_blob {
>>> +    struct acpitable rsdp;
>>> +    struct acpitable xsdt;
>>> +    struct acpitable gtdt;
>>> +    struct acpitable madt;
>>> +    struct acpitable fadt;
>>> +    struct acpitable dsdt;
>>> +};
>>
>> Is there any particular reason to expose the list of the tables outside
>> of the building code?
>>
>> I would provide a single buffer with all the tables inside. Similar to
>> what you did for building the tables in the hypervisor.
> When it loads these tables to guest memory space, it needs to update the
> entries (pointing to other tables) of XSDT and also the
> xsdt_physical_address in RSDP.

Why can't we load the ACPI tables at a hardcoded place in memory (for
instance always at the beginning of the RAM)? This would make the code a
lot simpler and would avoid duplicating the same 5 lines for each table
in patch 14:

+    xsdt = (struct acpi_xsdt_descriptor *)acpitablemap;
+    memcpy(acpitablemap, dom->acpitable_blob->xsdt.table,
+           dom->acpitable_blob->xsdt.size);

[...]

+    start += dom->acpitable_blob->xsdt.size;
+    acpitablemap += dom->acpitable_blob->xsdt.size;

Regards,
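For comparison, a minimal sketch of how the per-table copy quoted above could collapse into a single copy if the tables were built with their internal pointers already resolved against a fixed guest physical base; the helper and its arguments are assumptions, not code from the series:

#include <stddef.h>
#include <string.h>

/* Sketch only: no per-table pointer fix-ups are needed once the blob is
 * pre-linked against a known base address. */
static void load_acpi_blob(void *acpitablemap, const void *acpi_blob,
                           size_t acpi_size)
{
    memcpy(acpitablemap, acpi_blob, acpi_size);
}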
On 07/06/16 12:35, Shannon Zhao wrote:
>
> On 2016/6/7 19:27, Julien Grall wrote:
>>
>> On 07/06/16 12:13, Shannon Zhao wrote:
>>>
>>> On 2016/6/7 19:05, Julien Grall wrote:
>>>> Hello Shannon,
>>>>
>>>> On 31/05/16 06:02, Shannon Zhao wrote:
>>>>> [...]
>>>>> +struct acpitable_blob {
>>>>> +    struct acpitable rsdp;
>>>>> +    struct acpitable xsdt;
>>>>> +    struct acpitable gtdt;
>>>>> +    struct acpitable madt;
>>>>> +    struct acpitable fadt;
>>>>> +    struct acpitable dsdt;
>>>>> +};
>>>>
>>>> Is there any particular reason to expose the list of the tables outside
>>>> of the building code?
>>>>
>>>> I would provide a single buffer with all the tables inside. Similar to
>>>> what you did for building the tables in the hypervisor.
>>> When it loads these tables to guest memory space, it needs to update the
>>> entries (pointing to other tables) of XSDT and also the
>>> xsdt_physical_address in RSDP.
>>
>> Why can't we load the ACPI tables at a hardcoded place in memory
>> (for instance always at the beginning of the RAM)?
>>
> I think it's more reasonable to let the code dynamically compute where
> it should put these tables, like what it does for the devicetree blob.
>
> And for a hardcoded place, can you make sure that kind of place is
> always available and not used by others?

Yes, the toolstack is in charge of the memory layout. So it can ensure
that no-one else is using this region.

My concern is, based on your patch #13, the ACPI tables are allocated
just after all the other modules. However, they cannot be relocated by
the kernel because they contain physical addresses. So they have to stay
in place for the whole life of the domain.

We should put them in a place where they will not impact the memory
allocation of the guest. The start of the RAM is a good place for that.

Regards,
Hello Shannon,

On 07/06/16 12:42, Julien Grall wrote:
>>>>> Is there any particular reason to expose the list of the tables
>>>>> outside of the building code?
>>>>>
>>>>> I would provide a single buffer with all the tables inside. Similar to
>>>>> what you did for building the tables in the hypervisor.
>>>> When it loads these tables to guest memory space, it needs to update
>>>> the entries (pointing to other tables) of XSDT and also the
>>>> xsdt_physical_address in RSDP.
>>>
>>> Why can't we load the ACPI tables at a hardcoded place in memory
>>> (for instance always at the beginning of the RAM)?
>>>
>> I think it's more reasonable to let the code dynamically compute where
>> it should put these tables, like what it does for the devicetree blob.
>>
>> And for a hardcoded place, can you make sure that kind of place is
>> always available and not used by others?
>
> Yes, the toolstack is in charge of the memory layout. So it can ensure
> that no-one else is using this region.
>
> My concern is, based on your patch #13, the ACPI tables are allocated
> just after all the other modules. However, they cannot be relocated by
> the kernel because they contain physical addresses. So they have to stay
> in place for the whole life of the domain.
>
> We should put them in a place where they will not impact the memory
> allocation of the guest. The start of the RAM is a good place for that.

I thought a bit more about this suggestion. If the ACPI tables are put at
the beginning of the RAM, a guest may not be able to use superpages.

I would suggest moving the ACPI tables out of the real RAM to avoid any
potential issue with the kernel memory allocation. For instance we could
define an IPA range to be used for ACPI (e.g. 0x20000000 - 0x20200000)
and expose it to the guest using the ACPI_NVS type in the UEFI memory map.

Any opinion on this?

Regards,
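A minimal sketch of what such a reservation could look like on the toolstack side, using the example range from this mail; the macro and struct names are illustrative, and only the EfiACPIMemoryNVS type value (10) comes from the UEFI specification:

#include <stdint.h>

/* Sketch only: a dedicated IPA window for the ACPI tables, outside guest
 * RAM, advertised as ACPI NVS in the UEFI memory map. */
#define GUEST_ACPI_BASE 0x20000000ULL   /* example range from this thread */
#define GUEST_ACPI_SIZE 0x00200000ULL   /* 2MB: 0x20000000 - 0x20200000 */

struct uefi_mem_desc_sketch {
    uint32_t type;              /* EfiACPIMemoryNVS == 10 in the UEFI spec */
    uint64_t physical_start;
    uint64_t number_of_pages;   /* counted in 4KB UEFI pages */
};

static const struct uefi_mem_desc_sketch guest_acpi_region = {
    .type            = 10,     /* EfiACPIMemoryNVS */
    .physical_start  = GUEST_ACPI_BASE,
    .number_of_pages = GUEST_ACPI_SIZE / 4096,
};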
On 07/06/16 12:59, Shannon Zhao wrote:
>
> On 2016/6/7 19:42, Julien Grall wrote:
>>
>> On 07/06/16 12:35, Shannon Zhao wrote:
>>>
>>> On 2016/6/7 19:27, Julien Grall wrote:
>>>> [...]
>>>> Why can't we load the ACPI tables at a hardcoded place in memory
>>>> (for instance always at the beginning of the RAM)?
>>>>
>>> I think it's more reasonable to let the code dynamically compute where
>>> it should put these tables, like what it does for the devicetree blob.
>>>
>>> And for a hardcoded place, can you make sure that kind of place is
>>> always available and not used by others?
>>
>> Yes, the toolstack is in charge of the memory layout. So it can ensure
>> that no-one else is using this region.
>>
>> My concern is, based on your patch #13, the ACPI tables are allocated
>> just after all the other modules. However, they cannot be relocated by
>> the kernel because they contain physical addresses. So they have to stay
>> in place for the whole life of the domain.
>>
> No, it doesn't work the way you say. UEFI will install and relocate the
> ACPI tables again for the guest kernel. That means the ACPI tables the
> guest gets are not at the original place where the toolstack put them.

Why does UEFI need to relocate the ACPI tables?

Anyway, you rely on the UEFI firmware to always relocate the tables.
What would happen if they decide to remove this behavior?

Regards,
On 07/06/16 13:32, Shannon Zhao wrote:
>
> On 2016/6/7 20:06, Julien Grall wrote:
>>
>> On 07/06/16 12:59, Shannon Zhao wrote:
>>> [...]
>>> No, it doesn't work the way you say. UEFI will install and relocate the
>>> ACPI tables again for the guest kernel. That means the ACPI tables the
>>> guest gets are not at the original place where the toolstack put them.
>>
>> Why does UEFI need to relocate the ACPI tables?
>>
> That's its own mechanism I think and UEFI wants all the memory maps
> under its control. And it's the same for QEMU (x86 and ARM) and also for
> Xen on x86. You can have a look at OvmfPkg/AcpiPlatformDxe/Xen.c, which
> is used for x86 Xen DomU.

UEFI cannot control the memory map because it is not capable of knowing
what a region is used for (such as magic pages, grant table...).

In the case of domU for x86, the ACPI tables are located at a predefined
physical address by hvmloader. I really don't see any reason that would
prevent us from doing the same on ARM.

If the UEFI firmware wants to relocate the tables, then fine. But we
should also think about any firmware which may not relocate the tables.

>
>> Anyway, you rely on the UEFI firmware to always relocate the tables.
>> What would happen if they decide to remove this behavior?
> Eh, I don't believe this would happen.

You seem very optimistic. Xen does not rely on a specific UEFI
implementation nor on how a guest will behave. We have to predict what
could happen and find the best way to do it.

Regards,
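As an illustration of the fixed-address scheme described above, a sketch of how a firmware could check for the RSDP at a known guest physical address; the address macro is only an example, and the "RSD PTR " signature is the one defined by the ACPI specification:

#include <string.h>

/* Sketch only: with a placement agreed between the toolstack and the
 * firmware, finding the RSDP is a signature check at a known location. */
#define EXAMPLE_ACPI_GPA 0x20000000UL   /* illustrative, not a Xen constant */

static int rsdp_present(const void *mapped_acpi_region)
{
    /* The RSDP starts with the 8-byte signature "RSD PTR " (ACPI spec). */
    return memcmp(mapped_acpi_region, "RSD PTR ", 8) == 0;
}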
On 16/06/16 07:53, Shannon Zhao wrote:
> Hi Julien,

Hi Shannon,

>
> On 2016/6/7 21:06, Julien Grall wrote:
>>> That's its own mechanism I think and UEFI wants all the memory maps
>>> under its control. And it's the same for QEMU (x86 and ARM) and also
>>> for Xen on x86. You can have a look at OvmfPkg/AcpiPlatformDxe/Xen.c,
>>> which is used for x86 Xen DomU.
>>
>> UEFI cannot control the memory map because it is not capable of knowing
>> what a region is used for (such as magic pages, grant table...).
>>
>> In the case of domU for x86, the ACPI tables are located at a predefined
>> physical address by hvmloader.
> The truth is that hvmloader puts the tables at address
> ACPI_PHYSICAL_ADDRESS(0x000EA020), but UEFI will install the tables and
> relocate them as well. You can see OvmfPkg/AcpiPlatformDxe/Xen.c in edk2
> source code.
>
> So I think it's the same with ARM, except x86 puts the tables at one
> fixed address while ARM dynamically computes the address and passes the
> address information to UEFI through the dts.
>
> Yeah, of course we can let ARM put the tables at one fixed address; then
> it doesn't need the ACPI module to pass the address, and UEFI can find
> the RSDP table at the fixed address. But what's the difference between
> the fixed and dynamic ones? Because both ways UEFI will install and
> relocate the tables.

You seem to think that OVMF is the only UEFI implementation available
and its behavior is set in stone. Can you quote the UEFI spec stating
the ACPI tables will always be relocated?

If not, putting the ACPI module right in the middle of memory is not the
wisest choice because the tables cannot be easily relocated like the
other modules (DT, Kernel, initramfs).

My suggestion to put the ACPI tables at a static address outside the RAM
is to leave the firmware the choice to do whatever it wants, without
impacting the performance of a kernel if it ever decides to keep the
tables in place.

However, this static address is from the toolstack point of view. The
firmware should not assume any static address because the guest memory
layout is not part of the ABI for Xen ARM (i.e. it can be modified
between two releases). This is to allow us to reshuffle the layout to
make space for new features.

>
>> I really don't see any reason that would
>> prevent us from doing the same on ARM.
>>
>> If the UEFI firmware wants to relocate the tables, then fine. But we
>> should also think about any firmware which may not relocate the tables.
> Can you name one firmware other than UEFI?

Sorry, I meant any other UEFI implementation (i.e. other than OVMF) may
not relocate the tables.

Cheers,
Hello Shannon,

On 16/06/16 11:45, Shannon Zhao wrote:
>>> On 2016/6/7 21:06, Julien Grall wrote:
>>>> [...]
>>> The truth is that hvmloader puts the tables at address
>>> ACPI_PHYSICAL_ADDRESS(0x000EA020), but UEFI will install the tables and
>>> relocate them as well. You can see OvmfPkg/AcpiPlatformDxe/Xen.c in edk2
>>> source code.
>>>
>>> So I think it's the same with ARM, except x86 puts the tables at one
>>> fixed address while ARM dynamically computes the address and passes the
>>> address information to UEFI through the dts.
>>>
>>> Yeah, of course we can let ARM put the tables at one fixed address; then
>>> it doesn't need the ACPI module to pass the address, and UEFI can find
>>> the RSDP table at the fixed address. But what's the difference between
>>> the fixed and dynamic ones? Because both ways UEFI will install and
>>> relocate the tables.
>>
>> You seem to think that OVMF is the only UEFI implementation available
>> and its behavior is set in stone. Can you quote the UEFI spec stating
>> the ACPI tables will always be relocated?
>>
>> If not, putting the ACPI module right in the middle of memory is not the
>> wisest choice because the tables cannot be easily relocated like the
>> other modules (DT, Kernel, initramfs).
>>
> The ACPI tables are put after the DT. How could you say the ACPI is in
> the middle while the DT is not?

Because the position of the DT in memory is defined by the Linux boot
ABI (see section 4.b in Documentation/arm/Booting).

>
>> My suggestion to put the ACPI tables at a static address outside the RAM
>> is to leave the firmware the choice to do whatever it wants, without
>> impacting the performance of a kernel if it ever decides to keep the
>> tables in place.
>>
>> However, this static address is from the toolstack point of view. The
>> firmware should not assume any static address because the guest memory
>> layout is not part of the ABI for Xen ARM (i.e. it can be modified
>> between two releases). This is to allow us to reshuffle the layout to
>> make space for new features.
>>
> Sure? Currently for Xen x86, edk2 uses XEN_ACPI_PHYSICAL_ADDRESS to find
> the RSDP. If we put the tables at another address, DomU will fail to boot.

I am not sure I follow here. Why do you mention x86 here? My point was
only for ARM and we are not tied to what x86 does.

>
>>>> I really don't see any reason that would
>>>> prevent us from doing the same on ARM.
>>>>
>>>> If the UEFI firmware wants to relocate the tables, then fine. But we
>>>> should also think about any firmware which may not relocate the tables.
>>> Can you name one firmware other than UEFI?
>>
>> Sorry, I meant any other UEFI implementation (i.e. other than OVMF) may
>> not relocate the tables.
> So please tell me the name of another UEFI implementation.

What is the point? All our decisions should be based on the specification
and not ONE specific implementation.

Regards,
diff --git a/tools/libxc/include/xc_dom.h b/tools/libxc/include/xc_dom.h
index 6cb10c4..0fe54dd 100644
--- a/tools/libxc/include/xc_dom.h
+++ b/tools/libxc/include/xc_dom.h
@@ -56,6 +56,20 @@ struct xc_dom_phys {
     xen_pfn_t count;
 };
 
+struct acpitable {
+    void *table;
+    size_t size;
+};
+
+struct acpitable_blob {
+    struct acpitable rsdp;
+    struct acpitable xsdt;
+    struct acpitable gtdt;
+    struct acpitable madt;
+    struct acpitable fadt;
+    struct acpitable dsdt;
+};
+
 struct xc_dom_image {
     /* files */
     void *kernel_blob;
@@ -64,6 +78,8 @@ struct xc_dom_image {
     size_t ramdisk_size;
     void *devicetree_blob;
     size_t devicetree_size;
+    struct acpitable_blob *acpitable_blob;
+    size_t acpitable_size;
 
     size_t max_kernel_size;
     size_t max_ramdisk_size;
@@ -92,6 +108,7 @@ struct xc_dom_image {
     struct xc_dom_seg p2m_seg;
     struct xc_dom_seg pgtables_seg;
     struct xc_dom_seg devicetree_seg;
+    struct xc_dom_seg acpi_seg;
     struct xc_dom_seg start_info_seg; /* HVMlite only */
     xen_pfn_t start_info_pfn;
     xen_pfn_t console_pfn;
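For orientation only, a rough sketch of how a caller might fill the placeholders added by this patch; the helper name and the way the individual tables are produced are assumptions, not existing libxc API:

#include <stddef.h>
#include "xc_dom.h"   /* struct xc_dom_image / acpitable_blob from this patch */

/* Sketch only: hand pre-built tables to the domain builder. */
static void attach_acpi_tables(struct xc_dom_image *dom,
                               struct acpitable_blob *blob,
                               void *madt, size_t madt_size)
{
    blob->madt.table = madt;            /* table generated elsewhere */
    blob->madt.size  = madt_size;
    /* ... likewise for rsdp, xsdt, gtdt, fadt and dsdt ... */

    dom->acpitable_blob = blob;
    dom->acpitable_size = madt_size;    /* plus the other tables' sizes */
}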