This patch set introduces ARM64 PCI host bridge init based on ACPI. It is based on Jiang Liu's patch set "Consolidate ACPI PCI root common code into ACPI core":
https://lkml.org/lkml/2015/5/14/98
This patch set includes three parts:
- The first part is PATCH 1, which should be merged into Jiang Liu's patch set to fix the compile error on ARM64 when ACPI is enabled.
- The second part is the refactoring of mmconfig so that the mechanism can be used for ARM64 too. It is Tomasz's work, but he is moving to other work and is pretty busy for now, so I will take care of those patches; Tomasz will show up when some comments need to be addressed :)
In this version of the mmconfig refactoring patches, I removed the mmconfig -> ecam rename patch, because mmconfig appears in many places and it would take much more effort to convert them all to ecam. Bjorn, if you don't like that, I can add it back.
- The third part is the ARM64 PCI host bridge init based on ACPI. First I fixed a compile error for XEN PCI on ARM64 when PCI_MMCONFIG=y, and then introduced the PCI init based on Jiang Liu's and Tomasz's patch sets.
The patch for ARM64 ACPI PCI still reserves the bus sysdata to get the domain number, because Yijing's patch set is still under review; this will be removed when Yijing's patch set hits upstream.
This patch set was tested by Suravee on a Seattle board with legacy interrupts (not MSI), and it works; it was also tested on QEMU by Graeme.
You can get the code from: git://git.linaro.org/leg/acpi/acpi.git, devel branch
Comments are welcome.
Thanks Hanjun
Hanjun Guo (3):
  ARM64 / PCI: introduce struct pci_controller for ACPI
  XEN / PCI: Remove the dependence on arch x86 when PCI_MMCONFIG=y
  ARM64 / PCI / ACPI: support for ACPI based PCI hostbridge init

Tomasz Nowicki (8):
  x86, pci: Clean up comment about buggy MMIO config space access for AMD Fam10h CPUs.
  x86, pci: Abstract PCI config accessors and use AMD Fam10h workaround exclusively.
  x86, pci: Reorder logic of pci_mmconfig_insert() function
  x86, pci, acpi: Move arch-agnostic MMCONFIG (aka ECAM) and ACPI code out of arch/x86/ directory
  pci, acpi, mcfg: Provide generic implementation of MCFG code initialization.
  x86, pci: mmconfig_{32,64}.c code refactoring - remove code duplication.
  x86, pci, ecam: mmconfig_64.c becomes default implementation for ECAM driver.
  pci, acpi, mcfg: Share ACPI PCI config space accessors.

 arch/arm64/Kconfig             |   7 +
 arch/arm64/include/asm/pci.h   |  10 ++
 arch/arm64/kernel/pci.c        | 245 ++++++++++++++++++++++++++--
 arch/x86/Kconfig               |   4 +
 arch/x86/include/asm/pci_x86.h |  34 +---
 arch/x86/pci/Makefile          |   4 +-
 arch/x86/pci/acpi.c            |   1 +
 arch/x86/pci/mmconfig-shared.c | 301 +++++++++++-----------------------
 arch/x86/pci/mmconfig_32.c     |  35 +---
 arch/x86/pci/mmconfig_64.c     | 153 ------------------
 arch/x86/pci/numachip.c        |  25 +--
 drivers/acpi/Makefile          |   1 +
 drivers/acpi/mcfg.c            | 103 ++++++++++++
 drivers/pci/Kconfig            |  10 ++
 drivers/pci/Makefile           |   5 +
 drivers/pci/ecam.c             | 358 +++++++++++++++++++++++++++++++++++++++++
 drivers/pci/pci.c              |  26 +--
 drivers/xen/pci.c              |   7 +-
 include/linux/acpi.h           |   2 +
 include/linux/ecam.h           |  56 +++++++
 20 files changed, 923 insertions(+), 464 deletions(-)
 delete mode 100644 arch/x86/pci/mmconfig_64.c
 create mode 100644 drivers/acpi/mcfg.c
 create mode 100644 drivers/pci/ecam.c
 create mode 100644 include/linux/ecam.h
ARM64 ACPI based PCI host bridge init needs an arch-dependent struct pci_controller to accommodate the common PCI host bridge code which is introduced later, or it will lead to compile errors on ARM64.
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
CC: Arnd Bergmann <arnd@arndb.de>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Liviu Dudau <Liviu.Dudau@arm.com>
CC: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
CC: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/pci.h | 10 ++++++++++
 1 file changed, 10 insertions(+)
diff --git a/arch/arm64/include/asm/pci.h b/arch/arm64/include/asm/pci.h
index b008a72..7088495 100644
--- a/arch/arm64/include/asm/pci.h
+++ b/arch/arm64/include/asm/pci.h
@@ -10,6 +10,16 @@
 #include <asm-generic/pci-bridge.h>
 #include <asm-generic/pci-dma-compat.h>
 
+struct acpi_device;
+
+struct pci_controller {
+#ifdef CONFIG_ACPI
+	struct acpi_device *companion;	/* ACPI companion device */
+#endif
+	int	segment;		/* PCI domain */
+	int	node;			/* NUMA node */
+};
+
 #define PCIBIOS_MIN_IO		0x1000
 #define PCIBIOS_MIN_MEM		0
On Tue, May 26, 2015 at 01:49:14PM +0100, Hanjun Guo wrote:
ARM64 ACPI based PCI host bridge init needs a arch dependent struct pci_controller to accommodate common PCI host bridge code which is introduced later, or it will lead to compile errors on ARM64.
Hi Hanjun,
Two questions: why don't you introduce this patch next to the one that is going to make use of it (or even merge it there)? Second, why is the whole struct pci_controller not surrounded by #ifdef CONFIG_ACPI as you are implying that this is needed only for ACPI?
Btw, looking through the whole series I'm not (yet) convinced that this is needed at all.
Best regards, Liviu
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
CC: Arnd Bergmann <arnd@arndb.de>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Liviu Dudau <Liviu.Dudau@arm.com>
CC: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
CC: Will Deacon <will.deacon@arm.com>

 arch/arm64/include/asm/pci.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/arm64/include/asm/pci.h b/arch/arm64/include/asm/pci.h
index b008a72..7088495 100644
--- a/arch/arm64/include/asm/pci.h
+++ b/arch/arm64/include/asm/pci.h
@@ -10,6 +10,16 @@
 #include <asm-generic/pci-bridge.h>
 #include <asm-generic/pci-dma-compat.h>
 
+struct acpi_device;
+
+struct pci_controller {
+#ifdef CONFIG_ACPI
+	struct acpi_device *companion;	/* ACPI companion device */
+#endif
+	int	segment;		/* PCI domain */
+	int	node;			/* NUMA node */
+};
+
 #define PCIBIOS_MIN_IO		0x1000
 #define PCIBIOS_MIN_MEM		0
-- 
1.9.1
On 2015/5/27 0:58, Liviu Dudau wrote:
On Tue, May 26, 2015 at 01:49:14PM +0100, Hanjun Guo wrote:
ARM64 ACPI based PCI host bridge init needs a arch dependent struct pci_controller to accommodate common PCI host bridge code which is introduced later, or it will lead to compile errors on ARM64.
Hi Hanjun,
Two questions: why don't you introduce this patch next to the one that is going to make use of it (or even merge it there)? Second, why is the whole struct pci_controller not surrounded by #ifdef CONFIG_ACPI as you are implying that this is needed only for ACPI?
Btw, looking through the whole series I'm not (yet) convinced that this is needed at all.
Hi Liviu,

This structure is required by the patch set at http://patchwork.ozlabs.org/patch/472249/, which consolidates the common code supporting PCI host bridges into the ACPI core.

Thanks!
Gerry
Best regards, Liviu
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
CC: Arnd Bergmann <arnd@arndb.de>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Liviu Dudau <Liviu.Dudau@arm.com>
CC: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
CC: Will Deacon <will.deacon@arm.com>

 arch/arm64/include/asm/pci.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/arm64/include/asm/pci.h b/arch/arm64/include/asm/pci.h
index b008a72..7088495 100644
--- a/arch/arm64/include/asm/pci.h
+++ b/arch/arm64/include/asm/pci.h
@@ -10,6 +10,16 @@
 #include <asm-generic/pci-bridge.h>
 #include <asm-generic/pci-dma-compat.h>
 
+struct acpi_device;
+
+struct pci_controller {
+#ifdef CONFIG_ACPI
+	struct acpi_device *companion;	/* ACPI companion device */
+#endif
+	int	segment;		/* PCI domain */
+	int	node;			/* NUMA node */
+};
+
 #define PCIBIOS_MIN_IO		0x1000
 #define PCIBIOS_MIN_MEM		0
-- 
1.9.1
Hi Liviu,
On 2015/05/27 01:20, Jiang Liu wrote:
On 2015/5/27 0:58, Liviu Dudau wrote:
On Tue, May 26, 2015 at 01:49:14PM +0100, Hanjun Guo wrote:
ARM64 ACPI based PCI host bridge init needs a arch dependent struct pci_controller to accommodate common PCI host bridge code which is introduced later, or it will lead to compile errors on ARM64.
Hi Hanjun,
Two questions: why don't you introduce this patch next to the one that is going to make use of it (or even merge it there)?
This is because this patch is needed by Jiang Liu's patch set to fix the compile error on ARM64. I'd rather do it that way, but it's better to let Jiang Liu's patches go in first, and then this one; that's why I prepared a single patch for the struct. (I mentioned this in the cover letter.)
Second, why is the whole struct pci_controller not surrounded by #ifdef CONFIG_ACPI as you are implying that this is needed only for ACPI?
I hope it can be reused, since the NUMA node and the segment (domain) are both needed for DT and ACPI; if that's not the case for now, I can surround them all with #ifdef CONFIG_ACPI.
Btw, looking through the whole series I'm not (yet) convinced that this is needed at all.
Hi Liviu, This structure is required by the requested patch set at http://patchwork.ozlabs.org/patch/472249/, which consolidates the common code to support PCI host bridge into ACPI core.
Jiang, thanks for the explanation :)
Thanks Hanjun
Hi Hanjun,
On Wed, May 27, 2015 at 1:51 PM, Hanjun Guo hanjun.guo@linaro.org wrote:
Hi Liviu,
On 2015/05/27 01:20, Jiang Liu wrote:
On 2015/5/27 0:58, Liviu Dudau wrote:
On Tue, May 26, 2015 at 01:49:14PM +0100, Hanjun Guo wrote:
ARM64 ACPI based PCI host bridge init needs a arch dependent struct pci_controller to accommodate common PCI host bridge code which is introduced later, or it will lead to compile errors on ARM64.
Hi Hanjun,
Two questions: why don't you introduce this patch next to the one that is going to make use of it (or even merge it there)?
this is because of this patch is needed by Jiang Liu's patch set to fix the compile error on ARM64, I'd rather do that, but It's better to let Jiang Liu's patch goes in, and then this one, that's why I prepared a single patch for the struct. (I mentioned it in the cover letter)
Second, why is the whole struct pci_controller not surrounded by #ifdef CONFIG_ACPI as you are implying that this is needed only for ACPI?
I hope it can be reused, since the NUMA node and segment (domain) is both needed for DT and ACPI, if it's not the case foe now, I can surrounded them all by #ifdef CONFIG_ACPI.
We can make use of this structure to hold the PCI bus to NUMA node mapping (pcibus_to_node). Can you please pull the node member out of the CONFIG_ACPI ifdef? Or you can put only the acpi_device pointer under the ifdef.
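For illustration, a minimal sketch (hypothetical, not part of this series) of how a node field kept outside the ACPI-only #ifdef could back pcibus_to_node() on arm64:

/*
 * Hypothetical sketch: with 'node' outside the CONFIG_ACPI ifdef, the
 * same structure can provide the bus-to-node mapping for both DT and
 * ACPI hosts via the arch's pcibus_to_node() implementation.
 */
static inline int pcibus_to_node(struct pci_bus *bus)
{
	struct pci_controller *ctrl = bus->sysdata;

	return ctrl ? ctrl->node : NUMA_NO_NODE;
}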
Btw, looking through the whole series I'm not (yet) convinced that this is needed at all.
Hi Liviu, This structure is required by the requested patch set at http://patchwork.ozlabs.org/patch/472249/, which consolidates the common code to support PCI host bridge into ACPI core.
Jiang, thanks for the explanation :)
Thanks Hanjun
thanks Ganapat
On Mon, Sep 07, 2015 at 05:14:22AM +0100, Ganapatrao Kulkarni wrote:
Hi Hanjun,
On Wed, May 27, 2015 at 1:51 PM, Hanjun Guo hanjun.guo@linaro.org wrote:
Hi Liviu,
On 2015/05/27 01:20, Jiang Liu wrote:
On 2015/5/27 0:58, Liviu Dudau wrote:
On Tue, May 26, 2015 at 01:49:14PM +0100, Hanjun Guo wrote:
ARM64 ACPI based PCI host bridge init needs a arch dependent struct pci_controller to accommodate common PCI host bridge code which is introduced later, or it will lead to compile errors on ARM64.
Hi Hanjun,
Two questions: why don't you introduce this patch next to the one that is going to make use of it (or even merge it there)?
this is because of this patch is needed by Jiang Liu's patch set to fix the compile error on ARM64, I'd rather do that, but It's better to let Jiang Liu's patch goes in, and then this one, that's why I prepared a single patch for the struct. (I mentioned it in the cover letter)
Second, why is the whole struct pci_controller not surrounded by #ifdef CONFIG_ACPI as you are implying that this is needed only for ACPI?
I hope it can be reused, since the NUMA node and segment (domain) is both needed for DT and ACPI, if it's not the case foe now, I can surrounded them all by #ifdef CONFIG_ACPI.
we can make use of this structure to hold pci to numa node mapping(pcibus_to_node). can you please pull node member out of CONFIG_ACPI ifdef. or you can put only acpi_device under ifdef.
That struct disappeared in the latest series:
https://lkml.org/lkml/2015/6/8/443
We have to have a common way to handle the NUMA info in DT and ACPI, so we should still find a solution that can be shared between the two; it is yet another thing to take into account for PCI ACPI on arm64.
Thanks, Lorenzo
On 09/07/2015 04:45 PM, Lorenzo Pieralisi wrote:
On Mon, Sep 07, 2015 at 05:14:22AM +0100, Ganapatrao Kulkarni wrote:
Hi Hanjun,
On Wed, May 27, 2015 at 1:51 PM, Hanjun Guo hanjun.guo@linaro.org wrote:
Hi Liviu,
On 2015/05/27 01:20, Jiang Liu wrote:
On 2015/5/27 0:58, Liviu Dudau wrote:
On Tue, May 26, 2015 at 01:49:14PM +0100, Hanjun Guo wrote:
ARM64 ACPI based PCI host bridge init needs a arch dependent struct pci_controller to accommodate common PCI host bridge code which is introduced later, or it will lead to compile errors on ARM64.
Hi Hanjun,
Two questions: why don't you introduce this patch next to the one that is going to make use of it (or even merge it there)?
this is because of this patch is needed by Jiang Liu's patch set to fix the compile error on ARM64, I'd rather do that, but It's better to let Jiang Liu's patch goes in, and then this one, that's why I prepared a single patch for the struct. (I mentioned it in the cover letter)
Second, why is the whole struct pci_controller not surrounded by #ifdef CONFIG_ACPI as you are implying that this is needed only for ACPI?
I hope it can be reused, since the NUMA node and segment (domain) is both needed for DT and ACPI, if it's not the case foe now, I can surrounded them all by #ifdef CONFIG_ACPI.
we can make use of this structure to hold pci to numa node mapping(pcibus_to_node). can you please pull node member out of CONFIG_ACPI ifdef. or you can put only acpi_device under ifdef.
That struct disappeared in the latest series:
Yes, I think that is the right direction to go.
we have to have a common way to handle the NUMA info in DT and ACPI so we should still find a solution that can be shared between the two, it is yet another thing to take into account for PCI ACPI on arm64.
Agreed, we can take that into account once the basic support is finished.
Thanks Hanjun
On Tue, May 26, 2015 at 06:20:40PM +0100, Jiang Liu wrote:
On 2015/5/27 0:58, Liviu Dudau wrote:
On Tue, May 26, 2015 at 01:49:14PM +0100, Hanjun Guo wrote:
ARM64 ACPI based PCI host bridge init needs a arch dependent struct pci_controller to accommodate common PCI host bridge code which is introduced later, or it will lead to compile errors on ARM64.
Hi Hanjun,
Two questions: why don't you introduce this patch next to the one that is going to make use of it (or even merge it there)? Second, why is the whole struct pci_controller not surrounded by #ifdef CONFIG_ACPI as you are implying that this is needed only for ACPI?
Btw, looking through the whole series I'm not (yet) convinced that this is needed at all.
Hi Liviu, This structure is required by the requested patch set at http://patchwork.ozlabs.org/patch/472249/, which consolidates the common code to support PCI host bridge into ACPI core. Thanks! Gerry
Hi Jiang,
Thanks for pointing me at the right answer, I had missed that series! This is probably not the best place to comment on that series, but I wonder why you did not make the pci_controller structure available in a more generic header file that can be included, so that arches don't have to redefine the structure every time. After all, you are trying to consolidate things.
Oh, and the pci_controller name throws up a lot of false negatives; maybe a more specific one (acpi_pci_controller?) would make things clearer?
Best regards, Liviu
Best regards, Liviu
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
CC: Arnd Bergmann <arnd@arndb.de>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Liviu Dudau <Liviu.Dudau@arm.com>
CC: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
CC: Will Deacon <will.deacon@arm.com>

 arch/arm64/include/asm/pci.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/arm64/include/asm/pci.h b/arch/arm64/include/asm/pci.h
index b008a72..7088495 100644
--- a/arch/arm64/include/asm/pci.h
+++ b/arch/arm64/include/asm/pci.h
@@ -10,6 +10,16 @@
 #include <asm-generic/pci-bridge.h>
 #include <asm-generic/pci-dma-compat.h>
 
+struct acpi_device;
+
+struct pci_controller {
+#ifdef CONFIG_ACPI
+	struct acpi_device *companion;	/* ACPI companion device */
+#endif
+	int	segment;		/* PCI domain */
+	int	node;			/* NUMA node */
+};
+
 #define PCIBIOS_MIN_IO		0x1000
 #define PCIBIOS_MIN_MEM		0
-- 
1.9.1
On 2015/5/27 17:47, Liviu Dudau wrote:
On Tue, May 26, 2015 at 06:20:40PM +0100, Jiang Liu wrote:
On 2015/5/27 0:58, Liviu Dudau wrote:
On Tue, May 26, 2015 at 01:49:14PM +0100, Hanjun Guo wrote:
ARM64 ACPI based PCI host bridge init needs a arch dependent struct pci_controller to accommodate common PCI host bridge code which is introduced later, or it will lead to compile errors on ARM64.
Hi Hanjun,
Two questions: why don't you introduce this patch next to the one that is going to make use of it (or even merge it there)? Second, why is the whole struct pci_controller not surrounded by #ifdef CONFIG_ACPI as you are implying that this is needed only for ACPI?
Btw, looking through the whole series I'm not (yet) convinced that this is needed at all.
Hi Liviu, This structure is required by the requested patch set at http://patchwork.ozlabs.org/patch/472249/, which consolidates the common code to support PCI host bridge into ACPI core. Thanks! Gerry
Hi Jiang,
Thanks for pointing me on the right answer, I've missed that series! Probably not the best place to comment on that series here, but I wonder why did you not made the pci_controller structure available in a more generic header file that can be included so that arches don't have to redefine the structure every time. After all, you are trying to consolidate things.
Oh, and pci_controller name throws a lot of false negatives, maybe a more specific one (acpi_pci_controller?) would make things clear?
Hi Liviu,

It's a trade-off. I once tried to rename it too, but gave up later. There are several reasons to keep it as is:
1) Several architectures define pci_controller to support the PCI root bus.
2) struct pci_controller is a generic concept, I guess, and the ACPI code extends pci_controller to host some ACPI-specific data on IA64 and x86.
3) It would cause big code changes if we renamed pci_controller to something else.

Thanks!
Gerry
Best regards, Liviu
Best regards, Liviu
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
CC: Arnd Bergmann <arnd@arndb.de>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Liviu Dudau <Liviu.Dudau@arm.com>
CC: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
CC: Will Deacon <will.deacon@arm.com>

 arch/arm64/include/asm/pci.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/arm64/include/asm/pci.h b/arch/arm64/include/asm/pci.h
index b008a72..7088495 100644
--- a/arch/arm64/include/asm/pci.h
+++ b/arch/arm64/include/asm/pci.h
@@ -10,6 +10,16 @@
 #include <asm-generic/pci-bridge.h>
 #include <asm-generic/pci-dma-compat.h>
 
+struct acpi_device;
+
+struct pci_controller {
+#ifdef CONFIG_ACPI
+	struct acpi_device *companion;	/* ACPI companion device */
+#endif
+	int	segment;		/* PCI domain */
+	int	node;			/* NUMA node */
+};
+
 #define PCIBIOS_MIN_IO		0x1000
 #define PCIBIOS_MIN_MEM		0
-- 
1.9.1
From: Tomasz Nowicki tomasz.nowicki@linaro.org
- fix typo
- improve explanation
- add reference to the related document

Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
---
 arch/x86/include/asm/pci_x86.h | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/pci_x86.h b/arch/x86/include/asm/pci_x86.h
index 164e3f8..eddf8f0 100644
--- a/arch/x86/include/asm/pci_x86.h
+++ b/arch/x86/include/asm/pci_x86.h
@@ -154,10 +154,13 @@ extern struct list_head pci_mmcfg_list;
 
 /*
  * AMD Fam10h CPUs are buggy, and cannot access MMIO config space
- * on their northbrige except through the * %eax register. As such, you MUST
- * NOT use normal IOMEM accesses, you need to only use the magic mmio-config
+ * on their northbridge except through the * %eax register. As such, you MUST
+ * NOT use normal IOMEM accesses, you need to only use the magic mmio_config_*
  * accessor functions.
- * In fact just use pci_config_*, nothing else please.
+ *
+ * Please refer to the following doc:
+ * "BIOS and Kernel Developer's Guide (BKDG) For AMD Family 10h Processors",
+ * rev. 3.48, sec 2.11.1, "MMIO Configuration Coding Requirements".
  */
 static inline unsigned char mmio_config_readb(void __iomem *pos)
 {
On 26.05.2015 14:49, Hanjun Guo wrote:
From: Tomasz Nowicki tomasz.nowicki@linaro.org
- fix typo
- improve explanation
- add reference to the related document
Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>

 arch/x86/include/asm/pci_x86.h | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/pci_x86.h b/arch/x86/include/asm/pci_x86.h
index 164e3f8..eddf8f0 100644
--- a/arch/x86/include/asm/pci_x86.h
+++ b/arch/x86/include/asm/pci_x86.h
@@ -154,10 +154,13 @@ extern struct list_head pci_mmcfg_list;
 
 /*
  * AMD Fam10h CPUs are buggy, and cannot access MMIO config space
- * on their northbrige except through the * %eax register. As such, you MUST
- * NOT use normal IOMEM accesses, you need to only use the magic mmio-config
+ * on their northbridge except through the * %eax register. As such, you MUST
+ * NOT use normal IOMEM accesses, you need to only use the magic mmio_config_*
  * accessor functions.
- * In fact just use pci_config_*, nothing else please.
+ *
+ * Please refer to the following doc:
+ * "BIOS and Kernel Developer's Guide (BKDG) For AMD Family 10h Processors",
+ * rev. 3.48, sec 2.11.1, "MMIO Configuration Coding Requirements".
  */
 static inline unsigned char mmio_config_readb(void __iomem *pos)
 {
Hi Bjorn,
Can you please consider picking up this one patch?
Regards, Tomasz
From: Tomasz Nowicki tomasz.nowicki@linaro.org
Abstract the PCI config accessors using a generic read/write approach. Special MMIO accessors are registered for AMD Fam10h CPUs only.
Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
---
 arch/x86/include/asm/pci_x86.h |   8 +++
 arch/x86/pci/mmconfig-shared.c | 114 +++++++++++++++++++++++++++++++++++++++++
 arch/x86/pci/mmconfig_32.c     |  24 +--------
 arch/x86/pci/mmconfig_64.c     |  24 +--------
 arch/x86/pci/numachip.c        |  24 +--------
 5 files changed, 128 insertions(+), 66 deletions(-)
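In short, the patch routes all MMCONFIG config space accesses through a small ops structure, so the plain readb/readw/readl accessors can be replaced by the Fam10h-safe ones at run time. A condensed sketch of that pattern (simplified from the diff below; the write path and the AMD variant are symmetric):

struct pci_mmcfg_mmio_ops {
	u32 (*read)(int len, void __iomem *addr);
	void (*write)(int len, void __iomem *addr, u32 value);
};

/* Default accessor: plain MMIO reads. */
static u32 pci_mmconfig_generic_read(int len, void __iomem *addr)
{
	switch (len) {
	case 1: return readb(addr);
	case 2: return readw(addr);
	case 4: return readl(addr);
	}
	return 0;
}

static struct pci_mmcfg_mmio_ops pci_mmcfg_mmio_default = {
	.read	= pci_mmconfig_generic_read,
	/* .write is the symmetric writeb/writew/writel variant */
};

static struct pci_mmcfg_mmio_ops *pci_mmcfg_mmio = &pci_mmcfg_mmio_default;

/* Called from the AMD Fam10h detection path with the workaround ops. */
void pci_mmconfig_register_mmio(struct pci_mmcfg_mmio_ops *ops)
{
	pci_mmcfg_mmio = ops;
}

/* All MMCONFIG config space reads funnel through the registered ops. */
u32 pci_mmio_read(int len, void __iomem *addr)
{
	return pci_mmcfg_mmio->read(len, addr);
}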
diff --git a/arch/x86/include/asm/pci_x86.h b/arch/x86/include/asm/pci_x86.h index eddf8f0..f7f3b6a 100644 --- a/arch/x86/include/asm/pci_x86.h +++ b/arch/x86/include/asm/pci_x86.h @@ -139,6 +139,11 @@ struct pci_mmcfg_region { char name[PCI_MMCFG_RESOURCE_NAME_LEN]; };
+struct pci_mmcfg_mmio_ops { + u32 (*read)(int len, void __iomem *addr); + void (*write)(int len, void __iomem *addr, u32 value); +}; + extern int __init pci_mmcfg_arch_init(void); extern void __init pci_mmcfg_arch_free(void); extern int pci_mmcfg_arch_map(struct pci_mmcfg_region *cfg); @@ -147,6 +152,9 @@ extern int pci_mmconfig_insert(struct device *dev, u16 seg, u8 start, u8 end, phys_addr_t addr); extern int pci_mmconfig_delete(u16 seg, u8 start, u8 end); extern struct pci_mmcfg_region *pci_mmconfig_lookup(int segment, int bus); +extern u32 pci_mmio_read(int len, void __iomem *addr); +extern void pci_mmio_write(int len, void __iomem *addr, u32 value); +extern void pci_mmconfig_register_mmio(struct pci_mmcfg_mmio_ops *ops);
extern struct list_head pci_mmcfg_list;
diff --git a/arch/x86/pci/mmconfig-shared.c b/arch/x86/pci/mmconfig-shared.c index dd30b7e..8b3bc4f 100644 --- a/arch/x86/pci/mmconfig-shared.c +++ b/arch/x86/pci/mmconfig-shared.c @@ -31,6 +31,118 @@ static DEFINE_MUTEX(pci_mmcfg_lock);
LIST_HEAD(pci_mmcfg_list);
+static u32 +pci_mmconfig_generic_read(int len, void __iomem *addr) +{ + u32 data = 0; + + switch (len) { + case 1: + data = readb(addr); + break; + case 2: + data = readw(addr); + break; + case 4: + data = readl(addr); + break; + } + + return data; +} + +static void +pci_mmconfig_generic_write(int len, void __iomem *addr, u32 value) +{ + switch (len) { + case 1: + writeb(value, addr); + break; + case 2: + writew(value, addr); + break; + case 4: + writel(value, addr); + break; + } +} + +static struct pci_mmcfg_mmio_ops pci_mmcfg_mmio_default = { + .read = pci_mmconfig_generic_read, + .write = pci_mmconfig_generic_write, +}; + +static struct pci_mmcfg_mmio_ops *pci_mmcfg_mmio = &pci_mmcfg_mmio_default; + +static u32 +pci_mmconfig_amd_read(int len, void __iomem *addr) +{ + u32 data = 0; + + switch (len) { + case 1: + data = mmio_config_readb(addr); + break; + case 2: + data = mmio_config_readw(addr); + break; + case 4: + data = mmio_config_readl(addr); + break; + } + + return data; +} + +static void +pci_mmconfig_amd_write(int len, void __iomem *addr, u32 value) +{ + switch (len) { + case 1: + mmio_config_writeb(addr, value); + break; + case 2: + mmio_config_writew(addr, value); + break; + case 4: + mmio_config_writel(addr, value); + break; + } +} + +static struct pci_mmcfg_mmio_ops pci_mmcfg_mmio_amd_fam10h = { + .read = pci_mmconfig_amd_read, + .write = pci_mmconfig_amd_write, +}; + +void +pci_mmconfig_register_mmio(struct pci_mmcfg_mmio_ops *ops) +{ + pci_mmcfg_mmio = ops; +} + +u32 +pci_mmio_read(int len, void __iomem *addr) +{ + if (!pci_mmcfg_mmio) { + pr_err("PCI config space has no accessors !"); + return 0; + } + + return pci_mmcfg_mmio->read(len, addr); +} + +void +pci_mmio_write(int len, void __iomem *addr, u32 value) +{ + if (!pci_mmcfg_mmio) { + pr_err("PCI config space has no accessors !"); + return; + } + + pci_mmcfg_mmio->write(len, addr, value); +} + static void __init pci_mmconfig_remove(struct pci_mmcfg_region *cfg) { if (cfg->res.parent) @@ -231,6 +343,8 @@ static const char *__init pci_mmcfg_amd_fam10h(void) return NULL; }
+ pci_mmconfig_register_mmio(&pci_mmcfg_mmio_amd_fam10h); + return "AMD Family 10h NB"; }
diff --git a/arch/x86/pci/mmconfig_32.c b/arch/x86/pci/mmconfig_32.c index 43984bc..4b3d025 100644 --- a/arch/x86/pci/mmconfig_32.c +++ b/arch/x86/pci/mmconfig_32.c @@ -71,17 +71,7 @@ err: *value = -1;
pci_exp_set_dev_base(base, bus, devfn);
- switch (len) { - case 1: - *value = mmio_config_readb(mmcfg_virt_addr + reg); - break; - case 2: - *value = mmio_config_readw(mmcfg_virt_addr + reg); - break; - case 4: - *value = mmio_config_readl(mmcfg_virt_addr + reg); - break; - } + *value = pci_mmio_read(len, mmcfg_virt_addr + reg); raw_spin_unlock_irqrestore(&pci_config_lock, flags); rcu_read_unlock();
@@ -108,17 +98,7 @@ static int pci_mmcfg_write(unsigned int seg, unsigned int bus,
pci_exp_set_dev_base(base, bus, devfn);
- switch (len) { - case 1: - mmio_config_writeb(mmcfg_virt_addr + reg, value); - break; - case 2: - mmio_config_writew(mmcfg_virt_addr + reg, value); - break; - case 4: - mmio_config_writel(mmcfg_virt_addr + reg, value); - break; - } + pci_mmio_write(len, mmcfg_virt_addr + reg, value); raw_spin_unlock_irqrestore(&pci_config_lock, flags); rcu_read_unlock();
diff --git a/arch/x86/pci/mmconfig_64.c b/arch/x86/pci/mmconfig_64.c index bea5249..032593d 100644 --- a/arch/x86/pci/mmconfig_64.c +++ b/arch/x86/pci/mmconfig_64.c @@ -42,17 +42,7 @@ err: *value = -1; goto err; }
- switch (len) { - case 1: - *value = mmio_config_readb(addr + reg); - break; - case 2: - *value = mmio_config_readw(addr + reg); - break; - case 4: - *value = mmio_config_readl(addr + reg); - break; - } + *value = pci_mmio_read(len, addr + reg); rcu_read_unlock();
return 0; @@ -74,17 +64,7 @@ static int pci_mmcfg_write(unsigned int seg, unsigned int bus, return -EINVAL; }
- switch (len) { - case 1: - mmio_config_writeb(addr + reg, value); - break; - case 2: - mmio_config_writew(addr + reg, value); - break; - case 4: - mmio_config_writel(addr + reg, value); - break; - } + pci_mmio_write(len, addr + reg, value); rcu_read_unlock();
return 0; diff --git a/arch/x86/pci/numachip.c b/arch/x86/pci/numachip.c index 2e565e6..5047e9b 100644 --- a/arch/x86/pci/numachip.c +++ b/arch/x86/pci/numachip.c @@ -51,17 +51,7 @@ err: *value = -1; goto err; }
- switch (len) { - case 1: - *value = mmio_config_readb(addr + reg); - break; - case 2: - *value = mmio_config_readw(addr + reg); - break; - case 4: - *value = mmio_config_readl(addr + reg); - break; - } + *value = pci_mmio_read(len, addr + reg); rcu_read_unlock();
return 0; @@ -87,17 +77,7 @@ static int pci_mmcfg_write_numachip(unsigned int seg, unsigned int bus, return -EINVAL; }
- switch (len) { - case 1: - mmio_config_writeb(addr + reg, value); - break; - case 2: - mmio_config_writew(addr + reg, value); - break; - case 4: - mmio_config_writel(addr + reg, value); - break; - } + pci_mmio_write(len, addr + reg, value); rcu_read_unlock();
return 0;
From: Tomasz Nowicki tomasz.nowicki@linaro.org
This patch is the first step of the MMCONFIG refactoring process.

Code that uses pci_mmcfg_lock will be moved to a common file and become accessible to all architectures. pci_mmconfig_insert() cannot be moved so easily, since it mixes generic mmconfig code with x86-specific logic inside a mutually exclusive block guarded by pci_mmcfg_lock.

To get rid of that constraint, we reorder the actions as follows:
1. sanity check for mmconfig region presence; if we already have such a region it makes no sense to allocate a new mmconfig list entry
2. mmconfig entry allocation, which needs no lock
3. insertion into iomem_resource, which has its own lock, so there is no need to wrap it in the mutex
4. insertion into the mmconfig list, which can be done as the final step in a separate function (a candidate for further refactoring) and needs another mmconfig lookup to avoid a race condition
Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
---
 arch/x86/pci/mmconfig-shared.c | 99 +++++++++++++++++++++++-------------------
 1 file changed, 54 insertions(+), 45 deletions(-)
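Condensed, the reordered pci_mmconfig_insert() flow looks roughly like this (a simplified sketch of the steps above, with error handling trimmed; the full diff follows):

int pci_mmconfig_insert(struct device *dev, u16 seg, u8 start, u8 end,
			phys_addr_t addr)
{
	struct pci_mmcfg_region *cfg;
	struct resource *tmp = NULL;
	int rc;

	if (start > end || !addr)
		return -EINVAL;

	/* 1. sanity check: an existing region means nothing to do */
	rcu_read_lock();
	cfg = pci_mmconfig_lookup(seg, start);
	rcu_read_unlock();
	if (cfg)
		return -EEXIST;

	/* 2. allocation needs no lock */
	cfg = pci_mmconfig_alloc(seg, start, end, addr);
	if (!cfg)
		return -ENOMEM;

	/* 3. iomem_resource insertion is protected by its own lock */
	if (pci_mmcfg_running_state)
		tmp = insert_resource_conflict(&iomem_resource, &cfg->res);
	if (tmp) {
		rc = -EBUSY;
		goto error;
	}

	/* 4. list insertion happens last, under pci_mmcfg_lock, with a
	 *    second lookup to close the race window */
	rc = pci_mmconfig_inject(cfg);
	if (!rc)
		return 0;
error:
	if (cfg->res.parent)
		release_resource(&cfg->res);
	kfree(cfg);
	return rc;
}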
diff --git a/arch/x86/pci/mmconfig-shared.c b/arch/x86/pci/mmconfig-shared.c index 8b3bc4f..e770b70 100644 --- a/arch/x86/pci/mmconfig-shared.c +++ b/arch/x86/pci/mmconfig-shared.c @@ -834,6 +834,38 @@ static int __init pci_mmcfg_late_insert_resources(void) */ late_initcall(pci_mmcfg_late_insert_resources);
+static int pci_mmconfig_inject(struct pci_mmcfg_region *cfg) +{ + struct pci_mmcfg_region *cfg_conflict; + int err = 0; + + mutex_lock(&pci_mmcfg_lock); + cfg_conflict = pci_mmconfig_lookup(cfg->segment, cfg->start_bus); + if (cfg_conflict) { + if (cfg_conflict->end_bus < cfg->end_bus) + pr_info(FW_INFO "MMCONFIG for " + "domain %04x [bus %02x-%02x] " + "only partially covers this bridge\n", + cfg_conflict->segment, cfg_conflict->start_bus, + cfg_conflict->end_bus); + err = -EEXIST; + goto out; + } + + if (pci_mmcfg_arch_map(cfg)) { + pr_warn("fail to map MMCONFIG %pR.\n", &cfg->res); + err = -ENOMEM; + goto out; + } else { + list_add_sorted(cfg); + pr_info("MMCONFIG at %pR (base %#lx)\n", + &cfg->res, (unsigned long)cfg->address); + } +out: + mutex_unlock(&pci_mmcfg_lock); + return err; +} + /* Add MMCFG information for host bridges */ int pci_mmconfig_insert(struct device *dev, u16 seg, u8 start, u8 end, phys_addr_t addr) @@ -845,66 +877,43 @@ int pci_mmconfig_insert(struct device *dev, u16 seg, u8 start, u8 end, if (!(pci_probe & PCI_PROBE_MMCONF) || pci_mmcfg_arch_init_failed) return -ENODEV;
- if (start > end) + if ((start > end) || !addr) return -EINVAL;
- mutex_lock(&pci_mmcfg_lock); + rcu_read_lock(); cfg = pci_mmconfig_lookup(seg, start); - if (cfg) { - if (cfg->end_bus < end) - dev_info(dev, FW_INFO - "MMCONFIG for " - "domain %04x [bus %02x-%02x] " - "only partially covers this bridge\n", - cfg->segment, cfg->start_bus, cfg->end_bus); - mutex_unlock(&pci_mmcfg_lock); + rcu_read_unlock(); + if (cfg) return -EEXIST; - } - - if (!addr) { - mutex_unlock(&pci_mmcfg_lock); - return -EINVAL; - }
- rc = -EBUSY; cfg = pci_mmconfig_alloc(seg, start, end, addr); - if (cfg == NULL) { + if (!cfg) { dev_warn(dev, "fail to add MMCONFIG (out of memory)\n"); - rc = -ENOMEM; + return -ENOMEM; } else if (!pci_mmcfg_check_reserved(dev, cfg, 0)) { dev_warn(dev, FW_BUG "MMCONFIG %pR isn't reserved\n", &cfg->res); - } else { - /* Insert resource if it's not in boot stage */ - if (pci_mmcfg_running_state) - tmp = insert_resource_conflict(&iomem_resource, - &cfg->res); - - if (tmp) { - dev_warn(dev, - "MMCONFIG %pR conflicts with " - "%s %pR\n", - &cfg->res, tmp->name, tmp); - } else if (pci_mmcfg_arch_map(cfg)) { - dev_warn(dev, "fail to map MMCONFIG %pR.\n", - &cfg->res); - } else { - list_add_sorted(cfg); - dev_info(dev, "MMCONFIG at %pR (base %#lx)\n", - &cfg->res, (unsigned long)addr); - cfg = NULL; - rc = 0; - } + goto error; }
- if (cfg) { - if (cfg->res.parent) - release_resource(&cfg->res); - kfree(cfg); + /* Insert resource if it's not in boot stage */ + if (pci_mmcfg_running_state) + tmp = insert_resource_conflict(&iomem_resource, &cfg->res); + + if (tmp) { + dev_warn(dev, "MMCONFIG %pR conflicts with %s %pR\n", + &cfg->res, tmp->name, tmp); + goto error; }
- mutex_unlock(&pci_mmcfg_lock); + rc = pci_mmconfig_inject(cfg); + if (!rc) + return 0;
+error: + if (cfg->res.parent) + release_resource(&cfg->res); + kfree(cfg); return rc; }
From: Tomasz Nowicki tomasz.nowicki@linaro.org
The ECAM standard and the MCFG table are architecture independent, so it makes sense to share common code across all architectures. Both are moved to corresponding files: ecam.c and mcfg.c.

While we are here, rename pci_parse_mcfg to acpi_parse_mcfg. We already have an acpi_parse_mcfg prototype which is used nowhere, and pci_parse_mcfg now needs to be global anyway, so the acpi_parse_mcfg name fits perfectly here.
Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
---
 arch/x86/Kconfig               |   3 +
 arch/x86/include/asm/pci_x86.h |  33 ------
 arch/x86/pci/acpi.c            |   1 +
 arch/x86/pci/mmconfig-shared.c | 244 +---------------------------------------
 arch/x86/pci/mmconfig_32.c     |   1 +
 arch/x86/pci/mmconfig_64.c     |   1 +
 arch/x86/pci/numachip.c        |   1 +
 drivers/acpi/Makefile          |   1 +
 drivers/acpi/mcfg.c            |  57 ++++++++++
 drivers/pci/Kconfig            |   7 ++
 drivers/pci/Makefile           |   5 +
 drivers/pci/ecam.c             | 245 +++++++++++++++++++++++++++++++++++++++++
 drivers/xen/pci.c              |   1 +
 include/linux/acpi.h           |   2 +
 include/linux/ecam.h           |  51 +++++++
 15 files changed, 381 insertions(+), 272 deletions(-)
 create mode 100644 drivers/acpi/mcfg.c
 create mode 100644 drivers/pci/ecam.c
 create mode 100644 include/linux/ecam.h
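With this split, an architecture parses MCFG through the shared code and only supplies the mapping hooks, while ecam.c owns the pci_mmcfg_region list and accessors. A rough, hypothetical sketch of such arch glue (the init hook name is made up; the other interfaces are the ones declared in include/linux/ecam.h by this patch):

/* Hypothetical arch glue, assuming the <linux/ecam.h> interfaces below. */
#include <linux/acpi.h>
#include <linux/ecam.h>
#include <linux/io.h>

int pci_mmcfg_arch_map(struct pci_mmcfg_region *cfg)
{
	cfg->virt = ioremap(cfg->res.start, resource_size(&cfg->res));
	return cfg->virt ? 0 : -ENOMEM;
}

void pci_mmcfg_arch_unmap(struct pci_mmcfg_region *cfg)
{
	if (cfg->virt) {
		iounmap(cfg->virt);
		cfg->virt = NULL;
	}
}

/* Made-up init hook: let the shared MCFG parser populate pci_mmcfg_list. */
static int __init arch_pci_mcfg_init(void)
{
	return acpi_table_parse(ACPI_SIG_MCFG, acpi_parse_mcfg);
}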
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 226d569..4e3dcb3 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -143,6 +143,7 @@ config X86 select ACPI_LEGACY_TABLES_LOOKUP if ACPI select X86_FEATURE_NAMES if PROC_FS select SRCU + select HAVE_PCI_ECAM
config INSTRUCTION_DECODER def_bool y @@ -2276,6 +2277,7 @@ config PCI_DIRECT
config PCI_MMCONFIG def_bool y + select PCI_ECAM depends on X86_32 && PCI && (ACPI || SFI) && (PCI_GOMMCONFIG || PCI_GOANY)
config PCI_OLPC @@ -2293,6 +2295,7 @@ config PCI_DOMAINS
config PCI_MMCONFIG bool "Support mmconfig PCI config space access" + select PCI_ECAM depends on X86_64 && PCI && ACPI
config PCI_CNB20LE_QUIRK diff --git a/arch/x86/include/asm/pci_x86.h b/arch/x86/include/asm/pci_x86.h index f7f3b6a..2ea44a7 100644 --- a/arch/x86/include/asm/pci_x86.h +++ b/arch/x86/include/asm/pci_x86.h @@ -124,41 +124,8 @@ extern int pci_legacy_init(void); extern void pcibios_fixup_irqs(void);
/* pci-mmconfig.c */ - -/* "PCI MMCONFIG %04x [bus %02x-%02x]" */ -#define PCI_MMCFG_RESOURCE_NAME_LEN (22 + 4 + 2 + 2) - -struct pci_mmcfg_region { - struct list_head list; - struct resource res; - u64 address; - char __iomem *virt; - u16 segment; - u8 start_bus; - u8 end_bus; - char name[PCI_MMCFG_RESOURCE_NAME_LEN]; -}; - -struct pci_mmcfg_mmio_ops { - u32 (*read)(int len, void __iomem *addr); - void (*write)(int len, void __iomem *addr, u32 value); -}; - -extern int __init pci_mmcfg_arch_init(void); -extern void __init pci_mmcfg_arch_free(void); -extern int pci_mmcfg_arch_map(struct pci_mmcfg_region *cfg); -extern void pci_mmcfg_arch_unmap(struct pci_mmcfg_region *cfg); extern int pci_mmconfig_insert(struct device *dev, u16 seg, u8 start, u8 end, phys_addr_t addr); -extern int pci_mmconfig_delete(u16 seg, u8 start, u8 end); -extern struct pci_mmcfg_region *pci_mmconfig_lookup(int segment, int bus); -extern u32 pci_mmio_read(int len, void __iomem *addr); -extern void pci_mmio_write(int len, void __iomem *addr, u32 value); -extern void pci_mmconfig_register_mmio(struct pci_mmcfg_mmio_ops *ops); - -extern struct list_head pci_mmcfg_list; - -#define PCI_MMCFG_BUS_OFFSET(bus) ((bus) << 20)
/* * AMD Fam10h CPUs are buggy, and cannot access MMIO config space diff --git a/arch/x86/pci/acpi.c b/arch/x86/pci/acpi.c index 89bd79b..217508e 100644 --- a/arch/x86/pci/acpi.c +++ b/arch/x86/pci/acpi.c @@ -5,6 +5,7 @@ #include <linux/dmi.h> #include <linux/slab.h> #include <linux/pci-acpi.h> +#include <linux/ecam.h> #include <asm/numa.h> #include <asm/pci_x86.h>
diff --git a/arch/x86/pci/mmconfig-shared.c b/arch/x86/pci/mmconfig-shared.c index e770b70..6fa3080 100644 --- a/arch/x86/pci/mmconfig-shared.c +++ b/arch/x86/pci/mmconfig-shared.c @@ -18,6 +18,7 @@ #include <linux/slab.h> #include <linux/mutex.h> #include <linux/rculist.h> +#include <linux/ecam.h> #include <asm/e820.h> #include <asm/pci_x86.h> #include <asm/acpi.h> @@ -27,52 +28,6 @@ /* Indicate if the mmcfg resources have been placed into the resource table. */ static bool pci_mmcfg_running_state; static bool pci_mmcfg_arch_init_failed; -static DEFINE_MUTEX(pci_mmcfg_lock); - -LIST_HEAD(pci_mmcfg_list); - -static u32 -pci_mmconfig_generic_read(int len, void __iomem *addr) -{ - u32 data = 0; - - switch (len) { - case 1: - data = readb(addr); - break; - case 2: - data = readw(addr); - break; - case 4: - data = readl(addr); - break; - } - - return data; -} - -static void -pci_mmconfig_generic_write(int len, void __iomem *addr, u32 value) -{ - switch (len) { - case 1: - writeb(value, addr); - break; - case 2: - writew(value, addr); - break; - case 4: - writel(value, addr); - break; - } -} - -static struct pci_mmcfg_mmio_ops pci_mmcfg_mmio_default = { - .read = pci_mmconfig_generic_read, - .write = pci_mmconfig_generic_write, -}; - -static struct pci_mmcfg_mmio_ops *pci_mmcfg_mmio = &pci_mmcfg_mmio_default;
static u32 pci_mmconfig_amd_read(int len, void __iomem *addr) @@ -115,128 +70,6 @@ static struct pci_mmcfg_mmio_ops pci_mmcfg_mmio_amd_fam10h = { .write = pci_mmconfig_amd_write, };
-void -pci_mmconfig_register_mmio(struct pci_mmcfg_mmio_ops *ops) -{ - pci_mmcfg_mmio = ops; -} - -u32 -pci_mmio_read(int len, void __iomem *addr) -{ - if (!pci_mmcfg_mmio) { - pr_err("PCI config space has no accessors !"); - return 0; - } - - return pci_mmcfg_mmio->read(len, addr); -} - -void -pci_mmio_write(int len, void __iomem *addr, u32 value) -{ - if (!pci_mmcfg_mmio) { - pr_err("PCI config space has no accessors !"); - return; - } - - pci_mmcfg_mmio->write(len, addr, value); -} - -static void __init pci_mmconfig_remove(struct pci_mmcfg_region *cfg) -{ - if (cfg->res.parent) - release_resource(&cfg->res); - list_del(&cfg->list); - kfree(cfg); -} - -static void __init free_all_mmcfg(void) -{ - struct pci_mmcfg_region *cfg, *tmp; - - pci_mmcfg_arch_free(); - list_for_each_entry_safe(cfg, tmp, &pci_mmcfg_list, list) - pci_mmconfig_remove(cfg); -} - -static void list_add_sorted(struct pci_mmcfg_region *new) -{ - struct pci_mmcfg_region *cfg; - - /* keep list sorted by segment and starting bus number */ - list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list) { - if (cfg->segment > new->segment || - (cfg->segment == new->segment && - cfg->start_bus >= new->start_bus)) { - list_add_tail_rcu(&new->list, &cfg->list); - return; - } - } - list_add_tail_rcu(&new->list, &pci_mmcfg_list); -} - -static struct pci_mmcfg_region *pci_mmconfig_alloc(int segment, int start, - int end, u64 addr) -{ - struct pci_mmcfg_region *new; - struct resource *res; - - if (addr == 0) - return NULL; - - new = kzalloc(sizeof(*new), GFP_KERNEL); - if (!new) - return NULL; - - new->address = addr; - new->segment = segment; - new->start_bus = start; - new->end_bus = end; - - res = &new->res; - res->start = addr + PCI_MMCFG_BUS_OFFSET(start); - res->end = addr + PCI_MMCFG_BUS_OFFSET(end + 1) - 1; - res->flags = IORESOURCE_MEM | IORESOURCE_BUSY; - snprintf(new->name, PCI_MMCFG_RESOURCE_NAME_LEN, - "PCI MMCONFIG %04x [bus %02x-%02x]", segment, start, end); - res->name = new->name; - - return new; -} - -static struct pci_mmcfg_region *__init pci_mmconfig_add(int segment, int start, - int end, u64 addr) -{ - struct pci_mmcfg_region *new; - - new = pci_mmconfig_alloc(segment, start, end, addr); - if (new) { - mutex_lock(&pci_mmcfg_lock); - list_add_sorted(new); - mutex_unlock(&pci_mmcfg_lock); - - pr_info(PREFIX - "MMCONFIG for domain %04x [bus %02x-%02x] at %pR " - "(base %#lx)\n", - segment, start, end, &new->res, (unsigned long)addr); - } - - return new; -} - -struct pci_mmcfg_region *pci_mmconfig_lookup(int segment, int bus) -{ - struct pci_mmcfg_region *cfg; - - list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list) - if (cfg->segment == segment && - cfg->start_bus <= bus && bus <= cfg->end_bus) - return cfg; - - return NULL; -} - static const char *__init pci_mmcfg_e7520(void) { u32 win; @@ -657,8 +490,8 @@ static void __init pci_mmcfg_reject_broken(int early) } }
-static int __init acpi_mcfg_check_entry(struct acpi_table_mcfg *mcfg, - struct acpi_mcfg_allocation *cfg) +int __init acpi_mcfg_check_entry(struct acpi_table_mcfg *mcfg, + struct acpi_mcfg_allocation *cfg) { int year;
@@ -680,50 +513,6 @@ static int __init acpi_mcfg_check_entry(struct acpi_table_mcfg *mcfg, return -EINVAL; }
-static int __init pci_parse_mcfg(struct acpi_table_header *header) -{ - struct acpi_table_mcfg *mcfg; - struct acpi_mcfg_allocation *cfg_table, *cfg; - unsigned long i; - int entries; - - if (!header) - return -EINVAL; - - mcfg = (struct acpi_table_mcfg *)header; - - /* how many config structures do we have */ - free_all_mmcfg(); - entries = 0; - i = header->length - sizeof(struct acpi_table_mcfg); - while (i >= sizeof(struct acpi_mcfg_allocation)) { - entries++; - i -= sizeof(struct acpi_mcfg_allocation); - } - if (entries == 0) { - pr_err(PREFIX "MMCONFIG has no entries\n"); - return -ENODEV; - } - - cfg_table = (struct acpi_mcfg_allocation *) &mcfg[1]; - for (i = 0; i < entries; i++) { - cfg = &cfg_table[i]; - if (acpi_mcfg_check_entry(mcfg, cfg)) { - free_all_mmcfg(); - return -ENODEV; - } - - if (pci_mmconfig_add(cfg->pci_segment, cfg->start_bus_number, - cfg->end_bus_number, cfg->address) == NULL) { - pr_warn(PREFIX "no memory for MCFG entries\n"); - free_all_mmcfg(); - return -ENOMEM; - } - } - - return 0; -} - #ifdef CONFIG_ACPI_APEI extern int (*arch_apei_filter_addr)(int (*func)(__u64 start, __u64 size, void *data), void *data); @@ -782,7 +571,7 @@ void __init pci_mmcfg_early_init(void) if (pci_mmcfg_check_hostbridge()) known_bridge = 1; else - acpi_sfi_table_parse(ACPI_SIG_MCFG, pci_parse_mcfg); + acpi_sfi_table_parse(ACPI_SIG_MCFG, acpi_parse_mcfg); __pci_mmcfg_init(1);
set_apei_filter(); @@ -800,7 +589,7 @@ void __init pci_mmcfg_late_init(void)
/* MMCONFIG hasn't been enabled yet, try again */ if (pci_probe & PCI_PROBE_MASK & ~PCI_PROBE_MMCONF) { - acpi_sfi_table_parse(ACPI_SIG_MCFG, pci_parse_mcfg); + acpi_sfi_table_parse(ACPI_SIG_MCFG, acpi_parse_mcfg); __pci_mmcfg_init(0); } } @@ -916,26 +705,3 @@ error: kfree(cfg); return rc; } - -/* Delete MMCFG information for host bridges */ -int pci_mmconfig_delete(u16 seg, u8 start, u8 end) -{ - struct pci_mmcfg_region *cfg; - - mutex_lock(&pci_mmcfg_lock); - list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list) - if (cfg->segment == seg && cfg->start_bus == start && - cfg->end_bus == end) { - list_del_rcu(&cfg->list); - synchronize_rcu(); - pci_mmcfg_arch_unmap(cfg); - if (cfg->res.parent) - release_resource(&cfg->res); - mutex_unlock(&pci_mmcfg_lock); - kfree(cfg); - return 0; - } - mutex_unlock(&pci_mmcfg_lock); - - return -ENOENT; -} diff --git a/arch/x86/pci/mmconfig_32.c b/arch/x86/pci/mmconfig_32.c index 4b3d025..5cf6291 100644 --- a/arch/x86/pci/mmconfig_32.c +++ b/arch/x86/pci/mmconfig_32.c @@ -12,6 +12,7 @@ #include <linux/pci.h> #include <linux/init.h> #include <linux/rcupdate.h> +#include <linux/ecam.h> #include <asm/e820.h> #include <asm/pci_x86.h>
diff --git a/arch/x86/pci/mmconfig_64.c b/arch/x86/pci/mmconfig_64.c index 032593d..b62ff18 100644 --- a/arch/x86/pci/mmconfig_64.c +++ b/arch/x86/pci/mmconfig_64.c @@ -10,6 +10,7 @@ #include <linux/acpi.h> #include <linux/bitmap.h> #include <linux/rcupdate.h> +#include <linux/ecam.h> #include <asm/e820.h> #include <asm/pci_x86.h>
diff --git a/arch/x86/pci/numachip.c b/arch/x86/pci/numachip.c index 5047e9b..01868b6 100644 --- a/arch/x86/pci/numachip.c +++ b/arch/x86/pci/numachip.c @@ -13,6 +13,7 @@ * */
+#include <linux/ecam.h> #include <linux/pci.h> #include <asm/pci_x86.h>
diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile index 3e4aec3..576fbb1 100644 --- a/drivers/acpi/Makefile +++ b/drivers/acpi/Makefile @@ -68,6 +68,7 @@ obj-$(CONFIG_ACPI_BUTTON) += button.o obj-$(CONFIG_ACPI_FAN) += fan.o obj-$(CONFIG_ACPI_VIDEO) += video.o obj-$(CONFIG_ACPI_PCI_SLOT) += pci_slot.o +obj-$(CONFIG_PCI_MMCONFIG) += mcfg.o obj-$(CONFIG_ACPI_PROCESSOR) += processor.o obj-y += container.o obj-$(CONFIG_ACPI_THERMAL) += thermal.o diff --git a/drivers/acpi/mcfg.c b/drivers/acpi/mcfg.c new file mode 100644 index 0000000..63775af --- /dev/null +++ b/drivers/acpi/mcfg.c @@ -0,0 +1,57 @@ +/* + * MCFG ACPI table parser. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ + +#include <linux/acpi.h> +#include <linux/ecam.h> + +#define PREFIX "MCFG: " + +int __init acpi_parse_mcfg(struct acpi_table_header *header) +{ + struct acpi_table_mcfg *mcfg; + struct acpi_mcfg_allocation *cfg_table, *cfg; + unsigned long i; + int entries; + + if (!header) + return -EINVAL; + + mcfg = (struct acpi_table_mcfg *)header; + + /* how many config structures do we have */ + free_all_mmcfg(); + entries = 0; + i = header->length - sizeof(struct acpi_table_mcfg); + while (i >= sizeof(struct acpi_mcfg_allocation)) { + entries++; + i -= sizeof(struct acpi_mcfg_allocation); + } + if (entries == 0) { + pr_err(PREFIX "MCFG table has no entries\n"); + return -ENODEV; + } + + cfg_table = (struct acpi_mcfg_allocation *) &mcfg[1]; + for (i = 0; i < entries; i++) { + cfg = &cfg_table[i]; + if (acpi_mcfg_check_entry(mcfg, cfg)) { + free_all_mmcfg(); + return -ENODEV; + } + + if (pci_mmconfig_add(cfg->pci_segment, cfg->start_bus_number, + cfg->end_bus_number, cfg->address) == NULL) { + pr_warn(PREFIX "no memory for MCFG entries\n"); + free_all_mmcfg(); + return -ENOMEM; + } + } + + return 0; +} diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig index 7a8f1c5..90a5fb9 100644 --- a/drivers/pci/Kconfig +++ b/drivers/pci/Kconfig @@ -22,6 +22,13 @@ config PCI_MSI_IRQ_DOMAIN depends on PCI_MSI select GENERIC_MSI_IRQ_DOMAIN
+config PCI_ECAM + bool "Enhanced Configuration Access Mechanism (ECAM)" + depends on PCI && HAVE_PCI_ECAM + +config HAVE_PCI_ECAM + bool + config PCI_DEBUG bool "PCI Debugging" depends on PCI && DEBUG_KERNEL diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile index 73e4af4..ce7b630 100644 --- a/drivers/pci/Makefile +++ b/drivers/pci/Makefile @@ -41,6 +41,11 @@ obj-$(CONFIG_SPARC_LEON) += setup-irq.o obj-$(CONFIG_M68K) += setup-irq.o
# +# Enhanced Configuration Access Mechanism (ECAM) +# +obj-$(CONFIG_PCI_ECAM) += ecam.o + +# # ACPI Related PCI FW Functions # ACPI _DSM provided firmware instance and string name # diff --git a/drivers/pci/ecam.c b/drivers/pci/ecam.c new file mode 100644 index 0000000..c588234 --- /dev/null +++ b/drivers/pci/ecam.c @@ -0,0 +1,245 @@ +/* + * Arch agnostic direct PCI config space access via + * ECAM (Enhanced Configuration Access Mechanism) + * + * Per-architecture code takes care of the mappings, region validation and + * accesses themselves. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ + +#include <linux/mutex.h> +#include <linux/rculist.h> +#include <linux/ecam.h> + +#include <asm/io.h> + +#define PREFIX "PCI: " + +static DEFINE_MUTEX(pci_mmcfg_lock); + +LIST_HEAD(pci_mmcfg_list); + +static u32 +pci_mmconfig_generic_read(int len, void __iomem *addr) +{ + u32 data = 0; + + switch (len) { + case 1: + data = readb(addr); + break; + case 2: + data = readw(addr); + break; + case 4: + data = readl(addr); + break; + } + + return data; +} + +static void +pci_mmconfig_generic_write(int len, void __iomem *addr, u32 value) +{ + switch (len) { + case 1: + writeb(value, addr); + break; + case 2: + writew(value, addr); + break; + case 4: + writel(value, addr); + break; + } +} + +static struct pci_mmcfg_mmio_ops pci_mmcfg_mmio_default = { + .read = pci_mmconfig_generic_read, + .write = pci_mmconfig_generic_write, +}; + +static struct pci_mmcfg_mmio_ops *pci_mmcfg_mmio = &pci_mmcfg_mmio_default; + +void +pci_mmconfig_register_mmio(struct pci_mmcfg_mmio_ops *ops) +{ + pci_mmcfg_mmio = ops; +} + +u32 +pci_mmio_read(int len, void __iomem *addr) +{ + if (!pci_mmcfg_mmio) { + pr_err("PCI config space has no accessors !"); + return 0; + } + + return pci_mmcfg_mmio->read(len, addr); +} + +void +pci_mmio_write(int len, void __iomem *addr, u32 value) +{ + if (!pci_mmcfg_mmio) { + pr_err("PCI config space has no accessors !"); + return; + } + + pci_mmcfg_mmio->write(len, addr, value); +} + +static void __init pci_mmconfig_remove(struct pci_mmcfg_region *cfg) +{ + if (cfg->res.parent) + release_resource(&cfg->res); + list_del(&cfg->list); + kfree(cfg); +} + +void __init free_all_mmcfg(void) +{ + struct pci_mmcfg_region *cfg, *tmp; + + pci_mmcfg_arch_free(); + list_for_each_entry_safe(cfg, tmp, &pci_mmcfg_list, list) + pci_mmconfig_remove(cfg); +} + +void list_add_sorted(struct pci_mmcfg_region *new) +{ + struct pci_mmcfg_region *cfg; + + /* keep list sorted by segment and starting bus number */ + list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list) { + if (cfg->segment > new->segment || + (cfg->segment == new->segment && + cfg->start_bus >= new->start_bus)) { + list_add_tail_rcu(&new->list, &cfg->list); + return; + } + } + list_add_tail_rcu(&new->list, &pci_mmcfg_list); +} + +struct pci_mmcfg_region *pci_mmconfig_alloc(int segment, int start, + int end, u64 addr) +{ + struct pci_mmcfg_region *new; + struct resource *res; + + if (addr == 0) + return NULL; + + new = kzalloc(sizeof(*new), GFP_KERNEL); + if (!new) + return NULL; + + new->address = addr; + new->segment = segment; + new->start_bus = start; + new->end_bus = end; + + res = &new->res; + res->start = addr + PCI_MMCFG_BUS_OFFSET(start); + res->end = addr + PCI_MMCFG_BUS_OFFSET(end + 1) - 1; + res->flags = IORESOURCE_MEM | IORESOURCE_BUSY; + snprintf(new->name, PCI_MMCFG_RESOURCE_NAME_LEN, + "PCI 
MMCONFIG %04x [bus %02x-%02x]", segment, start, end); + res->name = new->name; + + return new; +} + +struct pci_mmcfg_region *pci_mmconfig_add(int segment, int start, + int end, u64 addr) +{ + struct pci_mmcfg_region *new; + + new = pci_mmconfig_alloc(segment, start, end, addr); + if (new) { + mutex_lock(&pci_mmcfg_lock); + list_add_sorted(new); + mutex_unlock(&pci_mmcfg_lock); + + pr_info(PREFIX + "MMCONFIG for domain %04x [bus %02x-%02x] at %pR " + "(base %#lx)\n", + segment, start, end, &new->res, (unsigned long)addr); + } + + return new; +} + +struct pci_mmcfg_region *pci_mmconfig_lookup(int segment, int bus) +{ + struct pci_mmcfg_region *cfg; + + list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list) + if (cfg->segment == segment && + cfg->start_bus <= bus && bus <= cfg->end_bus) + return cfg; + + return NULL; +} + +/* Delete MMCFG information for host bridges */ +int pci_mmconfig_delete(u16 seg, u8 start, u8 end) +{ + struct pci_mmcfg_region *cfg; + + mutex_lock(&pci_mmcfg_lock); + list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list) + if (cfg->segment == seg && cfg->start_bus == start && + cfg->end_bus == end) { + list_del_rcu(&cfg->list); + synchronize_rcu(); + pci_mmcfg_arch_unmap(cfg); + if (cfg->res.parent) + release_resource(&cfg->res); + mutex_unlock(&pci_mmcfg_lock); + kfree(cfg); + return 0; + } + mutex_unlock(&pci_mmcfg_lock); + + return -ENOENT; +} + +int pci_mmconfig_inject(struct pci_mmcfg_region *cfg) +{ + struct pci_mmcfg_region *cfg_conflict; + int err = 0; + + mutex_lock(&pci_mmcfg_lock); + cfg_conflict = pci_mmconfig_lookup(cfg->segment, cfg->start_bus); + if (cfg_conflict) { + if (cfg_conflict->end_bus < cfg->end_bus) + pr_info(FW_INFO "MMCONFIG for " + "domain %04x [bus %02x-%02x] " + "only partially covers this bridge\n", + cfg_conflict->segment, cfg_conflict->start_bus, + cfg_conflict->end_bus); + err = -EEXIST; + goto out; + } + + if (pci_mmcfg_arch_map(cfg)) { + pr_warn("fail to map MMCONFIG %pR.\n", &cfg->res); + err = -ENOMEM; + goto out; + } else { + list_add_sorted(cfg); + pr_info("MMCONFIG at %pR (base %#lx)\n", + &cfg->res, (unsigned long)cfg->address); + + } +out: + mutex_unlock(&pci_mmcfg_lock); + return err; +} diff --git a/drivers/xen/pci.c b/drivers/xen/pci.c index 7494dbe..6785ebb 100644 --- a/drivers/xen/pci.c +++ b/drivers/xen/pci.c @@ -20,6 +20,7 @@ #include <linux/pci.h> #include <linux/acpi.h> #include <linux/pci-acpi.h> +#include <linux/ecam.h> #include <xen/xen.h> #include <xen/interface/physdev.h> #include <xen/interface/xen.h> diff --git a/include/linux/acpi.h b/include/linux/acpi.h index b904af3..5063429 100644 --- a/include/linux/acpi.h +++ b/include/linux/acpi.h @@ -144,6 +144,8 @@ int acpi_table_parse_madt(enum acpi_madt_type id, acpi_tbl_entry_handler handler, unsigned int max_entries); int acpi_parse_mcfg (struct acpi_table_header *header); +int acpi_mcfg_check_entry(struct acpi_table_mcfg *mcfg, + struct acpi_mcfg_allocation *cfg); void acpi_table_print_madt_entry (struct acpi_subtable_header *madt);
/* the following four functions are architecture-dependent */ diff --git a/include/linux/ecam.h b/include/linux/ecam.h new file mode 100644 index 0000000..2387df5 --- /dev/null +++ b/include/linux/ecam.h @@ -0,0 +1,51 @@ +#ifndef __ECAM_H +#define __ECAM_H +#ifdef __KERNEL__ + +#include <linux/types.h> +#include <linux/acpi.h> + +/* "PCI MMCONFIG %04x [bus %02x-%02x]" */ +#define PCI_MMCFG_RESOURCE_NAME_LEN (22 + 4 + 2 + 2) + +struct pci_mmcfg_region { + struct list_head list; + struct resource res; + u64 address; + char __iomem *virt; + u16 segment; + u8 start_bus; + u8 end_bus; + char name[PCI_MMCFG_RESOURCE_NAME_LEN]; +}; + +struct pci_mmcfg_mmio_ops { + u32 (*read)(int len, void __iomem *addr); + void (*write)(int len, void __iomem *addr, u32 value); +}; + +struct pci_mmcfg_region *pci_mmconfig_lookup(int segment, int bus); +struct pci_mmcfg_region *pci_mmconfig_alloc(int segment, int start, + int end, u64 addr); +int pci_mmconfig_inject(struct pci_mmcfg_region *cfg); +struct pci_mmcfg_region *pci_mmconfig_add(int segment, int start, + int end, u64 addr); +void list_add_sorted(struct pci_mmcfg_region *new); +void free_all_mmcfg(void); +int pci_mmconfig_delete(u16 seg, u8 start, u8 end); + +/* Arch specific calls */ +int pci_mmcfg_arch_init(void); +void pci_mmcfg_arch_free(void); +int pci_mmcfg_arch_map(struct pci_mmcfg_region *cfg); +void pci_mmcfg_arch_unmap(struct pci_mmcfg_region *cfg); +extern u32 pci_mmio_read(int len, void __iomem *addr); +extern void pci_mmio_write(int len, void __iomem *addr, u32 value); +extern void pci_mmconfig_register_mmio(struct pci_mmcfg_mmio_ops *ops); + +extern struct list_head pci_mmcfg_list; + +#define PCI_MMCFG_BUS_OFFSET(bus) ((bus) << 20) + +#endif /* __KERNEL__ */ +#endif /* __ECAM_H */
On Tue, May 26, 2015 at 01:49:18PM +0100, Hanjun Guo wrote:
From: Tomasz Nowicki tomasz.nowicki@linaro.org
ECAM standard and MCFG table are architecture independent and it makes sense to share common code across all architectures. Both are going to corresponding files - ecam.c and mcfg.c
While we are here, rename pci_parse_mcfg to acpi_parse_mcfg. We already have an acpi_parse_mcfg prototype which is used nowhere, and pci_parse_mcfg needs to become global anyway, so acpi_parse_mcfg fits perfectly here.
Signed-off-by: Tomasz Nowicki tomasz.nowicki@linaro.org Signed-off-by: Hanjun Guo hanjun.guo@linaro.org Tested-by: Suravee Suthikulpanit Suravee.Suthikulpanit@amd.com
arch/x86/Kconfig | 3 + arch/x86/include/asm/pci_x86.h | 33 ------ arch/x86/pci/acpi.c | 1 + arch/x86/pci/mmconfig-shared.c | 244 +--------------------------------------- arch/x86/pci/mmconfig_32.c | 1 + arch/x86/pci/mmconfig_64.c | 1 + arch/x86/pci/numachip.c | 1 + drivers/acpi/Makefile | 1 + drivers/acpi/mcfg.c | 57 ++++++++++ drivers/pci/Kconfig | 7 ++ drivers/pci/Makefile | 5 + drivers/pci/ecam.c | 245 +++++++++++++++++++++++++++++++++++++++++
Why can't we make use of the ECAM implementation used by pci-host-generic and drivers/pci/access.c?
Will
On 26.05.2015 19:08, Will Deacon wrote:
On Tue, May 26, 2015 at 01:49:18PM +0100, Hanjun Guo wrote:
From: Tomasz Nowicki tomasz.nowicki@linaro.org
ECAM standard and MCFG table are architecture independent and it makes sense to share common code across all architectures. Both are going to corresponding files - ecam.c and mcfg.c
While we are here, rename pci_parse_mcfg to acpi_parse_mcfg. We already have acpi_parse_mcfg prototype which is used nowhere. At the same time, we need pci_parse_mcfg been global so acpi_parse_mcfg can be used perfectly here.
Signed-off-by: Tomasz Nowicki tomasz.nowicki@linaro.org Signed-off-by: Hanjun Guo hanjun.guo@linaro.org Tested-by: Suravee Suthikulpanit Suravee.Suthikulpanit@amd.com
arch/x86/Kconfig | 3 + arch/x86/include/asm/pci_x86.h | 33 ------ arch/x86/pci/acpi.c | 1 + arch/x86/pci/mmconfig-shared.c | 244 +--------------------------------------- arch/x86/pci/mmconfig_32.c | 1 + arch/x86/pci/mmconfig_64.c | 1 + arch/x86/pci/numachip.c | 1 + drivers/acpi/Makefile | 1 + drivers/acpi/mcfg.c | 57 ++++++++++ drivers/pci/Kconfig | 7 ++ drivers/pci/Makefile | 5 + drivers/pci/ecam.c | 245 +++++++++++++++++++++++++++++++++++++++++
Why can't we make use of the ECAM implementation used by pci-host-generic and drivers/pci/access.c?
We had that question when I posted the MMCFG patch set separately; please see: https://lkml.org/lkml/2015/3/11/492
Tomasz
On Wed, May 27, 2015 at 09:06:26AM +0100, Tomasz Nowicki wrote:
On 26.05.2015 19:08, Will Deacon wrote:
On Tue, May 26, 2015 at 01:49:18PM +0100, Hanjun Guo wrote:
From: Tomasz Nowicki tomasz.nowicki@linaro.org
ECAM standard and MCFG table are architecture independent and it makes sense to share common code across all architectures. Both are going to corresponding files - ecam.c and mcfg.c
While we are here, rename pci_parse_mcfg to acpi_parse_mcfg. We already have acpi_parse_mcfg prototype which is used nowhere. At the same time, we need pci_parse_mcfg been global so acpi_parse_mcfg can be used perfectly here.
Signed-off-by: Tomasz Nowicki tomasz.nowicki@linaro.org Signed-off-by: Hanjun Guo hanjun.guo@linaro.org Tested-by: Suravee Suthikulpanit Suravee.Suthikulpanit@amd.com
arch/x86/Kconfig | 3 + arch/x86/include/asm/pci_x86.h | 33 ------ arch/x86/pci/acpi.c | 1 + arch/x86/pci/mmconfig-shared.c | 244 +--------------------------------------- arch/x86/pci/mmconfig_32.c | 1 + arch/x86/pci/mmconfig_64.c | 1 + arch/x86/pci/numachip.c | 1 + drivers/acpi/Makefile | 1 + drivers/acpi/mcfg.c | 57 ++++++++++ drivers/pci/Kconfig | 7 ++ drivers/pci/Makefile | 5 + drivers/pci/ecam.c | 245 +++++++++++++++++++++++++++++++++++++++++
Why can't we make use of the ECAM implementation used by pci-host-generic and drivers/pci/access.c?
We had that question when I had posted MMCFG patch set separately, please see: https://lkml.org/lkml/2015/3/11/492
Yes, but the real question is, why do we need to have PCI config space up and running before a bus struct is even created? I think the reason is the PCI configuration address space format (ACPI 6.0, Table 5-27, page 108):
"PCI Configuration space addresses must be confined to devices on PCI Segment Group 0, bus 0. This restriction exists to accommodate access to fixed hardware prior to PCI bus enumeration".
On HW reduced platforms I do not think this is even required at all; we have to look into this to avoid code duplication that might well turn out to be useless.
Lorenzo
Hi Lorenzo,
On 2015年06月02日 21:32, Lorenzo Pieralisi wrote:
On Wed, May 27, 2015 at 09:06:26AM +0100, Tomasz Nowicki wrote:
On 26.05.2015 19:08, Will Deacon wrote:
On Tue, May 26, 2015 at 01:49:18PM +0100, Hanjun Guo wrote:
From: Tomasz Nowicki tomasz.nowicki@linaro.org
ECAM standard and MCFG table are architecture independent and it makes sense to share common code across all architectures. Both are going to corresponding files - ecam.c and mcfg.c
While we are here, rename pci_parse_mcfg to acpi_parse_mcfg. We already have acpi_parse_mcfg prototype which is used nowhere. At the same time, we need pci_parse_mcfg been global so acpi_parse_mcfg can be used perfectly here.
Signed-off-by: Tomasz Nowicki tomasz.nowicki@linaro.org Signed-off-by: Hanjun Guo hanjun.guo@linaro.org Tested-by: Suravee Suthikulpanit Suravee.Suthikulpanit@amd.com
arch/x86/Kconfig | 3 + arch/x86/include/asm/pci_x86.h | 33 ------ arch/x86/pci/acpi.c | 1 + arch/x86/pci/mmconfig-shared.c | 244 +--------------------------------------- arch/x86/pci/mmconfig_32.c | 1 + arch/x86/pci/mmconfig_64.c | 1 + arch/x86/pci/numachip.c | 1 + drivers/acpi/Makefile | 1 + drivers/acpi/mcfg.c | 57 ++++++++++ drivers/pci/Kconfig | 7 ++ drivers/pci/Makefile | 5 + drivers/pci/ecam.c | 245 +++++++++++++++++++++++++++++++++++++++++
Why can't we make use of the ECAM implementation used by pci-host-generic and drivers/pci/access.c?
We had that question when I had posted MMCFG patch set separately, please see: https://lkml.org/lkml/2015/3/11/492
Yes, but the real question is, why do we need to have PCI config space up and running before a bus struct is even created ? I think the reason is the PCI configuration address space format (ACPI 6.0, Table 5-27, page 108):
"PCI Configuration space addresses must be confined to devices on PCI Segment Group 0, bus 0. This restriction exists to accommodate access to fixed hardware prior to PCI bus enumeration".
On HW reduced platforms I do not even think this is required at all, we have to look into this to avoid code duplication that might well turn out useless.
This is only for fixed hardware, which will not be available for ARM64 (reduced hardware mode); but in the Generic Hardware Programming Model we use OEM-provided ACPI Machine Language (AML) code to access generic hardware registers, and that will be available for reduced hardware too.
So in ACPI spec, it says: (ACPI 6.0 page 66, last paragraph)
ACPI defines eight address spaces that may be accessed by generic hardware implementations. These include:
- System I/O space
- System memory space
- PCI configuration space
- Embedded controller space
- System Management Bus (SMBus) space
- CMOS
- PCI BAR Target
- IPMI space
So if any device uses the PCI address space for control, such as a system reset control device, its address space can reside in PCI configuration space (who can prevent an OEM from doing that crazy thing? :) ), and it should be accessible before the PCI bus is created.
Thanks Hanjun
Hi Hanjun,
On Thu, Jun 04, 2015 at 10:28:17AM +0100, Hanjun Guo wrote:
Hi Lorenzo,
On 2015???06???02??? 21:32, Lorenzo Pieralisi wrote:
On Wed, May 27, 2015 at 09:06:26AM +0100, Tomasz Nowicki wrote:
On 26.05.2015 19:08, Will Deacon wrote:
On Tue, May 26, 2015 at 01:49:18PM +0100, Hanjun Guo wrote:
From: Tomasz Nowicki tomasz.nowicki@linaro.org
ECAM standard and MCFG table are architecture independent and it makes sense to share common code across all architectures. Both are going to corresponding files - ecam.c and mcfg.c
While we are here, rename pci_parse_mcfg to acpi_parse_mcfg. We already have acpi_parse_mcfg prototype which is used nowhere. At the same time, we need pci_parse_mcfg been global so acpi_parse_mcfg can be used perfectly here.
Signed-off-by: Tomasz Nowicki tomasz.nowicki@linaro.org Signed-off-by: Hanjun Guo hanjun.guo@linaro.org Tested-by: Suravee Suthikulpanit Suravee.Suthikulpanit@amd.com
arch/x86/Kconfig | 3 + arch/x86/include/asm/pci_x86.h | 33 ------ arch/x86/pci/acpi.c | 1 + arch/x86/pci/mmconfig-shared.c | 244 +--------------------------------------- arch/x86/pci/mmconfig_32.c | 1 + arch/x86/pci/mmconfig_64.c | 1 + arch/x86/pci/numachip.c | 1 + drivers/acpi/Makefile | 1 + drivers/acpi/mcfg.c | 57 ++++++++++ drivers/pci/Kconfig | 7 ++ drivers/pci/Makefile | 5 + drivers/pci/ecam.c | 245 +++++++++++++++++++++++++++++++++++++++++
Why can't we make use of the ECAM implementation used by pci-host-generic and drivers/pci/access.c?
We had that question when I had posted MMCFG patch set separately, please see: https://lkml.org/lkml/2015/3/11/492
Yes, but the real question is, why do we need to have PCI config space up and running before a bus struct is even created ? I think the reason is the PCI configuration address space format (ACPI 6.0, Table 5-27, page 108):
"PCI Configuration space addresses must be confined to devices on PCI Segment Group 0, bus 0. This restriction exists to accommodate access to fixed hardware prior to PCI bus enumeration".
On HW reduced platforms I do not even think this is required at all, we have to look into this to avoid code duplication that might well turn out useless.
This is only for the fixed hardware, which will be not available for ARM64 (reduced hardware mode), but in Generic Hardware Programming Model, we using OEM-provided ACPI Machine Language (AML) code to access generic hardware registers, this will be available for reduced hardware too.
So in ACPI spec, it says: (ACPI 6.0 page 66, last paragraph)
ACPI defines eight address spaces that may be accessed by generic hardware implementations. These include:
- System I/O space
- System memory space
- PCI configuration space
- Embedded controller space
- System Management Bus (SMBus) space
- CMOS
- PCI BAR Target
- IPMI space
So if any device using the PCI address space for control, such as a system reset control device, its address space can be reside in PCI configuration space (who can prevent a OEM do that crazy thing? :) ), and it should be accessible before the PCI bus is created.
Us, by changing attitude and questioning features whose usefulness is questionable. I will look into this and raise the point; I am not thrilled by the idea of adding another set of PCI accessor functions and drivers just because we have to access a register through PCI before enumerating the bus (and on arm64 this is totally useless since we are not meant to support fixed HW anyway). Maybe we can make the ACPICA code use a "special" stub (ACPI specific; the PCI configuration address space has restrictions anyway). I have to review this set in its entirety to see how to do that (and I would kindly ask you to do it too, before saying it is not possible to implement).
Thanks, Lorenzo
On 2015年06月04日 18:22, Lorenzo Pieralisi wrote:
Hi Hanjun,
On Thu, Jun 04, 2015 at 10:28:17AM +0100, Hanjun Guo wrote:
Hi Lorenzo,
On 2015???06???02??? 21:32, Lorenzo Pieralisi wrote:
On Wed, May 27, 2015 at 09:06:26AM +0100, Tomasz Nowicki wrote:
On 26.05.2015 19:08, Will Deacon wrote:
On Tue, May 26, 2015 at 01:49:18PM +0100, Hanjun Guo wrote:
From: Tomasz Nowicki tomasz.nowicki@linaro.org
ECAM standard and MCFG table are architecture independent and it makes sense to share common code across all architectures. Both are going to corresponding files - ecam.c and mcfg.c
While we are here, rename pci_parse_mcfg to acpi_parse_mcfg. We already have acpi_parse_mcfg prototype which is used nowhere. At the same time, we need pci_parse_mcfg been global so acpi_parse_mcfg can be used perfectly here.
Signed-off-by: Tomasz Nowicki tomasz.nowicki@linaro.org Signed-off-by: Hanjun Guo hanjun.guo@linaro.org Tested-by: Suravee Suthikulpanit Suravee.Suthikulpanit@amd.com
arch/x86/Kconfig | 3 + arch/x86/include/asm/pci_x86.h | 33 ------ arch/x86/pci/acpi.c | 1 + arch/x86/pci/mmconfig-shared.c | 244 +--------------------------------------- arch/x86/pci/mmconfig_32.c | 1 + arch/x86/pci/mmconfig_64.c | 1 + arch/x86/pci/numachip.c | 1 + drivers/acpi/Makefile | 1 + drivers/acpi/mcfg.c | 57 ++++++++++ drivers/pci/Kconfig | 7 ++ drivers/pci/Makefile | 5 + drivers/pci/ecam.c | 245 +++++++++++++++++++++++++++++++++++++++++
Why can't we make use of the ECAM implementation used by pci-host-generic and drivers/pci/access.c?
We had that question when I had posted MMCFG patch set separately, please see: https://lkml.org/lkml/2015/3/11/492
Yes, but the real question is, why do we need to have PCI config space up and running before a bus struct is even created ? I think the reason is the PCI configuration address space format (ACPI 6.0, Table 5-27, page 108):
"PCI Configuration space addresses must be confined to devices on PCI Segment Group 0, bus 0. This restriction exists to accommodate access to fixed hardware prior to PCI bus enumeration".
On HW reduced platforms I do not even think this is required at all, we have to look into this to avoid code duplication that might well turn out useless.
This is only for the fixed hardware, which will be not available for ARM64 (reduced hardware mode), but in Generic Hardware Programming Model, we using OEM-provided ACPI Machine Language (AML) code to access generic hardware registers, this will be available for reduced hardware too.
So in ACPI spec, it says: (ACPI 6.0 page 66, last paragraph)
ACPI defines eight address spaces that may be accessed by generic hardware implementations. These include:
- System I/O space
- System memory space
- PCI configuration space
- Embedded controller space
- System Management Bus (SMBus) space
- CMOS
- PCI BAR Target
- IPMI space
So if any device using the PCI address space for control, such as a system reset control device, its address space can be reside in PCI configuration space (who can prevent a OEM do that crazy thing? :) ), and it should be accessible before the PCI bus is created.
Us, by changing attitude and questioning features whose usefulness is questionable. I will look into this and raise the point, I am not thrilled by the idea of adding another set of PCI accessor functions and drivers because we have to access a register through PCI before enumerating the bus (and on arm64 this is totally useless since we are not meant to support fixed HW anyway). Maybe we can make acpica code use a "special" stub (ACPI specific, PCI configuration space address space has restrictions anyway), I have to review this set in its entirety to see how to do that (and I would kindly ask you to do it too, before saying it is not possible to implement it).
I'm willing to do that. Actually, if we don't need a mechanism to access PCI config space before the bus is created, the code can be simplified a lot.
Thanks for your help and patience.
Hanjun
On 2015年06月04日 20:28, Hanjun Guo wrote:
On 2015年06月04日 18:22, Lorenzo Pieralisi wrote:
Hi Hanjun,
On Thu, Jun 04, 2015 at 10:28:17AM +0100, Hanjun Guo wrote:
Hi Lorenzo,
On 2015???06???02??? 21:32, Lorenzo Pieralisi wrote:
On Wed, May 27, 2015 at 09:06:26AM +0100, Tomasz Nowicki wrote:
On 26.05.2015 19:08, Will Deacon wrote:
On Tue, May 26, 2015 at 01:49:18PM +0100, Hanjun Guo wrote: > From: Tomasz Nowicki tomasz.nowicki@linaro.org > > ECAM standard and MCFG table are architecture independent and it > makes > sense to share common code across all architectures. Both are > going to > corresponding files - ecam.c and mcfg.c > > While we are here, rename pci_parse_mcfg to acpi_parse_mcfg. > We already have acpi_parse_mcfg prototype which is used nowhere. > At the same time, we need pci_parse_mcfg been global so > acpi_parse_mcfg > can be used perfectly here. > > Signed-off-by: Tomasz Nowicki tomasz.nowicki@linaro.org > Signed-off-by: Hanjun Guo hanjun.guo@linaro.org > Tested-by: Suravee Suthikulpanit Suravee.Suthikulpanit@amd.com > --- > arch/x86/Kconfig | 3 + > arch/x86/include/asm/pci_x86.h | 33 ------ > arch/x86/pci/acpi.c | 1 + > arch/x86/pci/mmconfig-shared.c | 244 > +--------------------------------------- > arch/x86/pci/mmconfig_32.c | 1 + > arch/x86/pci/mmconfig_64.c | 1 + > arch/x86/pci/numachip.c | 1 + > drivers/acpi/Makefile | 1 + > drivers/acpi/mcfg.c | 57 ++++++++++ > drivers/pci/Kconfig | 7 ++ > drivers/pci/Makefile | 5 + > drivers/pci/ecam.c | 245 > +++++++++++++++++++++++++++++++++++++++++
Why can't we make use of the ECAM implementation used by pci-host-generic and drivers/pci/access.c?
We had that question when I had posted MMCFG patch set separately, please see: https://lkml.org/lkml/2015/3/11/492
Yes, but the real question is, why do we need to have PCI config space up and running before a bus struct is even created ? I think the reason is the PCI configuration address space format (ACPI 6.0, Table 5-27, page 108):
"PCI Configuration space addresses must be confined to devices on PCI Segment Group 0, bus 0. This restriction exists to accommodate access to fixed hardware prior to PCI bus enumeration".
On HW reduced platforms I do not even think this is required at all, we have to look into this to avoid code duplication that might well turn out useless.
This is only for the fixed hardware, which will be not available for ARM64 (reduced hardware mode), but in Generic Hardware Programming Model, we using OEM-provided ACPI Machine Language (AML) code to access generic hardware registers, this will be available for reduced hardware too.
So in ACPI spec, it says: (ACPI 6.0 page 66, last paragraph)
ACPI defines eight address spaces that may be accessed by generic hardware implementations. These include:
- System I/O space
- System memory space
- PCI configuration space
- Embedded controller space
- System Management Bus (SMBus) space
- CMOS
- PCI BAR Target
- IPMI space
So if any device using the PCI address space for control, such as a system reset control device, its address space can be reside in PCI configuration space (who can prevent a OEM do that crazy thing? :) ), and it should be accessible before the PCI bus is created.
Us, by changing attitude and questioning features whose usefulness is questionable. I will look into this and raise the point, I am not thrilled by the idea of adding another set of PCI accessor functions and drivers because we have to access a register through PCI before enumerating the bus (and on arm64 this is totally useless since we are not meant to support fixed HW anyway). Maybe we can make acpica code use a "special" stub (ACPI specific, PCI configuration space address space has restrictions anyway), I have to review this set in its entirety to see how to do that (and I would kindly ask you to do it too, before saying it is not possible to implement it).
I'm willing to do that, actually, if we don't need a mechanism to access PCI config space before the bus is created, the code can be simplified a lot.
After more investigation of the spec and the ACPI core code, I'm still not convinced that accessing PCI config space before the PCI bus is created can be ruled out, and there is not enough ARM64 hardware to prove that either. But I think we can go this way: reuse the ECAM implementation from pci-host-generic for now, and implement the pre-enumeration PCI accessor functions when they are needed in the future. Does that make sense?
Thanks Hanjun
On Mon, Jun 08, 2015 at 03:57:38AM +0100, Hanjun Guo wrote:
[...]
> Why can't we make use of the ECAM implementation used by > pci-host-generic > and drivers/pci/access.c?
We had that question when I had posted MMCFG patch set separately, please see: https://lkml.org/lkml/2015/3/11/492
Yes, but the real question is, why do we need to have PCI config space up and running before a bus struct is even created ? I think the reason is the PCI configuration address space format (ACPI 6.0, Table 5-27, page 108):
"PCI Configuration space addresses must be confined to devices on PCI Segment Group 0, bus 0. This restriction exists to accommodate access to fixed hardware prior to PCI bus enumeration".
On HW reduced platforms I do not even think this is required at all, we have to look into this to avoid code duplication that might well turn out useless.
This is only for the fixed hardware, which will be not available for ARM64 (reduced hardware mode), but in Generic Hardware Programming Model, we using OEM-provided ACPI Machine Language (AML) code to access generic hardware registers, this will be available for reduced hardware too.
So in ACPI spec, it says: (ACPI 6.0 page 66, last paragraph)
ACPI defines eight address spaces that may be accessed by generic hardware implementations. These include:
- System I/O space
- System memory space
- PCI configuration space
- Embedded controller space
- System Management Bus (SMBus) space
- CMOS
- PCI BAR Target
- IPMI space
So if any device using the PCI address space for control, such as a system reset control device, its address space can be reside in PCI configuration space (who can prevent a OEM do that crazy thing? :) ), and it should be accessible before the PCI bus is created.
Us, by changing attitude and questioning features whose usefulness is questionable. I will look into this and raise the point, I am not thrilled by the idea of adding another set of PCI accessor functions and drivers because we have to access a register through PCI before enumerating the bus (and on arm64 this is totally useless since we are not meant to support fixed HW anyway). Maybe we can make acpica code use a "special" stub (ACPI specific, PCI configuration space address space has restrictions anyway), I have to review this set in its entirety to see how to do that (and I would kindly ask you to do it too, before saying it is not possible to implement it).
I'm willing to do that, actually, if we don't need a mechanism to access PCI config space before the bus is created, the code can be simplified a lot.
After more investigation on the spec and the ACPI core code, I'm still not convinced that accessing to PCI config space before PCI bus creating is impossible, also there is no enough ARM64 hardware to prove that too. But I think we can go in this way, reuse the ECAM implementation by pci-host-generic for now, and implement the PCI accessor functions before enumerating PCI bus when needed in the future, does it make sense?
You mean we rewrite the patch to make sure we can use the PCI host generic driver with MCFG, and we leave the ACPICA PCI config calls as empty stubs on arm64 (as they are now)?
Thanks, Lorenzo
On 08.06.2015 17:14, Lorenzo Pieralisi wrote:
On Mon, Jun 08, 2015 at 03:57:38AM +0100, Hanjun Guo wrote:
[...]
>> Why can't we make use of the ECAM implementation used by >> pci-host-generic >> and drivers/pci/access.c? > > We had that question when I had posted MMCFG patch set separately, > please see: > https://lkml.org/lkml/2015/3/11/492
Yes, but the real question is, why do we need to have PCI config space up and running before a bus struct is even created ? I think the reason is the PCI configuration address space format (ACPI 6.0, Table 5-27, page 108):
"PCI Configuration space addresses must be confined to devices on PCI Segment Group 0, bus 0. This restriction exists to accommodate access to fixed hardware prior to PCI bus enumeration".
On HW reduced platforms I do not even think this is required at all, we have to look into this to avoid code duplication that might well turn out useless.
This is only for the fixed hardware, which will be not available for ARM64 (reduced hardware mode), but in Generic Hardware Programming Model, we using OEM-provided ACPI Machine Language (AML) code to access generic hardware registers, this will be available for reduced hardware too.
So in ACPI spec, it says: (ACPI 6.0 page 66, last paragraph)
ACPI defines eight address spaces that may be accessed by generic hardware implementations. These include:
- System I/O space
- System memory space
- PCI configuration space
- Embedded controller space
- System Management Bus (SMBus) space
- CMOS
- PCI BAR Target
- IPMI space
So if any device using the PCI address space for control, such as a system reset control device, its address space can be reside in PCI configuration space (who can prevent a OEM do that crazy thing? :) ), and it should be accessible before the PCI bus is created.
Us, by changing attitude and questioning features whose usefulness is questionable. I will look into this and raise the point, I am not thrilled by the idea of adding another set of PCI accessor functions and drivers because we have to access a register through PCI before enumerating the bus (and on arm64 this is totally useless since we are not meant to support fixed HW anyway). Maybe we can make acpica code use a "special" stub (ACPI specific, PCI configuration space address space has restrictions anyway), I have to review this set in its entirety to see how to do that (and I would kindly ask you to do it too, before saying it is not possible to implement it).
I'm willing to do that, actually, if we don't need a mechanism to access PCI config space before the bus is created, the code can be simplified a lot.
After more investigation on the spec and the ACPI core code, I'm still not convinced that accessing to PCI config space before PCI bus creating is impossible, also there is no enough ARM64 hardware to prove that too. But I think we can go in this way, reuse the ECAM implementation by pci-host-generic for now, and implement the PCI accessor functions before enumerating PCI bus when needed in the future, does it make sense?
You mean we rewrite the patch to make sure we can use the PCI host generic driver with MCFG and we leave the acpica PCI config call empty stubs on arm64 (as they are now) ?
Hi Bjorn, Rafael,
Lorenzo pointed out a very important problem we are having with PCI config space access for ARM64. Please refer to the above discussion and add your 2 cents. Can we forget about accessing PCI config space (for the Hardware Reduced profile) before PCI bus creation? If not, do you see a way to use the drivers/pci/access.c accessors here, e.g. via an ACPICA change? Any opinion is very much appreciated.
Regards, Tomasz
On 31.08.2015 13:01, Tomasz Nowicki wrote:
On 08.06.2015 17:14, Lorenzo Pieralisi wrote:
On Mon, Jun 08, 2015 at 03:57:38AM +0100, Hanjun Guo wrote:
[...]
>>> Why can't we make use of the ECAM implementation used by >>> pci-host-generic >>> and drivers/pci/access.c? >> >> We had that question when I had posted MMCFG patch set separately, >> please see: >> https://lkml.org/lkml/2015/3/11/492 > > Yes, but the real question is, why do we need to have PCI config > space > up and running before a bus struct is even created ? I think the > reason is > the PCI configuration address space format (ACPI 6.0, Table 5-27, > page > 108): > > "PCI Configuration space addresses must be confined to devices on > PCI Segment Group 0, bus 0. This restriction exists to accommodate > access to fixed hardware prior to PCI bus enumeration". > > On HW reduced platforms I do not even think this is required at all, > we have to look into this to avoid code duplication that might well > turn out useless.
This is only for the fixed hardware, which will be not available for ARM64 (reduced hardware mode), but in Generic Hardware Programming Model, we using OEM-provided ACPI Machine Language (AML) code to access generic hardware registers, this will be available for reduced hardware too.
So in ACPI spec, it says: (ACPI 6.0 page 66, last paragraph)
ACPI defines eight address spaces that may be accessed by generic hardware implementations. These include:
- System I/O space
- System memory space
- PCI configuration space
- Embedded controller space
- System Management Bus (SMBus) space
- CMOS
- PCI BAR Target
- IPMI space
So if any device using the PCI address space for control, such as a system reset control device, its address space can be reside in PCI configuration space (who can prevent a OEM do that crazy thing? :) ), and it should be accessible before the PCI bus is created.
Us, by changing attitude and questioning features whose usefulness is questionable. I will look into this and raise the point, I am not thrilled by the idea of adding another set of PCI accessor functions and drivers because we have to access a register through PCI before enumerating the bus (and on arm64 this is totally useless since we are not meant to support fixed HW anyway). Maybe we can make acpica code use a "special" stub (ACPI specific, PCI configuration space address space has restrictions anyway), I have to review this set in its entirety to see how to do that (and I would kindly ask you to do it too, before saying it is not possible to implement it).
I'm willing to do that, actually, if we don't need a mechanism to access PCI config space before the bus is created, the code can be simplified a lot.
After more investigation on the spec and the ACPI core code, I'm still not convinced that accessing to PCI config space before PCI bus creating is impossible, also there is no enough ARM64 hardware to prove that too. But I think we can go in this way, reuse the ECAM implementation by pci-host-generic for now, and implement the PCI accessor functions before enumerating PCI bus when needed in the future, does it make sense?
You mean we rewrite the patch to make sure we can use the PCI host generic driver with MCFG and we leave the acpica PCI config call empty stubs on arm64 (as they are now) ?
Hi Bjorn, Rafael,
Lorenzo pointed out very important problem we are having with PCI config space access for ARM64. Please refer to the above discussion and add your 2 cents. Can we forget about accessing PCI config space (for Hardware Reduced profile) before PCI bus creation? If not, do you see a way to use drivers/pci/access.c accessors here, like acpica change? Any opinion is very appreciated.
Kind reminder.
Thanks, Tomasz
Hi Tomasz,
On Mon, Sep 07, 2015 at 10:59:44AM +0100, Tomasz Nowicki wrote:
On 31.08.2015 13:01, Tomasz Nowicki wrote:
On 08.06.2015 17:14, Lorenzo Pieralisi wrote:
On Mon, Jun 08, 2015 at 03:57:38AM +0100, Hanjun Guo wrote:
[...]
>>>> Why can't we make use of the ECAM implementation used by >>>> pci-host-generic >>>> and drivers/pci/access.c? >>> >>> We had that question when I had posted MMCFG patch set separately, >>> please see: >>> https://lkml.org/lkml/2015/3/11/492 >> >> Yes, but the real question is, why do we need to have PCI config >> space >> up and running before a bus struct is even created ? I think the >> reason is >> the PCI configuration address space format (ACPI 6.0, Table 5-27, >> page >> 108): >> >> "PCI Configuration space addresses must be confined to devices on >> PCI Segment Group 0, bus 0. This restriction exists to accommodate >> access to fixed hardware prior to PCI bus enumeration". >> >> On HW reduced platforms I do not even think this is required at all, >> we have to look into this to avoid code duplication that might well >> turn out useless. > > This is only for the fixed hardware, which will be not available for > ARM64 (reduced hardware mode), but in Generic Hardware Programming > Model, we using OEM-provided ACPI Machine Language (AML) code to > access > generic hardware registers, this will be available for reduced > hardware > too. > > So in ACPI spec, it says: (ACPI 6.0 page 66, last paragraph) > > ACPI defines eight address spaces that may be accessed by generic > hardware implementations. These include: > * System I/O space > * System memory space > * PCI configuration space > * Embedded controller space > * System Management Bus (SMBus) space > * CMOS > * PCI BAR Target > * IPMI space > > So if any device using the PCI address space for control, such > as a system reset control device, its address space can be reside > in PCI configuration space (who can prevent a OEM do that crazy > thing? :) ), and it should be accessible before the PCI bus is > created.
Us, by changing attitude and questioning features whose usefulness is questionable. I will look into this and raise the point, I am not thrilled by the idea of adding another set of PCI accessor functions and drivers because we have to access a register through PCI before enumerating the bus (and on arm64 this is totally useless since we are not meant to support fixed HW anyway). Maybe we can make acpica code use a "special" stub (ACPI specific, PCI configuration space address space has restrictions anyway), I have to review this set in its entirety to see how to do that (and I would kindly ask you to do it too, before saying it is not possible to implement it).
I'm willing to do that, actually, if we don't need a mechanism to access PCI config space before the bus is created, the code can be simplified a lot.
After more investigation on the spec and the ACPI core code, I'm still not convinced that accessing to PCI config space before PCI bus creating is impossible, also there is no enough ARM64 hardware to prove that too. But I think we can go in this way, reuse the ECAM implementation by pci-host-generic for now, and implement the PCI accessor functions before enumerating PCI bus when needed in the future, does it make sense?
You mean we rewrite the patch to make sure we can use the PCI host generic driver with MCFG and we leave the acpica PCI config call empty stubs on arm64 (as they are now) ?
Hi Bjorn, Rafael,
Lorenzo pointed out very important problem we are having with PCI config space access for ARM64. Please refer to the above discussion and add your 2 cents. Can we forget about accessing PCI config space (for Hardware Reduced profile) before PCI bus creation? If not, do you see a way to use drivers/pci/access.c accessors here, like acpica change? Any opinion is very appreciated.
Kindly remainder.
I think (but I am happy to be corrected) that the map_bus() hook (ie the reason struct pci_bus is required in eg pci_generic_config_write) is there to ensure that when the generic accessors are called a) we have a valid bus and b) the host controller implementing it has been initialized.
I had another look and I noticed you are trying to solve multiple things at once:
1) ACPICA seems to need PCI config space on bus 0 to be working before PCI enumerates (ie before we have a root bus); we need to countercheck that, but you can't use the generic PCI accessors for that reason (ie the root bus might not be available and you do not have a pci_bus struct)

2) the raw_pci_read/write require _generic_ mmio back-ends, since AMD can't cope with standard x86 read/write{b,w,l}
Overall, it seems to me that we can avoid code duplication by shuffling your code a bit.
You could modify the generic accessors in drivers/pci/access.c to use your mmio back-end instead of the plain read/write{b,w,l} functions (we should check if RobH is ok with that; there can be reasons that prevent this from happening). This would solve the AMD mmio issue.
By factoring out the code that actually carries out the reads and writes in the accessors, you basically decouple the functions requiring a struct pci_bus from the ones that do not require it (ie raw_pci_{read,write}).
The generic MMIO layer belongs in the drivers/pci/access.c file, it has nothing to do with ECAM.
The mmcfg interface should probably live in pci-acpi.c; I do not think you need an extra file there, but that's a detail.
Basically the generic accessors would become something like eg:
int pci_generic_config_write(struct pci_bus *bus, unsigned int devfn,
			     int where, int size, u32 val)
{
	void __iomem *addr;

	addr = bus->ops->map_bus(bus, devfn, where);
	if (!addr)
		return PCIBIOS_DEVICE_NOT_FOUND;

	pci_mmio_write(size, addr + where, val);

	return PCIBIOS_SUCCESSFUL;
}
With that in place, using raw_pci_write/read or the generic accessors becomes almost identical: code requiring the pci_bus to be created uses the generic accessors, and ACPICA uses the raw version.
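[Editorial note] As a purely illustrative sketch of that raw side (not code from this series), an MCFG-backed raw_pci_read() that needs no struct pci_bus could be built from the helpers declared in the ecam.h shown earlier in the thread. It assumes cfg->virt is mapped so it can be indexed directly by bus number (as the x86 mmconfig code does), and its error convention simply mirrors the x86 raw_pci_read() quoted later:

	/* Sketch only; assumes the ecam.h helpers above exist as declared. */
	int raw_pci_read(unsigned int domain, unsigned int bus, unsigned int devfn,
			 int reg, int len, u32 *val)
	{
		struct pci_mmcfg_region *cfg;
		void __iomem *addr;

		/* pci_mmcfg_list is an RCU list, so hold rcu_read_lock()
		 * across both the lookup and the actual MMIO access. */
		rcu_read_lock();
		cfg = pci_mmconfig_lookup(domain, bus);
		if (!cfg || !cfg->virt) {
			rcu_read_unlock();
			*val = ~0;
			return -EINVAL;
		}

		addr = cfg->virt + (PCI_MMCFG_BUS_OFFSET(bus) | (devfn << 12));
		*val = pci_mmio_read(len, addr + reg);
		rcu_read_unlock();

		return 0;
	}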
I might be missing something, so apologies if that's the case.
Comments welcome.
Lorenzo
Hi Lorenzo,
On 08.09.2015 17:07, Lorenzo Pieralisi wrote:
Hi Tomasz,
On Mon, Sep 07, 2015 at 10:59:44AM +0100, Tomasz Nowicki wrote:
On 31.08.2015 13:01, Tomasz Nowicki wrote:
On 08.06.2015 17:14, Lorenzo Pieralisi wrote:
On Mon, Jun 08, 2015 at 03:57:38AM +0100, Hanjun Guo wrote:
[...]
>>>>> Why can't we make use of the ECAM implementation used by >>>>> pci-host-generic >>>>> and drivers/pci/access.c? >>>> >>>> We had that question when I had posted MMCFG patch set separately, >>>> please see: >>>> https://lkml.org/lkml/2015/3/11/492 >>> >>> Yes, but the real question is, why do we need to have PCI config >>> space >>> up and running before a bus struct is even created ? I think the >>> reason is >>> the PCI configuration address space format (ACPI 6.0, Table 5-27, >>> page >>> 108): >>> >>> "PCI Configuration space addresses must be confined to devices on >>> PCI Segment Group 0, bus 0. This restriction exists to accommodate >>> access to fixed hardware prior to PCI bus enumeration". >>> >>> On HW reduced platforms I do not even think this is required at all, >>> we have to look into this to avoid code duplication that might well >>> turn out useless. >> >> This is only for the fixed hardware, which will be not available for >> ARM64 (reduced hardware mode), but in Generic Hardware Programming >> Model, we using OEM-provided ACPI Machine Language (AML) code to >> access >> generic hardware registers, this will be available for reduced >> hardware >> too. >> >> So in ACPI spec, it says: (ACPI 6.0 page 66, last paragraph) >> >> ACPI defines eight address spaces that may be accessed by generic >> hardware implementations. These include: >> * System I/O space >> * System memory space >> * PCI configuration space >> * Embedded controller space >> * System Management Bus (SMBus) space >> * CMOS >> * PCI BAR Target >> * IPMI space >> >> So if any device using the PCI address space for control, such >> as a system reset control device, its address space can be reside >> in PCI configuration space (who can prevent a OEM do that crazy >> thing? :) ), and it should be accessible before the PCI bus is >> created. > > Us, by changing attitude and questioning features whose usefulness > is questionable. I will look into this and raise the point, I am not > thrilled by the idea of adding another set of PCI accessor functions > and drivers because we have to access a register through PCI before > enumerating the bus (and on arm64 this is totally useless since > we are not meant to support fixed HW anyway). Maybe we can make acpica > code use a "special" stub (ACPI specific, PCI configuration space > address > space has restrictions anyway), I have to review this set in its > entirety to see how to do that (and I would kindly ask you to do > it too, before saying it is not possible to implement it).
I'm willing to do that, actually, if we don't need a mechanism to access PCI config space before the bus is created, the code can be simplified a lot.
After more investigation on the spec and the ACPI core code, I'm still not convinced that accessing to PCI config space before PCI bus creating is impossible, also there is no enough ARM64 hardware to prove that too. But I think we can go in this way, reuse the ECAM implementation by pci-host-generic for now, and implement the PCI accessor functions before enumerating PCI bus when needed in the future, does it make sense?
You mean we rewrite the patch to make sure we can use the PCI host generic driver with MCFG and we leave the acpica PCI config call empty stubs on arm64 (as they are now) ?
Hi Bjorn, Rafael,
Lorenzo pointed out very important problem we are having with PCI config space access for ARM64. Please refer to the above discussion and add your 2 cents. Can we forget about accessing PCI config space (for Hardware Reduced profile) before PCI bus creation? If not, do you see a way to use drivers/pci/access.c accessors here, like acpica change? Any opinion is very appreciated.
Kindly remainder.
I think (but I am happy to be corrected) that the map_bus() hook (ie that's why struct pci_bus is required in eg pci_generic_config_write) is there to ensure that when the generic accessors are called a) we have a valid bus b) the host controllers implementing it has been initialized.
I had another look and I noticed you are trying to solve multiple things at once:
- ACPICA seems to need PCI config space on bus 0 to be working before PCI enumerates (ie before we have a root bus), we need to countercheck on that, but you can't use the generic PCI accessors for that reasons (ie root bus might not be available, you do not have a pci_bus struct)
- the raw_pci_read/write require _generic_ mmio back-ends, since AMD can't cope with standard x86 read/write{b,w,l}
Overall, it seems to me that we can avoid code duplication by shuffling your code a bit.
You could modify the generic accessors in drivers/pci/access.c to use your mmio back-end instead of using plain read/write{b,w,l} functions (we should check if RobH is ok with that there can be reasons that prevent this from happening). This would solve the AMD mmio issue.
By factoring out the code that actually carries out the reads and writes in the accessors basically you decouple the functions requiring the struct pci_bus from the ones that does not require it (ie raw_pci_{read/write}.
The generic MMIO layer belongs in the drivers/pci/access.c file, it has nothing to do with ECAM.
The mmcfg interface should probably live in pci-acpi.c, I do not think you need an extra file in there but that's a detail.
Basically the generic accessors would become something like eg:
int pci_generic_config_write(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 val) { void __iomem *addr;
addr = bus->ops->map_bus(bus, devfn, where); if (!addr) return PCIBIOS_DEVICE_NOT_FOUND;
pci_mmio_write(size, addr + where, value);
return PCIBIOS_SUCCESSFUL; }
With that in place using raw_pci_write/read or the generic accessors becomes almost identical, with code requiring the pci_bus to be created using the generic accessors and ACPICA using the raw version.
I might be missing something, so apologies if that's the case.
Actually, I think you showed me the right direction :) Here are some conclusions/comments/concerns. Please correct me if I am wrong:
1. We need raw_pci_write/read accessors (based on ECAM) for ARM64 too but only up to the point where buses are enumerated. From that point on, we should reuse generic accessors from access.c file, right?
2. For ARM64 ACPI PCI, we can use the generic accessors right away; .map_bus would call the common code (pci_dev_base()) - a rough sketch of such a hook follows after point 3 below. The only thing that worries me is that MCFG regions live on an RCU list, so rcu_read_lock() needs to be held across both the .map_bus (MCFG lookup) *and* the read/write operation.
3. Changing the generic accessors to introduce a generic MMIO layer (because of the AMD issue), like this:

int pci_generic_config_write(struct pci_bus *bus, unsigned int devfn,
			     int where, int size, u32 val)
{
	void __iomem *addr;

	addr = bus->ops->map_bus(bus, devfn, where);
	if (!addr)
		return PCIBIOS_DEVICE_NOT_FOUND;

	pci_mmio_write(size, addr + where, val);

	return PCIBIOS_SUCCESSFUL;
}

would imply using those accessors for the x86 ACPI PCI host bridge driver, see arch/x86/pci/common.c:
int raw_pci_read(unsigned int domain, unsigned int bus, unsigned int devfn,
		 int reg, int len, u32 *val)
{
	if (domain == 0 && reg < 256 && raw_pci_ops)
		return raw_pci_ops->read(domain, bus, devfn, reg, len, val);
	if (raw_pci_ext_ops)
		return raw_pci_ext_ops->read(domain, bus, devfn, reg, len, val);
	return -EINVAL;
}
[...]
static int pci_read(struct pci_bus *bus, unsigned int devfn, int where,
		    int size, u32 *value)
{
	return raw_pci_read(pci_domain_nr(bus), bus->number, devfn,
			    where, size, value);
}
[...]
struct pci_ops pci_root_ops = {
	.read = pci_read,
	.write = pci_write,
};
Currently, the above code may call lots of different accessors (not necessarily generic-accessor friendly :), and moreover x86 may have registered two accessor sets (raw_pci_ops, raw_pci_ext_ops). I am happy to fix that, but I would need an x86 PCI expert to find out whether that is possible at all.
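[Editorial note] For illustration, a minimal .map_bus along the lines of point 2 might look like the sketch below. It reuses pci_mmconfig_lookup() and PCI_MMCFG_BUS_OFFSET() from the ecam.h shown earlier; the acpi_pci_map_bus name is made up, pci_dev_base() from the series is only referenced above, and the 'where' offset is deliberately left to the generic accessor, matching the pci_generic_config_write() sketch quoted above:

	/* Sketch only; not code from the patch set. */
	static void __iomem *acpi_pci_map_bus(struct pci_bus *bus,
					      unsigned int devfn, int where)
	{
		struct pci_mmcfg_region *cfg;

		/* MCFG lookup; see point 2 for the RCU locking concern:
		 * this lookup and the later MMIO access are not covered
		 * by a single rcu_read_lock() here. */
		cfg = pci_mmconfig_lookup(pci_domain_nr(bus), bus->number);
		if (!cfg || !cfg->virt)
			return NULL;

		/* assumes cfg->virt can be indexed directly by bus number */
		return cfg->virt + (PCI_MMCFG_BUS_OFFSET(bus->number) |
				    (devfn << 12));
	}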
I really appreciate your help.
Thanks, Tomasz
On Wed, Sep 09, 2015 at 02:47:55PM +0100, Tomasz Nowicki wrote:
[...]
I think (but I am happy to be corrected) that the map_bus() hook (ie that's why struct pci_bus is required in eg pci_generic_config_write) is there to ensure that when the generic accessors are called a) we have a valid bus b) the host controllers implementing it has been initialized.
I had another look and I noticed you are trying to solve multiple things at once:
- ACPICA seems to need PCI config space on bus 0 to be working before PCI enumerates (ie before we have a root bus), we need to countercheck on that, but you can't use the generic PCI accessors for that reasons (ie root bus might not be available, you do not have a pci_bus struct)
- the raw_pci_read/write require _generic_ mmio back-ends, since AMD can't cope with standard x86 read/write{b,w,l}
Overall, it seems to me that we can avoid code duplication by shuffling your code a bit.
You could modify the generic accessors in drivers/pci/access.c to use your mmio back-end instead of using plain read/write{b,w,l} functions (we should check if RobH is ok with that there can be reasons that prevent this from happening). This would solve the AMD mmio issue.
By factoring out the code that actually carries out the reads and writes in the accessors basically you decouple the functions requiring the struct pci_bus from the ones that does not require it (ie raw_pci_{read/write}.
The generic MMIO layer belongs in the drivers/pci/access.c file, it has nothing to do with ECAM.
The mmcfg interface should probably live in pci-acpi.c, I do not think you need an extra file in there but that's a detail.
Basically the generic accessors would become something like eg:
int pci_generic_config_write(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 val) { void __iomem *addr;
addr = bus->ops->map_bus(bus, devfn, where); if (!addr) return PCIBIOS_DEVICE_NOT_FOUND; pci_mmio_write(size, addr + where, value); return PCIBIOS_SUCCESSFUL;
}
With that in place using raw_pci_write/read or the generic accessors becomes almost identical, with code requiring the pci_bus to be created using the generic accessors and ACPICA using the raw version.
I might be missing something, so apologies if that's the case.
Actually, I think you showed me the right direction :) Here are some conclusions/comments/concerns. Please correct me if I am wrong:
- We need raw_pci_write/read accessors (based on ECAM) for ARM64 too
but only up to the point where buses are enumerated. From that point on, we should reuse generic accessors from access.c file, right?
Well, I still have not figured out whether on arm64 the raw accessors required by ACPICA make sense.
So either arm64 relies on the generic MCFG based raw read and writes or we define the global raw read and writes as empty (ie x86 overrides them anyway).
I will get back to you on this.
- For ARM64 ACPI PCI, we can use generic accessors right away, .map_bus
would call common code part (pci_dev_base()). The only thing that worry me is fact that MCFG regions are RCU list so it needs rcu_read_lock() for the .map_bus (mcfg lookup) *and* read/write operation.
Do you mean the address look-up and the mmio operation should be carried out atomically, right? I have to review the MCFG descriptor locking anyway to check if and when there is a problem here.
- Changing generic accessors to introduce generic MMIO layer (because
of AMD issue) like this: int pci_generic_config_write(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 val) { void __iomem *addr;
addr = bus->ops->map_bus(bus, devfn, where); if (!addr) return PCIBIOS_DEVICE_NOT_FOUND; pci_mmio_write(size, addr + where, val); return PCIBIOS_SUCCESSFUL;
} would imply using those accessors for x86 ACPI PCI host bridge driver, see arch/x86/pci/common.c
int raw_pci_read(unsigned int domain, unsigned int bus, unsigned int devfn, int reg, int len, u32 *val) { if (domain == 0 && reg < 256 && raw_pci_ops) return raw_pci_ops->read(domain, bus, devfn, reg, len, val); if (raw_pci_ext_ops) return raw_pci_ext_ops->read(domain, bus, devfn, reg, len, val); return -EINVAL; } [...] static int pci_read(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 *value) { return raw_pci_read(pci_domain_nr(bus), bus->number, devfn, where, size, value); } [...] struct pci_ops pci_root_ops = { .read = pci_read, .write = pci_write, };
Currently, the above code may call lots of different accessors (not necessarily generic accessor friendly :), moreover it possible that x86 may have registered two accessor sets (raw_pci_ops, raw_pci_ext_ops). I am happy to fix that but I would need x86 PCI expert to get know if that is possible at all.
Well, we can let the x86 code use the same pci_ops it is using today without bothering to convert it to the generic accessors.
Honestly, even the AMD requirement for special MMIO back-end could be left in x86 code, which would simplify your task even more (it would leave more x86 churn but that's not my call).
I really appreciate your help.
You are welcome, I will get back to you shortly on the points above.
Thanks, Lorenzo
On 11.09.2015 13:20, Lorenzo Pieralisi wrote:
On Wed, Sep 09, 2015 at 02:47:55PM +0100, Tomasz Nowicki wrote:
[...]
I think (but I am happy to be corrected) that the map_bus() hook (ie that's why struct pci_bus is required in eg pci_generic_config_write) is there to ensure that when the generic accessors are called a) we have a valid bus b) the host controllers implementing it has been initialized.
I had another look and I noticed you are trying to solve multiple things at once:
- ACPICA seems to need PCI config space on bus 0 to be working before PCI enumerates (ie before we have a root bus), we need to countercheck on that, but you can't use the generic PCI accessors for that reasons (ie root bus might not be available, you do not have a pci_bus struct)
- the raw_pci_read/write require _generic_ mmio back-ends, since AMD can't cope with standard x86 read/write{b,w,l}
Overall, it seems to me that we can avoid code duplication by shuffling your code a bit.
You could modify the generic accessors in drivers/pci/access.c to use your mmio back-end instead of using plain read/write{b,w,l} functions (we should check if RobH is ok with that there can be reasons that prevent this from happening). This would solve the AMD mmio issue.
By factoring out the code that actually carries out the reads and writes in the accessors basically you decouple the functions requiring the struct pci_bus from the ones that does not require it (ie raw_pci_{read/write}.
The generic MMIO layer belongs in the drivers/pci/access.c file, it has nothing to do with ECAM.
The mmcfg interface should probably live in pci-acpi.c, I do not think you need an extra file in there but that's a detail.
Basically the generic accessors would become something like eg:
int pci_generic_config_write(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 val) { void __iomem *addr;
addr = bus->ops->map_bus(bus, devfn, where); if (!addr) return PCIBIOS_DEVICE_NOT_FOUND; pci_mmio_write(size, addr + where, value); return PCIBIOS_SUCCESSFUL;
}
With that in place using raw_pci_write/read or the generic accessors becomes almost identical, with code requiring the pci_bus to be created using the generic accessors and ACPICA using the raw version.
I might be missing something, so apologies if that's the case.
Actually, I think you showed me the right direction :) Here are some conclusions/comments/concerns. Please correct me if I am wrong:
- We need raw_pci_write/read accessors (based on ECAM) for ARM64 too
but only up to the point where buses are enumerated. From that point on, we should reuse generic accessors from access.c file, right?
Well, I still have not figured out whether on arm64 the raw accessors required by ACPICA make sense.
So either arm64 relies on the generic MCFG based raw read and writes or we define the global raw read and writes as empty (ie x86 overrides them anyway).
I will get back to you on this.
- For ARM64 ACPI PCI, we can use generic accessors right away, .map_bus
would call common code part (pci_dev_base()). The only thing that worry me is fact that MCFG regions are RCU list so it needs rcu_read_lock() for the .map_bus (mcfg lookup) *and* read/write operation.
Do you mean the address look-up and the mmio operation should be carried out atomically right ?
Yes.
I have to review the MCFG descriptor locking anyway
to check if and when there is a problem here.
- Changing generic accessors to introduce generic MMIO layer (because
of AMD issue) like this: int pci_generic_config_write(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 val) { void __iomem *addr;
addr = bus->ops->map_bus(bus, devfn, where); if (!addr) return PCIBIOS_DEVICE_NOT_FOUND; pci_mmio_write(size, addr + where, val); return PCIBIOS_SUCCESSFUL;
} would imply using those accessors for x86 ACPI PCI host bridge driver, see arch/x86/pci/common.c
int raw_pci_read(unsigned int domain, unsigned int bus, unsigned int devfn, int reg, int len, u32 *val) { if (domain == 0 && reg < 256 && raw_pci_ops) return raw_pci_ops->read(domain, bus, devfn, reg, len, val); if (raw_pci_ext_ops) return raw_pci_ext_ops->read(domain, bus, devfn, reg, len, val); return -EINVAL; } [...] static int pci_read(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 *value) { return raw_pci_read(pci_domain_nr(bus), bus->number, devfn, where, size, value); } [...] struct pci_ops pci_root_ops = { .read = pci_read, .write = pci_write, };
Currently, the above code may call lots of different accessors (not necessarily generic accessor friendly :), moreover it possible that x86 may have registered two accessor sets (raw_pci_ops, raw_pci_ext_ops). I am happy to fix that but I would need x86 PCI expert to get know if that is possible at all.
Well, we can let x86 code use the same pci_ops as they are using today without bothering converting it to generic accessors.
Honestly, even the AMD requirement for special MMIO back-end could be left in x86 code, which would simplify your task even more (it would leave more x86 churn but that's not my call).
The AMD special MMIO back-end was an optional goal of mine; I wanted to kill two birds with one stone :) I will drop it in the next version and focus on the main aspect of these patches.
Regards, Tomasz
On Fri, Sep 11, 2015 at 01:35:36PM +0100, Tomasz Nowicki wrote:
On 11.09.2015 13:20, Lorenzo Pieralisi wrote:
[...]
With that in place using raw_pci_write/read or the generic accessors becomes almost identical, with code requiring the pci_bus to be created using the generic accessors and ACPICA using the raw version.
I might be missing something, so apologies if that's the case.
Actually, I think you showed me the right direction :) Here are some conclusions/comments/concerns. Please correct me if I am wrong:
- We need raw_pci_write/read accessors (based on ECAM) for ARM64 too
but only up to the point where buses are enumerated. From that point on, we should reuse generic accessors from access.c file, right?
Well, I still have not figured out whether on arm64 the raw accessors required by ACPICA make sense.
So either arm64 relies on the generic MCFG based raw read and writes or we define the global raw read and writes as empty (ie x86 overrides them anyway).
I will get back to you on this.
- For ARM64 ACPI PCI, we can use generic accessors right away, .map_bus
would call common code part (pci_dev_base()). The only thing that worry me is fact that MCFG regions are RCU list so it needs rcu_read_lock() for the .map_bus (mcfg lookup) *and* read/write operation.
Do you mean the address look-up and the mmio operation should be carried out atomically right ?
Yes.
We can wrap the calls pci_generic_read/write() within a function and add rcu_read_lock()/unlock() around them, eg:
int pci_generic_config_read_rcu()
{
	rcu_read_lock();
	pci_generic_config_read(...);
	rcu_read_unlock();
}
Honestly, it seems the RCU API is needed just because config space can also be accessed by the raw_ accessors in ACPICA code; that's the only reason I see to protect the config structs against config space removal (basically, config entries are removed only when the host bridge is released, if I read the code correctly, and the only way this can happen concurrently is ACPICA code reusing the same config space but accessing it with no pci_bus struct attached, just the (segment, bus, dev, fn) tuple).
Lorenzo
On 14.09.2015 11:37, Lorenzo Pieralisi wrote:
On Fri, Sep 11, 2015 at 01:35:36PM +0100, Tomasz Nowicki wrote:
On 11.09.2015 13:20, Lorenzo Pieralisi wrote:
[...]
With that in place using raw_pci_write/read or the generic accessors becomes almost identical, with code requiring the pci_bus to be created using the generic accessors and ACPICA using the raw version.
I might be missing something, so apologies if that's the case.
Actually, I think you showed me the right direction :) Here are some conclusions/comments/concerns. Please correct me if I am wrong:
- We need raw_pci_write/read accessors (based on ECAM) for ARM64 too
but only up to the point where buses are enumerated. From that point on, we should reuse generic accessors from access.c file, right?
Well, I still have not figured out whether on arm64 the raw accessors required by ACPICA make sense.
So either arm64 relies on the generic MCFG based raw read and writes or we define the global raw read and writes as empty (ie x86 overrides them anyway).
I will get back to you on this.
- For ARM64 ACPI PCI, we can use generic accessors right away, .map_bus
would call the common code path (pci_dev_base()). The only thing that worries me is the fact that the MCFG regions live on an RCU list, so we need rcu_read_lock() around the .map_bus (MCFG lookup) *and* the read/write operation.
Do you mean the address look-up and the mmio operation should be carried out atomically right ?
Yes.
We can wrap the calls pci_generic_read/write() within a function and add rcu_read_lock()/unlock() around them, eg:
int pci_generic_config_read_rcu()
{
	rcu_read_lock();
	pci_generic_config_read(...);
	rcu_read_unlock();
}
It looks good to me, thanks for suggestion.
Honestly, it seems the RCU API is needed just because config space can also be accessed by the raw_ accessors in ACPICA code; that's the only reason I see to protect the config structs against config space removal (basically, config entries are removed only when the host bridge is released, if I read the code correctly, and the only way this can happen concurrently is ACPICA code reusing the same config space but accessing it with no pci_bus struct attached, just using the (segment, bus, dev, fn) tuple).
Right.
Side note: an MCFG region can be removed from pci_mmcfg_list only if it has been "hot added" there, which means the PCI host bridge specified a configuration base address (_CBA) different from those in the static MCFG table, e.g.:
DSDT.asl:

Device (PCI0) {
	Name (_HID, EISAID ("PNP0A03"))
	[...]
	Name (_CBA, 0xB0000000)
	[...]
}
But pci_mmcfg_list elements coming from the static MCFG table cannot be removed, hence they live there forever.
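For reference, a rough sketch of the hot-add/remove path described here, using the pci_mmconfig_alloc()/pci_mmconfig_inject()/pci_mmconfig_delete() helpers this series relies on elsewhere; the two wrapper functions are purely illustrative:

/*
 * Illustration only: a _CBA-described region is injected at host bridge
 * setup and can be deleted again when the bridge goes away; regions
 * parsed from the static MCFG table are never removed from the list.
 */
static int add_cba_region(struct acpi_pci_root *root, int seg,
			  int start_bus, int end_bus)
{
	struct pci_mmcfg_region *cfg;

	cfg = pci_mmconfig_alloc(seg, start_bus, end_bus, root->mcfg_addr);
	if (!cfg)
		return -ENOMEM;

	return pci_mmconfig_inject(cfg);	/* adds it to pci_mmcfg_list */
}

static void remove_cba_region(int seg, int start_bus, int end_bus)
{
	/* Only legal for regions that were hot added above. */
	pci_mmconfig_delete(seg, start_bus, end_bus);
}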
Thanks, Tomasz
On 11.09.2015 13:20, Lorenzo Pieralisi wrote:
On Wed, Sep 09, 2015 at 02:47:55PM +0100, Tomasz Nowicki wrote:
[...]
I think (but I am happy to be corrected) that the map_bus() hook (ie that's why struct pci_bus is required in eg pci_generic_config_write) is there to ensure that when the generic accessors are called a) we have a valid bus and b) the host controller implementing it has been initialized.
I had another look and I noticed you are trying to solve multiple things at once:
- ACPICA seems to need PCI config space on bus 0 to be working before PCI enumerates (ie before we have a root bus), we need to countercheck on that, but you can't use the generic PCI accessors for that reasons (ie root bus might not be available, you do not have a pci_bus struct)
- the raw_pci_read/write require _generic_ mmio back-ends, since AMD can't cope with standard x86 read/write{b,w,l}
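For context, the kind of generic MMIO back-end referred to here is essentially a size-dispatching wrapper; a minimal sketch follows (the pci_mmio_read/write names match the ones used later in this series, everything else is illustrative), with the AMD Fam10h workaround then simply providing a different read/write pair:

/* Sketch of a generic MMIO back-end: dispatch on the access size. */
static u32 pci_mmio_read(int len, void __iomem *addr)
{
	switch (len) {
	case 1:
		return readb(addr);
	case 2:
		return readw(addr);
	default:
		return readl(addr);
	}
}

static void pci_mmio_write(int len, void __iomem *addr, u32 value)
{
	switch (len) {
	case 1:
		writeb(value, addr);
		break;
	case 2:
		writew(value, addr);
		break;
	default:
		writel(value, addr);
	}
}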
Overall, it seems to me that we can avoid code duplication by shuffling your code a bit.
You could modify the generic accessors in drivers/pci/access.c to use your MMIO back-end instead of using plain read/write{b,w,l} functions (we should check if RobH is ok with that; there can be reasons that prevent this from happening). This would solve the AMD MMIO issue.
By factoring out the code that actually carries out the reads and writes in the accessors, you basically decouple the functions requiring the struct pci_bus from the ones that do not require it (ie raw_pci_{read,write}).
The generic MMIO layer belongs in the drivers/pci/access.c file, it has nothing to do with ECAM.
The mmcfg interface should probably live in pci-acpi.c, I do not think you need an extra file in there but that's a detail.
Basically the generic accessors would become something like eg:
int pci_generic_config_write(struct pci_bus *bus, unsigned int devfn,
			     int where, int size, u32 val)
{
	void __iomem *addr;

	addr = bus->ops->map_bus(bus, devfn, where);
	if (!addr)
		return PCIBIOS_DEVICE_NOT_FOUND;

	pci_mmio_write(size, addr, val);

	return PCIBIOS_SUCCESSFUL;
}
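A tuple-based raw accessor could then share the same MMIO back-end; a rough sketch, mirroring the existing mmconfig_64.c logic and assuming an MCFG lookup helper along the lines of pci_dev_base():

int raw_pci_read(unsigned int domain, unsigned int bus, unsigned int devfn,
		 int reg, int len, u32 *val)
{
	void __iomem *addr;

	rcu_read_lock();
	addr = pci_dev_base(domain, bus, devfn);	/* MCFG lookup */
	if (!addr) {
		rcu_read_unlock();
		*val = ~0;
		return -EINVAL;
	}

	*val = pci_mmio_read(len, addr + reg);
	rcu_read_unlock();

	return 0;
}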
With that in place using raw_pci_write/read or the generic accessors becomes almost identical, with code requiring the pci_bus to be created using the generic accessors and ACPICA using the raw version.
I might be missing something, so apologies if that's the case.
Actually, I think you showed me the right direction :) Here are some conclusions/comments/concerns. Please correct me if I am wrong:
- We need raw_pci_write/read accessors (based on ECAM) for ARM64 too
but only up to the point where buses are enumerated. From that point on, we should reuse generic accessors from access.c file, right?
Well, I still have not figured out whether on arm64 the raw accessors required by ACPICA make sense.
So either arm64 relies on the generic MCFG based raw read and writes or we define the global raw read and writes as empty (ie x86 overrides them anyway).
Here are my concerns/ideas related to the raw accessors for ARM64; please correct me at any point.
ACPI spec - chapter 19.5.96, OperationRegion (Declare Operation Region) - defines PCI_Config as one of the region types. Every time an ASL opcode operates on a corresponding PCI config space region, the ASL interpreter dispatches the access to our raw accessors; please see the acpi_ex_pci_config_space_handler and acpi_ev_pci_config_region_setup calls. What is more important, such operations may happen after (yes, after) bus enumeration, but the raw accessors are always called at the end with the {segment, bus, dev, fn} tuple.
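Concretely, on Linux the ACPICA PCI_Config region handler ends up in the OSL, which calls the raw accessors with exactly that tuple; roughly (simplified from drivers/acpi/osl.c, parameter checking omitted):

acpi_status
acpi_os_read_pci_configuration(struct acpi_pci_id *pci_id, u32 reg,
			       u64 *value, u32 width)
{
	u32 value32;
	int result;

	/* width is 8/16/32 bits, i.e. a 1/2/4 byte access */
	result = raw_pci_read(pci_id->segment, pci_id->bus,
			      PCI_DEVFN(pci_id->device, pci_id->function),
			      reg, width / 8, &value32);
	*value = value32;

	return result ? AE_ERROR : AE_OK;
}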
Given the above, here are some ideas:
1. We somehow force vendors to avoid operations on PCI config regions in ASL code. PCI config region definitions still fall into the Hardware Reduced profile, so a new ACPICA special subset for ARM64 is needed. Then the raw ACPI accessors can be empty (and overridden by x86).
2. We provide raw accessors which translate the {segment, bus, dev, fn} tuple to the Linux generic accessors (this can be considered only if PCI config accesses happen after bus enumeration for the HR profile, so a tuple-to-bus-structure mapping is possible); see the sketch below.
3. We rely on the generic MCFG based raw read and writes.
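To make option 2 concrete, here is a rough sketch of such a translation, valid only once the buses exist (pci_find_bus() returns NULL otherwise); it goes straight to the bus ops and skips the locking that the pci_bus_read_config_*() helpers normally provide, so treat it as an illustration only:

int raw_pci_read(unsigned int domain, unsigned int bus, unsigned int devfn,
		 int reg, int len, u32 *val)
{
	struct pci_bus *b = pci_find_bus(domain, bus);

	/* Only works after enumeration, i.e. once the pci_bus exists. */
	if (!b)
		return -ENODEV;

	return b->ops->read(b, devfn, reg, len, val);
}

int raw_pci_write(unsigned int domain, unsigned int bus, unsigned int devfn,
		  int reg, int len, u32 val)
{
	struct pci_bus *b = pci_find_bus(domain, bus);

	if (!b)
		return -ENODEV;

	return b->ops->write(b, devfn, reg, len, val);
}

A real implementation would more likely go through the pci_bus_read_config_*()/pci_bus_write_config_*() helpers so that pci_lock is honoured.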
Let me know your opinion.
Thanks, Tomasz
Hi Lorenzo,
On 09/14/2015 04:55 PM, Tomasz Nowicki wrote:
On 11.09.2015 13:20, Lorenzo Pieralisi wrote:
On Wed, Sep 09, 2015 at 02:47:55PM +0100, Tomasz Nowicki wrote:
[...]
I think (but I am happy to be corrected) that the map_bus() hook (ie that's why struct pci_bus is required in eg pci_generic_config_write) is there to ensure that when the generic accessors are called a) we have a valid bus b) the host controllers implementing it has been initialized.
I had another look and I noticed you are trying to solve multiple things at once:
- ACPICA seems to need PCI config space on bus 0 to be working before PCI enumerates (ie before we have a root bus), we need to countercheck on that, but you can't use the generic PCI accessors for that reasons (ie root bus might not be available, you do not have a pci_bus struct)
- the raw_pci_read/write require _generic_ mmio back-ends, since AMD can't cope with standard x86 read/write{b,w,l}
Overall, it seems to me that we can avoid code duplication by shuffling your code a bit.
You could modify the generic accessors in drivers/pci/access.c to use your mmio back-end instead of using plain read/write{b,w,l} functions (we should check if RobH is ok with that there can be reasons that prevent this from happening). This would solve the AMD mmio issue.
By factoring out the code that actually carries out the reads and writes in the accessors basically you decouple the functions requiring the struct pci_bus from the ones that does not require it (ie raw_pci_{read/write}.
The generic MMIO layer belongs in the drivers/pci/access.c file, it has nothing to do with ECAM.
The mmcfg interface should probably live in pci-acpi.c, I do not think you need an extra file in there but that's a detail.
Basically the generic accessors would become something like eg:
int pci_generic_config_write(struct pci_bus *bus, unsigned int devfn,
			     int where, int size, u32 val)
{
	void __iomem *addr;

	addr = bus->ops->map_bus(bus, devfn, where);
	if (!addr)
		return PCIBIOS_DEVICE_NOT_FOUND;

	pci_mmio_write(size, addr, val);

	return PCIBIOS_SUCCESSFUL;
}
With that in place using raw_pci_write/read or the generic accessors becomes almost identical, with code requiring the pci_bus to be created using the generic accessors and ACPICA using the raw version.
I might be missing something, so apologies if that's the case.
Actually, I think you showed me the right direction :) Here are some conclusions/comments/concerns. Please correct me if I am wrong:
- We need raw_pci_write/read accessors (based on ECAM) for ARM64 too
but only up to the point where buses are enumerated. From that point on, we should reuse generic accessors from access.c file, right?
Well, I still have not figured out whether on arm64 the raw accessors required by ACPICA make sense.
So either arm64 relies on the generic MCFG based raw read and writes or we define the global raw read and writes as empty (ie x86 overrides them anyway).
My concerns/ideas related to raw accessors for ARM64, please correct me at any point.
ACPI spec - chapter: 19.5.96 OperationRegion (Declare Operation Region) defines PCI_Config as one of region types. Every time ASL opcode operates on corresponding PCI config space region, ASL interpreter is dispatching address space to our raw accessors, please see acpi_ex_pci_config_space_handler, acpi_ev_pci_config_region_setup calls. What is more important, such operations may happen after (yes after) bus enumeration, but always raw accessors are called at the end with the {segment, bus, dev, fn} tuple.
Given the above, here are some ideas:
1. We somehow force vendors to avoid operations on PCI config regions in ASL code. PCI config region definitions still fall into the Hardware Reduced profile, so a new ACPICA special subset for ARM64 is needed. Then the raw ACPI accessors can be empty (and overridden by x86).
2. We provide raw accessors which translate the {segment, bus, dev, fn} tuple to the Linux generic accessors (this can be considered only if PCI config accesses happen after bus enumeration for the HR profile, so a tuple-to-bus-structure mapping is possible).
3. We rely on the generic MCFG based raw read and writes.
I would appreciate your opinion on the above ideas.
Tomasz
On Fri, Sep 25, 2015 at 05:02:09PM +0100, Tomasz Nowicki wrote:
[...]
My concerns/ideas related to raw accessors for ARM64, please correct me at any point.
ACPI spec - chapter: 19.5.96 OperationRegion (Declare Operation Region) defines PCI_Config as one of region types. Every time ASL opcode operates on corresponding PCI config space region, ASL interpreter is dispatching address space to our raw accessors, please see acpi_ex_pci_config_space_handler, acpi_ev_pci_config_region_setup calls. What is more important, such operations may happen after (yes after) bus enumeration, but always raw accessors are called at the end with the {segment, bus, dev, fn} tuple.
Given the above, here are some ideas:
1. We somehow force vendors to avoid operations on PCI config regions in ASL code. PCI config region definitions still fall into the Hardware Reduced profile, so a new ACPICA special subset for ARM64 is needed. Then the raw ACPI accessors can be empty (and overridden by x86).
2. We provide raw accessors which translate the {segment, bus, dev, fn} tuple to the Linux generic accessors (this can be considered only if PCI config accesses happen after bus enumeration for the HR profile, so a tuple-to-bus-structure mapping is possible).
3. We rely on the generic MCFG based raw read and writes.
I will appreciate your opinion on above ideas.
Well, (1) does not seem allowed by the ACPI specification; the only way we can deal with that is by leaving the raw accessors empty for now and seeing how things turn out on ARM64. In the meantime I will start a thread on ASWG to check how that's used on x86; I do not have any machine to test this, and grokking ACPICA is not trivial, there is lots of history there and it is hard to fathom.
(2) is tempting but I am not sure it works all the time (I still think that's a quirk of ACPI specs, namely that some OperationRegion should always be available to ASL, maybe that's just an unused ACPI spec quirk).
If I read your series correctly, (3) can be implemented easily if and when we deem the raw accessors necessary, on top of the MCFG layer, by making the MCFG raw accessors the default instead of leaving them empty.
I pulled your branch and started testing it on AMD Seattle, next week we should try to get this done.
I think you should target option (1) and in the meantime we should reach a conclusion on the raw accessors usage on ARM64.
Thanks, Lorenzo
On Mon, Sep 14, 2015 at 03:55:50PM +0100, Tomasz Nowicki wrote:
[...]
Well, I still have not figured out whether on arm64 the raw accessors required by ACPICA make sense.
So either arm64 relies on the generic MCFG based raw read and writes or we define the global raw read and writes as empty (ie x86 overrides them anyway).
My concerns/ideas related to raw accessors for ARM64, please correct me at any point.
ACPI spec - chapter: 19.5.96 OperationRegion (Declare Operation Region) defines PCI_Config as one of region types. Every time ASL opcode operates on corresponding PCI config space region, ASL interpreter is dispatching address space to our raw accessors, please see acpi_ex_pci_config_space_handler, acpi_ev_pci_config_region_setup calls. What is more important, such operations may happen after (yes after) bus enumeration, but always raw accessors are called at the end with the {segment, bus, dev, fn} tuple.
Given the above, here are some ideas:
1. We somehow force vendors to avoid operations on PCI config regions in ASL code. PCI config region definitions still fall into the Hardware Reduced profile, so a new ACPICA special subset for ARM64 is needed. Then the raw ACPI accessors can be empty (and overridden by x86).
I am coming back to this, I am not sure that PCI config based OperationRegions fall into Hardware Reduced profile, I will finally start a thread on ASWG to check that.
Other than that, are you posting an updated version of this series soon ? Let me know if you need help refactoring/testing the patches.
Thanks, Lorenzo
2. We provide raw accessors which translate the {segment, bus, dev, fn} tuple to the Linux generic accessors (this can be considered only if PCI config accesses happen after bus enumeration for the HR profile, so a tuple-to-bus-structure mapping is possible).
3. We rely on the generic MCFG based raw read and writes.
Let me know your opinion.
Thanks, Tomasz
On 15.10.2015 15:22, Lorenzo Pieralisi wrote:
On Mon, Sep 14, 2015 at 03:55:50PM +0100, Tomasz Nowicki wrote:
[...]
Well, I still have not figured out whether on arm64 the raw accessors required by ACPICA make sense.
So either arm64 relies on the generic MCFG based raw read and writes or we define the global raw read and writes as empty (ie x86 overrides them anyway).
My concerns/ideas related to raw accessors for ARM64, please correct me at any point.
ACPI spec - chapter: 19.5.96 OperationRegion (Declare Operation Region) defines PCI_Config as one of region types. Every time ASL opcode operates on corresponding PCI config space region, ASL interpreter is dispatching address space to our raw accessors, please see acpi_ex_pci_config_space_handler, acpi_ev_pci_config_region_setup calls. What is more important, such operations may happen after (yes after) bus enumeration, but always raw accessors are called at the end with the {segment, bus, dev, fn} tuple.
Given the above, here are some ideas:
1. We somehow force vendors to avoid operations on PCI config regions in ASL code. PCI config region definitions still fall into the Hardware Reduced profile, so a new ACPICA special subset for ARM64 is needed. Then the raw ACPI accessors can be empty (and overridden by x86).
I am coming back to this, I am not sure that PCI config based OperationRegions fall into Hardware Reduced profile, I will finally start a thread on ASWG to check that.
Other than that, are you posting an updated version of this series soon ? Let me know if you need help refactoring/testing the patches.
I am planning to send an updated version next week, but honestly the next version only makes sense once we figure out the raw accessors issue. So I will clean up everything else in the meantime, and optionally the raw accessors.
Of course your help in testing is welcomed. Also please have a look at my GICv3/ITS patches, they are important for ACPI PCI.
Regards, Tomasz
On 15/10/15 15:34, Tomasz Nowicki wrote:
Of course your help in testing is welcomed. Also please have a look at my GICv3/ITS patches, they are important for ACPI PCI.
Where is the dependency? ACPI/PCI should really be standalone.
Thanks,
M.
On 10/15/2015 06:26 PM, Marc Zyngier wrote:
On 15/10/15 15:34, Tomasz Nowicki wrote:
Of course your help in testing is welcomed. Also please have a look at my GICv3/ITS patches, they are important for ACPI PCI.
Where is the dependency? ACPI/PCI should really be standalone.
There are no dependencies in code. Just wanted to say that ARM64 machines with GICv3/ITS and PCI on board need both series to use MSI with ACPI kernel. Sorry for confusion.
Regards, Tomasz
From: Tomasz Nowicki tomasz.nowicki@linaro.org
The first function, acpi_mcfg_check_entry(), is a __weak stub that applies no quirks by default.
The last two functions are required by the ACPI subsystem to make PCI config space accessible. The generic code does nothing for the early init call, but the late init call does the following:
- parse the MCFG table and add the regions to the ECAM resource list
- map the regions
- add the regions to iomem_resource
Signed-off-by: Tomasz Nowicki tomasz.nowicki@linaro.org Signed-off-by: Hanjun Guo hanjun.guo@linaro.org Tested-by: Suravee Suthikulpanit Suravee.Suthikulpanit@amd.com --- drivers/acpi/mcfg.c | 26 ++++++++++++++++++++++++++ 1 file changed, 26 insertions(+)
diff --git a/drivers/acpi/mcfg.c b/drivers/acpi/mcfg.c
index 63775af..745b83e 100644
--- a/drivers/acpi/mcfg.c
+++ b/drivers/acpi/mcfg.c
@@ -55,3 +55,29 @@ int __init acpi_parse_mcfg(struct acpi_table_header *header)
 
 	return 0;
 }
+
+int __init __weak acpi_mcfg_check_entry(struct acpi_table_mcfg *mcfg,
+					struct acpi_mcfg_allocation *cfg)
+{
+	return 0;
+}
+
+void __init __weak pci_mmcfg_early_init(void)
+{
+
+}
+
+void __init __weak pci_mmcfg_late_init(void)
+{
+	struct pci_mmcfg_region *cfg;
+
+	acpi_table_parse(ACPI_SIG_MCFG, acpi_parse_mcfg);
+
+	if (list_empty(&pci_mmcfg_list))
+		return;
+	if (!pci_mmcfg_arch_init())
+		free_all_mmcfg();
+
+	list_for_each_entry(cfg, &pci_mmcfg_list, list)
+		insert_resource(&iomem_resource, &cfg->res);
+}
From: Tomasz Nowicki tomasz.nowicki@linaro.org
The mmconfig_64.c version is going to become the default implementation of the arch-agnostic low-level direct PCI config space accessors for the ECAM driver. However, right now it initializes the raw_pci_ext_ops pointer, which is x86 specific code. Moreover, mmconfig_32.c is doing the same thing at the same time.
Move it to mmconfig-shared.c so it becomes common for both, and mmconfig_64.c ends up purely arch agnostic.
Signed-off-by: Tomasz Nowicki tomasz.nowicki@linaro.org Signed-off-by: Hanjun Guo hanjun.guo@linaro.org Tested-by: Suravee Suthikulpanit Suravee.Suthikulpanit@amd.com --- arch/x86/pci/mmconfig-shared.c | 10 ++++++++-- arch/x86/pci/mmconfig_32.c | 10 ++-------- arch/x86/pci/mmconfig_64.c | 11 ++--------- include/linux/ecam.h | 5 +++++ 4 files changed, 17 insertions(+), 19 deletions(-)
diff --git a/arch/x86/pci/mmconfig-shared.c b/arch/x86/pci/mmconfig-shared.c index 6fa3080..8938db3 100644 --- a/arch/x86/pci/mmconfig-shared.c +++ b/arch/x86/pci/mmconfig-shared.c @@ -29,6 +29,11 @@ static bool pci_mmcfg_running_state; static bool pci_mmcfg_arch_init_failed;
+const struct pci_raw_ops pci_mmcfg = { + .read = pci_mmcfg_read, + .write = pci_mmcfg_write, +}; + static u32 pci_mmconfig_amd_read(int len, void __iomem *addr) { @@ -555,9 +560,10 @@ static void __init __pci_mmcfg_init(int early) } }
- if (pci_mmcfg_arch_init()) + if (pci_mmcfg_arch_init()) { + raw_pci_ext_ops = &pci_mmcfg; pci_probe = (pci_probe & ~PCI_PROBE_MASK) | PCI_PROBE_MMCONF; - else { + } else { free_all_mmcfg(); pci_mmcfg_arch_init_failed = true; } diff --git a/arch/x86/pci/mmconfig_32.c b/arch/x86/pci/mmconfig_32.c index 5cf6291..7a050cb 100644 --- a/arch/x86/pci/mmconfig_32.c +++ b/arch/x86/pci/mmconfig_32.c @@ -50,7 +50,7 @@ static void pci_exp_set_dev_base(unsigned int base, int bus, int devfn) } }
-static int pci_mmcfg_read(unsigned int seg, unsigned int bus, +int pci_mmcfg_read(unsigned int seg, unsigned int bus, unsigned int devfn, int reg, int len, u32 *value) { unsigned long flags; @@ -79,7 +79,7 @@ err: *value = -1; return 0; }
-static int pci_mmcfg_write(unsigned int seg, unsigned int bus, +int pci_mmcfg_write(unsigned int seg, unsigned int bus, unsigned int devfn, int reg, int len, u32 value) { unsigned long flags; @@ -106,15 +106,9 @@ static int pci_mmcfg_write(unsigned int seg, unsigned int bus, return 0; }
-const struct pci_raw_ops pci_mmcfg = { - .read = pci_mmcfg_read, - .write = pci_mmcfg_write, -}; - int __init pci_mmcfg_arch_init(void) { printk(KERN_INFO "PCI: Using MMCONFIG for extended config space\n"); - raw_pci_ext_ops = &pci_mmcfg; return 1; }
diff --git a/arch/x86/pci/mmconfig_64.c b/arch/x86/pci/mmconfig_64.c index b62ff18..fd857ea 100644 --- a/arch/x86/pci/mmconfig_64.c +++ b/arch/x86/pci/mmconfig_64.c @@ -25,7 +25,7 @@ static char __iomem *pci_dev_base(unsigned int seg, unsigned int bus, unsigned i return NULL; }
-static int pci_mmcfg_read(unsigned int seg, unsigned int bus, +int pci_mmcfg_read(unsigned int seg, unsigned int bus, unsigned int devfn, int reg, int len, u32 *value) { char __iomem *addr; @@ -49,7 +49,7 @@ err: *value = -1; return 0; }
-static int pci_mmcfg_write(unsigned int seg, unsigned int bus, +int pci_mmcfg_write(unsigned int seg, unsigned int bus, unsigned int devfn, int reg, int len, u32 value) { char __iomem *addr; @@ -71,11 +71,6 @@ static int pci_mmcfg_write(unsigned int seg, unsigned int bus, return 0; }
-const struct pci_raw_ops pci_mmcfg = { - .read = pci_mmcfg_read, - .write = pci_mmcfg_write, -}; - static void __iomem *mcfg_ioremap(struct pci_mmcfg_region *cfg) { void __iomem *addr; @@ -101,8 +96,6 @@ int __init pci_mmcfg_arch_init(void) return 0; }
- raw_pci_ext_ops = &pci_mmcfg; - return 1; }
diff --git a/include/linux/ecam.h b/include/linux/ecam.h index 2387df5..fba5d6b 100644 --- a/include/linux/ecam.h +++ b/include/linux/ecam.h @@ -47,5 +47,10 @@ extern struct list_head pci_mmcfg_list;
#define PCI_MMCFG_BUS_OFFSET(bus) ((bus) << 20)
+int pci_mmcfg_read(unsigned int seg, unsigned int bus, + unsigned int devfn, int reg, int len, u32 *value); +int pci_mmcfg_write(unsigned int seg, unsigned int bus, + unsigned int devfn, int reg, int len, u32 value); + #endif /* __KERNEL__ */ #endif /* __ECAM_H */
From: Tomasz Nowicki tomasz.nowicki@linaro.org
Hosts which want to take advantage of the generic ECAM goodness should select CONFIG_GENERIC_PCI_ECAM. Otherwise, machines like 32-bit x86 are obliged to provide their own low-level ECAM calls.
Signed-off-by: Tomasz Nowicki tomasz.nowicki@linaro.org Signed-off-by: Hanjun Guo hanjun.guo@linaro.org Tested-by: Suravee Suthikulpanit Suravee.Suthikulpanit@amd.com --- arch/x86/Kconfig | 1 + arch/x86/pci/Makefile | 4 +- arch/x86/pci/mmconfig_64.c | 127 --------------------------------------------- drivers/pci/Kconfig | 3 ++ drivers/pci/ecam.c | 113 ++++++++++++++++++++++++++++++++++++++++ 5 files changed, 120 insertions(+), 128 deletions(-) delete mode 100644 arch/x86/pci/mmconfig_64.c
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 4e3dcb3..87a6393 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -144,6 +144,7 @@ config X86 select X86_FEATURE_NAMES if PROC_FS select SRCU select HAVE_PCI_ECAM + select GENERIC_PCI_ECAM if X86_64
config INSTRUCTION_DECODER def_bool y diff --git a/arch/x86/pci/Makefile b/arch/x86/pci/Makefile index 5c6fc35..efc193d 100644 --- a/arch/x86/pci/Makefile +++ b/arch/x86/pci/Makefile @@ -1,7 +1,9 @@ obj-y := i386.o init.o
obj-$(CONFIG_PCI_BIOS) += pcbios.o -obj-$(CONFIG_PCI_MMCONFIG) += mmconfig_$(BITS).o direct.o mmconfig-shared.o +mmconfig-y := direct.o mmconfig-shared.o +mmconfig-$(CONFIG_X86_32) += mmconfig_32.o +obj-$(CONFIG_PCI_MMCONFIG) += $(mmconfig-y) obj-$(CONFIG_PCI_DIRECT) += direct.o obj-$(CONFIG_PCI_OLPC) += olpc.o obj-$(CONFIG_PCI_XEN) += xen.o diff --git a/arch/x86/pci/mmconfig_64.c b/arch/x86/pci/mmconfig_64.c deleted file mode 100644 index fd857ea..0000000 --- a/arch/x86/pci/mmconfig_64.c +++ /dev/null @@ -1,127 +0,0 @@ -/* - * mmconfig.c - Low-level direct PCI config space access via MMCONFIG - * - * This is an 64bit optimized version that always keeps the full mmconfig - * space mapped. This allows lockless config space operation. - */ - -#include <linux/pci.h> -#include <linux/init.h> -#include <linux/acpi.h> -#include <linux/bitmap.h> -#include <linux/rcupdate.h> -#include <linux/ecam.h> -#include <asm/e820.h> -#include <asm/pci_x86.h> - -#define PREFIX "PCI: " - -static char __iomem *pci_dev_base(unsigned int seg, unsigned int bus, unsigned int devfn) -{ - struct pci_mmcfg_region *cfg = pci_mmconfig_lookup(seg, bus); - - if (cfg && cfg->virt) - return cfg->virt + (PCI_MMCFG_BUS_OFFSET(bus) | (devfn << 12)); - return NULL; -} - -int pci_mmcfg_read(unsigned int seg, unsigned int bus, - unsigned int devfn, int reg, int len, u32 *value) -{ - char __iomem *addr; - - /* Why do we have this when nobody checks it. How about a BUG()!? -AK */ - if (unlikely((bus > 255) || (devfn > 255) || (reg > 4095))) { -err: *value = -1; - return -EINVAL; - } - - rcu_read_lock(); - addr = pci_dev_base(seg, bus, devfn); - if (!addr) { - rcu_read_unlock(); - goto err; - } - - *value = pci_mmio_read(len, addr + reg); - rcu_read_unlock(); - - return 0; -} - -int pci_mmcfg_write(unsigned int seg, unsigned int bus, - unsigned int devfn, int reg, int len, u32 value) -{ - char __iomem *addr; - - /* Why do we have this when nobody checks it. How about a BUG()!? 
-AK */ - if (unlikely((bus > 255) || (devfn > 255) || (reg > 4095))) - return -EINVAL; - - rcu_read_lock(); - addr = pci_dev_base(seg, bus, devfn); - if (!addr) { - rcu_read_unlock(); - return -EINVAL; - } - - pci_mmio_write(len, addr + reg, value); - rcu_read_unlock(); - - return 0; -} - -static void __iomem *mcfg_ioremap(struct pci_mmcfg_region *cfg) -{ - void __iomem *addr; - u64 start, size; - int num_buses; - - start = cfg->address + PCI_MMCFG_BUS_OFFSET(cfg->start_bus); - num_buses = cfg->end_bus - cfg->start_bus + 1; - size = PCI_MMCFG_BUS_OFFSET(num_buses); - addr = ioremap_nocache(start, size); - if (addr) - addr -= PCI_MMCFG_BUS_OFFSET(cfg->start_bus); - return addr; -} - -int __init pci_mmcfg_arch_init(void) -{ - struct pci_mmcfg_region *cfg; - - list_for_each_entry(cfg, &pci_mmcfg_list, list) - if (pci_mmcfg_arch_map(cfg)) { - pci_mmcfg_arch_free(); - return 0; - } - - return 1; -} - -void __init pci_mmcfg_arch_free(void) -{ - struct pci_mmcfg_region *cfg; - - list_for_each_entry(cfg, &pci_mmcfg_list, list) - pci_mmcfg_arch_unmap(cfg); -} - -int pci_mmcfg_arch_map(struct pci_mmcfg_region *cfg) -{ - cfg->virt = mcfg_ioremap(cfg); - if (!cfg->virt) { - pr_err(PREFIX "can't map MMCONFIG at %pR\n", &cfg->res); - return -ENOMEM; - } - - return 0; -} - -void pci_mmcfg_arch_unmap(struct pci_mmcfg_region *cfg) -{ - if (cfg && cfg->virt) { - iounmap(cfg->virt + PCI_MMCFG_BUS_OFFSET(cfg->start_bus)); - cfg->virt = NULL; - } -} diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig index 90a5fb9..fae4aa7 100644 --- a/drivers/pci/Kconfig +++ b/drivers/pci/Kconfig @@ -29,6 +29,9 @@ config PCI_ECAM config HAVE_PCI_ECAM bool
+config GENERIC_PCI_ECAM + bool + config PCI_DEBUG bool "PCI Debugging" depends on PCI && DEBUG_KERNEL diff --git a/drivers/pci/ecam.c b/drivers/pci/ecam.c index c588234..796b6e7 100644 --- a/drivers/pci/ecam.c +++ b/drivers/pci/ecam.c @@ -23,6 +23,119 @@ static DEFINE_MUTEX(pci_mmcfg_lock);
LIST_HEAD(pci_mmcfg_list);
+#ifdef CONFIG_GENERIC_PCI_ECAM +static char __iomem *pci_dev_base(unsigned int seg, unsigned int bus, + unsigned int devfn) +{ + struct pci_mmcfg_region *cfg = pci_mmconfig_lookup(seg, bus); + + if (cfg && cfg->virt) + return cfg->virt + (PCI_MMCFG_BUS_OFFSET(bus) | (devfn << 12)); + return NULL; +} + +int pci_mmcfg_read(unsigned int seg, unsigned int bus, + unsigned int devfn, int reg, int len, u32 *value) +{ + char __iomem *addr; + + /* Why do we have this when nobody checks it. How about a BUG()!? -AK */ + if (unlikely((bus > 255) || (devfn > 255) || (reg > 4095))) { +err: *value = -1; + return -EINVAL; + } + + rcu_read_lock(); + addr = pci_dev_base(seg, bus, devfn); + if (!addr) { + rcu_read_unlock(); + goto err; + } + + *value = pci_mmio_read(len, addr + reg); + rcu_read_unlock(); + + return 0; +} + +int pci_mmcfg_write(unsigned int seg, unsigned int bus, + unsigned int devfn, int reg, int len, u32 value) +{ + char __iomem *addr; + + /* Why do we have this when nobody checks it. How about a BUG()!? -AK */ + if (unlikely((bus > 255) || (devfn > 255) || (reg > 4095))) + return -EINVAL; + + rcu_read_lock(); + addr = pci_dev_base(seg, bus, devfn); + if (!addr) { + rcu_read_unlock(); + return -EINVAL; + } + + pci_mmio_write(len, addr + reg, value); + rcu_read_unlock(); + + return 0; +} + +static void __iomem *mcfg_ioremap(struct pci_mmcfg_region *cfg) +{ + void __iomem *addr; + u64 start, size; + int num_buses; + + start = cfg->address + PCI_MMCFG_BUS_OFFSET(cfg->start_bus); + num_buses = cfg->end_bus - cfg->start_bus + 1; + size = PCI_MMCFG_BUS_OFFSET(num_buses); + addr = ioremap_nocache(start, size); + if (addr) + addr -= PCI_MMCFG_BUS_OFFSET(cfg->start_bus); + return addr; +} + +int __init pci_mmcfg_arch_init(void) +{ + struct pci_mmcfg_region *cfg; + + list_for_each_entry(cfg, &pci_mmcfg_list, list) + if (pci_mmcfg_arch_map(cfg)) { + pci_mmcfg_arch_free(); + return 0; + } + + return 1; +} + +void __init pci_mmcfg_arch_free(void) +{ + struct pci_mmcfg_region *cfg; + + list_for_each_entry(cfg, &pci_mmcfg_list, list) + pci_mmcfg_arch_unmap(cfg); +} + +int pci_mmcfg_arch_map(struct pci_mmcfg_region *cfg) +{ + cfg->virt = mcfg_ioremap(cfg); + if (!cfg->virt) { + pr_err(PREFIX "can't map MMCONFIG at %pR\n", &cfg->res); + return -ENOMEM; + } + + return 0; +} + +void pci_mmcfg_arch_unmap(struct pci_mmcfg_region *cfg) +{ + if (cfg && cfg->virt) { + iounmap(cfg->virt + PCI_MMCFG_BUS_OFFSET(cfg->start_bus)); + cfg->virt = NULL; + } +} +#endif + static u32 pci_mmconfig_generic_read(int len, void __iomem *addr) {
From: Tomasz Nowicki tomasz.nowicki@linaro.org
MCFG is perfectly usable by all architectures which support ACPI. ACPI mandates MCFG as the table describing PCI config space ranges, which means we should use the MMCONFIG accessors by default.
Signed-off-by: Tomasz Nowicki tomasz.nowicki@linaro.org Signed-off-by: Hanjun Guo hanjun.guo@linaro.org Tested-by: Suravee Suthikulpanit Suravee.Suthikulpanit@amd.com --- drivers/acpi/mcfg.c | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+)
diff --git a/drivers/acpi/mcfg.c b/drivers/acpi/mcfg.c
index 745b83e..90c81fa 100644
--- a/drivers/acpi/mcfg.c
+++ b/drivers/acpi/mcfg.c
@@ -12,6 +12,26 @@
 
 #define PREFIX "MCFG: "
 
+/*
+ * raw_pci_read/write - ACPI PCI config space accessors.
+ *
+ * The ACPI spec defines the MCFG table as the way we can describe access to
+ * PCI config space, so let MCFG be the default (__weak).
+ *
+ * If a platform needs something fancier, it should provide its own implementation.
+ */
+int __weak raw_pci_read(unsigned int domain, unsigned int bus,
+			unsigned int devfn, int reg, int len, u32 *val)
+{
+	return pci_mmcfg_read(domain, bus, devfn, reg, len, val);
+}
+
+int __weak raw_pci_write(unsigned int domain, unsigned int bus,
+			 unsigned int devfn, int reg, int len, u32 val)
+{
+	return pci_mmcfg_write(domain, bus, devfn, reg, len, val);
+}
+
 int __init acpi_parse_mcfg(struct acpi_table_header *header)
 {
 	struct acpi_table_mcfg *mcfg;
In drivers/xen/pci.c there is x86 arch dependent code used when CONFIG_PCI_MMCONFIG is enabled. Since CONFIG_PCI_MMCONFIG depends on ACPI, this prevents XEN PCI from running on other architectures that use ACPI with PCI_MMCONFIG enabled (such as ARM64).
Fortunately, it can be solved in a simple way. In drivers/xen/pci.c, the only x86 dependent code is if ((pci_probe & PCI_PROBE_MMCONF) == 0), where pci_probe is defined in asm/pci_x86.h; the check means that if the PCI resource was not probed via PCI_PROBE_MMCONF, the xen mcfg init is simply skipped. Actually this check is redundant, because if the PCI resource was not probed via PCI_PROBE_MMCONF, pci_mmcfg_list will be empty, and the if (list_empty()) check right after it will do the same job.
So just remove the arch related code and the header file include; this is no functional change for x86, and it also makes xen/pci.c usable for other architectures.
Signed-off-by: Hanjun Guo hanjun.guo@linaro.org CC: Konrad Rzeszutek Wilk konrad.wilk@oracle.com CC: Boris Ostrovsky boris.ostrovsky@oracle.com --- drivers/xen/pci.c | 6 ------ 1 file changed, 6 deletions(-)
diff --git a/drivers/xen/pci.c b/drivers/xen/pci.c
index 6785ebb..9a8dbe3 100644
--- a/drivers/xen/pci.c
+++ b/drivers/xen/pci.c
@@ -28,9 +28,6 @@
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
 #include "../pci/pci.h"
-#ifdef CONFIG_PCI_MMCONFIG
-#include <asm/pci_x86.h>
-#endif
 
 static bool __read_mostly pci_seg_supported = true;
 
@@ -222,9 +219,6 @@ static int __init xen_mcfg_late(void)
 	if (!xen_initial_domain())
 		return 0;
 
-	if ((pci_probe & PCI_PROBE_MMCONF) == 0)
-		return 0;
-
 	if (list_empty(&pci_mmcfg_list))
 		return 0;
On 05/26/2015 08:49 AM, Hanjun Guo wrote:
In drivers/xen/pci.c, there are arch x86 dependent codes when CONFIG_PCI_MMCONFIG is enabled, since CONFIG_PCI_MMCONFIG depends on ACPI, so this will prevent XEN PCI running on other architectures using ACPI with PCI_MMCONFIG enabled (such as ARM64).
Fortunatly, it can be sloved in a simple way. In drivers/xen/pci.c, the only x86 dependent code is if ((pci_probe & PCI_PROBE_MMCONF) == 0), and it's defined in asm/pci_x86.h, the code means that if the PCI resource is not probed in PCI_PROBE_MMCONF way, just ingnore the xen mcfg init. Actually this is duplicate, because if PCI resource is not probed in PCI_PROBE_MMCONF way, the pci_mmconfig_list will be empty, and the if (list_empty()) after it will do the same job.
So just remove the arch related code and the head file, this will be no functional change for x86, and also makes xen/pci.c usable for other architectures.
Signed-off-by: Hanjun Guo hanjun.guo@linaro.org CC: Konrad Rzeszutek Wilk konrad.wilk@oracle.com CC: Boris Ostrovsky boris.ostrovsky@oracle.com
drivers/xen/pci.c | 6 ------ 1 file changed, 6 deletions(-)
diff --git a/drivers/xen/pci.c b/drivers/xen/pci.c index 6785ebb..9a8dbe3 100644 --- a/drivers/xen/pci.c +++ b/drivers/xen/pci.c @@ -28,9 +28,6 @@ #include <asm/xen/hypervisor.h> #include <asm/xen/hypercall.h> #include "../pci/pci.h" -#ifdef CONFIG_PCI_MMCONFIG -#include <asm/pci_x86.h> -#endif
static bool __read_mostly pci_seg_supported = true;
@@ -222,9 +219,6 @@ static int __init xen_mcfg_late(void) if (!xen_initial_domain()) return 0;
- if ((pci_probe & PCI_PROBE_MMCONF) == 0)
return 0;
- if (list_empty(&pci_mmcfg_list)) return 0;
(+Stefano who is Xen ARM maintainer)
This will not build on x86 since pci_mmcfg_list since, for example, pci_mmcfg_list is declared in pci_x86.h.
And I am not sure I understand why you are trying to do this since AFAIK CONFIG_PCI_MMCONFIG is only defined on x86 so neither pci_x86.h will be included nor xen_mcfg_late() will be defined on ARM.
-boris
On 05/26/2015 09:54 AM, Boris Ostrovsky wrote:
On 05/26/2015 08:49 AM, Hanjun Guo wrote:
In drivers/xen/pci.c, there are arch x86 dependent codes when CONFIG_PCI_MMCONFIG is enabled, since CONFIG_PCI_MMCONFIG depends on ACPI, so this will prevent XEN PCI running on other architectures using ACPI with PCI_MMCONFIG enabled (such as ARM64).
Fortunatly, it can be sloved in a simple way. In drivers/xen/pci.c, the only x86 dependent code is if ((pci_probe & PCI_PROBE_MMCONF) == 0), and it's defined in asm/pci_x86.h, the code means that if the PCI resource is not probed in PCI_PROBE_MMCONF way, just ingnore the xen mcfg init. Actually this is duplicate, because if PCI resource is not probed in PCI_PROBE_MMCONF way, the pci_mmconfig_list will be empty, and the if (list_empty()) after it will do the same job.
So just remove the arch related code and the head file, this will be no functional change for x86, and also makes xen/pci.c usable for other architectures.
Signed-off-by: Hanjun Guo hanjun.guo@linaro.org CC: Konrad Rzeszutek Wilk konrad.wilk@oracle.com CC: Boris Ostrovsky boris.ostrovsky@oracle.com
drivers/xen/pci.c | 6 ------ 1 file changed, 6 deletions(-)
diff --git a/drivers/xen/pci.c b/drivers/xen/pci.c index 6785ebb..9a8dbe3 100644 --- a/drivers/xen/pci.c +++ b/drivers/xen/pci.c @@ -28,9 +28,6 @@ #include <asm/xen/hypervisor.h> #include <asm/xen/hypercall.h> #include "../pci/pci.h" -#ifdef CONFIG_PCI_MMCONFIG -#include <asm/pci_x86.h> -#endif
static bool __read_mostly pci_seg_supported = true;
@@ -222,9 +219,6 @@ static int __init xen_mcfg_late(void) if (!xen_initial_domain()) return 0;
- if ((pci_probe & PCI_PROBE_MMCONF) == 0)
return 0;
if (list_empty(&pci_mmcfg_list)) return 0;
(+Stefano who is Xen ARM maintainer)
This will not build on x86 since pci_mmcfg_list since, for example, pci_mmcfg_list is declared in pci_x86.h.
And now really with Stefano and with parsable first sentence, sorry:
This will not build on x86 since pci_mmcfg_list, for example, is declared in pci_x86.h.
And I am not sure I understand why you are trying to do this since AFAIK CONFIG_PCI_MMCONFIG is only defined on x86 so neither pci_x86.h will be included nor xen_mcfg_late() will be defined on ARM.
-boris
On 26.05.2015 16:00, Boris Ostrovsky wrote:
On 05/26/2015 09:54 AM, Boris Ostrovsky wrote:
On 05/26/2015 08:49 AM, Hanjun Guo wrote:
In drivers/xen/pci.c, there are arch x86 dependent codes when CONFIG_PCI_MMCONFIG is enabled, since CONFIG_PCI_MMCONFIG depends on ACPI, so this will prevent XEN PCI running on other architectures using ACPI with PCI_MMCONFIG enabled (such as ARM64).
Fortunatly, it can be sloved in a simple way. In drivers/xen/pci.c, the only x86 dependent code is if ((pci_probe & PCI_PROBE_MMCONF) == 0), and it's defined in asm/pci_x86.h, the code means that if the PCI resource is not probed in PCI_PROBE_MMCONF way, just ingnore the xen mcfg init. Actually this is duplicate, because if PCI resource is not probed in PCI_PROBE_MMCONF way, the pci_mmconfig_list will be empty, and the if (list_empty()) after it will do the same job.
So just remove the arch related code and the head file, this will be no functional change for x86, and also makes xen/pci.c usable for other architectures.
Signed-off-by: Hanjun Guo hanjun.guo@linaro.org CC: Konrad Rzeszutek Wilk konrad.wilk@oracle.com CC: Boris Ostrovsky boris.ostrovsky@oracle.com
drivers/xen/pci.c | 6 ------ 1 file changed, 6 deletions(-)
diff --git a/drivers/xen/pci.c b/drivers/xen/pci.c index 6785ebb..9a8dbe3 100644 --- a/drivers/xen/pci.c +++ b/drivers/xen/pci.c @@ -28,9 +28,6 @@ #include <asm/xen/hypervisor.h> #include <asm/xen/hypercall.h> #include "../pci/pci.h" -#ifdef CONFIG_PCI_MMCONFIG -#include <asm/pci_x86.h> -#endif
static bool __read_mostly pci_seg_supported = true;
@@ -222,9 +219,6 @@ static int __init xen_mcfg_late(void) if (!xen_initial_domain()) return 0;
- if ((pci_probe & PCI_PROBE_MMCONF) == 0)
return 0;
if (list_empty(&pci_mmcfg_list)) return 0;
(+Stefano who is Xen ARM maintainer)
This will not build on x86 since pci_mmcfg_list since, for example, pci_mmcfg_list is declared in pci_x86.h.
And now really with Stefano and with parsable first sentence, sorry:
This will not build on x86 since pci_mmcfg_list, for example, is declared in pci_x86.h.
With this patch set, not any more. Please see preceding patches.
Regards, Tomasz
On 05/26/2015 10:54 AM, Tomasz Nowicki wrote:
On 26.05.2015 16:00, Boris Ostrovsky wrote:
On 05/26/2015 09:54 AM, Boris Ostrovsky wrote:
On 05/26/2015 08:49 AM, Hanjun Guo wrote:
In drivers/xen/pci.c, there are arch x86 dependent codes when CONFIG_PCI_MMCONFIG is enabled, since CONFIG_PCI_MMCONFIG depends on ACPI, so this will prevent XEN PCI running on other architectures using ACPI with PCI_MMCONFIG enabled (such as ARM64).
Fortunatly, it can be sloved in a simple way. In drivers/xen/pci.c, the only x86 dependent code is if ((pci_probe & PCI_PROBE_MMCONF) == 0), and it's defined in asm/pci_x86.h, the code means that if the PCI resource is not probed in PCI_PROBE_MMCONF way, just ingnore the xen mcfg init. Actually this is duplicate, because if PCI resource is not probed in PCI_PROBE_MMCONF way, the pci_mmconfig_list will be empty, and the if (list_empty()) after it will do the same job.
So just remove the arch related code and the head file, this will be no functional change for x86, and also makes xen/pci.c usable for other architectures.
Signed-off-by: Hanjun Guo hanjun.guo@linaro.org CC: Konrad Rzeszutek Wilk konrad.wilk@oracle.com CC: Boris Ostrovsky boris.ostrovsky@oracle.com
drivers/xen/pci.c | 6 ------ 1 file changed, 6 deletions(-)
diff --git a/drivers/xen/pci.c b/drivers/xen/pci.c index 6785ebb..9a8dbe3 100644 --- a/drivers/xen/pci.c +++ b/drivers/xen/pci.c @@ -28,9 +28,6 @@ #include <asm/xen/hypervisor.h> #include <asm/xen/hypercall.h> #include "../pci/pci.h" -#ifdef CONFIG_PCI_MMCONFIG -#include <asm/pci_x86.h> -#endif
static bool __read_mostly pci_seg_supported = true;
@@ -222,9 +219,6 @@ static int __init xen_mcfg_late(void) if (!xen_initial_domain()) return 0;
- if ((pci_probe & PCI_PROBE_MMCONF) == 0)
return 0;
if (list_empty(&pci_mmcfg_list)) return 0;
(+Stefano who is Xen ARM maintainer)
This will not build on x86 since pci_mmcfg_list since, for example, pci_mmcfg_list is declared in pci_x86.h.
And now really with Stefano and with parsable first sentence, sorry:
This will not build on x86 since pci_mmcfg_list, for example, is declared in pci_x86.h.
With this patch set, not any more. Please see preceding patches.
OK, I didn't notice this was part of a series.
Then if not having PCI_PROBE_MMCONF bit set is indeed equivalent to list_empty(&pci_mmcfg_list), is there any reason for this flag to (continue to) exist? (and also for pci_mmcfg_arch_init_failed.)
-boris
On 2015年05月26日 23:44, Boris Ostrovsky wrote:
On 05/26/2015 10:54 AM, Tomasz Nowicki wrote:
On 26.05.2015 16:00, Boris Ostrovsky wrote:
On 05/26/2015 09:54 AM, Boris Ostrovsky wrote:
On 05/26/2015 08:49 AM, Hanjun Guo wrote:
In drivers/xen/pci.c, there are arch x86 dependent codes when CONFIG_PCI_MMCONFIG is enabled, since CONFIG_PCI_MMCONFIG depends on ACPI, so this will prevent XEN PCI running on other architectures using ACPI with PCI_MMCONFIG enabled (such as ARM64).
Fortunatly, it can be sloved in a simple way. In drivers/xen/pci.c, the only x86 dependent code is if ((pci_probe & PCI_PROBE_MMCONF) == 0), and it's defined in asm/pci_x86.h, the code means that if the PCI resource is not probed in PCI_PROBE_MMCONF way, just ingnore the xen mcfg init. Actually this is duplicate, because if PCI resource is not probed in PCI_PROBE_MMCONF way, the pci_mmconfig_list will be empty, and the if (list_empty()) after it will do the same job.
So just remove the arch related code and the head file, this will be no functional change for x86, and also makes xen/pci.c usable for other architectures.
Signed-off-by: Hanjun Guo hanjun.guo@linaro.org CC: Konrad Rzeszutek Wilk konrad.wilk@oracle.com CC: Boris Ostrovsky boris.ostrovsky@oracle.com
drivers/xen/pci.c | 6 ------ 1 file changed, 6 deletions(-)
diff --git a/drivers/xen/pci.c b/drivers/xen/pci.c index 6785ebb..9a8dbe3 100644 --- a/drivers/xen/pci.c +++ b/drivers/xen/pci.c @@ -28,9 +28,6 @@ #include <asm/xen/hypervisor.h> #include <asm/xen/hypercall.h> #include "../pci/pci.h" -#ifdef CONFIG_PCI_MMCONFIG -#include <asm/pci_x86.h> -#endif
static bool __read_mostly pci_seg_supported = true;
@@ -222,9 +219,6 @@ static int __init xen_mcfg_late(void) if (!xen_initial_domain()) return 0;
- if ((pci_probe & PCI_PROBE_MMCONF) == 0)
return 0;
if (list_empty(&pci_mmcfg_list)) return 0;
(+Stefano who is Xen ARM maintainer)
This will not build on x86 since pci_mmcfg_list since, for example, pci_mmcfg_list is declared in pci_x86.h.
And now really with Stefano and with parsable first sentence, sorry:
This will not build on x86 since pci_mmcfg_list, for example, is declared in pci_x86.h.
With this patch set, not any more. Please see preceding patches.
OK, I didn't notice this was part of a series.
Sorry, I didn't cc you all of those patches.
Then if not having PCI_PROBE_MMCONF bit set is indeed equivalent to list_empty(&pci_mmcfg_list), is there any reason for this flag to (continue to) exist? (and also for pci_mmcfg_arch_init_failed.)
I think the PCI_PROBE_MMCONF bit is needed for the early init of PCI mmconfig in the x86 arch related code, but xen_mcfg_late() is called after acpi_init(), by which point mmconfig is ready for use if it is available (pci_mmcfg_list is either populated or empty). So "not having PCI_PROBE_MMCONF set is equivalent to list_empty(&pci_mmcfg_list)" does not hold in all cases, but I think it does hold once mmconfig has been initialized.
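In other words, after this patch the entry checks in xen_mcfg_late() would look roughly like this (sketch only, the per-region Xen hypercall body is elided), which is why the list_empty() test alone is enough once acpi_init() has run:

static int __init xen_mcfg_late(void)
{
	struct pci_mmcfg_region *cfg;

	if (!xen_initial_domain())
		return 0;

	/*
	 * By the time this runs, acpi_init() has already parsed MCFG,
	 * so pci_mmcfg_list is either populated or genuinely empty.
	 */
	if (list_empty(&pci_mmcfg_list))
		return 0;

	list_for_each_entry(cfg, &pci_mmcfg_list, list) {
		/* report each MCFG region to Xen, elided */
	}

	return 0;
}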
I think my change log is misleading and needs updating :)
Thanks Hanjun
Based on Jiang Liu's common interface to support PCI host bridge init and the refactoring of MMCONFIG, this patch uses information from the ACPI MCFG table and the IO/irq resources from _CRS to init the ARM64 PCI host bridge, so PCI will work on ARM64.
This patch is based on Mark Salter and Tomasz Nowicki's work.
Signed-off-by: Hanjun Guo hanjun.guo@linaro.org Tested-by: Suravee Suthikulpanit Suravee.Suthikulpanit@amd.com CC: Arnd Bergmann arnd@arndb.de CC: Catalin Marinas catalin.marinas@arm.com CC: Liviu Dudau Liviu.Dudau@arm.com CC: Lorenzo Pieralisi Lorenzo.Pieralisi@arm.com CC: Will Deacon will.deacon@arm.com --- arch/arm64/Kconfig | 7 ++ arch/arm64/kernel/pci.c | 245 +++++++++++++++++++++++++++++++++++++++++++++--- drivers/pci/pci.c | 26 +++-- 3 files changed, 257 insertions(+), 21 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 9b80428..8e4b789 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -276,6 +276,13 @@ config PCI_DOMAINS_GENERIC config PCI_SYSCALL def_bool PCI
+config PCI_MMCONFIG + def_bool y + select PCI_ECAM + select HAVE_PCI_ECAM + select GENERIC_PCI_ECAM + depends on ACPI + source "drivers/pci/Kconfig" source "drivers/pci/pcie/Kconfig" source "drivers/pci/hotplug/Kconfig" diff --git a/arch/arm64/kernel/pci.c b/arch/arm64/kernel/pci.c index 4095379..d1629dc 100644 --- a/arch/arm64/kernel/pci.c +++ b/arch/arm64/kernel/pci.c @@ -11,12 +11,15 @@ */
#include <linux/acpi.h> +#include <linux/ecam.h> #include <linux/init.h> #include <linux/io.h> #include <linux/kernel.h> #include <linux/mm.h> +#include <linux/of_address.h> #include <linux/of_pci.h> #include <linux/of_platform.h> +#include <linux/pci-acpi.h> #include <linux/slab.h>
#include <asm/pci-bridge.h> @@ -43,31 +46,251 @@ resource_size_t pcibios_align_resource(void *data, const struct resource *res, */ int pcibios_add_device(struct pci_dev *dev) { - dev->irq = of_irq_parse_and_map_pci(dev, 0, 0); + if (acpi_disabled) + dev->irq = of_irq_parse_and_map_pci(dev, 0, 0);
return 0; }
+#ifdef CONFIG_ACPI +int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge) +{ + struct pci_controller *sd = bridge->bus->sysdata; + + ACPI_COMPANION_SET(&bridge->dev, sd->companion); + return 0; +} + +void pcibios_add_bus(struct pci_bus *bus) +{ + acpi_pci_add_bus(bus); +} + +void pcibios_remove_bus(struct pci_bus *bus) +{ + acpi_pci_remove_bus(bus); +} + +int pcibios_enable_irq(struct pci_dev *dev) +{ + if (!pci_dev_msi_enabled(dev)) + acpi_pci_irq_enable(dev); + return 0; +} + +int pcibios_disable_irq(struct pci_dev *dev) +{ + if (!pci_dev_msi_enabled(dev)) + acpi_pci_irq_disable(dev); + return 0; +} + +int pcibios_enable_device(struct pci_dev *dev, int bars) +{ + int err; + + err = pci_enable_resources(dev, bars); + if (err < 0) + return err; + + return pcibios_enable_irq(dev); +} + +static int __init pcibios_assign_resources(void) +{ + struct pci_bus *root_bus; + + if (acpi_disabled) + return 0; + + list_for_each_entry(root_bus, &pci_root_buses, node) { + pcibios_resource_survey_bus(root_bus); + pci_assign_unassigned_root_bus_resources(root_bus); + } + return 0; +} /* - * raw_pci_read/write - Platform-specific PCI config space access. + * fs_initcall comes after subsys_initcall, so we know acpi scan + * has run. */ -int raw_pci_read(unsigned int domain, unsigned int bus, - unsigned int devfn, int reg, int len, u32 *val) +fs_initcall(pcibios_assign_resources); + +static int pci_read(struct pci_bus *bus, unsigned int devfn, int where, + int size, u32 *value) { - return -ENXIO; + return raw_pci_read(pci_domain_nr(bus), bus->number, + devfn, where, size, value); }
-int raw_pci_write(unsigned int domain, unsigned int bus, - unsigned int devfn, int reg, int len, u32 val) +static int pci_write(struct pci_bus *bus, unsigned int devfn, int where, + int size, u32 value) { - return -ENXIO; + return raw_pci_write(pci_domain_nr(bus), bus->number, + devfn, where, size, value); }
-#ifdef CONFIG_ACPI +struct pci_ops pci_root_ops = { + .read = pci_read, + .write = pci_write, +}; + +struct pci_root_info { + struct acpi_pci_root_info common; +#ifdef CONFIG_PCI_MMCONFIG + bool mcfg_added; + u8 start_bus; + u8 end_bus; +#endif +}; + +#ifdef CONFIG_PCI_MMCONFIG +static int pci_add_mmconfig_region(struct acpi_pci_root_info *ci) +{ + struct pci_mmcfg_region *cfg; + struct pci_root_info *info; + struct acpi_pci_root *root = ci->root; + int err, seg = ci->controller.segment; + + info = container_of(ci, struct pci_root_info, common); + info->start_bus = (u8)root->secondary.start; + info->end_bus = (u8)root->secondary.end; + info->mcfg_added = false; + + rcu_read_lock(); + cfg = pci_mmconfig_lookup(seg, info->start_bus); + rcu_read_unlock(); + if (cfg) + return 0; + + cfg = pci_mmconfig_alloc(seg, info->start_bus, info->end_bus, + root->mcfg_addr); + if (!cfg) + return -ENOMEM; + + err = pci_mmconfig_inject(cfg); + if (!err) + info->mcfg_added = true; + + return err; +} + +static void pci_remove_mmconfig_region(struct acpi_pci_root_info *ci) +{ + struct pci_root_info *info; + + info = container_of(ci, struct pci_root_info, common); + if (info->mcfg_added) { + pci_mmconfig_delete(ci->controller.segment, info->start_bus, + info->end_bus); + info->mcfg_added = false; + } +} +#else +static int pci_add_mmconfig_region(struct acpi_pci_root_info *ci) +{ + return 0; +} + +static void pci_remove_mmconfig_region(struct acpi_pci_root_info *ci) { } +#endif + +static int pci_acpi_root_init_info(struct acpi_pci_root_info *ci) +{ + return pci_add_mmconfig_region(ci); +} + +static void pci_acpi_root_release_info(struct acpi_pci_root_info *ci) +{ + struct pci_root_info *info; + + info = container_of(ci, struct pci_root_info, common); + pci_remove_mmconfig_region(ci); + kfree(info); +} + +static int pci_acpi_root_prepare_resources(struct acpi_pci_root_info *ci, + int status) +{ + struct resource_entry *entry, *tmp; + + resource_list_for_each_entry_safe(entry, tmp, &ci->resources) { + struct resource *res = entry->res; + + /* + * Special handling for ARM IO range + * TODO: need to move pci_register_io_range() function out + * of drivers/of/address.c for both used by DT and ACPI + */ + if (res->flags & IORESOURCE_IO) { + unsigned long port; + int err; + resource_size_t length = res->end - res->start; + + err = pci_register_io_range(res->start, length); + if (err) { + resource_list_destroy_entry(entry); + continue; + } + + port = pci_address_to_pio(res->start); + if (port == (unsigned long)-1) { + resource_list_destroy_entry(entry); + continue; + } + + res->start = port; + res->end = res->start + length - 1; + + if (pci_remap_iospace(res, res->start) < 0) + resource_list_destroy_entry(entry); + } + } + + return status; +} + +static struct acpi_pci_root_ops acpi_pci_root_ops = { + .pci_ops = &pci_root_ops, + .init_info = pci_acpi_root_init_info, + .release_info = pci_acpi_root_release_info, + .prepare_resources = pci_acpi_root_prepare_resources, +}; + /* Root bridge scanning */ struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root) { - /* TODO: Should be revisited when implementing PCI on ACPI */ - return NULL; + struct pci_root_info *info; + int node; + int domain = root->segment; + int busnum = root->secondary.start; + struct pci_bus *bus; + + if (domain && !pci_domains_supported) { + pr_warn("PCI %04x:%02x: multiple domains not supported.\n", + domain, busnum); + return NULL; + } + + node = acpi_get_node(root->device->handle); + info = kzalloc_node(sizeof(*info), GFP_KERNEL, node); + if 
(!info) { + dev_err(&root->device->dev, "pci_bus %04x:%02x: ignored (out of memory)\n", + domain, busnum); + return NULL; + } + + bus = acpi_pci_root_create(root, &acpi_pci_root_ops, &info->common); + + /* After the PCI-E bus has been walked and all devices discovered, + * configure any settings of the fabric that might be necessary. + */ + if (bus) { + struct pci_bus *child; + + list_for_each_entry(child, &bus->children, node) + pcie_bus_configure_settings(child); + } + + return bus; } #endif diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c index acc4b6e..0268a1d 100644 --- a/drivers/pci/pci.c +++ b/drivers/pci/pci.c @@ -25,6 +25,7 @@ #include <linux/device.h> #include <linux/pm_runtime.h> #include <linux/pci_hotplug.h> +#include <linux/acpi.h> #include <asm-generic/pci-bridge.h> #include <asm/setup.h> #include "pci.h" @@ -4509,7 +4510,7 @@ int pci_get_new_domain_nr(void) void pci_bus_assign_domain_nr(struct pci_bus *bus, struct device *parent) { static int use_dt_domains = -1; - int domain = of_get_pci_domain_nr(parent->of_node); + int domain;
/* * Check DT domain and use_dt_domains values. @@ -4537,17 +4538,22 @@ void pci_bus_assign_domain_nr(struct pci_bus *bus, struct device *parent) * invalidating the domain value (domain = -1) and printing a * corresponding error. */ - if (domain >= 0 && use_dt_domains) { - use_dt_domains = 1; - } else if (domain < 0 && use_dt_domains != 1) { - use_dt_domains = 0; - domain = pci_get_new_domain_nr(); + if (acpi_disabled) { + domain = of_get_pci_domain_nr(parent->of_node); + if (domain >= 0 && use_dt_domains) { + use_dt_domains = 1; + } else if (domain < 0 && use_dt_domains != 1) { + use_dt_domains = 0; + domain = pci_get_new_domain_nr(); + } else { + dev_err(parent, "Node %s has inconsistent "linux,pci-domain" property in DT\n", + parent->of_node->full_name); + domain = -1; + } } else { - dev_err(parent, "Node %s has inconsistent "linux,pci-domain" property in DT\n", - parent->of_node->full_name); - domain = -1; + struct pci_controller *sd = bus->sysdata; + domain = sd->segment; } - bus->domain_nr = domain; } #endif
On 26.05.2015 14:49, Hanjun Guo wrote:
Based on Jiang Liu's common interface to support PCI host bridge init and refactoring of MMCONFIG, this patch using information from ACPI MCFG table and IO/irq resources from _CRS to init ARM64 PCI hostbridge, then PCI will work on ARM64.
This patch is based on Mark Salter and Tomasz Nowicki's work.
Signed-off-by: Hanjun Guo hanjun.guo@linaro.org Tested-by: Suravee Suthikulpanit Suravee.Suthikulpanit@amd.com CC: Arnd Bergmann arnd@arndb.de CC: Catalin Marinas catalin.marinas@arm.com CC: Liviu Dudau Liviu.Dudau@arm.com CC: Lorenzo Pieralisi Lorenzo.Pieralisi@arm.com CC: Will Deacon will.deacon@arm.com
arch/arm64/Kconfig | 7 ++ arch/arm64/kernel/pci.c | 245 +++++++++++++++++++++++++++++++++++++++++++++--- drivers/pci/pci.c | 26 +++-- 3 files changed, 257 insertions(+), 21 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 9b80428..8e4b789 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -276,6 +276,13 @@ config PCI_DOMAINS_GENERIC config PCI_SYSCALL def_bool PCI
+config PCI_MMCONFIG
- def_bool y
- select PCI_ECAM
- select HAVE_PCI_ECAM
- select GENERIC_PCI_ECAM
HAVE_PCI_ECAM and GENERIC_PCI_ECAM should be selected by platform.
- depends on ACPI
I think we should depend on PCI too.
Tomasz
- source "drivers/pci/Kconfig" source "drivers/pci/pcie/Kconfig" source "drivers/pci/hotplug/Kconfig"
diff --git a/arch/arm64/kernel/pci.c b/arch/arm64/kernel/pci.c
index 4095379..d1629dc 100644
--- a/arch/arm64/kernel/pci.c
+++ b/arch/arm64/kernel/pci.c
@@ -11,12 +11,15 @@
  */

 #include <linux/acpi.h>
+#include <linux/ecam.h>
 #include <linux/init.h>
 #include <linux/io.h>
 #include <linux/kernel.h>
 #include <linux/mm.h>
+#include <linux/of_address.h>
 #include <linux/of_pci.h>
 #include <linux/of_platform.h>
+#include <linux/pci-acpi.h>
 #include <linux/slab.h>

 #include <asm/pci-bridge.h>

@@ -43,31 +46,251 @@ resource_size_t pcibios_align_resource(void *data, const struct resource *res,
  */
 int pcibios_add_device(struct pci_dev *dev)
 {
-	dev->irq = of_irq_parse_and_map_pci(dev, 0, 0);
+	if (acpi_disabled)
+		dev->irq = of_irq_parse_and_map_pci(dev, 0, 0);

 	return 0;
 }
+#ifdef CONFIG_ACPI +int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge) +{
- struct pci_controller *sd = bridge->bus->sysdata;
- ACPI_COMPANION_SET(&bridge->dev, sd->companion);
- return 0;
+}
+void pcibios_add_bus(struct pci_bus *bus) +{
- acpi_pci_add_bus(bus);
+}
+void pcibios_remove_bus(struct pci_bus *bus) +{
- acpi_pci_remove_bus(bus);
+}
+int pcibios_enable_irq(struct pci_dev *dev) +{
- if (!pci_dev_msi_enabled(dev))
acpi_pci_irq_enable(dev);
- return 0;
+}
+int pcibios_disable_irq(struct pci_dev *dev) +{
- if (!pci_dev_msi_enabled(dev))
acpi_pci_irq_disable(dev);
- return 0;
+}
+int pcibios_enable_device(struct pci_dev *dev, int bars) +{
- int err;
- err = pci_enable_resources(dev, bars);
- if (err < 0)
return err;
- return pcibios_enable_irq(dev);
+}
+static int __init pcibios_assign_resources(void) +{
- struct pci_bus *root_bus;
- if (acpi_disabled)
return 0;
- list_for_each_entry(root_bus, &pci_root_buses, node) {
pcibios_resource_survey_bus(root_bus);
pci_assign_unassigned_root_bus_resources(root_bus);
- }
- return 0;
+}
+
 /*
- * raw_pci_read/write - Platform-specific PCI config space access.
+ * fs_initcall comes after subsys_initcall, so we know acpi scan
+ * has run.
  */
-int raw_pci_read(unsigned int domain, unsigned int bus,
-		  unsigned int devfn, int reg, int len, u32 *val)
+fs_initcall(pcibios_assign_resources);
+
+static int pci_read(struct pci_bus *bus, unsigned int devfn, int where,
+		    int size, u32 *value)
 {
-	return -ENXIO;
+	return raw_pci_read(pci_domain_nr(bus), bus->number,
+			    devfn, where, size, value);
 }

-int raw_pci_write(unsigned int domain, unsigned int bus,
-		  unsigned int devfn, int reg, int len, u32 val)
+static int pci_write(struct pci_bus *bus, unsigned int devfn, int where,
+		     int size, u32 value)
 {
-	return -ENXIO;
+	return raw_pci_write(pci_domain_nr(bus), bus->number,
+			     devfn, where, size, value);
 }

-#ifdef CONFIG_ACPI
+struct pci_ops pci_root_ops = {
+	.read = pci_read,
+	.write = pci_write,
+};
+struct pci_root_info {
- struct acpi_pci_root_info common;
+#ifdef CONFIG_PCI_MMCONFIG
- bool mcfg_added;
- u8 start_bus;
- u8 end_bus;
+#endif +};
+#ifdef CONFIG_PCI_MMCONFIG +static int pci_add_mmconfig_region(struct acpi_pci_root_info *ci) +{
- struct pci_mmcfg_region *cfg;
- struct pci_root_info *info;
- struct acpi_pci_root *root = ci->root;
- int err, seg = ci->controller.segment;
- info = container_of(ci, struct pci_root_info, common);
- info->start_bus = (u8)root->secondary.start;
- info->end_bus = (u8)root->secondary.end;
- info->mcfg_added = false;
- rcu_read_lock();
- cfg = pci_mmconfig_lookup(seg, info->start_bus);
- rcu_read_unlock();
- if (cfg)
return 0;
- cfg = pci_mmconfig_alloc(seg, info->start_bus, info->end_bus,
root->mcfg_addr);
- if (!cfg)
return -ENOMEM;
- err = pci_mmconfig_inject(cfg);
- if (!err)
info->mcfg_added = true;
- return err;
+}
+static void pci_remove_mmconfig_region(struct acpi_pci_root_info *ci) +{
- struct pci_root_info *info;
- info = container_of(ci, struct pci_root_info, common);
- if (info->mcfg_added) {
pci_mmconfig_delete(ci->controller.segment, info->start_bus,
info->end_bus);
info->mcfg_added = false;
- }
+} +#else +static int pci_add_mmconfig_region(struct acpi_pci_root_info *ci) +{
- return 0;
+}
+static void pci_remove_mmconfig_region(struct acpi_pci_root_info *ci) { } +#endif
+static int pci_acpi_root_init_info(struct acpi_pci_root_info *ci) +{
- return pci_add_mmconfig_region(ci);
+}
+static void pci_acpi_root_release_info(struct acpi_pci_root_info *ci) +{
- struct pci_root_info *info;
- info = container_of(ci, struct pci_root_info, common);
- pci_remove_mmconfig_region(ci);
- kfree(info);
+}
+static int pci_acpi_root_prepare_resources(struct acpi_pci_root_info *ci,
int status)
+{
- struct resource_entry *entry, *tmp;
- resource_list_for_each_entry_safe(entry, tmp, &ci->resources) {
struct resource *res = entry->res;
/*
* Special handling for ARM IO range
* TODO: need to move pci_register_io_range() function out
* of drivers/of/address.c for both used by DT and ACPI
*/
if (res->flags & IORESOURCE_IO) {
unsigned long port;
int err;
resource_size_t length = res->end - res->start;
err = pci_register_io_range(res->start, length);
if (err) {
resource_list_destroy_entry(entry);
continue;
}
port = pci_address_to_pio(res->start);
if (port == (unsigned long)-1) {
resource_list_destroy_entry(entry);
continue;
}
res->start = port;
res->end = res->start + length - 1;
if (pci_remap_iospace(res, res->start) < 0)
resource_list_destroy_entry(entry);
}
- }
- return status;
+}
+static struct acpi_pci_root_ops acpi_pci_root_ops = {
- .pci_ops = &pci_root_ops,
- .init_info = pci_acpi_root_init_info,
- .release_info = pci_acpi_root_release_info,
- .prepare_resources = pci_acpi_root_prepare_resources,
+};
 /* Root bridge scanning */
 struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
 {
-	/* TODO: Should be revisited when implementing PCI on ACPI */
-	return NULL;
+	struct pci_root_info *info;
+	int node;
+	int domain = root->segment;
+	int busnum = root->secondary.start;
+	struct pci_bus *bus;
+
+	if (domain && !pci_domains_supported) {
+		pr_warn("PCI %04x:%02x: multiple domains not supported.\n",
+			domain, busnum);
+		return NULL;
+	}
+
+	node = acpi_get_node(root->device->handle);
+	info = kzalloc_node(sizeof(*info), GFP_KERNEL, node);
+	if (!info) {
+		dev_err(&root->device->dev, "pci_bus %04x:%02x: ignored (out of memory)\n",
+			domain, busnum);
+		return NULL;
+	}
+
+	bus = acpi_pci_root_create(root, &acpi_pci_root_ops, &info->common);
+
+	/* After the PCI-E bus has been walked and all devices discovered,
+	 * configure any settings of the fabric that might be necessary.
+	 */
+	if (bus) {
+		struct pci_bus *child;
+
+		list_for_each_entry(child, &bus->children, node)
+			pcie_bus_configure_settings(child);
+	}
+
+	return bus;
 }
 #endif
diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index acc4b6e..0268a1d 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -25,6 +25,7 @@
 #include <linux/device.h>
 #include <linux/pm_runtime.h>
 #include <linux/pci_hotplug.h>
+#include <linux/acpi.h>
 #include <asm-generic/pci-bridge.h>
 #include <asm/setup.h>
 #include "pci.h"
@@ -4509,7 +4510,7 @@ int pci_get_new_domain_nr(void)
 void pci_bus_assign_domain_nr(struct pci_bus *bus, struct device *parent)
 {
 	static int use_dt_domains = -1;
-	int domain = of_get_pci_domain_nr(parent->of_node);
+	int domain;

 	/*
 	 * Check DT domain and use_dt_domains values.
@@ -4537,17 +4538,22 @@ void pci_bus_assign_domain_nr(struct pci_bus *bus, struct device *parent)
 	 * invalidating the domain value (domain = -1) and printing a
 	 * corresponding error.
 	 */
-	if (domain >= 0 && use_dt_domains) {
-		use_dt_domains = 1;
-	} else if (domain < 0 && use_dt_domains != 1) {
-		use_dt_domains = 0;
-		domain = pci_get_new_domain_nr();
+	if (acpi_disabled) {
+		domain = of_get_pci_domain_nr(parent->of_node);
+		if (domain >= 0 && use_dt_domains) {
+			use_dt_domains = 1;
+		} else if (domain < 0 && use_dt_domains != 1) {
+			use_dt_domains = 0;
+			domain = pci_get_new_domain_nr();
+		} else {
+			dev_err(parent, "Node %s has inconsistent \"linux,pci-domain\" property in DT\n",
+				parent->of_node->full_name);
+			domain = -1;
+		}
 	} else {
-		dev_err(parent, "Node %s has inconsistent \"linux,pci-domain\" property in DT\n",
-			parent->of_node->full_name);
-		domain = -1;
+		struct pci_controller *sd = bus->sysdata;
+		domain = sd->segment;
 	}
-
 	bus->domain_nr = domain;
 }
 #endif
On 2015-05-26 23:12, Tomasz Nowicki wrote:
On 26.05.2015 14:49, Hanjun Guo wrote:
Based on Jiang Liu's common interface for PCI host bridge init and the refactoring of MMCONFIG, this patch uses information from the ACPI MCFG table and the IO/irq resources from _CRS to initialize the ARM64 PCI host bridge, so that PCI works on ARM64.
This patch is based on Mark Salter and Tomasz Nowicki's work.
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
CC: Arnd Bergmann <arnd@arndb.de>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Liviu Dudau <Liviu.Dudau@arm.com>
CC: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
CC: Will Deacon <will.deacon@arm.com>
 arch/arm64/Kconfig      |   7 ++
 arch/arm64/kernel/pci.c | 245 +++++++++++++++++++++++++++++++++++++++++++++---
 drivers/pci/pci.c       |  26 +--
 3 files changed, 257 insertions(+), 21 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 9b80428..8e4b789 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -276,6 +276,13 @@ config PCI_DOMAINS_GENERIC config PCI_SYSCALL def_bool PCI
+config PCI_MMCONFIG
- def_bool y
- select PCI_ECAM
- select HAVE_PCI_ECAM
- select GENERIC_PCI_ECAM
HAVE_PCI_ECAM and GENERIC_PCI_ECAM should be selected by platform.
OK.
- depends on ACPI
I think we should depend on PCI too.
Since ACPI already depends on PCI, a separate "depends on PCI" here would be redundant, I think.
Thanks Hanjun
On Tue, May 26, 2015 at 01:49:24PM +0100, Hanjun Guo wrote:
Based on Jiang Liu's common interface for PCI host bridge init and the refactoring of MMCONFIG, this patch uses information from the ACPI MCFG table and the IO/irq resources from _CRS to initialize the ARM64 PCI host bridge, so that PCI works on ARM64.
This patch is based on Mark Salter and Tomasz Nowicki's work.
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
CC: Arnd Bergmann <arnd@arndb.de>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Liviu Dudau <Liviu.Dudau@arm.com>
CC: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
CC: Will Deacon <will.deacon@arm.com>
 arch/arm64/Kconfig      |   7 ++
 arch/arm64/kernel/pci.c | 245 +++++++++++++++++++++++++++++++++++++++++++++---
 drivers/pci/pci.c       |  26 +--
 3 files changed, 257 insertions(+), 21 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 9b80428..8e4b789 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -276,6 +276,13 @@ config PCI_DOMAINS_GENERIC config PCI_SYSCALL def_bool PCI
+config PCI_MMCONFIG
def_bool y
select PCI_ECAM
select HAVE_PCI_ECAM
select GENERIC_PCI_ECAM
depends on ACPI
source "drivers/pci/Kconfig" source "drivers/pci/pcie/Kconfig" source "drivers/pci/hotplug/Kconfig" diff --git a/arch/arm64/kernel/pci.c b/arch/arm64/kernel/pci.c index 4095379..d1629dc 100644 --- a/arch/arm64/kernel/pci.c +++ b/arch/arm64/kernel/pci.c @@ -11,12 +11,15 @@ */
#include <linux/acpi.h> +#include <linux/ecam.h> #include <linux/init.h> #include <linux/io.h> #include <linux/kernel.h> #include <linux/mm.h> +#include <linux/of_address.h> #include <linux/of_pci.h> #include <linux/of_platform.h> +#include <linux/pci-acpi.h> #include <linux/slab.h>
#include <asm/pci-bridge.h> @@ -43,31 +46,251 @@ resource_size_t pcibios_align_resource(void *data, const struct resource *res, */ int pcibios_add_device(struct pci_dev *dev) {
-	dev->irq = of_irq_parse_and_map_pci(dev, 0, 0);
+	if (acpi_disabled)
+		dev->irq = of_irq_parse_and_map_pci(dev, 0, 0);

 	return 0;
 }
+#ifdef CONFIG_ACPI +int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge) +{
struct pci_controller *sd = bridge->bus->sysdata;
ACPI_COMPANION_SET(&bridge->dev, sd->companion);
return 0;
+}
+void pcibios_add_bus(struct pci_bus *bus) +{
acpi_pci_add_bus(bus);
+}
+void pcibios_remove_bus(struct pci_bus *bus) +{
acpi_pci_remove_bus(bus);
+}
+int pcibios_enable_irq(struct pci_dev *dev) +{
if (!pci_dev_msi_enabled(dev))
acpi_pci_irq_enable(dev);
return 0;
+}
+int pcibios_disable_irq(struct pci_dev *dev) +{
if (!pci_dev_msi_enabled(dev))
acpi_pci_irq_disable(dev);
return 0;
+}
+int pcibios_enable_device(struct pci_dev *dev, int bars) +{
int err;
err = pci_enable_resources(dev, bars);
if (err < 0)
return err;
return pcibios_enable_irq(dev);
+}
+static int __init pcibios_assign_resources(void) +{
struct pci_bus *root_bus;
if (acpi_disabled)
return 0;
list_for_each_entry(root_bus, &pci_root_buses, node) {
pcibios_resource_survey_bus(root_bus);
pci_assign_unassigned_root_bus_resources(root_bus);
}
return 0;
+}
I'm starting to sound like a stuck record (and getting bored of it myself!), but none of this looks arch-specific. Can it be moved into the core?
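To illustrate the direction (completely untested sketch, and the exact placement is up for debate): drivers/pci/probe.c already carries weak no-op versions of pcibios_add_bus()/pcibios_remove_bus(), so ACPI-aware defaults in the core would let arm64 (and the essentially identical x86/ia64 copies) drop this glue entirely:

#include <linux/pci.h>
#include <linux/pci-acpi.h>

/*
 * Untested sketch: fill in the existing weak defaults in the PCI core
 * instead of duplicating them per arch.  acpi_pci_add_bus() and
 * acpi_pci_remove_bus() should already degrade to no-op stubs when
 * CONFIG_ACPI is not set.
 */
void __weak pcibios_add_bus(struct pci_bus *bus)
{
	acpi_pci_add_bus(bus);
}

void __weak pcibios_remove_bus(struct pci_bus *bus)
{
	acpi_pci_remove_bus(bus);
}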
Will
On 2015/5/27 1:13, Will Deacon wrote:
On Tue, May 26, 2015 at 01:49:24PM +0100, Hanjun Guo wrote:
Based on Jiang Liu's common interface for PCI host bridge init and the refactoring of MMCONFIG, this patch uses information from the ACPI MCFG table and the IO/irq resources from _CRS to initialize the ARM64 PCI host bridge, so that PCI works on ARM64.
This patch is based on Mark Salter and Tomasz Nowicki's work.
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
CC: Arnd Bergmann <arnd@arndb.de>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Liviu Dudau <Liviu.Dudau@arm.com>
CC: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
CC: Will Deacon <will.deacon@arm.com>
 arch/arm64/Kconfig      |   7 ++
 arch/arm64/kernel/pci.c | 245 +++++++++++++++++++++++++++++++++++++++++++++---
 drivers/pci/pci.c       |  26 +--
 3 files changed, 257 insertions(+), 21 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 9b80428..8e4b789 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -276,6 +276,13 @@ config PCI_DOMAINS_GENERIC config PCI_SYSCALL def_bool PCI
+config PCI_MMCONFIG
def_bool y
select PCI_ECAM
select HAVE_PCI_ECAM
select GENERIC_PCI_ECAM
depends on ACPI
source "drivers/pci/Kconfig" source "drivers/pci/pcie/Kconfig" source "drivers/pci/hotplug/Kconfig" diff --git a/arch/arm64/kernel/pci.c b/arch/arm64/kernel/pci.c index 4095379..d1629dc 100644 --- a/arch/arm64/kernel/pci.c +++ b/arch/arm64/kernel/pci.c @@ -11,12 +11,15 @@ */
#include <linux/acpi.h> +#include <linux/ecam.h> #include <linux/init.h> #include <linux/io.h> #include <linux/kernel.h> #include <linux/mm.h> +#include <linux/of_address.h> #include <linux/of_pci.h> #include <linux/of_platform.h> +#include <linux/pci-acpi.h> #include <linux/slab.h>
#include <asm/pci-bridge.h> @@ -43,31 +46,251 @@ resource_size_t pcibios_align_resource(void *data, const struct resource *res, */ int pcibios_add_device(struct pci_dev *dev) {
-	dev->irq = of_irq_parse_and_map_pci(dev, 0, 0);
+	if (acpi_disabled)
+		dev->irq = of_irq_parse_and_map_pci(dev, 0, 0);

 	return 0;
 }
+#ifdef CONFIG_ACPI +int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge) +{
struct pci_controller *sd = bridge->bus->sysdata;
ACPI_COMPANION_SET(&bridge->dev, sd->companion);
return 0;
+}
+void pcibios_add_bus(struct pci_bus *bus) +{
acpi_pci_add_bus(bus);
+}
+void pcibios_remove_bus(struct pci_bus *bus) +{
acpi_pci_remove_bus(bus);
+}
+int pcibios_enable_irq(struct pci_dev *dev) +{
if (!pci_dev_msi_enabled(dev))
acpi_pci_irq_enable(dev);
return 0;
+}
+int pcibios_disable_irq(struct pci_dev *dev) +{
if (!pci_dev_msi_enabled(dev))
acpi_pci_irq_disable(dev);
return 0;
+}
+int pcibios_enable_device(struct pci_dev *dev, int bars) +{
int err;
err = pci_enable_resources(dev, bars);
if (err < 0)
return err;
return pcibios_enable_irq(dev);
+}
+static int __init pcibios_assign_resources(void) +{
struct pci_bus *root_bus;
if (acpi_disabled)
return 0;
list_for_each_entry(root_bus, &pci_root_buses, node) {
pcibios_resource_survey_bus(root_bus);
pci_assign_unassigned_root_bus_resources(root_bus);
}
return 0;
+}
I'm starting to sound like a stuck record (and getting bored of it myself!), but none of this looks arch-specific. Can it be moved into the core?
Hi Will, kind reminder: there's a pending patch set that changes the way IRQs are managed for PCI devices, and it affects pcibios_enable_irq()/pcibios_enable_device() and friends. Please refer to https://lkml.org/lkml/2015/5/6/989. Thanks! Gerry
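P.S. Roughly, the mapping would look like the sketch below (untested; the pcibios_alloc_irq()/pcibios_free_irq() hook names are taken from that pending series, so treat them as tentative until it is merged):

#include <linux/acpi.h>
#include <linux/pci.h>

/*
 * Sketch of how the arm64 ACPI IRQ glue above could move into the
 * probe-time hooks added by the pending series (called from
 * pci_device_probe()/pci_device_remove() there).
 */
int pcibios_alloc_irq(struct pci_dev *dev)
{
	if (!acpi_disabled && !pci_dev_msi_enabled(dev))
		acpi_pci_irq_enable(dev);

	return 0;
}

void pcibios_free_irq(struct pci_dev *dev)
{
	if (!acpi_disabled && !pci_dev_msi_enabled(dev))
		acpi_pci_irq_disable(dev);
}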
Will
On Tuesday, May 26, 2015 08:49:13 PM Hanjun Guo wrote:
This patch set is introducing ARM64 PCI hostbridge init based on ACPI, which based on Jiang Liu's patch set "Consolidate ACPI PCI root common code into ACPI core":
I'll be regarding this patchset as an RFC until the one from Jiang Liu goes in.
On 2015-05-27 08:30, Rafael J. Wysocki wrote:
On Tuesday, May 26, 2015 08:49:13 PM Hanjun Guo wrote:
This patch set is introducing ARM64 PCI hostbridge init based on ACPI, which based on Jiang Liu's patch set "Consolidate ACPI PCI root common code into ACPI core":
I'll be regarding this patchset as an RFC until the one from Jiang Liu goes in.
Yes, please, Jiang Liu's patch set should go in first.
Thanks Hanjun
On 27 May 2015 at 09:27, Hanjun Guo hanjun.guo@linaro.org wrote:
On 2015-05-27 08:30, Rafael J. Wysocki wrote:
On Tuesday, May 26, 2015 08:49:13 PM Hanjun Guo wrote:
This patch set is introducing ARM64 PCI hostbridge init based on ACPI, which based on Jiang Liu's patch set "Consolidate ACPI PCI root common code into ACPI core":
I'll be regarding this patchset as an RFC until the one from Jiang Liu goes in.
Yes, please, Jiang Liu's patch set should go in first.
I have picked up all of the above and Jiang Liu's series, skipping the x86 and ia64 changes and with MSI disabled, but I'm observing that my arm64 board doesn't boot from UEFI.
I suspect the pci-acpi changes are generic rather than specific to a particular PCIe root bridge controller, but do I need to make any changes for my PCIe bridge? Let me know if I'm missing anything here.
thanks!
Hi Jagan,
On 06/08/2015 08:05 PM, Jagan Teki wrote:
On 27 May 2015 at 09:27, Hanjun Guo hanjun.guo@linaro.org wrote:
On 2015-05-27 08:30, Rafael J. Wysocki wrote:
On Tuesday, May 26, 2015 08:49:13 PM Hanjun Guo wrote:
This patch set is introducing ARM64 PCI hostbridge init based on ACPI, which based on Jiang Liu's patch set "Consolidate ACPI PCI root common code into ACPI core":
I'll be regarding this patchset as an RFC until the one from Jiang Liu goes in.
Yes, please, Jiang Liu's patch set should go in first.
I have picked up all of the above and Jiang Liu's series, skipping the x86 and ia64 changes and with MSI disabled, but I'm observing that my arm64 board doesn't boot from UEFI.
Do you boot with ACPI? There is no ACPI PCI support for ARM64 in mainline; which kernel are you using?
I suspect the pci-acpi changes are generic rather than specific to a particular PCIe root bridge controller, but do I need to make any changes for my PCIe bridge? Let me know if I'm missing anything here.
ARM64 needs special handling for IO port resources on top of Jiang's patch set; we are still working on it. The translation involved is sketched below for reference.
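(Sketch only, adapted from the pci_acpi_root_prepare_resources() hunk earlier in this thread; the helper name is invented for illustration and is not part of any posted patch.)

#include <linux/ioport.h>
#include <linux/of_address.h>

/*
 * On arm64 the IO window described by _CRS is a CPU (MMIO) address, so
 * it has to be registered and translated into a logical port range
 * before the PCI core can use it.
 */
static int acpi_pci_translate_io_window(struct resource *res)
{
	resource_size_t length = resource_size(res);
	unsigned long port;
	int err;

	err = pci_register_io_range(res->start, length);
	if (err)
		return err;

	port = pci_address_to_pio(res->start);
	if (port == (unsigned long)-1)
		return -EINVAL;

	res->start = port;
	res->end = port + length - 1;
	return 0;
}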
Thanks Hanjun
Hi Hanjun,
Thanks for your hard work on ARM64 PCI host bridge/root complex so far!
On 05/26/2015 08:49 AM, Hanjun Guo wrote:
This patch set is introducing ARM64 PCI hostbridge init based on ACPI, which based on Jiang Liu's patch set "Consolidate ACPI PCI root common code into ACPI core":
The latest version (v7) of the ACPI PCI root common code consolidation series was posted by Gerry (Jiang Liu) yesterday:
https://lkml.kernel.org/g/1444804182-6596-1-git-send-email-jiang.liu@linux.i...
This got me wondering what the status is of the ARM64 specific hostbridge patch series that has a dependency upon the former. Are you planning to wait for that series to be accepted or post a newer rebased version for additional review, or something else?
Thanks very much in advance! :)
Jon.
On 10/16/2015 03:15 AM, Jon Masters wrote:
Hi Hanjun,
Thanks for your hard work on ARM64 PCI host bridge/root complex so far!
On 05/26/2015 08:49 AM, Hanjun Guo wrote:
This patch set is introducing ARM64 PCI hostbridge init based on ACPI, which based on Jiang Liu's patch set "Consolidate ACPI PCI root common code into ACPI core":
The latest version (v7) of the ACPI PCI root common code consolidation series was posted by Gerry (Jiang Liu) yesterday:
https://lkml.kernel.org/g/1444804182-6596-1-git-send-email-jiang.liu@linux.i...
This got me wondering what the status is of the ARM64 specific hostbridge patch series that has a dependency upon the former. Are you planning to wait for that series to be accepted or post a newer rebased version for additional review, or something else?
Tomasz and Lorenzo are discussing the PCI config space based OperationRegions, and there is no conclusion yet; Lorenzo started a discussion in the ASWG and we hope to get some feedback.
Other than that, Tomasz volunteered to clean up this patch set. I think we need to rebase it on Jiang's v7 patch set, which was accepted by Rafael; of course I will work with Tomasz to prepare those patches.
Thanks Hanjun
Sorry for top posting. Can you summarize the OperationRegion issue? I am aware of a few early firmwares that effectively had the producer/consumer type for bridge resources the wrong way around, but not of this specific issue. If it is ASWG-confidential, I can take it over there.
On Tue, May 26, 2015 at 08:49:13PM +0800, Hanjun Guo wrote:
This patch set is introducing ARM64 PCI hostbridge init based on ACPI, which based on Jiang Liu's patch set "Consolidate ACPI PCI root common code into ACPI core":
https://lkml.org/lkml/2015/5/14/98
This patch set including three parts:
the first part is PATCH 1, which should be merged into Jiang Liu's patch set to fix the compile error on ARM64 when ACPI enabled.
the senconed part is the refactoring of mmconfig to let that mechanism can be used for ARM64 too, it's Tomasz's work but he is moving to other work and pretty busy for now, so I will take care of those patches, Tomasz will show up when some comments need to be addressed :)
In this version of mmconfig refactor patches, I removed the rename of mmconfig -> ecam patch, because mmconfig is in multi places, and need much more effort to convert them all to ecam, Bjorn, if you don't like it, I can add them back.
The third part is about the ARM64 PCI hostbridge init based on ACPI, first I fixed a compile error for XEN PCI on ARM64 when PCI_MMCONFIG=y, and then introduce PCI init based on Jiang Liu and Tomasz's patch set.
patch for ARM64 ACPI PCI still reserve the bus sysdata to get the domain number, because Yijing's patch set is still under review, will be removed when Yijing's patch set hits upstream.
This patch set was tested by Suravee on Seattle board with legacy interrupt (not MSI), and it works, also tested on qemu by Graeme.
You can get the code from: git://git.linaro.org/leg/acpi/acpi.git, devel branch
Comments are welcomed.
Thanks Hanjun
Hanjun Guo (3): ARM64 / PCI: introduce struct pci_controller for ACPI XEN / PCI: Remove the dependence on arch x86 when PCI_MMCONFIG=y ARM64 / PCI / ACPI: support for ACPI based PCI hostbridge init
Tomasz Nowicki (8): x86, pci: Clean up comment about buggy MMIO config space access for AMD Fam10h CPUs. x86, pci: Abstract PCI config accessors and use AMD Fam10h workaround exclusively. x86, pci: Reorder logic of pci_mmconfig_insert() function x86, pci, acpi: Move arch-agnostic MMCONFIG (aka ECAM) and ACPI code out of arch/x86/ directory pci, acpi, mcfg: Provide generic implementation of MCFG code initialization. x86, pci: mmconfig_{32,64}.c code refactoring - remove code duplication. x86, pci, ecam: mmconfig_64.c becomes default implementation for ECAM driver. pci, acpi, mcfg: Share ACPI PCI config space accessors.
arch/arm64/Kconfig | 7 + arch/arm64/include/asm/pci.h | 10 ++ arch/arm64/kernel/pci.c | 245 ++++++++++++++++++++++++++-- arch/x86/Kconfig | 4 + arch/x86/include/asm/pci_x86.h | 34 +--- arch/x86/pci/Makefile | 4 +- arch/x86/pci/acpi.c | 1 + arch/x86/pci/mmconfig-shared.c | 301 +++++++++++----------------------- arch/x86/pci/mmconfig_32.c | 35 +--- arch/x86/pci/mmconfig_64.c | 153 ------------------ arch/x86/pci/numachip.c | 25 +-- drivers/acpi/Makefile | 1 + drivers/acpi/mcfg.c | 103 ++++++++++++ drivers/pci/Kconfig | 10 ++ drivers/pci/Makefile | 5 + drivers/pci/ecam.c | 358 +++++++++++++++++++++++++++++++++++++++++ drivers/pci/pci.c | 26 +-- drivers/xen/pci.c | 7 +- include/linux/acpi.h | 2 + include/linux/ecam.h | 56 +++++++ 20 files changed, 923 insertions(+), 464 deletions(-) delete mode 100644 arch/x86/pci/mmconfig_64.c create mode 100644 drivers/acpi/mcfg.c create mode 100644 drivers/pci/ecam.c create mode 100644 include/linux/ecam.h
There was a fair amount of unresolved discussion about this series, so I'm going to wait for an update.
Bjorn